
How to fetch live data from a website?

How do I fetch live data from a website (without an RSS feed or web service)?

Do you use any API?

Example: live cricket scores.

I want to write a standalone Java program that will fetch live data from a site. How do I start? What do I need?

Could you shed some light?
Asked by cofactor

1 Solution
 
IanThCommented:
You need a database behind the website so that the site works and you can view the data remotely.

>>>Do you use any API?

That depends on your setup. I expect it will be Linux, and you can install MySQL (it is usually installed anyway), although it can also be installed on Windows.

For remote viewing, see
http://www.softpedia.com/get/Internet/Servers/Database-Utils/Remote-MySQL-Viewer.shtml
 
Gurvinder Pal SinghCommented:
You can invoke that URL using HttpURLConnection:
http://docs.oracle.com/javase/tutorial/networking/urls/readingWriting.html

Once you have the content of that URL, you can parse it using XPath:
http://www.rgagnon.com/javadetails/java-0550.html
http://htmlparser.sourceforge.net/
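
Something along these lines should work for the fetch step (a minimal sketch; the URL is just a placeholder):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class PageFetcher {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; point this at the page you actually want
        URL url = new URL("http://www.example.com/scores.html");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        // Read the response body line by line into a buffer
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        StringBuilder page = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            page.append(line).append('\n');
        }
        in.close();
        conn.disconnect();

        System.out.println(page); // raw HTML, ready to be parsed
    }
}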
 
Gurvinder Pal SinghCommented:
You may have to use JTidy to clean up the HTML first:
http://jtidy.sourceforge.net/
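
For example (a rough sketch; the URL and the XPath expression are placeholders you would adjust to the page's real markup):

import java.io.InputStream;
import java.net.URL;

import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;
import org.w3c.tidy.Tidy;

public class ScoreScraper {
    public static void main(String[] args) throws Exception {
        // Placeholder page to scrape
        InputStream in = new URL("http://www.example.com/scores.html").openStream();

        // JTidy turns messy real-world HTML into a well-formed DOM
        Tidy tidy = new Tidy();
        tidy.setQuiet(true);
        tidy.setShowWarnings(false);
        Document doc = tidy.parseDOM(in, null);
        in.close();

        // Query the cleaned-up DOM with XPath; the expression is hypothetical
        XPath xpath = XPathFactory.newInstance().newXPath();
        String score = xpath.evaluate("//div[@id='score']", doc);
        System.out.println("score: " + score);
    }
}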

 
cofactorAuthor Commented:
@gurvinder372
>>>You can invoke that URL using HttpURLConnection

Doesn't that involve a loop? Something like this:

while (true) {
    // HttpURLConnection ... get data from the site
    // parse and extract the required data
    Thread.sleep(30000); // 30 seconds
}

Or is there some other, smarter approach that I'm missing here?


I know a good package which is capable of doing this kind of work neatly:
http://httpunit.sourceforge.net/doc/faq.html

But I'm not sure whether you would use a loop to hit the server every 30 seconds to fetch the latest updates, or whether there is a better way.
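
Or maybe something like this instead of the raw loop (a sketch with java.util.concurrent; the fetch/parse body is just placeholder comments):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Poller {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        // Run fetch-and-parse every 30 seconds without a hand-rolled sleep loop
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                // HttpURLConnection ... get data from the site
                // parse and extract the required data
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}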


Please post your comments.
 
Gurvinder Pal SinghCommented:
A smarter approach would be to use the web services exposed by that site to access its data.

For example, Google services, Facebook services, or Twitter services.
 
for_yanCommented:
This is a simple working example of getting RSS feeds from a list of sites.

It uses the ROME API; let me find the appropriate link.

import java.io.BufferedReader;
import java.io.File;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.PrintStream;
import java.net.URL;
import java.util.ArrayList;
import java.util.Date;
import java.util.Iterator;
import java.util.List;
import java.util.StringTokenizer;

import com.sun.syndication.feed.synd.SyndEntry;
import com.sun.syndication.feed.synd.SyndFeed;
import com.sun.syndication.io.SyndFeedInput;
import com.sun.syndication.io.XmlReader;

public class Reader {

    static String[] urls = {
            "http://rss.cnn.com/rss/cnn_topstories.rss",
            "http://www.nytimes.com/services/xml/rss/nyt/World.xml",
            "http://www.nytimes.com/services/xml/rss/nyt/US.xml",
            "http://newsrack.in/crawled.feeds/frontline.rss.xml"
    };

    public static void main(String[] args) throws Exception {
        XmlReader reader = null;

        try {
            while (true) {
                long now = new Date().getTime();
                System.out.println("checking at " + new Date(now));

                // Links older than a week are dropped from the "already seen" list
                long weekBefore = now - (24L * 3600L * 7L * 1000L);

                // visited.txt holds "<timestamp> <link>" pairs from earlier runs
                List<String> visited = new ArrayList<String>();
                BufferedReader in = new BufferedReader(
                        new FileReader("C:\\temp\\test\\visited.txt"));
                PrintStream psout = new PrintStream(
                        new FileOutputStream("C:\\temp\\test\\visited1.txt"));

                String buff;
                while ((buff = in.readLine()) != null) {
                    StringTokenizer t = new StringTokenizer(buff);
                    String ttime = t.nextToken();
                    if (Long.parseLong(ttime) < weekBefore) continue; // too old
                    String llink = t.nextToken().trim();
                    visited.add(llink);
                    psout.println(ttime + " " + llink); // carry forward
                }

                for (int jj = 0; jj < urls.length; jj++) {
                    URL url = new URL(urls[jj]);
                    reader = new XmlReader(url);
                    SyndFeed feed = new SyndFeedInput().build(reader);

                    for (Iterator i = feed.getEntries().iterator(); i.hasNext();) {
                        SyndEntry entry = (SyndEntry) i.next();
                        String title = entry.getTitle();
                        String link = entry.getUri().trim();
                        if (visited.contains(link)) continue; // already reported

                        String description;
                        if (entry.getDescription() == null) {
                            description = "";
                        } else {
                            description = entry.getDescription().getValue();
                        }
                        // Strip HTML tags and collapse runs of whitespace
                        String cleanDescription = description
                                .replaceAll("\\<.*?>", "")
                                .replaceAll("\\s+", " ");

                        System.out.println(title);
                        System.out.println(link);
                        System.out.println(cleanDescription);
                        System.out.println();
                        System.out.println();
                        psout.println(now + " " + link);
                    }

                    reader.close(); // close each feed before opening the next
                    reader = null;
                }

                in.close();
                psout.close();

                // Replace the old visited list with the updated one
                File f0 = new File("C:\\temp\\test\\visited.txt");
                File f1 = new File("C:\\temp\\test\\visited1.txt");
                f0.delete();
                f1.renameTo(f0);

                Thread.sleep(600000); // poll again in 10 minutes
            }
        } finally {
            if (reader != null) reader.close();
        }
    }
}
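
To compile and run this you need the ROME jar on the classpath, plus the JDOM jar that ROME depends on.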
                                            

