  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 223

Read, collect and analyze the data/information from any website

Hi,

I'm looking for a way to read, collect and analyze the data/information from any website.

For example: from the Amazon website, get the list of books and authors for a specific subject; from a blog site, a list of all post titles and dates; or from a pizza site, all the types of pizza and their prices.

Then extract it to an XML file, a DB table, or any other format.

Does anyone know of an API or Java code that can do this?

In fact, I need to gather and analyze data from websites and insert it into another database.

thanks
1 Solution
 
nmokhayesh (Author) commented:
Can I do it using webbots?
http://www.schrenk.com/nostarch/webbots/

Any example using Java?
 
CEHJ commented:
Use a high-level API that supports XML, such as HttpUnit.
 
nmokhayesh (Author) commented:
Thanks CEHJ.
HttpUnit is used for testing. What I need is to collect some data from a web page and analyze it so it can be inserted into a DB.
Any suggestions, please?

thanks
 
CEHJ commented:
You can use HttpUnit for all sorts of purposes. Importantly in your case, it can produce an XHTML DOM which you can then query (possibly with XPath) to get the data you need.
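A minimal sketch of that approach (the URL and the XPath expression here are hypothetical placeholders; adapt them to whatever page you are targeting):

import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebResponse;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class BookListScraper {
    public static void main(String[] args) throws Exception {
        // Fetch the page; HttpUnit parses the returned HTML into a W3C DOM
        WebConversation wc = new WebConversation();
        WebResponse response = wc.getResponse("http://www.example.com/books"); // hypothetical URL
        Document dom = response.getDOM();

        // Query the DOM with XPath; the expression depends entirely on the page's markup
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList titles = (NodeList) xpath.evaluate(
                "//h2[@class='title']", dom, XPathConstants.NODESET); // hypothetical expression
        for (int i = 0; i < titles.getLength(); i++) {
            System.out.println(titles.item(i).getTextContent());
        }
    }
}

From there you can write the extracted values out as XML rows or insert them into your database with JDBC.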
 
objects commented:
Many sites, such as Amazon, provide an API so you don't need to scrape the pages:
http://aws.amazon.com/

For scraping, try Web-Harvest:
http://web-harvest.sourceforge.net/
 
objects commented:
You really don't want to be reinventing the wheel :)
 
CEHJ commented:
Crawlers are for downloading websites. You then need to analyse them.
 
nmokhayesh (Author) commented:
Hi,

What I mean by analyzing the data is this:

For instance, if I select a specific university website, I need to grab the faculty data for that university only, such as faculty names, addresses, schools, and professor information, into a text file.

Based on your experience, will one of the previous approaches do the job without a lot of coding?

thanks
 
CEHJ commented:
The least coding will be achieved by using the highest-level API. You won't get much higher than HttpUnit in Java.
 
nmokhayesh (Author) commented:
OK,
can you just give me some examples of how to do the task, because I do not know much about HttpUnit :(

thanks
 
objects commented:
> can you just give me some examples of how to do the task, because I do not know much about HttpUnit :(

It would be a lot of work with HttpUnit; it's intended primarily for testing.
Did you try Web-Harvest? It will make the job far easier.
 
nmokhayesh (Author) commented:
Dears,

I really appreciate your advice, but I'm confused now! I will rephrase my task as a very simple example; then I need your suggestion on the best way to do it in a short time.

For example: I need to get the university information for one specific university (so the program only has to work with that university, since every university website has its own design). It should grab the list of faculties, under each faculty the list of schools/departments, and under each department the list of professors and their information (tel, fax, research interests, courses, and personal website).
The output is an XML or text file.

So, based on this example, which technique can do the job easily and quickly without learning new things: webbots, spiders, HttpUnit, or Web-Harvest?

I would appreciate it if you could give me a similar example.

Please advise.

thanks
 
CEHJ commented:
>> which technique can do the job easily and quickly

My answer is as above: HttpUnit.
 
objects commented:
"Web-Harvest is Open Source Web Data Extraction tool written in Java. It offers a way to collect desired Web pages and extract useful data from them. In order to do that, it leverages well established techniques and technologies for text/xml manipulation such as XSLT, XQuery and Regular Expressions. Web-Harvest mainly focuses on HTML/XML based web sites which still make vast majority of the Web content. On the other hand, it could be easily supplemented by custom Java libraries in order to augment its extraction capabilities."

Does exactly what you want
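For reference, a minimal sketch of driving Web-Harvest from Java (the configuration file name and working directory here are hypothetical; the actual extraction rules, written with XPath/XQuery, live in that XML configuration file):

import org.webharvest.definition.ScraperConfiguration;
import org.webharvest.runtime.Scraper;

public class UniversityHarvest {
    public static void main(String[] args) throws Exception {
        // The XML configuration holds the scraping logic: which pages to
        // fetch and which XPath/XQuery expressions extract the data
        ScraperConfiguration config =
                new ScraperConfiguration("university-config.xml"); // hypothetical config file
        // The second argument is the working directory Web-Harvest uses for its output
        Scraper scraper = new Scraper(config, "work");
        scraper.execute();
    }
}

This way the per-site details (faculties, departments, professors) stay in the config file, and the Java side never changes when you target a different university.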
 
nmokhayesh (Author) commented:
It helped.