fahadalam

asked on

Creating script like cutestat

Hi,

I am looking for advanced information on developing a script similar to cutestat. How should I proceed, and how should I crawl pages/websites? Should I fetch them via PHP cURL and store the HTML output (or part of it) in a database, or what would be the best practice?

I shall be doing custom digging and running regex patterns against the database later on, but initially I just want to start with 1 million domains and want to know the fastest way to get the HTML of all those domains/sites.

Is PHP efficient enough, or do I have to use some other crawler?

regards
Carlos Llanos

Take a look at this page.

http://stackoverflow.com/questions/8316818/login-to-website-using-python/8316989#8316989

My question for you is, do you have the million domain names yet?

You can most definitely (once you have the information) store it in a database; that's easiest for pulling stats and the like later.
One more site that might be helpful.

http://www.crawl-anywhere.com/
Ray Paseur

Is PHP efficient enough?  Yes, PHP powers Facebook, so unless your processing requirements exceed Facebook's, you will be OK with PHP.  You can use cURL to read the HTML.  Hopefully your site will be more accurate than cutestat, because that thing is wa-a-ay off base!
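As a minimal sketch of that approach (not a production crawler): fetch one page with cURL and store the raw HTML.  The "pages" table, its columns, and the connection credentials below are placeholder assumptions, not anything cutestat actually uses.

<?php
// Rough sketch only: fetch one page with cURL and store the raw HTML.
// Assumes a hypothetical MySQL table pages (url VARCHAR, html MEDIUMTEXT, fetched_at DATETIME).

$url = 'http://example.com/';

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,   // return the body instead of printing it
    CURLOPT_FOLLOWLOCATION => true,   // follow redirects
    CURLOPT_CONNECTTIMEOUT => 10,
    CURLOPT_TIMEOUT        => 30,
    CURLOPT_USERAGENT      => 'MyCrawler/0.1 (+http://example.com/bot)',
]);
$html = curl_exec($ch);

if ($html === false) {
    die('cURL error: ' . curl_error($ch));
}
curl_close($ch);

// Store the result; connection credentials are placeholders.
$pdo  = new PDO('mysql:host=localhost;dbname=crawler;charset=utf8mb4', 'user', 'password');
$stmt = $pdo->prepare('INSERT INTO pages (url, html, fetched_at) VALUES (?, ?, NOW())');
$stmt->execute([$url, $html]);

Storing the full HTML lets you do your regex digging later without re-fetching, at the cost of a lot of disk space across a million domains.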

Here is where you're going to run into a problem.  Most sites of any importance are no longer plain HTML.  They use a little HTML for a document framework, but the content is loaded dynamically by JavaScript and AJAX calls to web APIs.  To see what you're up against, run a Google search for anything, then use your browser's "view source."  The source document that cURL can read is not what you're seeing on the browser screen.

To understand the technologies we use for web development today, learn about AngularJS or just read this article and look for the demo/ajax_captcha_client.php scripts.
https://www.experts-exchange.com/Web_Development/Web_Languages-Standards/PHP/A_9849-Making-CAPTCHA-Friendlier-with-PHP-Image-Manipulation.html
fahadalam

ASKER

Ray, you took me wrong.  I was actually asking whether PHP/cURL is strong enough to make a crawler/bot out of it.  Facebook and the others aren't using PHP for this purpose; almost everywhere it's stated to use a Python crawler and PHP for the rest.
Ray Paseur

A Google search for "webcrawler" returns about 1,700,000 results.  One of those will probably meet at least some of your needs.  The issue is not whether you can read the HTML document - that's the easy part, and in 1999 that would have been all you needed to do to scrape a site and capture the data.  Today, the data is not in the HTML, so whatever crawler you choose has to be able to behave like a web browser: accepting and returning cookies, running JavaScript, following HTTP redirects, processing the data, etc.

To me, that says two things.  One, the tool you want is a web browser, not a scraper script.  Two, the publishers of web content are tired of having their sites scraped, so they are taking steps to prevent it.  Please read the terms of service carefully before you copy and store someone's data - you could find yourself at the wrong end of a legal claim!
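As a rough illustration of how far plain cURL gets toward that browser-like behavior: the sketch below keeps cookies and follows redirects, but it cannot run JavaScript, so AJAX-loaded content stays invisible.  The cookie-jar path and user-agent string are placeholders.

<?php
// Sketch: browser-like behavior that plain cURL CAN provide.
// It keeps cookies and follows redirects, but it will NOT run JavaScript.

$cookieJar = '/tmp/crawler_cookies.txt';   // placeholder path

$ch = curl_init('http://example.com/');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,                 // follow HTTP redirects
    CURLOPT_MAXREDIRS      => 5,
    CURLOPT_COOKIEJAR      => $cookieJar,           // write cookies after the request
    CURLOPT_COOKIEFILE     => $cookieJar,           // send stored cookies with the request
    CURLOPT_USERAGENT      => 'Mozilla/5.0 (compatible; MyCrawler/0.1)',
    CURLOPT_ENCODING       => '',                   // accept gzip/deflate transparently
]);
$html = curl_exec($ch);
curl_close($ch);

// $html now holds only the server-rendered document; anything a browser
// would build via JavaScript/AJAX is not in it.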

Best of luck with your project, ~Ray
fahadalam

ASKER

@Ray, I understand the need for either a crawler or a web browser, and yes, I am still looking for a crawler at this stage (I won't mind leaving those AJAX-based sites unindexed).

Now, would PHP handle this requirement efficiently with cURL in multiple instances?
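Something along these lines is what I have in mind - a rough sketch using PHP's curl_multi interface, assuming "multiple instances" can mean concurrent requests within one process.  The URL list is a placeholder; a million domains would be processed in batches and/or across several worker processes.

<?php
// Sketch: fetch several URLs concurrently with curl_multi.

$urls = ['http://example.com/', 'http://example.org/', 'http://example.net/'];

$mh      = curl_multi_init();
$handles = [];

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_TIMEOUT        => 30,
    ]);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Drive all transfers until every handle is finished.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh);   // wait for activity instead of busy-looping
    }
} while ($running && $status === CURLM_OK);

// Collect results and clean up.
foreach ($handles as $url => $ch) {
    $html = curl_multi_getcontent($ch);
    // ... store $html for $url, e.g. with the PDO insert shown earlier ...
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);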
ASKER CERTIFIED SOLUTION
Ray Paseur