Solved

Creating script like cutestat

Posted on 2015-01-29
Medium Priority
75 Views
Last Modified: 2016-05-27
Hi,

I am looking for advanced guidance on developing a script similar to cutestat. How should I proceed, and how should I crawl pages/websites? Should I fetch pages with PHP cURL and store the HTML output (or part of it) in a database, or is there a better practice?

I will do custom digging and regex pattern matching against the database later on, but initially I just want to start with 1 million domains, and I would like to know the fastest way to get the HTML of all of those domains/sites.

Is PHP efficient enough, or do I have to use another crawler?

Regards
Question by:fahadalam
8 Comments
 
LVL 13

Expert Comment

by:Andrew Derse
ID: 40578438
Take a look at this page.

http://stackoverflow.com/questions/8316818/login-to-website-using-python/8316989#8316989

My question for you is: do you have the million domain names yet?

Once you have the data, you can most definitely store it in a database; that is the easiest way to pull stats from it later.
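
A minimal sketch of that storage step with PDO and MySQL (the table name, columns, and connection details below are illustrative assumptions, not anything specified in this thread):

<?php
// Hypothetical schema (run once):
//   CREATE TABLE crawled_pages (
//       id         INT AUTO_INCREMENT PRIMARY KEY,
//       domain     VARCHAR(255) NOT NULL,
//       html       MEDIUMTEXT,
//       fetched_at DATETIME DEFAULT CURRENT_TIMESTAMP
//   );

$pdo = new PDO('mysql:host=localhost;dbname=crawler', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$stmt = $pdo->prepare(
    'INSERT INTO crawled_pages (domain, html) VALUES (:domain, :html)'
);

// In practice $domain and $html would come from the crawl loop; these are placeholders.
$domain = 'example.com';
$html   = '<html><body>placeholder</body></html>';

$stmt->execute([':domain' => $domain, ':html' => $html]);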
 
LVL 13

Expert Comment

by:Andrew Derse
ID: 40578442
One more site that might be helpful.

http://www.crawl-anywhere.com/
 
LVL 111

Expert Comment

by:Ray Paseur
ID: 40579650
Is PHP efficient enough?  Yes: PHP powers Facebook, so unless your processing requirements exceed Facebook's, you will be OK with PHP.  You can use cURL to read the HTML.  Hopefully your site will be more accurate than cutestat, because that thing is wa-a-ay off base!
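
A minimal single-page fetch with PHP's cURL extension might look like this (the URL, timeouts, and user-agent string are placeholders):

<?php
// Minimal sketch: fetch the raw HTML of one site with cURL.
$ch = curl_init('http://example.com/');              // placeholder URL
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,   // return the body instead of printing it
    CURLOPT_FOLLOWLOCATION => true,   // follow redirects
    CURLOPT_CONNECTTIMEOUT => 5,      // illustrative timeouts
    CURLOPT_TIMEOUT        => 15,
    CURLOPT_USERAGENT      => 'MyCrawler/0.1 (+http://example.com/bot)',
]);
$html = curl_exec($ch);
if ($html === false) {
    echo 'cURL error: ' . curl_error($ch) . PHP_EOL;
}
curl_close($ch);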

Here is where you're going to run into a problem.  Most sites of any importance are not HTML any more.  They use a little HTML for a document framework, but the content is loaded dynamically by JavaScript and AJAX calls to web APIs.  To see what you're up against, make a Google search for anything, then use your browser's "view source."  The source document that can be read with cURL is not what you're seeing on the browser screen.

To understand the technologies we use for web development today, learn about AngularJS or just read this article and look for the demo/ajax_captcha_client.php scripts.
http://www.experts-exchange.com/Web_Development/Web_Languages-Standards/PHP/A_9849-Making-CAPTCHA-Friendlier-with-PHP-Image-Manipulation.html
 

Author Comment

by:fahadalam
ID: 40591708
Ray, you misunderstood me. I was actually asking whether PHP/cURL is robust enough to build a crawler/bot. Facebook and the others aren't using PHP for this purpose; almost everywhere it is recommended to use a Python crawler and PHP for the rest.
 
LVL 111

Expert Comment

by:Ray Paseur
ID: 40592698
A Google search for "webcrawler" returns about 1,700,000 results.  One of those will probably meet at least some of your needs.  The issue is not whether you can read the HTML document - that's the easy part, and in 1999 that would have been all you needed to do to scrape a site and capture the data.  Today, the data is not in the HTML, so whatever crawler you choose has to be able to behave like a web browser: accepting and returning cookies, running JavaScript, following HTTP redirects, processing the data, etc.  To me, that says two things: one, the tool you want is a web browser, not a scraper script; and two, the publishers of web content are tired of having their sites scraped, so they are taking steps to prevent it.  Please read the terms of service carefully before you copy and store someone's data - you could find yourself at the wrong end of a legal claim!
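
cURL can at least approximate the cookie-handling part of that browser behavior with a cookie jar; JavaScript execution is beyond cURL alone. A rough sketch (the cookie-jar path and URLs are placeholders):

<?php
// Sketch: make cURL keep and send cookies across requests, as a browser would.
$jar = sys_get_temp_dir() . '/crawler_cookies.txt';   // placeholder path

$ch = curl_init('http://example.com/');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,   // follow redirects
    CURLOPT_COOKIEJAR      => $jar,   // persist received cookies to this file on close
    CURLOPT_COOKIEFILE     => $jar,   // read cookies from this file (enables the cookie engine)
]);
$firstPage = curl_exec($ch);

// A second request on the same handle reuses the stored cookies.
curl_setopt($ch, CURLOPT_URL, 'http://example.com/some-other-page');
$secondPage = curl_exec($ch);
curl_close($ch);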

Best of luck with your project, ~Ray
 

Author Comment

by:fahadalam
ID: 40592879
@Ray, I understand the need for either a crawler or a web browser, and yes, I am still looking for a crawler at this stage (I don't mind leaving those AJAX-based sites unindexed).

Now, would PHP handle this requirement efficiently with cURL in multiple instances?
 
LVL 111

Accepted Solution

by:
Ray Paseur earned 2000 total points
ID: 40593294
Yes
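
Running cURL "in multiple instances" within one PHP process is usually done with the curl_multi API; a minimal sketch, with placeholder URLs, batch size, and timeout:

<?php
// Sketch: fetch a small batch of URLs in parallel with curl_multi.
// In a real crawl the batch would come from the 1M-domain list in the database.
$urls = ['http://example.com/', 'http://example.org/', 'http://example.net/'];

$mh      = curl_multi_init();
$handles = [];

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_TIMEOUT        => 15,   // illustrative timeout
    ]);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Drive all transfers until every handle has finished.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh);   // wait for activity instead of busy-looping
    }
} while ($running && $status === CURLM_OK);

// Collect the results and clean up.
$html = [];
foreach ($handles as $url => $ch) {
    $html[$url] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);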