Retrieve data from external websites using Perl or an application

I need to get data from a very long list of sites (over 8000) and write the resulting HTML code from each site to separate text files.  They have similar URLs and content.

I am looking for the best way to do this, be it with Perl or with a third-party application.  Any help would be greatly appreciated.
Igiwwa asked:
Tintin commented:
Let's make the following assumptions.

1. The list of sites (URLs) is in a plain text file.
2. The output files will have sequential names (as you haven't specified a format).

then

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;
use File::Basename;

# Where the list of URLs lives and where the fetched pages go
my $list      = '/path/to/list/of/sites.txt';
my $outputdir = '/path/to/outputdir';

open my $list_fh, '<', $list or die "Cannot open $list: $!\n";

while (my $site = <$list_fh>) {
  chomp $site;
  next unless $site;                          # skip blank lines

  # Name each output file after the last part of its URL
  my $file = "$outputdir/" . basename($site);

  # Fetch the page and write it straight to disk
  getstore($site, $file);
}

close $list_fh;
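
Since many similar URLs can end in the same basename (so files would overwrite each other), a small variant that numbers the output files sequentially, per assumption 2, might look like this (the paths and the site0001.html naming scheme are just placeholders):

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;

my $list      = '/path/to/list/of/sites.txt';   # assumed location of the URL list
my $outputdir = '/path/to/outputdir';           # assumed output directory

open my $list_fh, '<', $list or die "Cannot open $list: $!\n";

my $count = 0;
while (my $site = <$list_fh>) {
  chomp $site;
  next unless $site;                            # skip blank lines

  # site0001.html, site0002.html, ... one file per URL
  my $file = sprintf "%s/site%04d.html", $outputdir, ++$count;

  # Fetch the page; warn but keep going if a site fails
  my $status = getstore($site, $file);
  warn "Failed to fetch $site (status $status)\n" unless is_success($status);
}

close $list_fh;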


 
alikoank commented:
There are already several applications that do this. Take a look at XMLTV:

http://membled.com/work/apps/xmltv/

or Plucker:

http://www.plkr.org/
 
ahoffmann commented:
Assuming your URLs are in a file, one per line:

wget -i file-withURLs
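
If the pages should all land in one output directory, GNU wget's -P (--directory-prefix) option can be added (the directory path is a placeholder):

wget -i file-withURLs -P /path/to/outputdir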
Question has a verified solution.
