• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 214

Retrieve data from external websites using Perl or an application

I need to get data from a very long list of sites (over 8,000) and write the resulting HTML from each site to separate text files. The sites have similar URLs and content.

I am looking for the best way to do this, whether with Perl or with a third-party application. Any help would be greatly appreciated.
Asked by: Igiwwa

3 Solutions
 
Tintin commented:
Let's make the following assumptions:

1. The list of sites (URLs) is in a plain text file, one URL per line.
2. Each output file is named after the last part of its URL (you haven't specified a naming format).

Then:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;
use File::Basename;

my $list      = '/path/to/list/of/sites.txt';
my $outputdir = '/path/to/outputdir';

open my $list_fh, '<', $list or die "Cannot open $list: $!\n";

while (my $site = <$list_fh>) {
  chomp $site;
  next unless $site;                               # skip blank lines
  # Name each output file after the last part of its URL
  my $file   = "$outputdir/" . basename($site);
  my $status = getstore($site, $file);
  warn "Failed to fetch $site (status $status)\n" unless is_success($status);
}

close $list_fh;
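If many of the URLs end in the same last component (for example, they all finish in index.html), naming the files with basename() will overwrite earlier downloads. A minimal sketch of a variant that numbers the output files sequentially instead, assuming the same one-URL-per-line list and the same placeholder paths:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;

my $list      = '/path/to/list/of/sites.txt';
my $outputdir = '/path/to/outputdir';

open my $list_fh, '<', $list or die "Cannot open $list: $!\n";

my $count = 0;
while (my $site = <$list_fh>) {
  chomp $site;
  next unless $site;                                # skip blank lines
  # Sequentially numbered output files: site00001.html, site00002.html, ...
  my $file   = sprintf "%s/site%05d.html", $outputdir, ++$count;
  my $status = getstore($site, $file);
  warn "Failed to fetch $site (status $status)\n" unless is_success($status);
}

close $list_fh;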


 
 
alikoank commented:
There are already several applications that do this.
Take a look at XMLTV:

http://membled.com/work/apps/xmltv/

or Plucker:

http://www.plkr.org/
 
ahoffmann commented:
Assuming your URLs are in a file, one per line:

wget -i file-with-URLs
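
With more than 8,000 URLs it may also be worth writing the files into a dedicated directory and pacing the requests. The flags below are standard wget options; the output directory is a placeholder:

wget -i file-with-URLs -P /path/to/outputdir --wait=1 --tries=2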
