How to write a crawler to download files?

Hi,

I will have the directory listing of a web server.
Given that URL, I want to download everything in that folder and its subfolders.
I need to write this from scratch.
I was looking at web crawlers that parse a given URL and extract links.
I am told I need to parse the URL, since the directory listing is always available.
So how do I use the directory listing to download the files?
Thanks.
dkim18 Asked:
 
CEHJ Commented:
>>I need to write this from scratch.

Why is that? There are web crawlers already written

>>I am told I need to parse the URL, since the directory listing is always available.

Then I assume you're crawling a specific site, since you can't otherwise rely on directory listings being available?
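Assuming the listings really are always available, a crawler for this is small: fetch the listing page, pull out the hrefs, recurse into entries ending in `/`, and collect everything else as a file. A minimal sketch in Python — the regex extractor and the in-memory `SITE` dict are stand-ins for a real HTML parser and `urllib.request.urlopen`, and the URLs are the hypothetical ones from the question:

```python
import re
from urllib.parse import urljoin

def extract_links(html):
    """Pull every href value out of a listing page."""
    return re.findall(r'href="([^"]+)"', html)

def crawl(url, fetch):
    """Recursively walk a directory listing, returning the file URLs.
    fetch(url) returns the page HTML for a listing URL."""
    files = []
    for href in extract_links(fetch(url)):
        # Skip parent-directory, absolute, and column-sort links.
        if href.startswith(("../", "/", "?")):
            continue
        child = urljoin(url, href)
        if href.endswith("/"):          # subfolder: recurse
            files.extend(crawl(child, fetch))
        else:                           # file: collect
            files.append(child)
    return files

# In-memory stand-in for fetching pages over HTTP.
SITE = {
    "http://mysite/website101/":
        '<a href="../">up</a><a href="a/">a/</a><a href="f1">f1</a><a href="f2">f2</a>',
    "http://mysite/website101/a/":
        '<a href="../">up</a><a href="f3">f3</a>',
}

print(crawl("http://mysite/website101/", SITE.get))
```

In a real crawler you would replace `SITE.get` with a function that does `urllib.request.urlopen(url).read().decode()`, and download each collected file URL instead of just listing it.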
 
dkim18 (Author) Commented:
Sorry, my grammar wasn't good.

I was trying to say that the client doesn't want us to use those third-party tools, so I was going to look at some open-source code and reuse it.

What I meant above was that I will be given a URL like http://mysite/website101/.
In website101 there is a directory listing, and it will always be available.
In website101 there is a folder A plus files f1, f2, etc.
Folder A has folders b, c, d and files f3, f4.
Folder b has some subfolders and files.

I am new to this kind of thing.
Do I still have to parse the directory listing?
Does the directory listing show those subfolders and files in an HTML file (as hyperlinks)?
So I still need to parse that directory listing, don't I?
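Yes — a server-generated directory listing (e.g. Apache's "Index of /..." page) is ordinary HTML, and each subfolder and file appears as an `<a href="...">` hyperlink, so parsing it just means extracting those links. A minimal sketch using Python's standard `html.parser`; the sample listing HTML below is made up for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag in a directory-listing page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

listing_html = """
<html><body><h1>Index of /website101/</h1>
<a href="../">Parent Directory</a>
<a href="a/">a/</a>
<a href="f1">f1</a>
<a href="f2">f2</a>
</body></html>
"""

parser = LinkExtractor()
parser.feed(listing_html)
print(parser.links)  # ['../', 'a/', 'f1', 'f2']
```

Entries ending in `/` are subfolders to recurse into; the `../` parent link should be skipped or the crawl will loop back upward.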
 
dkim18 (Author) Commented:
So I want to keep the folder structure while downloading the files:
website101
website101/a
website101/f1
website101/f2
website101/a/f3
website101/a/b
...etc.
 
CEHJ Commented:
:)