Saving web pages and associated images, CSS files, etc.

I would like to archive snapshots of specific pages, store them on a local file server, and serve them up locally.  What is the easiest way to achieve this?  I know it's a broad question, but I need a pointer in the right direction.

Thanks,
David
lomidien asked:
 
aozarov commented:
If you are looking for a utility, you can use wget.
If you want to use Java, then HttpUnit is a good option; see: http://www.httpunit.org/doc/cookbook.html
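For a single-page snapshot, a wget invocation along these lines (the URL and target directory are placeholders) pulls down the page together with its images, CSS and scripts, and rewrites the links so the local copy stays browsable:

  wget --page-requisites --convert-links --html-extension --span-hosts --directory-prefix=/var/archive/snapshots http://example.com/page.html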
 
StillUnAware commented:
The easiest way would be to start a command-line tool like wget, and this can be done from Java using java.lang.ProcessBuilder.
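A minimal sketch of that approach, assuming wget is on the PATH (the URL and target directory below are placeholders, and the flags mirror the single-page snapshot shown above):

import java.io.File;
import java.io.IOException;
import java.io.InputStream;

public class WgetArchiver {

    // Runs wget as a child process to snapshot one page plus its images, CSS and scripts.
    public static void archive(String url, File targetDir) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "wget",
                "--page-requisites",   // also fetch images, CSS, JS needed to render the page
                "--convert-links",     // rewrite links so the local copy is browsable
                "--html-extension",    // save HTML documents with an .html suffix
                "--span-hosts",        // allow requisites hosted on other domains
                "--directory-prefix=" + targetDir.getAbsolutePath(),
                url);
        pb.redirectErrorStream(true);  // merge stderr into stdout
        Process p = pb.start();

        // Drain the output so wget cannot block on a full pipe buffer.
        InputStream out = p.getInputStream();
        byte[] buf = new byte[4096];
        while (out.read(buf) != -1) { /* discard, or log if desired */ }

        int exit = p.waitFor();
        if (exit != 0) {
            throw new IOException("wget exited with code " + exit);
        }
    }

    public static void main(String[] args) throws Exception {
        archive("http://example.com/page.html", new File("/var/archive/snapshots"));
    }
}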
 
limaideal commented:
If you really need to do it with your own code, you have to fetch the initial (or index) HTML file, parse it, find all the referenced files (JavaScript, image, and CSS files), and download them separately. Really tedious, though.
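To give a feel for how tedious the hand-rolled route gets, here is a rough sketch that pulls src/href attributes out of a page with a regular expression and resolves them against the page URL (the URL is a placeholder). A real implementation would still need a proper HTML parser, redirect and cookie handling, the actual downloads, and rewriting of the links in the saved HTML:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import java.net.URL;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResourceLister {

    // Crude pattern for src="..." / href="..." attributes; a real HTML parser is far more robust.
    private static final Pattern REF = Pattern.compile(
            "(?:src|href)\\s*=\\s*[\"']([^\"']+)[\"']", Pattern.CASE_INSENSITIVE);

    public static Set<String> listReferences(String pageUrl) throws Exception {
        // Fetch the raw HTML of the index page.
        StringBuilder html = new StringBuilder();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(pageUrl).openStream()));
        for (String line; (line = in.readLine()) != null; ) {
            html.append(line).append('\n');
        }
        in.close();

        // Collect every referenced URL, resolved against the page address.
        Set<String> refs = new LinkedHashSet<String>();
        URI base = new URI(pageUrl);
        Matcher m = REF.matcher(html);
        while (m.find()) {
            refs.add(base.resolve(m.group(1)).toString());
        }
        return refs;  // each of these would then have to be downloaded separately
    }

    public static void main(String[] args) throws Exception {
        for (String ref : listReferences("http://example.com/index.html")) {
            System.out.println(ref);
        }
    }
}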
 
aozarov commented:
>> If you really need to do it with your own code, you have to fetch the initial (or index) HTML file,
This is why I suggested HttpUnit, which does it for you :-)
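A rough sketch of what that could look like with HttpUnit's WebConversation, based on the API described in the cookbook linked above (the URL is a placeholder, and the exact method names should be checked against the docs). HttpUnit fetches the page and hands back a parsed DOM, so the reference-gathering step becomes a DOM walk rather than hand parsing:

import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebResponse;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class HttpUnitLister {

    public static void main(String[] args) throws Exception {
        WebConversation wc = new WebConversation();
        WebResponse resp = wc.getResponse("http://example.com/index.html");  // placeholder URL
        Document dom = resp.getDOM();   // the page as a parsed DOM

        printAttr(dom, "img", "src");      // images
        printAttr(dom, "script", "src");   // external scripts
        printAttr(dom, "link", "href");    // stylesheets and other linked resources
    }

    // Prints the given attribute of every element with the given tag name.
    private static void printAttr(Document dom, String tag, String attr) {
        NodeList nodes = dom.getElementsByTagName(tag);
        for (int i = 0; i < nodes.getLength(); i++) {
            String value = ((Element) nodes.item(i)).getAttribute(attr);
            if (value.length() > 0) {
                System.out.println(value);  // still relative; resolve against the page URL before downloading
            }
        }
    }
}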