fgict

asked on

Download Website from Google Cache

I recently suffered a SQL injection attack on one of my websites, which meant a large proportion of my MS-SQL tables were overwritten with malware scripts.
I searched Google and found that c.480 pages are cached.
Is there a quick way of downloading all these cached pages in one go, using software?
I don't fancy having to click each cached link and go File > Save in my browser.
bluefezteam

Bluesquirrel WebWhacker can be unleashed on a website to save it and its assets; try pointing it at the cache.

Alternatively, use it on the Wayback Machine here:
http://www.archive.org/index.php

It may be logged in there - my site's in there, so I can actually see what it looked like after 3 redesigns; cool stuff.
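
If you'd rather check programmatically than browse, archive.org has an availability lookup you can query. A rough PHP sketch (the page URL is a placeholder for one of your own pages, and allow_url_fopen must be enabled):

<?php
// Rough sketch, not production code: ask archive.org's Wayback Machine
// whether it holds a snapshot of a page, via its availability lookup.
// The page URL below is a placeholder; needs allow_url_fopen enabled.

$page = 'http://www.example.com/news.asp'; // placeholder: one of your pages
$json = file_get_contents('http://archive.org/wayback/available?url=' . urlencode($page));
$data = json_decode($json, true);

if (!empty($data['archived_snapshots']['closest']['url'])) {
    echo 'Closest snapshot: ' . $data['archived_snapshots']['closest']['url'] . "\n";
} else {
    echo "No snapshot found for $page\n";
}
?>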
Bernard Savonet
Have you tried contacting Google?
fgict

ASKER

Hi fibo

No I have not tried contacting Google.
How would I go about that? What do you think they could provide me?
ASKER CERTIFIED SOLUTION
bluefezteam

fgict

ASKER

Hi bluefezteam

I have reviewed your answer re: downloading from Google using website-downloader software, but the big problem I have is:
When I point it to a cached page, e.g. http://64.233.183.104/search?q=cache:xxxxx
the links within this page are not cached links but the live website links, so it will spider all the dynamic pages I have on the site (e.g. news.asp), and these will return the corrupted content.

I don't know how else to download the cached pages without doing it manually, one by one.
Hmm, try the Wayback Machine on archive.org; you may have some luck there.
Is there a way of listing all the pages that you need to save?

For example, do you still have the sitemap structure as a Google Sitemap (XML doc)? If you do, then maybe it's possible to apply that structure to Google's cache and create a routine to strip all content from the page links defined in the sitemap.

You should be able to strip out the text from the pages automatically using some form of PHP/.Net routine applied in a loop controlled by the sitemap.
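
As a rough PHP sketch of that idea (the sitemap path, output folder, and delay here are assumptions on my part; the cache URL format is the one you quoted above, and Google may well throttle or block rapid automated cache requests):

<?php
// Rough sketch, not production code: read the page URLs out of a Google
// Sitemap and fetch Google's cached copy of each one. Needs
// allow_url_fopen enabled.

$sitemap = 'sitemap.xml';   // assumed: a local copy of your Google Sitemap
$outDir  = 'cached-pages';  // assumed: where to save the recovered HTML

if (!is_dir($outDir)) {
    mkdir($outDir, 0777, true);
}

// Pull every <loc> entry out of the sitemap.
preg_match_all('#<loc>(.*?)</loc>#', file_get_contents($sitemap), $matches);

foreach ($matches[1] as $pageUrl) {
    $pageUrl = trim($pageUrl);

    // Build the cache lookup in the same form as the link quoted above.
    $cacheUrl = 'http://64.233.183.104/search?q=cache:' . urlencode($pageUrl);

    $html = @file_get_contents($cacheUrl);
    if ($html === false) {
        echo "Failed: $pageUrl\n";
        continue;
    }

    // Flatten the page URL into a safe filename and save the HTML.
    $file = $outDir . '/' . preg_replace('/[^A-Za-z0-9._-]/', '_', $pageUrl) . '.html';
    file_put_contents($file, $html);
    echo "Saved: $pageUrl\n";

    sleep(5); // pause between requests; hammering the cache will get you blocked
}
?>

You'd still want to strip Google's cache header bar out of each saved page before reusing the HTML, but this avoids the spidering problem because it never follows links on the live site.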

fgict,
what is the situation now?