  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 640
  • Last Modified:

What is the best way to counteract spiders, crawlers, and bots on our website?

Folks,

We're running Windows Small Business Server 2003, and we're having problems with various crawlers sucking up bandwidth (particularly Googlebot, MSNBot, and Yahoo's Inktomisearch).  What are the best ways to counteract their usage?

We've started blocking IP ranges, but that seems to help only a little, and I figure it's not a permanent solution anyways.

We've got robots.txt set properly as well as the Meta tags in the header of each page.

I've read about using traps like a 1 x 1 px transparent bitmap image link to another page that has redirects back into itself with like a 20-second delay.  Is this still a good solution, or have spiders been made smarter?  Any other ways to make bad bots pay for their crimes?
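Sketched concretely, the trap I've read about might look something like this (the /trap/ path and image name here are just placeholders):

```
<!-- A 1 x 1 px transparent image link that human visitors never see or click.
     Disallow the target in robots.txt so well-behaved bots skip it; anything
     that follows the link anyway can be logged and blocked. -->
<a href="/trap/"><img src="/images/pixel.gif" alt="" width="1" height="1"></a>
```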

I'm not the main network person here, but I am his b----, so let me know if I can provide any more information.

--J
Asked by: jammerms
2 Solutions
 
blandyukCommented:
Are you running ASP pages? You could read the "User-Agent" header in the HTTP request. Most spiders include a link to a page describing their crawler, like Google's:

http://www.google.com/bot.html

It would look something like:

User-Agent: Mozilla/5.0 (compatible; MSIE 6.0; Windows NT 5.1, http://www.google.com/bot.html)

Once you have compiled a database of spiders, you can search for them in the header and call "Response.End()", saving bandwidth.

Not an easy method but at least you wouldn't have to worry about finding out all the IP ranges they have, which I can imagine is a lot!
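As a sketch of that approach (in Python rather than ASP, and with an illustrative signature list drawn from the bots named in this thread, not a complete database):

```python
# Sketch of the User-Agent filtering idea: match the request's
# User-Agent header against known bot signatures and end the
# response early when one is found.
BOT_SIGNATURES = [
    "http://www.google.com/bot.html",  # Googlebot's info URL
    "msnbot",
    "inktomisearch",
]

def is_known_bot(user_agent):
    """Return True if any known bot signature appears in the User-Agent."""
    ua = user_agent.lower()
    return any(sig.lower() in ua for sig in BOT_SIGNATURES)

ua = "Mozilla/5.0 (compatible; MSIE 6.0; Windows NT 5.1, http://www.google.com/bot.html)"
print(is_known_bot(ua))  # True
```

In ASP terms, a True result here is the point where you'd call Response.End() to stop serving the page.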
 
PugnaciousOneCommented:
Most spiders (not all) respect the robots.txt file as well.  You can create one to disallow specific bots.  Here's an easy tool:  ( http://www.mcanerin.com/EN/search-engine/robots-txt.asp )
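For instance, a robots.txt that turns away specific crawlers by name while leaving the site open to everyone else might look like this (the rules below are illustrative, and whether a given bot honors them is up to the bot):

```
# Block specific crawlers entirely
User-agent: msnbot
Disallow: /

# Yahoo/Inktomi's crawler identifies itself as "Slurp"
User-agent: Slurp
Disallow: /

# Everyone else: no restrictions
User-agent: *
Disallow:
```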
 
jammermsAuthor Commented:
PugnaciousOne,
We've got the robots.txt set.  It seems that Inktomisearch and msnbot are the big culprits.  The googlebots seem to respect robots.txt.

blandyuk,

I'll definitely follow through with this suggestion if I can.  That's an interesting approach.




Keep the good ideas a-comin'.

--J
 
Rich RumbleSecurity SamuraiCommented:
There are a number of files or meta tags you can add: noindex, nofollow, robots.txt ( http://www.robotstxt.org/wc/faq.html#prevent ). All of these can be, and are, ignored by spiders; maybe not by default, but bots can be set to do so. Detection, account lockout (if possible), and IP blocking are the tried-and-true methods. Our corporation looked into this extensively; it's all about detection and reaction. We lock out abusers' accounts and block IPs indefinitely, and per the contract they've signed, we get paid to allow them back in.
Here are some interesting approaches as well: http://palisade.plynt.com/issues/2006Jul/anti-spidering/
http://www.robotstxt.org/wc/meta-user.html
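The per-page meta tag version goes in the <head> of each page:

```
<meta name="robots" content="noindex, nofollow">
```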
-rich
 
jammermsAuthor Commented:
richrumble,

We've got robots.txt set properly as well as the Meta tags in the header of each page.


That palisade.plynt.com link is really interesting.



Everyone,
I've read about using traps like a 1 x 1 px transparent bitmap image link to another page that has redirects back into itself with like a 20-second delay.  Is this still a good solution, or have spiders been made smarter?  Any other ways to make bad bots pay for their crimes?

Thanks again for the input.
 
jammermsAuthor Commented:
richrumble,

I see the part about traps in the Palisade article.  Thanks again for the pointer.




I'll leave this open over the weekend to see if any new ideas get posted in the meantime.

Thanks,
J
 
blandyukCommented:
With regards to the ASP code to get the User-Agent:

Request.ServerVariables("HTTP_USER_AGENT")

You could simply do an "InStr" on the User-Agent for particular strings associated with bots. If the result is greater than 0 (InStr returns the 1-based position of a match, or 0 if none), Response.End() it. Three easy ones to block:

http://www.google.com/bot.html
stumbleupon.com
Girafabot;

Here are some User-Agents I've taken from my tracking logs which are clearly bots:

Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; stumbleupon.com 1.926; VNIE5 RefIE5; .NET CLR 1.1.4322)
Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 4.0; Girafabot; girafabot at girafa dot com; http://www.girafa.com)

I'll post some more when I find them.

Obviously you're going to have to be careful about what you specify, as you could easily block actual users :( If you are specific, you shouldn't have a problem.
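Putting those pieces together, here's the same InStr-style check sketched in Python (classic ASP's InStr returns the 1-based position of a match and 0 when there's none, so "found" means a result greater than 0; Python's "in" gives the same yes/no answer directly):

```python
# Mirror of the InStr-on-User-Agent check, using the strings
# listed above. Full, specific strings and exact case are what
# keep real users from being blocked by accident.
BLOCK_STRINGS = [
    "http://www.google.com/bot.html",
    "stumbleupon.com",
    "Girafabot",
]

def should_block(user_agent):
    # Case-sensitive substring search, like a plain InStr call.
    return any(s in user_agent for s in BLOCK_STRINGS)

girafa_ua = ("Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 4.0; Girafabot; "
             "girafabot at girafa dot com; http://www.girafa.com)")
print(should_block(girafa_ua))  # True
```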
 
jammermsAuthor Commented:
It turns out we're just doing HTML for our website, so the ASP solutions will have to wait.

I did notice that our robots.txt had a capital R, so I changed it to lowercase to see if that would help.

Thanks for the pointers, people.
