Techniques to Protect Websites From Bots

David Balaban is a computer security researcher with over 18 years of experience in malware analysis and antivirus software evaluation.
Edited by: David Draper
Nowadays, almost every website is faced with bot activity. Let’s take a look at how modern anti-bot products and services work and what difficulties customers may encounter when deploying and managing them.


Web traffic generated by bots is growing steadily. In 2020, it accounted for more than 40% of all data transmitted on the Internet. Worse, researchers argue that most of these bots are malicious. They can collect sensitive information, manipulate pay-per-click networks, send fraudulent requests to financial systems, or brute-force passwords and promo code combinations. Nowadays, almost every business is faced with bot activity that goes beyond harmless indexing and can cause real damage.

Complex security solutions are on the opposite side of the fence. Their toolkits include special modules geared toward stopping this particular type of dodgy activity. 

Let’s take a look at how such products and services work and what difficulties customers may encounter when deploying and managing them.

The risks stemming from Internet bots

A bot (derived from the word “robot”) is a tool for automating an arbitrary action. In the context of the Internet, these actions can be both useful and harmful. Malicious bots can perform a variety of functions, from relatively harmless content scraping and simulating human clicks to brute-forcing passwords and stealing credentials.

The malicious actions of such programs vary by industry. For example, in the case of recruitment firms, the impact can involve littering their sites with false data. In the banking sector, the threat takes the form of fraud schemes aimed at making unauthorized requests to interbank gateways.

In retail, the problem mostly boils down to bonus hunting (looking for promo codes to get discounts). Another example is the imitation of a website visitor’s actions aimed at manipulating a site’s rankings in search results.

Methods to defend against malicious bots

There are three types of solutions that can counter Internet bot campaigns: web application firewalls (WAFs), DDoS protection systems, and specialized anti-bot systems. The third category stands out from the rest because it is typically the most effective of the three.

A customer can choose an off-the-shelf solution or a cloud service, develop their own tools, or customize existing applications. These mechanisms are only effective as long as they match the growing sophistication of bots, which are constantly getting better at bypassing standard anti-bot programs.

One of the traditional tools allowing you to verify a site visitor and make sure it’s a human rather than a robot is CAPTCHA. The acronym stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”. Such systems have evolved from simple variants that require entering letters or numbers to sophisticated solutions that use cryptographic methods.
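To make the idea concrete, here is a minimal, toy sketch of the challenge-and-verify flow that any CAPTCHA follows: the server generates a task that is easy for a human, stores the expected answer, and checks the visitor’s submission. The function names are illustrative, and real-world CAPTCHAs use far harder tasks (image recognition, behavioral signals, cryptographic puzzles) than this arithmetic example.

```python
import secrets

def make_challenge():
    """Generate a human-solvable task and the answer the server keeps server-side."""
    a = secrets.randbelow(9) + 1
    b = secrets.randbelow(9) + 1
    return f"What is {a} + {b}?", a + b

def check_answer(expected: int, submitted: str) -> bool:
    """Verify the visitor's submission against the stored answer."""
    try:
        return int(submitted.strip()) == expected
    except ValueError:
        return False  # non-numeric garbage, as a crude script might send
```

The security of a real CAPTCHA lies in the difficulty of the task for automation, not in this flow itself, which is the same regardless of challenge type.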

The use of session tokens, generally known as cookies, is another way to identify a real person and differentiate them from a bot. The information they contain speaks volumes about the legitimacy of a site visitor.
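A key property of such tokens is that the server can tell a cookie it issued from one a bot fabricated. A common way to achieve this is to sign the token with a server-side secret. Below is a minimal sketch using Python’s standard `hmac` module; the key and function names are assumptions for illustration, not any particular product’s scheme.

```python
import hmac
import hashlib

# Hypothetical server-side secret; in practice this comes from secure configuration.
SECRET_KEY = b"replace-with-a-real-secret"

def issue_token(session_id: str) -> str:
    """Sign a session identifier so the server can later prove it was not forged."""
    sig = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{sig}"

def verify_token(token: str) -> bool:
    """Accept only tokens carrying a valid signature; fabricated cookies fail here."""
    try:
        session_id, sig = token.rsplit(".", 1)
    except ValueError:
        return False  # malformed token with no signature part
    expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(sig, expected)
```

A bot that simply invents or replays altered cookie values cannot produce a valid signature without the secret, which is one of the signals an anti-bot system can lean on.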

What is more effective – a separate specialized anti-bot system or a comprehensive solution with bot prevention features onboard? There is no short answer. 

Much depends on the implementation of a specific tool as well as the effort and money invested in its creation.

In fact, you cannot ensure ultimate protection against bots by simply installing a solution. In almost all cases, it’s necessary to customize and fine-tune such systems. The multi-vendor approach is one of the best options, allowing a company to create a highly flexible system. 

The only caveat is that this tactic requires a lot of resources.

On the other hand, WAF, anti-DDoS, and anti-bot systems may be regarded as three conceptually different categories that cannot be combined into a single solution. To fend off DDoS attacks, you need a tool that will protect your infrastructure against a large number of relatively simple bots. 

A WAF has to identify every single intrusion attempt and handle it automatically. A bot protection tool, in its turn, can use data about specific actions within a suspicious session to spot malicious activity.

Deploying anti-bot systems

From a technical perspective, the deployment of such solutions involves two techniques:
  • Forwarding all traffic to a third-party cloud for analysis – either directly or via agents installed on endpoints.
  • Integrating the system with business application logic to detect potentially malicious activity on-premises.
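As a rough illustration of the second approach, here is a sketch of an in-process hook that scores each incoming request before the business logic handles it. The header names, suspicious-agent list, and threshold are all illustrative assumptions; a real system combines far more signals.

```python
# Substrings commonly seen in automation frameworks' User-Agent strings (illustrative).
SUSPICIOUS_AGENTS = ("curl", "python-requests", "scrapy")

def score_request(headers: dict) -> int:
    """Return a crude risk score based on a few header heuristics."""
    score = 0
    agent = headers.get("User-Agent", "").lower()
    if not agent:
        score += 2  # real browsers always send a User-Agent
    elif any(token in agent for token in SUSPICIOUS_AGENTS):
        score += 3  # known automation tooling
    if "Accept-Language" not in headers:
        score += 1  # real browsers usually send this header
    return score

def handle(headers: dict) -> str:
    """Decide whether to serve the request or route it to a challenge page."""
    return "challenge" if score_request(headers) >= 3 else "allow"
```

The cloud-forwarding approach works the same way conceptually, except the scoring happens in the vendor’s infrastructure rather than inside the application.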

As for artificial intelligence and machine learning in anti-bot solutions: in some cases, AI yields good results in distinguishing the actions of sophisticated bots from those of real people. However, the effective use of machine learning methods requires storing and processing large amounts of data, which entails additional costs. Meanwhile, simpler statistical analysis methods or heuristics produce similar results in some scenarios.

In the case of an AI-based third-party solution, the customer gets a kind of “black box” with opaque algorithms whose verdicts must be trusted. On the other hand, developing, or even just independently operating, such a system requires significant resources for initially training the algorithms and keeping them in a “combat-ready” state. That said, there is no doubt that artificial intelligence works wonders for identifying complex, multi-functional bots.

How effective should an anti-bot system be? Are there standards and bot detection rates that vendors adhere to, or are the metrics set individually, depending on the industry and the customer? 

Performance metrics are specific to each project. The main assessment is made by the client – whether they are satisfied with the accuracy of bot detection by a particular system. 

Moreover, in some cases, the client can change the system parameters on their own and try to strike a balance between the efficiency of detecting bots and the proper experience of legitimate visitors.

Allocating resources to forestall bots is one more important thing to consider. It takes a security specialist a good deal of expertise and a significant amount of time to detect fraudulent bot activity in a traffic channel. That’s why the makers of out-of-the-box solutions provide no guarantee of their efficiency without extra services, support, and constant product adjustments.

In addition to evil bots, there are many legitimate scripts on the Internet that perform useful functions. A few examples are search engine crawlers and modules that generate snippets and preview pages for social networks. 

How do you distinguish between the “right” and “wrong” bots to avoid blocking the services that are important to the visitor and resource owner?

The implementation of an anti-bot system should begin with a study of the target site’s traffic model to understand what bots are visiting it. Based on that information, you can create a list of allowed scripts that will not be blocked by the security system. 

Creating such a whitelist is not the only solution to the problem. Legitimate crawlers provide extensive data that allows them to be identified and validated. This approach works for the vast majority of high-volume bots operated by social networks, search engines, and other trusted services.
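One widely used validation technique is the double reverse-DNS check: reverse-resolve the crawler’s IP to a hostname, confirm the hostname belongs to the claimed operator’s domain, then forward-resolve that hostname and make sure it maps back to the same IP. The sketch below injects the resolver functions as parameters so it can be exercised without network access; the trusted-suffix list is an illustrative assumption.

```python
import socket

# Illustrative domain suffixes of well-known crawler operators.
TRUSTED_SUFFIXES = (".googlebot.com", ".google.com", ".search.msn.com")

def is_verified_crawler(ip: str,
                        reverse=lambda ip: socket.gethostbyaddr(ip)[0],
                        forward=socket.gethostbyname) -> bool:
    """Validate a self-proclaimed crawler via the double reverse-DNS check."""
    try:
        host = reverse(ip)                  # step 1: IP -> hostname
    except OSError:
        return False
    if not host.endswith(TRUSTED_SUFFIXES):  # step 2: hostname in a trusted domain?
        return False
    try:
        return forward(host) == ip          # step 3: hostname -> IP must round-trip
    except OSError:
        return False
```

The round-trip in step 3 matters: anyone can configure reverse DNS for their own IP to claim a crawler hostname, but they cannot make the operator’s forward DNS point that hostname back at their address.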

Speaking of the economic effect of anti-bot efforts, experts believe this is one of the best-understood information security areas for businesses, as malicious bot activity often causes real financial damage to a company. This foul play is particularly impactful for the finance and retail sectors. There is another facet of the problem, though: it can be difficult to sell an anti-bot solution to a company that cannot accurately calculate the losses from this type of sketchy activity.

Market trends and predictions

In the future, the sophistication of bots that mimic user behavior will grow steadily, and the tools for blocking them will become more transparent to the user. 

The Internet has turned into a full-fledged business environment, and therefore the amount of bot-related traffic is building up every year, accounting for up to half of all the data on the web, according to some estimates.

Specialized solutions will become more affordable and easier to deploy. The enterprise customer market is interested in a service that can be flexibly adjusted to their needs and web resources, given that site architecture can vary significantly from company to company. 

Another area of development for anti-bot solutions is identifying site visitors and controlling access to pages that do not require authorization.

The creators of bot protection systems will move towards the modularity of their solutions. It means that they will gradually shift to providing specialized features designed to work with specific operating systems or devices, as well as components focused on specific areas of digital infrastructures.

Conclusion

The task of countering bots is increasingly relevant as more customers understand its importance. Anti-bot systems have high growth potential, as bot activity on the Internet is steadily escalating. So far, anti-bot solutions are not ubiquitous, but reducing their cost and simplifying their deployment can lead to a significant spike in adoption.
