SWB-Consulting asked:

Limit concurrent active requests per user on Apache

Is there a way or module to limit the number of concurrent active connections per user (identified by the PHPSESSID cookie in the request header) with Apache httpd, at the Apache level?
I can't do this at the PHP level: PHP sessions are locked, so there are never actually concurrently executing page requests - one is executed while all the others wait until the active one releases the session.
If there are more active requests than the defined limit, I'm fine with rejecting any further requests until the number of concurrent active requests for that user drops below the limit.

If there is a way to exclude requests whose URLs match certain patterns (e.g. jpg, css, js) from this limit, that would be fine with me.

The issue we are running into is that sometimes, seemingly at random, a single user's browser opens hundreds of connections that show up in "W" status on the Apache status page without a single byte having been sent back to the browser. I assume that's because of the PHP session locking while the first request is slow or stuck.
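For illustration of the locking (standard behaviour of PHP's stock file-based session handler, not specific to our setup): session_start() takes an exclusive lock on the session file and by default holds it until the script ends, so every request carrying the same PHPSESSID is serialized.

    <?php
    // Sketch only: session_start() locks the session file.
    session_start();

    // A page that only reads session data could release the lock early
    // with session_write_close() - but not all of our pages can do that,
    // hence the question about an Apache-level limit.
    $userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;
    session_write_close();

    // Anything slow past this point no longer blocks the same user's
    // other requests; without the early close, it would.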
gheist:

Please give more detail - are you using Linux or Windows? Which Apache version? Can we see phpinfo() and the Apache status page with hostnames/IPs hidden?
SWB-Consulting (Asker):
Even though I believe the OS and the Apache and PHP versions (I have seen this happen with PHP 5.2, 5.3 and 5.4) are pretty irrelevant to the issue and its solution, attached is a screenshot of part of the Apache status page with hostnames, URLs and IPs blurred out.
The entries with the yellow background are all coming from one IP address and all going to the same URL. Essentially all workers in W status are handling requests from that IP address and that browser.
This build-up happened within little more than 5 minutes.
It happens randomly, with different PHP pages that use PHP sessions; it never happens with static files.
That's why I'm looking to limit the number of concurrent "active" connections using the same PHP session - or rather, the same PHPSESSID cookie in the request header.

Also attached is a screenshot of graphs from our server monitoring: web server stats on the left, DB server stats on the right. Other than the massive increase of workers in "sending reply" (W) status between 12:40 and 12:45, there is not much else going on. The little dip in memory utilization is from restarting Apache.
Screen-Shot-2015-09-05-at-7.42.35-AM.jpg
Screen-Shot-2015-09-05-at-8.27.02-AM.png
gheist:

You are using RedHat 6 or a clone, and that is completely relevant to the problem.
You must enable KeepAlive (change the line to "KeepAlive On" in /etc/httpd/conf/httpd.conf) and run "apachectl graceful" (no need to fully restart Apache for a small change).
Afterwards clients will need fewer connections (and Apache workers), and bad clients will not leave (as many) lingering connections behind.
Please save the Apache status and the output of "netstat -anp | grep httpd" before the change and again after an hour, to see whether it fixes the problem.
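Something along these lines - the directive names are stock Apache, the values are just common defaults to illustrate, tune them for your load:

    # /etc/httpd/conf/httpd.conf
    KeepAlive On
    MaxKeepAliveRequests 100   # requests served per persistent connection
    KeepAliveTimeout 5         # seconds to wait for the next request

    # apply without dropping active connections:
    #   apachectl graceful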

If that does not bring enough relief, we can try modules:
mod_evasive would reject (403) requests past certain thresholds
mod_qos can operate on cookies or URLs (see the sketch after this list)
Both are available for your Red Hat 6 from Fedora EPEL.
Neither will address the first, low-level problem/Red Hat deficiency.
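For orientation, a minimal mod_qos sketch - QS_LocRequestLimitMatch is a real mod_qos directive, but the pattern and the limit here are illustrative, not tuned values. It keys on the URL rather than the session cookie, which also gives the static-file exclusion from the question:

    <IfModule mod_qos.c>
        # At most 50 concurrent requests to PHP pages; jpg/css/js requests
        # simply do not match the pattern and therefore stay unlimited.
        QS_LocRequestLimitMatch "\.php$" 50
    </IfModule>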
SWB-Consulting (Asker):

KeepAlive is already enabled, as you can tell from the worker status board in the previously attached Apache status (some workers are in K status).

The problem occurs randomly and rarely - sometimes not for weeks, sometimes more than once per day; sometimes during low-traffic times (at night) and sometimes during high-traffic times during the day (>20k PHP page requests per hour).

I have looked through the mod_evasive and mod_qos documentation and was not able to find any reference to limiting the number of concurrent requests per visitor based on the session, rather than just the IP.
gheist:

Indeed, what you found about the modules is right... There is none that limits connections per cookie in the request, because the request arrives one packet too late - by then the connection has already been accepted.
I still think mod_evasive could help: measure the maximum number of connections per IP for a day, then allow a few more. Connections over that bar get short-circuited with a 403 (sketch below).
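A sketch of both steps - the one-liner assumes IPv4 and plain HTTP on port 80, and the mod_evasive thresholds are illustrative placeholders, not measured values:

    # Sample the current number of established connections per client IP:
    netstat -an | grep ':80 ' | grep ESTABLISHED \
        | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head

    # /etc/httpd/conf.d/mod_evasive.conf
    <IfModule mod_evasive20.c>
        DOSPageCount      10   # hits on the same URI per DOSPageInterval
        DOSPageInterval    1   # seconds
        DOSSiteCount     100   # hits on any URI per DOSSiteInterval
        DOSSiteInterval    1   # seconds
        DOSBlockingPeriod 10   # seconds of 403 responses once tripped
    </IfModule>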

If using prefork -> have you heard of worker?
If using worker -> have you heard of fcgid?
If using fcgid -> have you heard of nginx? (MPM sketch below)
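For context on that ladder: on RHEL 6 the MPM is switched by pointing the init script at the alternative httpd binary. The directive names below are stock Apache and the values are only illustrative - and remember that mod_php is generally not considered safe under the threaded worker MPM, which is why fcgid is the next rung:

    # /etc/sysconfig/httpd
    HTTPD=/usr/sbin/httpd.worker

    # /etc/httpd/conf/httpd.conf (worker section)
    <IfModule worker.c>
        StartServers         4
        MaxClients         300
        MinSpareThreads     25
        MaxSpareThreads     75
        ThreadsPerChild     25
        MaxRequestsPerChild  0
    </IfModule>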
SWB-Consulting (Asker):

Your solution would still be IP-based. On some of our installations we use Cloudflare; as a result, multiple different connections may come from the same IP address but actually be requests from completely independent visitors.

What about mod_interval_limit (https://github.com/yokawasa/mod_interval_limit) together with mod_usertrack (and memcached)? It seems to allow limiting the number of connections within a timeframe per client, based on a session cookie.
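The mod_usertrack half of that is stock Apache; something like this would issue the per-client cookie such a module could key on (the cookie name is illustrative; mod_interval_limit's own directives are documented in its README):

    CookieTracking on
    CookieName     clientid
    CookieExpires  "2 hours"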
gheist:

Multiple connections from the same IP or with the same cookie are absolutely normal and acceptable by the HTTP standard.
That module has one contributor - it works for them only; try others. You can even exempt Cloudflare from all limits.
SWB-Consulting (Asker):

I know and understand that multiple connections from the same IP and cookie are absolutely normal and acceptable, but I need a solution to keep this within certain limits.
Limiting the number of connections per IP address does not seem to be a solution for me.
If that module has only one contributor, are there any other solutions?
ASKER CERTIFIED SOLUTION
gheist
[The accepted solution is only visible to Experts Exchange members.]
SWB-Consulting (Asker):

I will give that a try.
Thank you.
gheist:

Try to improve efficiency per http:#40965035 too. It should not be a problem for any PHP setup to handle 100 requests at a time.
SWB-Consulting (Asker):

Hi,

The server easily handles 400 PHP requests at a time. This is not a problem of handling PHP requests: except for the first one, all the others are stuck on the PHP session lock and are not using any CPU time.
I have read elsewhere that it could also be a symptom of a Slowloris attack.