Jim Furnier

asked on

HAProxy rejecting connections with low resource usage

I'm having issues with my HAProxy servers (running Ubuntu 16.04) rejecting new connections (or timing them out) after a certain threshold. The proxy servers are AWS c5.large EC2 instances with 2 CPUs and 4GB of RAM. We run two of them with the same configuration: one handles WebSocket connections, which typically carry 2K-4K concurrent connections at a request rate of about 10/s; the other handles normal web traffic with nginx as the backend, at about 400-500 concurrent connections and 100-150 requests/s. Typical CPU usage for both is about 3-5% on the haproxy process, with 2-3% of memory used on the WebSocket proxy (40-60MB) and 1-3% on the web proxy (30-40MB).

Per the attached config, HAProxy runs as one process with two threads, mapped across both CPUs. Both types of traffic are typically 95% (or higher) SSL. I've been watching the proxy stats using watch -n 1 'echo "show info" | socat unix:/run/haproxy/admin.sock -' to see if I'm hitting any of my limits, which does not seem to be the case.
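
For reference, here's a filtered version of that command that pulls just the limit-related counters (a minimal sketch; counter names as HAProxy 1.8 prints them in "show info", socket path from the config below):

watch -n 1 "echo 'show info' | socat unix:/run/haproxy/admin.sock - | grep -E 'Maxconn|Maxsock|CurrConns|ConnRate|ConnRateLimit|SslRate'"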

The issues start during high-traffic periods, when our WebSocket concurrent connections get up to about 5K and the web request rate gets up to 400 requests/s. I mention both servers because I know the config can handle the high concurrent connections and request rate individually, so I must be missing some other resource limit being reached. Under normal conditions everything works just fine; when the issues appear, they are ERR_CONNECTION_TIMED_OUT type errors (from Chrome). I never see any 502 errors, nor do I see any other process using more CPU or memory on the server. I'm also attaching some other possibly relevant configs, such as my limits and sysctl settings.

Any ideas what I might be missing? Am I reading top and ps aux | grep haproxy wrong and seeing the wrong CPU/memory usage? Am I missing some TCP connection limit? The backend servers (nginx/websocket) are being worked, but never seem to be taxed. We've load tested these with far more connections and traffic, and we are limited by the proxy long before we limit the backend servers.
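
In case it's relevant, this is how I'm checking the haproxy process's file-descriptor usage against its limit (a rough sketch; assumes pgrep -o returns the main haproxy PID and you're running as root):

ls /proc/$(pgrep -o haproxy)/fd | wc -l               # open FDs right now
grep 'open files' /proc/$(pgrep -o haproxy)/limits    # per-process limit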

Thanks a lot.

haproxy.cfg
global
    ulimit-n 300057
    quiet
    maxconn 150000
    maxconnrate 1000
    nbproc 1
    nbthread 2
    cpu-map auto:1/1-2 0-1

    daemon
    stats socket /run/haproxy/admin.sock mode 600 level admin
    stats timeout 2m
    log 127.0.0.1:514 local0
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    ssl-default-bind-options no-sslv3 no-tlsv10
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL:!RC4

defaults
    maxconn 150000
    mode http
    log global
    option forwardfor
    timeout client 30s
    timeout server 120s
    timeout connect 10s
    timeout queue 60s
    timeout http-request 20s

frontend default_proxy
    option httplog
    bind :80
    bind :443 ssl crt /etc/haproxy/ssl.pem
    ... acl stuff which may route to a different backend
    ... acl for websocket traffic
    use_backend websocket if websocket_acl
    default_backend default_web

backend default_web
    log global
    option httpclose
    option http-server-close
    option checkcache
    balance roundrobin
    option httpchk HEAD /index.php HTTP/1.1\r\nHost:website.com
    server web1 192.168.1.2:80 check inter 6000 weight 1
    server web2 192.168.1.3:80 check inter 6000 weight 1

backend websocket
    #   no option checkcache
    option httpclose
    option http-server-close
    balance roundrobin
    server websocket-1 192.168.1.4:80 check inter 6000 weight 1
    server websocket-2 192.168.1.5:80 check inter 6000 weight 1



Output from haproxy -vv:
HA-Proxy version 1.8.23-1ppa1~xenial 2019/11/26
Copyright 2000-2019 Willy Tarreau <willy@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE2 version : 10.21 2016-01-12
PCRE2 library supports JIT : yes
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
    [SPOE] spoe
    [COMP] compression
    [TRACE] trace



limits.conf:
* soft nofile 120000
* soft nproc 120000



sysctl.conf:
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_syncookies=1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 50000
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 50000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.core.netdev_max_backlog = 50000
fs.epoll.max_user_instances = 10000



Typical ps aux | grep haproxy output under load, with 330 concurrent connections and 80 req/s:
root      8122  4.5  1.2 159052 46200 ?        Ssl  Jan28  40:56 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 29790
root     12893  0.0  0.3  49720 12832 ?        Ss   Jan21   0:00 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 29790


David Favor

The answer is likely buried in your HAProxy or NGINX logs.

No guessing about this.

Your logs will likely show enough detail you can debug the problem.

Tip: If you're running a full LEMP Stack (MariaDB or MySQL), 2x CPUs + 4G is likely insufficient for significant traffic.

So... best also to check top (for swapping) + free -h (memory allocation) + mysqltuner (database memory requirements).

Also "iotop -P -a -d 1", then hit the left arrow until disk write I/O is shown.

The above tests, during time of problem, may provide additional debugging info.
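
The above, roughly, as concrete commands (a sketch; exact flags can vary slightly by distro version):

top -b -n 1 | head -n 15    # one-shot snapshot: load averages, CPU, swap
free -h                     # memory allocation, human-readable
iotop -P -a -d 1            # cumulative per-process disk I/O (interactive)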
Jim Furnier (ASKER)

Hi David,

Thanks for responding. This box is only running haproxy, all other services are on their own boxes.

Nginx isn't showing me any errors, and I'm not seeing any "pool seems busy" type errors from php-fpm.

Although no swap has been configured, top and htop both show about 300MB of 3.62GB of memory used. About 2GB is buffered/cached, but that should still leave plenty of free memory.

iotop shows nothing significant writing to disk, which makes sense as the only thing really writing should be log files, and haproxy isn't producing more than 1 line every few minutes.

Haproxy's log isn't showing me much at the moment. I've configured it to send notices to an admin log, but all I see (occasionally) are:
Blocking cacheable cookie in response from instance default_web, server web#

Here's the logging config:
# Collect log with UDP
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514

# Creating separate log files based on the severity
#local0.* /var/log/haproxy-traffic.log
local0.notice /var/log/haproxy-admin.log



Perhaps there's some extra logging I can enable to show me something more useful?
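
One thing I could try is uncommenting the traffic-log line from the rsyslog config above, to temporarily capture every request:

local0.* /var/log/haproxy-traffic.log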

Thanks a lot.

noci

With so many connection requests... do the sockets have enough time to get closed by the OS? (There is a mandatory wait time of 2 minutes.)

(This allows the FIN - FIN/ACK - ACK sequences to be processed, on both sides.)

It's a wild guess, as there is not a lot to go on.

netstat -ant  | grep TIME_WAIT | wc -l

might show something. Maybe also grep on the internal address of the haproxy server:

netstat -ant  | grep TIME_WAIT | grep ServerIP | wc -l


Can the haproxy server still open sockets to the backend (are there enough free ephemeral ports)?
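
Or tally all TCP states in one go, if ss from iproute2 is available (same idea as the netstat lines above):

ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn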

Jim Furnier (ASKER)

Hi Noci,

Thanks for the reply. The most waiting connections I saw was 75. I usually see about 900-1100 file descriptors in use; 1100+ is where I start to see problems and the site just doesn't load, though when it does, it's super speedy. I even added an extra backend server to see if that would help, but it didn't change anything.

Anything else I could look at?

Thanks a lot.

noci

I am not looking for active connections, but CLOSED connections. Those port numbers will be unusable for 2 minutes.

(esp. a problem on the INSIDE interface.)


So it's not about ACTIVE connections but dying connections. 

If you only ever saw 1100 ports in TIME_WAIT, then it should not be an issue (that's not the same as 1100 ESTABLISHED ports).

After the close (fd freed), the network stack needs to clear all remaining traffic, and that takes 2 minutes and blocks reuse of that port number.


IF the request rate is 100-150/s (say 150 sessions: open, some transfer, close), then you consume about 18K ports per 2-minute TIME_WAIT window.

Which should be sufficient; you can view the port range with: cat /proc/sys/net/ipv4/ip_local_port_range
The default range is about 30K ports.
The default is 30K ports. 

But slightly higher load might give problems.
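
Back-of-the-envelope (assuming 150 new backend connections/s and the 120-second TIME_WAIT):

echo $(( 150 * 120 ))                                                 # 18000 ports tied up at steady state
awk '{ print $2 - $1 + 1 }' /proc/sys/net/ipv4/ip_local_port_range   # usable ephemeral ports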

Yeah I have the port range set to:
1024      65023
David Favor

If there's nothing else on the box, the problem likely relates to running an old kernel, as Xenial (16.04) is very old compared to all the kernel rewrites available in both 4.15 (Bionic) and 5.4 (Focal).

Since Focal releases 2020-04-23, it's likely best to install Focal when it comes out, then see if the problem magically disappears.

Most of the IP stack (TCP) tunings developed over the years were rolled into Bionic (kernel 4.15); once I switched to Bionic, all the weird connection problems seemed to disappear.

If I were debugging this I'd start with either Bionic or Focal, run HAProxy 2.1, then if any problems occurred, raise HAProxy logging verbosity to debug the problem.

You're running a very old kernel and a very old version of HAProxy, so you might be wrestling with problems that have no fix in your versions, but have been fixed in more recent kernels or more recent versions of HAProxy.

Generally it's best to start with the current kernel + current HAProxy as your baseline for testing.
As a test, you can always lease a KimSufi machine for $5/month, install the latest Ubuntu + HAProxy, then retest.

If the problem magically disappears, you know a kernel fix, an HAProxy fix, or an interaction of the two has resolved the problem you're seeing.
Note: one last consideration. If you're running any iptables UDP rate-limiting rules on either your HAProxy machine or the target machine, disable all of these rules and retest.
Jim Furnier (ASKER)

Turns out the answer was staring me in the face the whole time. I had set maxconnrate to 1,000. However, show info was showing me a lower connection rate of between 10-15, so I didn't think I was hitting that limit. I was only able to sustain a maximum of 500 requests/s (confirmed by my backend servers), with each request requiring one connection to the client and a second to the backend. Thus, I was using 1,000 connections per second.

I removed this limit and I was able to sustain a higher connection rate.
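
For anyone who hits the same thing, here's how to compare the configured limit against the live rate on the admin socket (a sketch; counter names as HAProxy 1.8 prints them in "show info"):

# The offending global setting, now removed:
#   maxconnrate 1000
echo "show info" | socat unix:/run/haproxy/admin.sock - | grep -E 'ConnRate|ConnRateLimit|MaxConnRate'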

Thank you for all the assistance.