How Can I Determine the Root Cause of Apache Threads Maxing Out?

I've run into a rather red-herring-like situation that I'm hoping some of you good folks can shed some light on. First, some background:

This Apache web server is being used as a front end for a J2EE application that accepts incoming connections from clients out in the field rather frequently. Every 60 seconds or so each client sends a message to the J2EE application just to say "Hi, I'm here." There can also be remote interaction with these clients.

The actual issue I've been chasing for months is that, from the J2EE application's point of view, some clients (a random subset) will stop communicating with it. However, the log from a client that is allegedly not reporting shows unknown-host errors for the Apache server.

Taking a thread count of the Apache processes shows that they have maxed out at 503. I'm getting the count with the standard "ps -ef | grep http | wc -l" command. My results are below.
I've also placed an httpd.conf snippet below as well.
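As an aside, that counting pipeline also counts the grep process itself (and anything else whose command line contains "http"), which can pad the total by one or two. A common trick, shown here as a sketch, brackets the first character of the pattern so grep's own ps entry no longer matches:

```shell
# '[h]ttpd' still matches "httpd" processes, but grep's own command line
# contains the literal string "[h]ttpd", which the pattern does not match,
# so the grep process is excluded from the count
ps -ef | grep -c '[h]ttpd'
```

The same count with the grep entry excluded; `-c` replaces the `wc -l`.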

Lastly, we have the following compiled into Apache:


Any help would be much, much appreciated.

Best Regards,

PS: I have changed all of the addresses in the netstat for security purposes, of course.
Netstat Results
Local Address,         Foreign Address,            (state)
local_address_1.www,   foreign_address_1.16194     , CLOSE_WAIT	
local_address_1.www,   foreign_address_2.60599     , CLOSE_WAIT	
local_address_1.443,   foreign_address_2.60839     , CLOSE_WAIT	
local_address_1.443,   foreign_address_3.20613     , CLOSE_WAIT
local_address_1.443,   foreign_address_3.45379     , CLOSE_WAIT
local_address_1.www,   foreign_address_3.45745     , CLOSE_WAIT
local_address_1.www,   foreign_address_3.45775     , CLOSE_WAIT
local_address_1.www,   foreign_address_3.46061     , CLOSE_WAIT
local_address_1.www,   foreign_address_4.11653     , CLOSE_WAIT
local_address_1.443,   foreign_address_4.11923     , CLOSE_WAIT
local_address_1.443,   foreign_address_4.12028     , CLOSE_WAIT
local_address_1.443,   foreign_address_5.35389     , CLOSE_WAIT
local_address_1.443,   foreign_address_6.10749     , CLOSE_WAIT
local_address_1.www,   foreign_address_6.10960     , CLOSE_WAIT
local_address_1.443,   foreign_address_6.11440     , CLOSE_WAIT
local_address_1.443,   foreign_address_7.31565     , CLOSE_WAIT
local_address_1.443,   foreign_address_7.32095     , CLOSE_WAIT
local_address_1.443,   foreign_address_8.27290     , CLOSE_WAIT
local_address_1.443,   foreign_address_8.28009     , CLOSE_WAIT
local_address_1.www,   foreign_address_8.28033     , CLOSE_WAIT
local_address_1.www,   foreign_address_8.3556      , CLOSE_WAIT
local_address_1.www,   foreign_address_9.21388     , CLOSE_WAIT
local_address_1.www,   foreign_address_10.55432    , CLOSE_WAIT
local_address_1.www,   foreign_address_11.55803    , CLOSE_WAIT
local_address_1.www,   foreign_address_12.26118    , CLOSE_WAIT
local_address_1.www,   foreign_address_13.49844    , CLOSE_WAIT
local_address_1.443,   foreign_address_14.s.54361  , CLOSE_WAIT
local_address_1.443,   foreign_address_14.s.55122  , CLOSE_WAIT
local_address_1.443,   foreign_address_15.20497    , CLOSE_WAIT
local_address_1.www,   foreign_address_15.20763    , CLOSE_WAIT
local_address_1.www,   foreign_address_15.45526    , CLOSE_WAIT
local_address_1.443,   foreign_address_16.3557     , CLOSE_WAIT
local_address_1.443,   foreign_address_16.3623     , CLOSE_WAIT
local_address_1.443,   foreign_address_16.3660     , CLOSE_WAIT
local_address_1.443,   foreign_address_16.3691     , CLOSE_WAIT
local_address_1.443,   foreign_address_16.3693     , CLOSE_WAIT
local_address_1.443,   foreign_address_16.3696     , CLOSE_WAIT
local_address_1.443,   foreign_address_16.3705     , CLOSE_WAIT
local_address_1.443,   foreign_address_16.3707     , CLOSE_WAIT
local_address_1.www,   foreign_address_16.3709     , CLOSE_WAIT
local_address_1.www,   foreign_address_16.3712     , CLOSE_WAIT
local_address_1.www,   foreign_address_17.4908     , CLOSE_WAIT
local_address_1.www,   foreign_address_17.54341    , CLOSE_WAIT
local_address_1.www,   foreign_address_18.l.22196  , CLOSE_WAIT
local_address_1.www,   foreign_address_18.l.22356  , CLOSE_WAIT
local_address_1.www,   foreign_address_18.l.46641  , CLOSE_WAIT
local_address_1.www,   foreign_address_19.59834    , CLOSE_WAIT
local_address_1.www,   foreign_address_20.6521     , CLOSE_WAIT
local_address_1.443,   foreign_address_20.6665     , CLOSE_WAIT
local_address_1.443,   foreign_address_20.6701     , CLOSE_WAIT
local_address_1.443,   foreign_address_21.36324    , CLOSE_WAIT
local_address_1.443,   foreign_address_21.36661    , CLOSE_WAIT
local_address_1.443,   foreign_address_22.cp.26188 , CLOSE_WAIT
local_address_1.443,   foreign_address_23.13950    , CLOSE_WAIT
local_address_1.443,   foreign_address_23.14043    , CLOSE_WAIT
local_address_1.www,   foreign_address_23.14109    , CLOSE_WAIT
local_address_1.443,   foreign_address_23.14120    , CLOSE_WAIT
local_address_1.443,   foreign_address_23.14133    , CLOSE_WAIT
local_address_1.443,   foreign_address_23.14150    , CLOSE_WAIT
local_address_1.www,   foreign_address_23.14165    , CLOSE_WAIT
local_address_1.443,   foreign_address_23.14201    , CLOSE_WAIT
local_address_1.443,   foreign_address_23.14226    , CLOSE_WAIT
local_address_1.443,   foreign_address_23.14269    , CLOSE_WAIT
local_address_1.www,   foreign_address_23.14292    , CLOSE_WAIT
local_address_1.www,   foreign_address_23.14322    , CLOSE_WAIT
local_address_1.443,   foreign_address_23.14336    , CLOSE_WAIT
local_address_1.443,   foreign_address_23.14338    , CLOSE_WAIT
local_address_1.443,   foreign_address_24.39104    , CLOSE_WAIT
local_address_1.443,   foreign_address_25.38173    , CLOSE_WAIT
local_address_1.443,   foreign_address_26.58945    , CLOSE_WAIT
local_address_1.443,   foreign_address_26.2175     , CLOSE_WAIT
local_address_1.443,   foreign_address_26.2183     , CLOSE_WAIT
local_address_1.443,   foreign_address_26.2190     , CLOSE_WAIT
local_address_1.443,   foreign_address_26.2224     , CLOSE_WAIT
local_address_1.443,   foreign_address_26.2231     , CLOSE_WAIT
local_address_1.443,   foreign_address_27.2141     , CLOSE_WAIT
local_address_1.443,   foreign_address_27.2144     , CLOSE_WAIT
local_address_1.443,   foreign_address_27.2162     , CLOSE_WAIT
local_address_1.443,   foreign_address_27.2173     , CLOSE_WAIT
local_address_1.443,   foreign_address_27.2176     , CLOSE_WAIT
local_address_1.443,   foreign_address_27.2179     , CLOSE_WAIT
local_address_1.www,   foreign_address_27.2186     , CLOSE_WAIT
local_address_1.443,   foreign_address_27.2190     , CLOSE_WAIT
local_address_1.www,   foreign_address_27.2199     , CLOSE_WAIT
local_address_1.www,   foreign_address_27.2206     , CLOSE_WAIT
local_address_1.www,   foreign_address_27.2209     , CLOSE_WAIT
local_address_1.www,   foreign_address_27.2212     , CLOSE_WAIT
local_address_1.www,   foreign_address_27.2219     , CLOSE_WAIT
local_address_1.www,   foreign_address_27.2221     , CLOSE_WAIT
local_address_1.www,   foreign_address_27.2222     , CLOSE_WAIT
local_address_1.www,   foreign_address_27.2227     , CLOSE_WAIT
local_address_1.443,   foreign_address_27.2231     , CLOSE_WAIT
local_address_1.443,   foreign_address_27.2234     , CLOSE_WAIT
local_address_1.443,   foreign_address_27.51784    , CLOSE_WAIT
local_address_1.443,   foreign_address_28.44566    , CLOSE_WAIT
local_address_1.443,   foreign_address_29.44592    , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4008     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4016     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4037     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4041     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4044     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4047     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4052     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4058     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4065     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4067     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4068     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4071     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4074     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4078     , CLOSE_WAIT
local_address_1.443,   foreign_address_30.4087     , CLOSE_WAIT
local_address_1.443,   foreign_address_31.14485    , CLOSE_WAIT
local_address_1.443,   foreign_address_32.4196     , CLOSE_WAIT
local_address_1.443,   foreign_address_32.4198     , CLOSE_WAIT
local_address_1.443,   foreign_address_33.15291    , CLOSE_WAIT
local_address_1.443,   foreign_address_33.15597    , CLOSE_WAIT
local_address_1.www,   foreign_address_34.50549    , CLOSE_WAIT
local_address_1.443,   foreign_address_35.20621    , CLOSE_WAIT
local_address_1.443,   foreign_address_36.47187    , CLOSE_WAIT
local_address_1.443,   foreign_address_37.31364    , CLOSE_WAIT
local_address_1.443,   foreign_address_38.39666    , CLOSE_WAIT
local_address_1.443,   foreign_address_5.35950     , ESTABLISHED
local_address_1.443,   foreign_address_8.27605     , ESTABLISHED
local_address_1.443,   foreign_address_38.59375    , ESTABLISHED
local_address_1.443,   foreign_address_39.cp.26337 , ESTABLISHED
local_address_1.443,   foreign_address_23.14369    , ESTABLISHED
local_address_1.443,   foreign_address_27.2237     , ESTABLISHED
local_address_1.443,   foreign_address_40.56387    , LAST_ACK
local_address_1.443,   foreign_address_16.3585     , LAST_ACK
local_address_1.443,   foreign_address_16.3589     , LAST_ACK
local_address_1.www,   foreign_address_16.3625     , LAST_ACK
local_address_1.443,   foreign_address_16.3631     , LAST_ACK
local_address_1.443,   foreign_address_41.13230    , LAST_ACK
local_address_1.www,   foreign_address_20.7237     , LAST_ACK
local_address_1.443,   foreign_address_21.36334    , LAST_ACK
local_address_1.www,   foreign_address_23.13854    , LAST_ACK
local_address_1.443,   foreign_address_42.58060    , LAST_ACK
local_address_1.443,   foreign_address_43.59057    , LAST_ACK
local_address_1.443,   foreign_address_27.2094     , LAST_ACK
local_address_1.443,   foreign_address_27.2103     , LAST_ACK
local_address_1.443,   foreign_address_27.2113     , LAST_ACK
local_address_1.443,   foreign_address_27.2124     , LAST_ACK
local_address_1.443,   foreign_address_27.2156     , LAST_ACK
local_address_1.443,   foreign_address_27.2184     , LAST_ACK
local_address_1.www,   foreign_address_30.3944     , LAST_ACK
local_address_1.www,   foreign_address_30.3983     , LAST_ACK
local_address_1.www,   foreign_address_30.3985     , LAST_ACK
local_address_1.443,   foreign_address_30.4018     , LAST_ACK
local_address_1.443,   foreign_address_30.4032     , LAST_ACK
local_address_1.www,   foreign_address_30.4039     , LAST_ACK
local_address_1.www,   foreign_address_30.4042     , LAST_ACK
local_address_1.www,   foreign_address_30.4043     , LAST_ACK
local_address_1.www,   foreign_address_30.4048     , LAST_ACK
local_address_1.443,   foreign_address_44.4176     , LAST_ACK
local_address_1.443,   foreign_address_45.58708    , LAST_ACK
local_address_1.www,   foreign_address_45.59205    , LAST_ACK
# Timeout: The number of seconds before receives and sends time out.
Timeout 300
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
KeepAlive On
# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to 0 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
MaxKeepAliveRequests 100
# KeepAliveTimeout: Number of seconds to wait for the next request from the
# same client on the same connection.
KeepAliveTimeout 15
## Server-Pool Size Regulation (MPM specific)
# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule prefork.c>
StartServers         5
MinSpareServers      5
MaxSpareServers     10
ServerLimit        512
MaxClients         500
MaxRequestsPerChild  500000
</IfModule>
# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule worker.c>
StartServers         2
MaxClients         150
MinSpareThreads     25
MaxSpareThreads     75 
ThreadsPerChild     25
MaxRequestsPerChild  0
</IfModule>
# perchild MPM
# NumServers: constant number of server processes
# StartThreads: initial number of worker threads in each server process
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# MaxThreadsPerChild: maximum number of worker threads in each server process
# MaxRequestsPerChild: maximum number of connections per server process
<IfModule perchild.c>
NumServers           5
StartThreads         5
MinSpareThreads      5
MaxSpareThreads     10
MaxThreadsPerChild  20
MaxRequestsPerChild  0
</IfModule>
# ThreadsPerChild: constant number of worker threads in the server process
# MaxRequestsPerChild: maximum  number of requests a server process serves
<IfModule mpm_winnt.c>
ThreadsPerChild 250
MaxRequestsPerChild  0
</IfModule>
# StartThreads: how many threads do we initially spawn?
# MaxClients:   max number of threads we can have (1 thread == 1 client)
# MaxRequestsPerThread: maximum number of requests each thread will process
<IfModule beos.c>
StartThreads               10
MaxClients                 50
MaxRequestsPerThread       10000
</IfModule>
# NetWare MPM
# ThreadStackSize: Stack size allocated for each worker thread
# StartThreads: Number of worker threads launched at server startup
# MinSpareThreads: Minimum number of idle threads, to handle request spikes
# MaxSpareThreads: Maximum number of idle threads
# MaxThreads: Maximum number of worker threads alive at the same time
# MaxRequestsPerChild: Maximum  number of requests a thread serves. It is 
#                      recommended that the default value of 0 be set for this
#                      directive on NetWare.  This will allow the thread to 
#                      continue to service requests indefinitely.                          
<IfModule mpm_netware.c>
ThreadStackSize      65536
StartThreads           250
MinSpareThreads         25
MaxSpareThreads        250
MaxThreads            1000
MaxRequestsPerChild      0
</IfModule>
# OS/2 MPM
# StartServers: Number of server processes to maintain
# MinSpareThreads: Minimum number of idle threads per process, 
#                  to handle request spikes
# MaxSpareThreads: Maximum number of idle threads per process
# MaxRequestsPerChild: Maximum number of connections per server process
<IfModule mpmt_os2.c>
StartServers           2
MinSpareThreads        5
MaxSpareThreads       10
MaxRequestsPerChild    0
</IfModule>
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
# Change this to Listen on specific IP addresses as shown below to 
# prevent Apache from glomming onto all bound IP addresses (0.0.0.0)
Listen 80
Listen 443

KeepAliveEnabled is a property of the WebLogic plug-in; it governs connections from Apache to WebLogic.

I still recommend turning off KeepAlive, even though it appears you already have it turned off using SetEnvIf.

CLOSE_WAIT occurs when the remote client has closed the connection but the local process hasn't. Keepalive may be one thing preventing Apache from closing the socket, and each such connection counts toward the 500-client maximum.
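To see at a glance which TCP states are consuming the pool, the last column of the netstat output can be tallied. This is a sketch using sample lines in place of live output; in practice you would pipe real `netstat -an` output (filtered to the web server's ports) into the awk:

```shell
# tally connections by TCP state; the sample printf stands in for
# `netstat -an` output like the listing above
printf '%s\n' \
  'local.443 remote.1 CLOSE_WAIT' \
  'local.443 remote.2 CLOSE_WAIT' \
  'local.80  remote.3 ESTABLISHED' |
awk '{states[$NF]++} END {for (s in states) print states[s], s}' | sort -rn
```

A spike in the CLOSE_WAIT count relative to ESTABLISHED points at sockets the server side is failing to close.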

Another thing you can do is reduce MaxRequestsPerChild to a saner value (say 1000) and reduce Timeout to 60 or 120, if you know that none of your requests will take that long.
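Concretely, those suggestions translate into httpd.conf values along these lines (illustrative numbers, to be tuned against your actual traffic):

```apache
Timeout 60                 # fail slow clients sooner than the 300s default
KeepAlive Off              # free a worker as soon as the request completes
MaxRequestsPerChild 1000   # recycle children often to flush stuck sockets
```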

Let me know how it goes,
GoSox2008Author Commented:
In addition, I wrote a little script that takes a netstat every minute. In the snippet below you can see that the thread count escalates very, very quickly.
Fri Oct  3 16:09:00 EDT 2008      20
Fri Oct  3 16:10:00 EDT 2008      26
Fri Oct  3 16:11:00 EDT 2008      22
Fri Oct  3 16:12:00 EDT 2008      22
Fri Oct  3 16:13:00 EDT 2008      23
Fri Oct  3 16:14:00 EDT 2008      21
Fri Oct  3 16:15:00 EDT 2008      22
Fri Oct  3 16:16:00 EDT 2008      22
Fri Oct  3 16:17:00 EDT 2008      20
Fri Oct  3 16:18:00 EDT 2008      21
Fri Oct  3 16:19:00 EDT 2008      21
Fri Oct  3 16:20:00 EDT 2008      39
Fri Oct  3 16:21:00 EDT 2008     107
Fri Oct  3 16:22:00 EDT 2008     166
Fri Oct  3 16:23:00 EDT 2008     225
Fri Oct  3 16:24:00 EDT 2008     296
Fri Oct  3 16:25:00 EDT 2008     351
Fri Oct  3 16:26:00 EDT 2008     404
Fri Oct  3 16:27:00 EDT 2008     464
Fri Oct  3 16:28:00 EDT 2008     502
Fri Oct  3 16:29:00 EDT 2008     503
Fri Oct  3 16:30:00 EDT 2008     502
Fri Oct  3 16:31:00 EDT 2008     503
Fri Oct  3 16:32:00 EDT 2008     503
Fri Oct  3 16:33:00 EDT 2008     503
Fri Oct  3 16:34:01 EDT 2008     502
Fri Oct  3 16:35:00 EDT 2008     503
Fri Oct  3 16:36:00 EDT 2008     503
Fri Oct  3 16:37:00 EDT 2008     503
Fri Oct  3 16:38:00 EDT 2008     502
Fri Oct  3 16:39:00 EDT 2008     502
Fri Oct  3 16:40:00 EDT 2008     503


The question is, how many clients do you expect to service? Since your clients connect at least every 60 seconds, your Apache server needs to be tuned accordingly.

You are running the prefork MPM, which corresponds to this part of the config file:
<IfModule prefork.c>
StartServers         5
MinSpareServers      5
MaxSpareServers     10
ServerLimit         512
MaxClients         500
MaxRequestsPerChild  500000
</IfModule>

There you have a hard limit of 500 child processes. Since you have keepalives enabled with a KeepAliveTimeout of 15 seconds, a rough calculation yields a maximum of around 2000 serviceable clients, assuming simple 'I am here' transactions.
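That estimate can be reproduced with simple arithmetic: each keepalive connection ties up a worker for roughly the 15-second KeepAliveTimeout, and each client needs a worker once per 60-second check-in, so 500 workers cover about 500 × 60/15 clients. This is a back-of-envelope figure that ignores request time and parallel connections:

```shell
max_clients=500        # MaxClients: size of the prefork worker pool
keepalive_hold=15      # seconds a worker stays tied up after each request
checkin_interval=60    # seconds between client check-ins
echo $(( max_clients * checkin_interval / keepalive_hold ))   # → 2000
```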

How does this compare with your true number of client applications ?


GoSox2008Author Commented:
Hi Christophe,
I've got about 250 clients.
I see that you have both SSL and non-SSL connections, so that would double the number of connections per client.

For the sake of experimenting, can you disable keepalives ? This is done with
KeepAlive Off
(instead of On).

Restart Apache after this change and monitor the number of running processes again.

Another question: do you have any insight into the client? Do you know if the client opens multiple connections to the server, like a regular browser would?

Leaving aside your specific case, a regular browser often uses 4 parallel connections to the server (because a web page has many components: CSS, images, etc.). So 256 clients connecting at once would use 1024 connections for at least 15 seconds due to keepalive. Double that if you have a mix of SSL and non-SSL, and you can see how easy it is to bust your 500-connection limit.
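The arithmetic in that scenario (illustrative numbers, assuming a browser-like client):

```shell
clients=256; parallel=4
echo $(( clients * parallel ))       # → 1024 connections held open by keepalive
echo $(( clients * parallel * 2 ))   # → 2048 with both HTTP and HTTPS in play
```

Either figure comfortably exceeds a 500-process pool.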

GoSox2008Author Commented:
Hi Christophe,
I'll elaborate a bit further on the client communication. About 125 clients post a SOAP message to a servlet on the J2EE application server roughly every 60 seconds. The other 125 clients are different: they post to a different servlet, and that post is just a standard HTTP 1.1 POST. The first set of clients uses SSL; the second does not.

The client should not be opening multiple connections to the server...that I know of.  I'll have to check on that specifically.  Your explanation above does make sense if there are multiple connections.

I also have attached a snippet from my httpd.conf file that shows the virtual hosts that I'm dealing with.  I figured since you mentioned the KeepAlive settings that this might be of interest to you.

I will turn KeepAlive to Off in the httpd.conf. Since that takes care of TIME_WAIT states, will it resolve CLOSE_WAIT too?

Thanks for your assistance.  It is much, much appreciated!

Best Regards,
<VirtualHost *:443>
    # SSL Certificate Information Removed
    <Location /eMessage>
        WebLogicPort 80
        SetHandler weblogic-handler
        ConnectRetrySecs 2
        ConnectTimeoutSecs 20
        Debug OFF
        #DebugConfigInfo ON
        FileCaching OFF
        KeepAliveEnabled ON
        KeepAliveSecs 300
        MaxPostSize -1
        WLIOTimeoutSecs 900
        WLLogFile /opt/apache/logs/wlproxy_log
        WLSocketTimeoutSecs 5
        SSLRequire (%{SSL_CLIENT_S_DN_CN} eq "CNMyClient")
    </Location>
</VirtualHost>
<VirtualHost *:80>
	<Location /pgetsession>
	    WebLogicPort 80 
	    SetHandler weblogic-handler
	    ConnectRetrySecs 2
	    ConnectTimeoutSecs 20
	    Debug OFF
	    #DebugConfigInfo ON
	    FileCaching OFF
	    KeepAliveEnabled ON
	    KeepAliveSecs 300
	    MaxPostSize -1
	    WLIOTimeoutSecs 900
	    WLLogFile /opt/apache/logs/wlproxy_log
	    WLSocketTimeoutSecs 5
	#   Debug ALL
	</Location>
</VirtualHost>


GoSox2008Author Commented:
Hi Christophe,
Below is the entire httpd.conf. I've actually set keepalive to off using the setenvif module, now that I think about it. You can see it below.
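The poster's exact directives aren't shown here, but a common mod_setenvif pattern for suppressing keepalive sets Apache's special `nokeepalive` environment variable, along these lines (a sketch, not the poster's actual configuration):

```apache
# disable keepalive for all clients via mod_setenvif; a narrower User-Agent
# pattern can target only the problematic clients instead
SetEnvIf User-Agent ".*" nokeepalive
```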

Best Regards,
GoSox2008Author Commented:
Hi Christophe,
The good news is that the frequency of the threads maxing out has certainly gone down, from once every day or two to about once every five days. I've lowered MaxRequestsPerChild to 1000 and reduced Timeout to 60 as well. KeepAlive has been turned off everywhere. What do you suggest next to eliminate this once and for all?

Best Regards,
Question has a verified solution.