Java - many JVMs cause memory problems even though there is a lot of free memory

We have a few Java applications. We are using "-Xms256m -Xmx256m" to start the JVMs on a Linux (64-bit) server with 64 GB of RAM.

There is 43 GB of free memory after the OS is up. So in theory we should be able to start more than 86 JVMs (figuring roughly 512 MB per JVM, heap plus overhead).

But after we start about 10 JVMs, we start getting out-of-memory errors such as "cannot create GC thread out of system resources".

If I run "top", I see there is more than 32 GB of free memory, and the free swap space is pretty high too.

So I am wondering: what system resources are we running out of?
rmundkowskyAsked:

tmwsiyCommented:
I would try setting the -Xmn variable to something less than the max heap size.

Not sure of the exact issue (memory leak, under-allocated heap, etc.), but this seems to be at least partly to blame.

From here:

http://www.caucho.com/resin-3.0/performance/jvm-tuning.xtp

"There are essentially two GC threads running. One is a very lightweight thread which does "little" collections primarily on the Eden (a.k.a. Young) generation of the heap. The other is the Full GC thread which traverses the entire heap when there is not enough memory left to allocate space for objects which get promoted from the Eden to the older generation(s).

If there is a memory leak or inadequate heap allocated, eventually the older generation will start to run out of room causing the Full GC thread to run (nearly) continuously. Since this process "stops the world", Resin won't be able to respond to requests and they'll start to back up.

The amount allocated for the Eden generation is the value specified with -Xmn. The amount allocated for the older generation is the value of -Xmx minus the -Xmn. Generally, you don't want the Eden to be too big or it will take too long for the GC to look through it for space that can be reclaimed."
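
For example, a sketch using an arbitrary young-generation size of 64 MB (the figure is illustrative, not a recommendation): per the formula quoted above, with -Xmx256m and -Xmn64m the old generation gets the remaining 192 MB.

java -Xms256m -Xmx256m -Xmn64m foobar1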

for_yanCommented:

Look at this presentation - it shows many things beyond heap memory that can affect the situation with many JVMs:
http://www.springsource.com/files/uploads/all/pdf_files/news_event/Inside_the_JVM.pdf
rmundkowskyAuthor Commented:
In regards to the first comment,

There is no memory leak. Memory profilers show that the JVMs we start have non-growing memory requirements, and they are not using all of their heap space (regardless of whether it is Eden, ...).

As for "-Xmn", I don't think this matters, because the problem is not internal to any one JVM. In other words, if you look at the memory profiler of each of the 10 running JVMs, they are running just fine with 256MB of memory. But when you try to start up a new JVM, that one throws an error; even though there is a lot of free system memory available still.  And my understanding is that a JVM is fully independent of another JVM.  So, if one JVM is playing fine in its own virtual computer (with its own heap), it will not affect another JVM that is off playing in its own heap.

Now, maybe I misunderstand how Java JVMs work.

We basically run the different JVMs like this:

java -Xms256m -Xmx256m foobar1
java -Xms256m -Xmx256m foobar2
java -Xms256m -Xmx256m foobar3
java -Xms256m -Xmx256m foobar4



---------------------------------------------------

As for the second comment, I'll take a look at the slides and see if anything applies.
tmwsiyCommented:
What version of the JVM are we talking about here?
rmundkowskyAuthor Commented:
java version "1.6.0_27"
Java(TM) SE Runtime Environment (build 1.6.0_27-b07)
Java HotSpot(TM) 64-Bit Server VM (build 20.2-b06, mixed mode)
falterCommented:
Just a hint:
check the system resources allowed for the user used to start your program.

Try "ulimit -a" to see the current settings - restrictions on system resources defined on a per-user basis.

You may be running out of file handles (the maximum number of open files allowed for one user)
or the number of processes ...
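
For example, with standard Linux commands (the two limits most relevant here shown separately):

ulimit -a    # show all per-user resource limits
ulimit -n    # max open file descriptors
ulimit -u    # max user processes (on Linux, each Java thread counts against this)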
rmundkowskyAuthor Commented:
I'll take a look on Monday. Based on reading the slides, I am guessing the process heap cannot be created because a GC error occurred. This might be related to user restrictions.
rmundkowskyAuthor Commented:
Well, I have not had time to try different approaches to this problem. I did note this web page ( http://stackoverflow.com/questions/4130312/how-many-threads-can-a-java-vm-support-in-linux ), which may explain the issue, as you both partly noted. It seems that a Java thread's stack lives outside the heap space; this likely caused the problem. Unfortunately I will not have time to try things out until the next release (months from now). If I find anything I will note it here. Thanks all!
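
If per-thread stacks do turn out to be the culprit, the usual mitigation would be to shrink them with the standard -Xss option. A sketch; the 512k value is just an illustrative guess, not something we have tested:

java -Xss512k -Xms256m -Xmx256m foobar1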
rmundkowskyAuthor Commented:
Turns out that this was not a stack issue. On 64-bit systems, most people state that the stack getting filled up is a non-issue (granted, you can still reduce the stack size allocated per thread). The problem was related to ulimit settings. It turns out you can get complaints about being out of memory that are actually due to running out of open file handles and/or processes. Increasing these settings for our users in /etc/security/limits.conf fixed things.
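
For illustration, entries along these lines in /etc/security/limits.conf raise those two limits; the username and numbers are placeholders, not the exact values we used:

appuser  soft  nofile  8192
appuser  hard  nofile  8192
appuser  soft  nproc   4096
appuser  hard  nproc   4096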
rmundkowskyAuthor Commented:
One other comment: processes kicked off via xinetd have a max "number of open files" of 1024, which is not affected by changes to the PAM settings (/etc/security/limits.conf). The workaround for this is to have xinetd run the process as root and use "su -c" to switch to the user, so that the PAM settings get picked up.
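
A minimal sketch of that workaround, assuming a hypothetical xinetd service and an application user named appuser (names and paths are placeholders). In the xinetd service entry, run a wrapper script as root:

user   = root
server = /usr/local/bin/run-foobar.sh

Then in /usr/local/bin/run-foobar.sh, switch to the real user so a fresh PAM session applies the limits.conf settings:

#!/bin/sh
exec su appuser -c "java -Xms256m -Xmx256m foobar1"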