Java thread consuming high load on Linux

We have a Tomcat application running on JDK 1.4 on RHEL 3. Sometimes one of the Java threads uses around 30-50% of the CPU (as shown by the top command), pushing the system load average above 1 (normally it is 0.1-0.2), which makes the application very slow or even inaccessible at times. It is only one thread that consumes so much CPU (nearly 50%); CPU consumption by the others does not go over 5%. The functional team says they are not performing any extra activities when the issue occurs, so I need to know what is causing this Java thread to consume so many resources.

Some findings -

1. The address space (VmSize) of all the threads is around 1.5 GB (the per-process limit on 32-bit Linux is 3 GB).
2. We used 'kill -3 <pid_of_java_thread>' and generated a thread dump in /tmp, but it is only 16 KB in size (we expected it to be in GBs).

So please provide info on how this Java thread can be analyzed.
makk2010 asked:
 
roemelboemel commented:
I would also guess that this is caused by the garbage collector.
First I would run the application with the "-verbose:gc" option, which prints details and statistics about garbage collector activity; other options show even more detail. You can also change some of the GC's parameters. Besides the default garbage collector there are alternative collectors that can be used (basically you can trade GC throughput against application responsiveness).
More details here
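
For example, on a 1.4.2 VM the logging could be enabled roughly like this (a sketch only; -verbose:gc is standard, but the -XX and -Xloggc flags may vary by exact build, and the JAVA_OPTS hook assumes Tomcat's usual startup script):

$ JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log"
$ export JAVA_OPTS

To try a different collector, e.g. the concurrent one, you could additionally add -XX:+UseConcMarkSweepGC and compare the GC log before and after.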
 
objects commented:
Sounds like the load is possibly caused by garbage collection.

> 2. We used 'kill -3 <pid_of_java_thread>' and generated a thread dump in /tmp, but it is only 16 KB in size (we expected it to be in GBs).

it contains a stack trace for each thread, so it's not going to be huge
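
If it helps, here is one common way to tie the hot thread you see in top back to a stack trace in that dump (a sketch; the PIDs are made up). On this vintage of Linux each Java thread usually appears with its own PID in top, and the dump labels every thread with nid=0x<hex>:

$ printf '0x%x\n' 2347          # hypothetical PID of the hot thread from top
0x92b
$ kill -3 <pid_of_java_thread>
$ grep 'nid=0x92b' /tmp/<your_thread_dump_file>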

 
makk2010 (author) commented:
Apologies for the late reply. The garbage collector suggestion sounds good, but unfortunately I am not a developer or programmer who could implement it, and we have no support from the application team. So we are looking for some tool or command to take a dump of the memory used by the Java process at the time the issue occurs, so we can tell whether memory consumption is the problem and, if it is, add more memory to the server.
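
One low-level option (a sketch; gcore ships with the gdb package on RHEL, and the output path is an assumption) is to grab a raw core image of the running JVM without stopping it. Its size should roughly match the process's VmSize:

$ gcore -o /tmp/java-core <pid_of_java_process>
$ ls -lh /tmp/java-core.*       # expect on the order of 1.5 GB here

Analyzing a raw core for Java heap contents is hard, though, so treat it mainly as a way to confirm how much memory the process really holds.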
 
roemelboemel commented:
With jconsole you could look "inside" the Java process and see its status.
 
makk2010 (author) commented:
It seems jconsole is available with JDK 1.5 and higher, but we have Java 1.4:

$ java -version
java version "1.4.2_05"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.2_05-b04)
Java HotSpot(TM) Client VM (build 1.4.2_05-b04, mixed mode)

 
roemelboemel commented:
As far as I remember it is somehow possible to add JMX to a 1.4 JVM as well and then run the jconsole client on 1.5+. But RHEL 3 and JDK 1.4 are quite old these days. If you're stuck on 1.4 it is probably easier to just add some memory (which is very cheap these days), if possible, and see whether that improves the situation. Another option would be to restart the application periodically during off-business hours.
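
A minimal sketch of the periodic-restart idea, assuming a standard Tomcat layout (the /opt/tomcat path and the times are made up):

$ crontab -e
# bounce Tomcat at 03:00 every night
0 3 * * * /opt/tomcat/bin/shutdown.sh && sleep 30 && /opt/tomcat/bin/startup.sh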
 
makk2010 (author) commented:
Just got a hint that it could be due to garbage objects in memory, but I could not find any tool or utility to collect a Java heap dump, perhaps because of the JVM version being used.
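
For the record, JDK 1.4 does ship with the hprof agent, which can at least write heap allocation statistics; a sketch (the sub-option syntax may vary by build, and hprof adds noticeable overhead, so it suits a test run better than production):

$ JAVA_OPTS="$JAVA_OPTS -Xrunhprof:heap=sites,depth=8,file=/tmp/java.hprof.txt"
$ export JAVA_OPTS

The profile is written to /tmp/java.hprof.txt when the JVM exits.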