Tomcat 6 and JBoss (4.0.3 & 4.3)

I'm using the following Java classes that come bundled with Tomcat 6.0 to compress the response object before sending it to the client:

1. CompressionFilter.java
2. CompressionResponseStream.java
3. CompressionServletResponseWrapper.java

The filter uses a default threshold of 128: if the response size is less than 128 bytes the response is not compressed and a plain ServletOutputStream is used, but if it is greater than 128 bytes a GZIPOutputStream is used to compress the response. The threshold can also be set in web.xml, overriding the default.
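For reference, a minimal sketch of the web.xml override mentioned above, assuming the filter class name and the `compressionThreshold` init-param name used by the Tomcat examples webapp (adjust both to your own packaging):

```xml
<filter>
  <filter-name>CompressionFilter</filter-name>
  <filter-class>compressionFilters.CompressionFilter</filter-class>
  <init-param>
    <!-- compress only responses larger than 50 KB -->
    <param-name>compressionThreshold</param-name>
    <param-value>50000</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>CompressionFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```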

1. When I tested this with JBoss 4.3, it didn't work for all transactions when the threshold was set to 128, 90000, or even 50000. All transactions worked only when the value was set to 100000.

2. When I tested this with JBoss 4.0.3 with the threshold set to 50000, it works for all transactions. I didn't test JBoss 4.0.3 with the values 128 or 90000. But I don't understand why JBoss 4.3 fails at thresholds where JBoss 4.0.3 works.

Does anybody know why JBoss 4.3 behaves so strangely here?

(As far as I know, JBoss uses Tomcat as its web container.)
javaCaravan0 asked:
rrz commented:
>java.lang.IllegalStateException: Current state = FLUSHED, new state = CODING_END  
>I'm getting the following error very often    
I am just guessing here; I am not really an expert at this. But it seems to me that the CharsetEncoder is trying to flush when there is nothing to flush. Since calling close on the writer just closes the underlying stream anyway, you could try not closing the writer and only closing the stream:
    public void finishResponse() {
        try {
            if (stream != null)
                stream.close();
        } catch (IOException e) {
            System.out.println("I would add this: IOE thrown in finishResponse method");
        }
    }


 
Maybe an expert could say whether this is safe and whether it could lead to a memory leak.

ramazanyich commented:
First, I would suggest enabling debugging on the filter: set the debug parameter to 1.
It will produce output on STDOUT during processing, and from there you can see at which point the CompressionFilter decides not to compress.
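For example, assuming the `debug` init-param name used by the Tomcat examples webapp, add this to the filter declaration in web.xml:

```xml
<init-param>
  <!-- 1 = log each compression decision to STDOUT -->
  <param-name>debug</param-name>
  <param-value>1</param-value>
</init-param>
```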

javaCaravan0 (Author) commented:
rrz@871311 & objects in particular, and anyone else who could help:

I am going to push the above-mentioned files into production next week to compress any response greater than 50 KB. I have seen a great improvement in response time. The current 50 KB setting works fine with JBoss 4.0.3 (I'll revisit the issue with JBoss 4.3 once I'm done with the production release on JBoss 4.0.3).

Here is another question that concerns me:

1.      My application is spread across three nodes for load balancing (i.e., three dedicated servers). Each node has 8 GB RAM, of which 2 GB is allocated for the JVM heap. As of now (AS is the PRD environment), I notice that the maximum JVM heap memory available on each node is 700 MB, which sometimes drops (on each node) to as low as 72 MB but then jumps back up to 200+ MB.
I want to know whether this could cause an "out of memory" problem when the application starts compressing responses. I understand that CPU utilization will increase due to the time it takes to compress the data, but I'm not sure whether the compression activity will have any adverse impact on memory. Please let me know what you think, and what the remediation plan should be.

 
rrz commented:
I don't know. I am just a JSP expert.

ramazanyich commented:
The memory usage depends on your buffer size. If you define a 50 KB buffer size, the JVM will allocate 50 KB of memory per incoming request. It will be garbage collected once the request is processed, but in general it means that with a 2 GB heap you could serve 2,000,000,000 / 50,000 = 40,000 requests in parallel. That doesn't take other running threads into account, so I would divide by a factor of 4 to be safe: about 10,000 requests.
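The arithmetic above can be sketched in a few lines of Java (the heap size, buffer size, and safety factor are the numbers from this comment; this is only a rough upper bound, not a sizing tool):

```java
// Back-of-envelope estimate of how many requests a heap can serve in
// parallel when each request holds one compression buffer in memory.
public class CompressionCapacity {
    static long maxParallelRequests(long heapBytes, long bufferBytes, long safetyFactor) {
        return heapBytes / bufferBytes / safetyFactor;
    }

    public static void main(String[] args) {
        long heap = 2000000000L; // 2 GB heap
        long buffer = 50000L;    // 50 KB compression buffer per request
        System.out.println(maxParallelRequests(heap, buffer, 1)); // 40000
        System.out.println(maxParallelRequests(heap, buffer, 4)); // 10000, with a 4x safety margin
    }
}
```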

javaCaravan0 (Author) commented:
ramazanyich:

Thanks for the excellent explanation. I'd like to clear up one more doubt. As of now, some of my responses are 300 KB in size. Using GZIP, the size is reduced to 30 KB (hence the dramatic improvement in response time).
In this particular scenario, since the 300 KB response is > 50 KB, the JVM will allocate 50 KB in memory.
Now my question is: what is happening in the AS environment (without GZIP in place yet)? Since the response is 300 KB, isn't the JVM allocating 300 KB in memory per incoming request for this particular transaction? By using GZIP, aren't we actually significantly reducing the memory required (due to compression) to store the response? Please explain.
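As an aside, a compression ratio of roughly 10:1 is plausible for repetitive markup. Here is a small self-contained sketch (not part of the filter code) that measures the effect of GZIPOutputStream on a repetitive page:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Measures how many bytes a payload occupies after GZIP compression.
public class GzipRatioDemo {
    static int gzipSize(byte[] data) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            GZIPOutputStream gz = new GZIPOutputStream(bos);
            gz.write(data);
            gz.close(); // flushes the deflater and writes the GZIP trailer
            return bos.size();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        // Build a repetitive ~300 KB "page", similar in spirit to tabular HTML.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10000; i++) sb.append("<tr><td>row data value</td></tr>\n");
        byte[] page = sb.toString().getBytes("UTF-8");
        System.out.println("original:   " + page.length + " bytes");
        System.out.println("compressed: " + gzipSize(page) + " bytes");
    }
}
```

The exact ratio depends on the content; highly repetitive HTML compresses far better than already-compressed binary data.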

ramazanyich commented:
Normally not (unless your application does some internal buffering, using ByteArrayOutputStreams for example). If you use the standard ServletOutputStream API and file input/output streams, it should not use the whole file size....

javaCaravan0 (Author) commented:
.... "If you use the standard ServletOutputStream API and file input/output streams, it should not use the whole file size...."

Thank you for the response; I would still like more clarification.
1. Using GZIP requires a wrapper and the filter to force the container to use GZIPOutputStream, i.e. the process of generating the response has been customized.
2. When not using GZIP, the process of generating the response is not customized, meaning the container uses ServletOutputStream by default, along with the other methods of the response and other classes implemented by the container.

I couldn't quite understand your explanation:

"If you use the standard ServletOutputStream API and file input/output streams, it should not use the whole file size...."

Can you please elaborate? When the standard ServletOutputStream is used, does the container not keep the response (which in my case is 300 KB for one transaction) in memory, and therefore allocate 300 KB of memory for that transaction? Please explain.

ramazanyich commented:
It depends on the servlet container implementation, of course. But as far as I know, Tomcat flushes data as it goes, so it doesn't keep the response in memory until all data is returned.
By "If you use the standard ServletOutputStream API and file input/output streams, it should not use the whole file size...." I meant the behaviour of the application you deploy (not the CompressionFilter).
I don't know what your application does inside. It is possible that you do some I/O operations in your servlet and by some coincidence use a ByteArrayInputStream (which internally uses a byte array to store data).

javaCaravan0 (Author) commented:
We have JSPs, which are converted to servlets by the container. We are not using ByteArrayInputStream in any of our JSPs.
To make myself clear (please let me know if I understand it right):

When we use GZIPOutputStream, the JVM allocates 50 KB in memory and the response is kept in memory until all of it has been produced. For a request that does not use GZIPOutputStream, memory is still allocated, but memory management is entirely under the container's control: it flushes output to the client whenever a certain threshold is reached, a threshold that is set, managed, and controlled by the container itself.

Please correct me if I'm wrong.

ramazanyich commented:
Yes, your understanding is correct.

javaCaravan0 (Author) commented:
Very urgent:

I've pushed the GZIP files into PRD and I'm getting the following error very often; it's filling our log files very quickly. Any input would be appreciated:

06:31:49,247 ERROR [[eservices]] Servlet.service() for servlet eservices threw exception
java.lang.IllegalStateException: Current state = FLUSHED, new state = CODING_END
 at java.nio.charset.CharsetEncoder.throwIllegalStateException(CharsetEncoder.java:941)
 at java.nio.charset.CharsetEncoder.encode(CharsetEncoder.java:537)
 at sun.nio.cs.StreamEncoder$CharsetSE.flushLeftoverChar(StreamEncoder.java:358)
 at sun.nio.cs.StreamEncoder$CharsetSE.implClose(StreamEncoder.java:414)
 at sun.nio.cs.StreamEncoder.close(StreamEncoder.java:160)
 at java.io.OutputStreamWriter.close(OutputStreamWriter.java:222)
 at java.io.PrintWriter.close(PrintWriter.java:287)
 at compressionFilters.CompressionServletResponseWrapper.finishResponse(CompressionServletResponseWrapper.java:157)
 at compressionFilters.CompressionFilter.doFilter(CompressionFilter.java:193)
 at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
 at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
 at com.getransportation.eservices.mvc.utils.EServicesServletFilter.doFilter(EServicesServletFilter.java:75)
 at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
 at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
 at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:81)
 at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:202)
 at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
 at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
 at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
 at org.jboss.web.tomcat.security.CustomPrincipalValve.invoke(CustomPrincipalValve.java:39)
 at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:159)
 at org.jboss.web.tomcat.tc5.session.ClusteredSessionValve.invoke(ClusteredSessionValve.java:81)
 at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:59)
 at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
 at com.ge.arch.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:69)
 at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
 at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
 at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:856)
 at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:744)
 at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
 at org.apache.tomcat.util.net.MasterSlaveWorkerThread.run(MasterSlaveWorkerThread.java:112)
 at java.lang.Thread.run(Thread.java:595)



According to the stack trace above, the exception is being caused by the following method in "CompressionServletResponseWrapper.java":

    public void finishResponse() {
        try {
            if (writer != null) {
                writer.close();
            } else {
                if (stream != null)
                    stream.close();
            }
        } catch (IOException e) {
        }
    }



The above method is called by "CompressionFilter.java"; here is a code snippet from "CompressionFilter.java":
          if (response instanceof HttpServletResponse) {
                CompressionServletResponseWrapper wrappedResponse =
                    new CompressionServletResponseWrapper((HttpServletResponse)response);
                wrappedResponse.setDebugLevel(debug);
                wrappedResponse.setCompressionThreshold(compressionThreshold);
                if (debug > 0) {
                    System.out.println("doFilter gets called with compression");
                }
                try {
                    chain.doFilter(request, wrappedResponse);
                } finally {
                    wrappedResponse.finishResponse();
                }
                return;
            }



How can I fix this problem? I'm waiting for your response. Thank you for your help.

ramazanyich commented:
Couldn't it be this bug: http://bugs.sun.com/view_bug.do?bug_id=5005426 ?
Which JDK do you use?

javaCaravan0 (Author) commented:
I'm using JDK 1.4 at compile time and 1.5 at run time, with JBoss 4.0.3.

ramazanyich commented:
Which exact update of JDK 1.5 do you use? According to the bug report, it is fixed in 1.5.0_16 and 1.5.0_17.

javaCaravan0 (Author) commented:
Initially I was going to use Java 6 and JBoss 4.3, but the following files from Tomcat 6 don't work some of the time:

1. CompressionFilter.java
2. CompressionResponseStream.java
3. CompressionServletResponseWrapper.java

so I decided to go with the setup we are currently using.

ramazanyich commented:
I would suggest downloading the latest JDK 1.5 release from java.sun.com and trying it with your JBoss. You can also use JDK 1.6 together with JBoss 4.0.3.

javaCaravan0 (Author) commented:
ramazanyich:

Thank you for the suggestion. I think this is the most appropriate approach.

I still would like to understand why GZIPOutputStream is causing this exception. I never saw this exception before using

1. CompressionFilter.java
2. CompressionResponseStream.java
3. CompressionServletResponseWrapper.java

from Tomcat 6.0 in the application.
As you can see, I can still keep this exception out of the log. The stack trace contains these two lines:
at compressionFilters.CompressionServletResponseWrapper.finishResponse(CompressionServletResponseWrapper.java:157)
 at compressionFilters.CompressionFilter.doFilter(CompressionFilter.java:193)



To remove the exception from the log file, I can go to the finishResponse method:

    public void finishResponse() {
        try {
            if (writer != null) {
                writer.close();
            } else {
                if (stream != null)
                    stream.close();
            }
        } catch (IOException e) {
        }
    }



and catch the exception:

public void finishResponse() {
        try {
            if (writer != null) {
                writer.close();
            } else {
                if (stream != null)
                    stream.close();
            }
        } catch (IllegalStateException ise) {
            // swallow the exception from the JDK encoder bug; the response has already been flushed
        }
        catch (IOException e) {
        }
    }



In this way I can keep the exception out of the log. What do you think?

Do you think this is a flaw in Tomcat's Java files? Since the bug exists in the Java API (1.4 and 1.5), Tomcat should have caught this unchecked exception in its method.

Please comment.

Thank you


ramazanyich commented:
I think you can catch the IllegalStateException, because the stream is closed anyway...

rrz commented:
Question:
If IllegalStateException is thrown and caught, will the stream be closed?

ramazanyich commented:
Technically speaking, catching it without gracefully closing the stream could indeed lead to unclosed streams, which still consume JVM memory.
Functionally, since the state is FLUSHED, the content has been streamed to the client; the client has all the data and the TCP connection will be closed. When the TCP connection is closed, the Java thread associated with that connection should also finish its work and clean up its resources.
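To make that concrete, here is a hypothetical variant of finishResponse (a sketch, not the Tomcat source) that still closes the underlying stream when closing the writer throws the encoder bug's IllegalStateException:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.Writer;

public class SafeClose {
    // Hypothetical finishResponse: if writer.close() blows up with the JDK
    // encoder bug's IllegalStateException, fall back to closing the raw
    // stream directly so it is not leaked.
    static void finishResponse(Writer writer, OutputStream stream) {
        try {
            if (writer != null) {
                writer.close();
            } else if (stream != null) {
                stream.close();
            }
        } catch (IllegalStateException ise) {
            // writer.close() failed mid-flush; close the stream directly
            try {
                if (stream != null) stream.close();
            } catch (IOException ignored) { }
        } catch (IOException ignored) { }
    }
}
```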

javaCaravan0 (Author) commented:
The discussion was excellent, although I diverted attention from the original question because I needed answers to some other issues related to it.