
Reduce non-heap memory size

st10470

Working on a project of about 300,000 lines of code, we have seen the non-heap memory size grow from about 40 MB to 65 MB during a well-identified part of the server lifecycle, one that involves database access (Solid 4.5 through JDBC) and network acquisition (java.nio).
We would like to reduce this non-heap footprint, if at all possible.
Failing that, we will at least need to be able to explain it.

What are the most likely root causes of non-heap memory growth?

From the JVM spec, 2nd edition, I gather that the method frame is where most stack memory is allocated, since a frame is created for each method invocation and contains:
- the local variable array
- the operand stack
and other, smaller areas:
- a reference to the runtime constant pool
- space for the method return value
The only leads I have found so far are:
- split methods into smaller units to shorten the lifetime of local variables
- avoid runaway recursive methods

This task looks exhausting and would probably yield little benefit. What are your views on this?
Do you know of other leads to follow to reduce the stack size, in the code, in the architecture, or through JVM tuning?
Or, on the contrary, could I show that it cannot be reduced, because this is a private part of the JVM that is already optimized?
We use J2SE, with JRE 1.5.0_08, under Linux 2.6.18.

The application launches at most about 200 threads, and that number is static; reducing the stack size with -Xss does not seem to have any effect.
The JVM spec lets implementors honor this parameter or ignore it, and it seems that the implementation we use ignores it.
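To check whether stack size requests are honored at all on our JVM, we could run a small probe; a minimal sketch (class name and sizes are purely illustrative), using the Thread constructor that takes an explicit stackSize, which, like -Xss, the VM is free to ignore:

public class StackDepthProbe {

    // Last recursion depth reached before the stack overflowed.
    static volatile int maxDepth;

    static void recurse(int n) {
        maxDepth = n;
        recurse(n + 1);
    }

    public static void main(String[] args) throws InterruptedException {
        // Requested stack sizes in bytes (illustrative values only).
        long[] sizes = {64 * 1024, 256 * 1024, 1024 * 1024};
        for (final long size : sizes) {
            Thread t = new Thread(null, new Runnable() {
                public void run() {
                    try {
                        recurse(0);
                    } catch (StackOverflowError expected) {
                        // maxDepth now holds roughly how many frames fit
                    }
                }
            }, "probe-" + size, size);
            t.start();
            t.join();
            System.out.println(size + " bytes requested -> depth " + maxDepth);
        }
    }
}

If the reported depth does not change with the requested size, that would confirm the stack size hint is being ignored by this implementation.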
Thanks in advance for your answers or comments.

cweiblen

Before spending a lot of time on this, can you quantify how it is affecting application performance? Are you sure it is actually the cause of a problem?

Which "non-heap" memory is increasing: code cache or permanent space? I assume you are using the server JVM (not the client)?

st10470

There is currently no real problem, since the performance requirements (RAM and CPU limits) are met, but this might change. We are waiting for a "real-world" data capture to be replayed and measured for performance, and some functionality is not implemented yet... so we are trying to anticipate bottlenecks with home-made scenarios before they arrive.
We are currently working on measuring the non-heap memory. Until now we only had the overall difference between the resident memory reported by the OS via top and the total heap allocation reported by gcviewer, which is what I called "non-heap memory". The next step is to distinguish code cache from permanent space.
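Concretely, to measure it from inside the process rather than subtracting the gcviewer heap from the resident size given by top, we are thinking of something along these lines (standard java.lang.management API available since 1.5; note that getNonHeapMemoryUsage() only covers the pools the JVM itself reports, so it will not match the resident-minus-heap figure exactly):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Sampling task to be run in a low-priority background thread.
public class NonHeapSampler implements Runnable {

    public void run() {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = mem.getHeapMemoryUsage();
            MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
            System.out.println("heap used=" + toMb(heap.getUsed())
                    + " MB, non-heap used=" + toMb(nonHeap.getUsed())
                    + " MB, non-heap committed=" + toMb(nonHeap.getCommitted()) + " MB");
            try {
                Thread.sleep(10000); // sample every 10 seconds
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    private static long toMb(long bytes) {
        return bytes / (1024 * 1024);
    }
}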
The JVM is launched with already tuned options and monitoring flags:
-server -XX:+PrintGCDetails -Xms75m -Xmx300m -XX:NewSize=32 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC

Thanks and Regards

st10470

We now have more information on how the 75 MB of non-heap memory used by our application is distributed:
- 32 MB permanent space (measured via the "CMS Perm Gen" memory pool over JMX)
- 6.5 MB code cache (measured via the "Code Cache" memory pool over JMX)
- 13 MB of loaded bytecode (measured with the -verbose option)
- 10 MB due to the JMX monitoring threads (RMI, ...)
- 13.5 MB remaining, from unidentified sources
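For reference, the pool figures above come from the standard memory pool MBeans; roughly this kind of dump (pool names are implementation-specific, "CMS Perm Gen" and "Code Cache" are simply what our JVM reports with the options above):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MemoryPoolDump {
    public static void main(String[] args) {
        // One line per pool, heap and non-heap alike.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName() + " (" + pool.getType() + "): used="
                    + pool.getUsage().getUsed() / (1024 * 1024) + " MB, committed="
                    + pool.getUsage().getCommitted() / (1024 * 1024) + " MB");
        }
    }
}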

1) Is it correct to add the PermGen space and the loaded bytecode together? In other words, does the permanent space size reported by JMX already include the bytecode of the loaded classes?
2) The memory manager of "CMS Perm Gen" is the "GarbageCollectorManager" instance. It seems contradictory, but does this mean that this memory is eligible for garbage collection?
3) What is the effect of combining the '-Xint' and '-server' options?
4) In addition to the JVM options given in the previous post, we tried adding '-Xint' while keeping '-server', to disable the JIT compiler and understand its weight in the memory footprint (we know this is not a performance solution!). This reduces RAM usage by 25 MB, with no change in the PermGen space and an empty code cache, as expected.
The strange thing is that after about 1 h 30 min, RAM consumption starts to increase steadily, as with a memory leak, whereas the normal RAM profile of the application stays perfectly flat even after several hours. Does this hide a real leak in the application? Or is the garbage collector slowed down too much in interpreted mode and therefore overwhelmed?
5) What could we do to decrease the PermGen space, if that is advisable at all?
6) Given that we load nearly all classes at startup, would it be worthwhile to defer the loading of rarely used modules by loading them on demand, as in the sketch below?
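To make question 6 concrete, what we have in mind is roughly the following sketch (the class and package names are purely hypothetical):

public class LazyModuleHolder {

    // Hypothetical, rarely used module. Referencing it directly with "new"
    // at startup would load and initialize its class (and everything it
    // links to) immediately.
    private static final String REPORT_MODULE_CLASS = "com.example.ReportModule";

    private static Object reportModule;

    // Deferred alternative: the class is resolved by name, and its bytecode
    // only lands in PermGen, when the feature is first used.
    public static synchronized Object getReportModule() throws Exception {
        if (reportModule == null) {
            reportModule = Class.forName(REPORT_MODULE_CLASS).newInstance();
        }
        return reportModule;
    }
}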
Thanks in advance