Posted by scottf76
on February 24, 2009 at 12:05 PM PST
In my app we are using JDK 1.6.0_11 (64-bit) with 6 GB of heap. We are starting to perf test it in such a way that we'd like to induce an OOM error by reducing the memory footprint to the point that it crashes under heavy load. From there we'd like it to dump the heap so we can analyze where the bloat is occurring.
I'd really like to use a 32-bit JVM maxed out at 4 GB so that the heap dumps are more manageable, in terms of copying them around and parsing them. But when I start our app on the 32-bit JVM (1.6.0_12) with the footprint maxed out, it crashes under load almost immediately, without a heap dump or even an hprof trace.
Before actually trying this I would have expected the 32-bit JVM with 3.5 GB of heap to perform comparably to the 64-bit JVM with 6 GB, since 32-bit object references are half the size and the overall footprint is correspondingly smaller. Unfortunately this is not the case. We are using
-XX:MaxPermSize=192m -Xmx3500m -Xms3500m -XX:+PrintHeapAtSIGBREAK -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -server
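For reference, the full launch line looks roughly like this (the main class and dump path are placeholders, not our real names); `-XX:HeapDumpPath` is worth adding so the multi-GB .hprof lands on a filesystem with enough free space, since by default it goes to the JVM's working directory:

```shell
# Sketch of the launch line; MyApp and /bigdisk/dumps are placeholders.
java -server \
     -Xms3500m -Xmx3500m \
     -XX:MaxPermSize=192m \
     -XX:+UseConcMarkSweepGC \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/bigdisk/dumps \
     -XX:+PrintHeapAtSIGBREAK \
     MyApp
```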
Does anyone have any suggestions here?
OS info ->
$ uname -a
Linux pserv01 2.6.18-8.el5xen #1 SMP Thu Mar 15 19:56:43 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux
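As a cheap sanity check that OutOfMemoryError handling fires at all, before the heavy-load run, I've been using a tiny probe like the one below. The class name is mine, not part of the app. Note that an oversized array request throws OOM up front without actually allocating memory, so it's fast; to rehearse the real heap-dump path you'd still want to exhaust the heap with loop allocation under a small -Xmx, since this flavor of OOM may not produce the same dump as genuine heap exhaustion.

```java
// Minimal OOM probe (hypothetical helper, not part of the app).
// Requesting an array of Integer.MAX_VALUE longs exceeds the VM's
// array-size limit, so OutOfMemoryError is thrown immediately
// without the ~16 GB allocation actually taking place.
public class OomProbe {
    static boolean triggersOom() {
        try {
            long[] huge = new long[Integer.MAX_VALUE]; // ~16 GB request
            return huge.length < 0; // unreachable
        } catch (OutOfMemoryError e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("OOM triggered: " + triggersOom());
    }
}
```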