
Determining Optimum Heap Size

8 replies
Joined: 2007-06-12

We are using 64-bit AMD Linux boxes with 8GB of RAM but running 32-bit JVMs on them. We run four processes with a max heap of 1.2GB each and another process with a 1GB max heap.

One process (the 1GB one) suddenly started throwing OutOfMemoryError. At that point free memory was only 600MB, though swap is 16GB.
The out-of-memory error occurs because the JVM is unable to create a new thread.

My question is how to determine effective heap sizes and the number of processes that can run within the available RAM footprint.

Do we need to leave some memory for thread creation or any other specific purposes?
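For illustration, here is a rough back-of-the-envelope budget for a 32-bit JVM on Linux. All the numbers below are assumptions, not measurements from this system: the usable address space, PermGen default, and native overhead vary by kernel, JVM build, and application.

```java
public class AddressSpaceBudget {
    public static void main(String[] args) {
        // Illustrative figures only -- actual limits vary per kernel/JVM build.
        int addressSpaceMb = 3072;   // ~3GB usable user address space on 32-bit Linux (assumed)
        int heapMb = 1024;           // -Xmx1g, as in the failing process
        int permGenMb = 64;          // default MaxPermSize on a JDK 5 server VM (assumed)
        int jvmAndNativeMb = 150;    // JVM text/data, C heap, native libraries (rough guess)
        int stackMbPerThread = 1;    // typical default thread stack on Linux (assumed)

        int leftForStacksMb = addressSpaceMb - heapMb - permGenMb - jvmAndNativeMb;
        int maxThreads = leftForStacksMb / stackMbPerThread;

        System.out.println("Address space left for thread stacks: " + leftForStacksMb + " MB");
        System.out.println("Rough upper bound on threads: " + maxThreads);
    }
}
```

The point of the exercise: every MB given to -Xmx is an MB no longer available for thread stacks and C heap, so the heap size and the expected thread count have to be budgeted together.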

The error stack trace is

WARNING: could not create thread for request dispatch
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(
at com.sun.jini.thread.ThreadPool.execute(
at com.sun.jini.jeri.internal.mux.MuxServer.dispatchNewRequest(
at com.sun.jini.jeri.internal.mux.MuxServer.handleOpen(
at com.sun.jini.jeri.internal.mux.Mux.handleData(
at com.sun.jini.jeri.internal.mux.Mux.dispatchCurrentMessage(
at com.sun.jini.jeri.internal.mux.Mux.readMessageBody(
at com.sun.jini.jeri.internal.mux.Mux.processIncomingData(
at com.sun.jini.jeri.internal.mux.StreamConnectionIO$
at com.sun.jini.thread.ThreadPool$

Please help me out in resolving this issue.

Reply viewing options

Select your preferred way to display the comments and click "Save settings" to activate your changes.
Joined: 2003-06-17

To see the number of threads per Java VM (the nlwp column), use
ps -p PID -o pid,user,%cpu,rss,etime,nlwp,args
There is a shell script to help track it
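From inside the JVM (JDK 5 and later) you can also query the live-thread count programmatically via java.lang.management; a minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCount {
    public static void main(String[] args) {
        // ThreadMXBean reports the same live threads the OS shows as LWPs in ps.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("Live threads: " + threads.getThreadCount()
                + ", peak: " + threads.getPeakThreadCount());
    }
}
```

Logging this periodically from within the application gives the same trend data as sampling nlwp with ps.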

Joined: 2007-05-01

You might first want to determine whether it is an off-heap memory issue or a defined limit you are running into.

First of all, what are your per-process thread limit settings on Linux?

Take a thread dump of the app and see if the number of threads is approaching the limit. If so, you can either reduce the number of threads or increase the limit.

If not, you may need to look into memory issues in your app. Don't forget your new friends: -XX:+HeapDumpOnCtrlBreak -XX:+HeapDumpOnOutOfMemoryError

Please report back what you find.

Joined: 2007-06-12

I'm sorry, I was away.

We tried -XX:+HeapDumpOnOutOfMemoryError but couldn't get much information from it. I couldn't find the -XX:+HeapDumpOnCtrlBreak option on the HotSpot VM options page. Is it newly introduced? We are using JDK 1.5 update 11.

Is there a way to find out the number of threads running in that JVM? I used jstack but it showed only 192 threads. How do I get a thread dump on Linux?

Someone suggested using ulimit -n to increase the number of native threads on Red Hat Linux. Is that the right approach?

Joined: 2007-01-09

> Is there a way to find out the no of threads running
> in that JVM? Used jstack but it gave me only 192
> threads. How to get the ThreadDump in Linux.
Type [b]kill -3 <PID>[/b] and check the STDOUT of the Java process (usually the log file of the running application server). You must run it as the same user as the one running the Java process.
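If you cannot easily reach the process's stdout, JDK 5's Thread.getAllStackTraces() lets the application produce its own dump of every live thread; a minimal sketch (class name is my own):

```java
import java.util.Map;

public class SelfThreadDump {
    public static void main(String[] args) {
        // Similar in spirit to kill -3: list every live thread and its stack.
        Map<Thread, StackTraceElement[]> dump = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> e : dump.entrySet()) {
            Thread t = e.getKey();
            System.out.println("\"" + t.getName() + "\" daemon=" + t.isDaemon());
            for (StackTraceElement frame : e.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```

Counting the entries in the returned map also answers the "how many threads are there" question without shelling out to ps.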
> some one suggested to use ulimit -n to increase the
> no of native threads in Redhat linux. is it the right
> approach?
Again, I think it has nothing to do with the number of threads in the system/process but rather with an inability of the JVM to create another native thread.

If it is not too hard, can you try raising the max heap size to 1550MB (-Xmx1550m) and tell us the result?


Joined: 2007-01-09


We've been having this problem occasionally in our production environment on HP boxes, both PA-RISC and IA-64 running HP-UX 11i.
The technical reason, at least with our installation, was that the permanent generation overwrites the lower part of the stack, preventing the JVM from creating new native threads.
In our case, it happens when the permanent generation size crosses 150MB.

You would want to force Java to start with third- or fourth-quadrant memory management, which is achieved by setting the maximum heap size to about 1550MB (-Xmx1550m).

Try it and it [i]magically[/i] solves the problem.


Joined: 2007-01-09

Here is the description from the HP Java pages. The document is part of the 1.3.1 release notes, but we've found it relevant also for 1.4 and 5 on all HP platforms, so I have no reason to believe that other *nix variants will behave differently.

[i][b]Application dependent considerations when using large heap size[/b]
Thread stacks and C heap are allocated from the same address space as the Java heap, so if you set the Java heap too large, new threads may not start correctly. Or some other part of the runtime or native methods may suddenly fail if the C heap cannot allocate a new page. An application may start up correctly with a 1.7GB heap, but this does not necessarily mean it's going to work correctly.

For example, if you use a 1MB stack size, and have about 80 threads in the process, you will have 80MB for stacks. If you have native libraries, you would probably add another 64MB for C heap. You have now used a total of 144MB of your heap for stacks and C heap, so this address space is not available for Java heap.

Since all programs have varying C heap requirements and have varying numbers of threads, it's difficult to ascertain what the effect will be of running the application at its limit. It's important to understand the real requirements of your application. [/i]

Taken from

Joined: 2005-05-23

The memory to create a thread is not taken from the heap. Your application has a total of about 2GB of address space -- 1.2GB of that is for the heap, and the remainder is for the JVM text and data (the native code of the JVM), any native libraries your application uses, NIO mapped data, and thread stacks. It is exhaustion of this out-of-heap memory that prevents you from creating a new thread.

You can either reduce the stack size for your threads (-Xss128k or something), use fewer threads, or use fewer native libraries or NIO buffers if applicable.
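Besides lowering -Xss globally, Java also lets you request a smaller stack for an individual thread via the four-argument Thread constructor; per the Thread javadoc the stackSize argument is only a hint, which the VM may round up or ignore entirely. A sketch (class and thread names are my own):

```java
public class SmallStackThread {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = new Runnable() {
            public void run() {
                System.out.println("running in " + Thread.currentThread().getName());
            }
        };
        // Request a 128KB stack for this one thread instead of lowering -Xss
        // for the whole JVM; useful for pools of many shallow worker threads.
        Thread t = new Thread(null, task, "small-stack-thread", 128 * 1024);
        t.start();
        t.join();
    }
}
```

This is handy when only one subsystem creates thousands of threads and you don't want to shrink the stack for every thread in the process.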

Joined: 2007-06-12

I was under the assumption that the thread stack size is not part of the heap. I read somewhere that the default stack size on Linux is 1024KB, so we set -XX:ThreadStackSize=512k. But we still had the issue.

So out of the 2GB, all these resources use memory outside the heap, and either the heap should be sized with the thread count in mind, or the number of threads should be decided based on the available memory.