Dynamic Upper Memory Limit for the JVM

mgrev
Joined: 2003-08-12

Today a Java application can run out of memory even though the OS it's running on has heaps and heaps of memory left.

I'd like this to be fixed. I know it's hard with all the clever Garbage Collection algorithms, but it needs to be done. IMO.

Java 5.0 did great stuff with the "auto-optimizing" of the JVM's memory settings, and the next natural step is to do the same thing for the JVM's own upper limit.

This is what is so good about Java: the application is controlled by the JVM, and the JVM can decide how much memory the application gets to utilize.

If you (Sun) would just leave the upper memory limit unbounded in the case where no command line arguments are given, it would be great; I could even settle for a "rehash"-style resizing algorithm for the time being. Plugins, applets, JWS and the like can still have memory limits imposed on them by their containers, and that's great. If we got a totally dynamic upper memory limit in Java, we would have every option. Right now we have everything except the natural default of letting the OS decide, that is, a totally dynamic limit.

Cheers,
Mikael Grev

alexlamsl
Joined: 2004-09-02

> Another observation (and correct me if I am wrong):
>
> Currently, the Java heap can only grow:
> 1) You start with a small heap,
> 2) you allocate a huge array, and the JVM's heap
> will grow as needed.
> 3) now you release that huge array (assign null)
> and run System.gc(), but the JVM still consumes
> too much memory.
>
> Your application may never allocate such a huge
> array again, but it still consumes that much
> memory.
>
> The JVM should be able not only to increase the
> heap size as needed but also to decrease it.
>
> Full dynamic heap resize.

-XX:MaxHeapFreeRatio and -XX:MinHeapFreeRatio
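These control how much free heap the collector tries to keep after a collection: roughly, the heap grows when free space drops below MinHeapFreeRatio and shrinks when it rises above MaxHeapFreeRatio (the exact behaviour depends on the collector in use, and some collectors largely ignore them). A minimal sketch to observe the effect; the flag values and array size below are just illustrative:

// Example launch (illustrative values, not recommendations):
//   java -Xms16m -Xmx512m -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 HeapShrinkDemo
public class HeapShrinkDemo {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        System.out.println("Committed heap at start:    " + rt.totalMemory() / (1024 * 1024) + " MB");

        byte[] huge = new byte[200 * 1024 * 1024];   // force the heap to grow
        System.out.println("Committed heap after alloc: " + rt.totalMemory() / (1024 * 1024) + " MB");

        huge = null;                                 // release the array...
        System.gc();                                 // ...and request a collection
        Thread.sleep(1000);                          // give the collector a moment

        // With a low MaxHeapFreeRatio the committed heap should drop back;
        // with the defaults it typically stays near its high-water mark.
        System.out.println("Committed heap after gc():  " + rt.totalMemory() / (1024 * 1024) + " MB");
    }
}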

murphee
Joined: 2003-06-10

Let's back this up with some research from the Jikes folks:
http://www.cs.umass.edu/~emery/pubs/automatic-heapsize-ismm2004.pdf
This is a paper that discusses adaptive heap sizing.

kcpeppe
Joined: 2003-06-15

The problem with resizing a process in memory is that it's difficult to know whether it is safe to return a chunk of memory; in other words, whether there is active data that has been malloced in it or not. That said, you do have a good idea of whether upper memory is in use in the Java heap space of an IBM JVM, because the default GC is a compacting mark and sweep. With compaction complete, it's easy to release memory. The same can be said of the young space with generational GC, because everything in Eden gets pushed into one of the survivor spaces and in doing so behaves as if it were a compaction. The same applies when sweeping the survivor spaces. For the old space it may only apply if a compacting mark and sweep is in use.

That said, there still seems to be much that can be done with GC. Certainly focusing on the overall balance would seem to be a step in the right direction. I've been experimenting with Windows in that I've turned off virtual memory (or as much as you can turn it off). It has resulted in some interesting performance improvements as well as an interesting problem (Windows uses swap to reduce a process's working set). That said, virtual memory and GC don't seem to get along. Thus the option in Solaris to "pin" the JVM in memory and use variable page sizes would also appear to be moving in the right direction.

Another interesting direction is the ability to change GC while the VM is running. This was introduced with JRockit. Unfortunately, the change is limited to moving from low-pause GC to low-overhead GC, and once the move has been made, it cannot be undone. Certainly, having good runtime management over GC and memory sizing would appear to be a useful feature for those running applications that cannot tolerate downtime.

gilescope
Joined: 2005-07-01

While there might be issues releasing the memory acquired, we should at least have the option of allowing the JVM to grow the heap dynamically without having to specify a hard limit. Vote here: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6559565

mthornton
Joined: 2003-06-10

I believe the current GCs gain a performance advantage from having the heap in a contiguous area of memory. On 64-bit systems, requesting large amounts of address space should present little problem. On current 32-bit Windows, unfortunately, the address space is horribly fragmented: there is only 2GB available (usually), and DLLs like SQLUNIRL.DLL (from Microsoft) sit at 0x41090000, which is just about in the middle of the user address space. So I have had a system with a Java heap of a mere 221MB fail to memory map a 254MB file. (The memory mapping is done in a C++ component which insists on mapping the file in one piece.)
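On the Java side, one way to live with such a fragmented address space is to map a large file in fixed-size windows instead of one piece, so no single contiguous region larger than the window is ever needed. A rough sketch (the window size, file name argument and byte-summing "processing" are just placeholders, and this is unrelated to the C++ component mentioned above):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ChunkedMap {
    // Map the file in 64MB windows so no contiguous block of address
    // space larger than WINDOW is ever required at once.
    static final long WINDOW = 64L * 1024 * 1024; // placeholder size

    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(args[0], "r");
             FileChannel ch = raf.getChannel()) {
            long size = ch.size();
            long checksum = 0;
            for (long pos = 0; pos < size; pos += WINDOW) {
                long len = Math.min(WINDOW, size - pos);
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, pos, len);
                while (buf.hasRemaining()) {
                    checksum += buf.get();           // placeholder "processing"
                }
                // each mapping is released when its buffer becomes unreachable
            }
            System.out.println("checksum = " + checksum);
        }
    }
}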

jodeleit
Joined: 2004-10-26

...are a big problem for us when processing huge zip files, which most of the time results in an OutOfMemoryError when calling "new java.util.zip.ZipFile(hugeFile)".
Btw: The filed bug is no longer online :(
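In the meantime, one workaround sketch is to stream the entries with java.util.zip.ZipInputStream instead of opening the whole archive with ZipFile, assuming the entries can be processed sequentially (the file name argument and the byte-counting "processing" are just placeholders):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class StreamZip {
    public static void main(String[] args) throws Exception {
        // Read entries sequentially instead of opening the archive's index
        // up front, which keeps the memory footprint roughly constant.
        try (ZipInputStream zin = new ZipInputStream(
                new BufferedInputStream(new FileInputStream(args[0])))) {
            byte[] buf = new byte[8192];
            ZipEntry entry;
            while ((entry = zin.getNextEntry()) != null) {
                long bytes = 0;
                int n;
                while ((n = zin.read(buf)) != -1) {
                    bytes += n;                      // placeholder processing
                }
                System.out.println(entry.getName() + ": " + bytes + " bytes");
                zin.closeEntry();
            }
        }
    }
}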

patrikbeno
Joined: 2004-10-11

Another observation (and correct me if I am wrong):

Currently, the Java heap can only grow:
1) You start with a small heap,
2) you allocate a huge array, and the JVM's heap will grow as needed.
3) now you release that huge array (assign null) and run System.gc(), but the JVM still consumes too much memory.

Your application may never allocate such a huge array again, but it still consumes that much memory.

The JVM should be able not only to increase the heap size as needed but also to decrease it.

Full dynamic heap resize.

tim12s
Joined: 2004-02-02

There is currently a GC option for this that releases memory pages back to the OS.

patrikbeno
Joined: 2004-10-11

I am glad to hear that. Which one is it, and why is it not enabled by default?