
Eliminate specifying maximum memory limit

60 replies [Last post]
mhunsicker
Offline
Joined: 2005-09-06
Points: 0

I would like to see java dynamically allocate (and free) memory as its needs arise. Most of the applications I create work with files of varying size. Users can make large files or small files. They can also work with multiple files at a time. I have no way of knowing what’s a good maximum memory size. Some users can get by with 256 MB. Others require 1 GB.

I know this poses numerous technical problems for the JVM. But frankly, it's embarrassing to have to deal with a customer who says my application threw an out-of-memory exception when they have 2 GB RAM on their machine, nothing else running, and my app was only consuming 512 MB.

This is the single biggest issue with the JVM. And while there is a work-around, it requires the user to suffer a crash, lose their work, and change an obscure setting that I don't want to expose to them.

I know this topic is already posted in this forum, but the latest one I found was from 2004.

slasha
Offline
Joined: 2004-07-04
Points: 0

mthornton said:
> There is a difference between how much an application
> could use (with lazy programming) and how much they
> [i]need[/i] to use. Given the issues with large
> heaps, garbage collection and virtual memory, I think
> that designing general use applications of this size
> may be unwise.

And I think you are being a bit presumptuous about "lazy programming". My day job, for instance, is working on a server-like product that can be run by non-technical people. Because it's server-like, this application wants to eat a lot of RAM. Because it must be run on any machine (that meets some minimum specs) by a non-technical user, we have had much difficulty with this issue. It took two of us "lazy programmers" about 2 years to solve the memory problems inherent in a design made by people who had no understanding of Java's [b]inherent flaws[/b] when dealing with large amounts of memory. Perhaps there are thousands of developers creating programs that you can't even imagine, and some of them (such as I) need this issue to be fixed by Sun. Java is a great language in many ways, but for this basic deficiency to still exist after 10 years is a problem. I will state this again:

Java will [u]never[/u] be successful on the desktop until this issue of a limited Xmx and terrible performance when using virtual memory is resolved.

mthornton
Offline
Joined: 2003-06-10
Points: 0

For me the -Xmx issue is a minor irritant. A bigger issue is the effect -Xms has on performance when you are loading a lot of data into a big heap. For example, suppose you are creating 500MB of objects, each typically 100 bytes or so in size. Then specifying -Xms500m when starting the JVM will make the process much, much faster than if you use the default value. It would be very helpful if the application could give some sort of hint to the JVM at run time so that adjusting -Xms was unnecessary.
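A minimal sketch of the workload described above: creating a large number of small (~100-byte) objects. Run it once with the default heap and once with -Xms500m to compare how much time the incremental heap growth and extra GC cycles cost; the timings are machine-dependent, and the object count here is arbitrary.

```java
import java.util.ArrayList;
import java.util.List;

public class HeapGrowthDemo {
    static List<byte[]> allocate(int count) {
        List<byte[]> objects = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            objects.add(new byte[100]);   // ~100 bytes of payload each
        }
        return objects;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        List<byte[]> objects = allocate(1_000_000);
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("Allocated " + objects.size() + " objects in " + elapsed + " ms");
    }
}
```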

olsonje
Offline
Joined: 2005-08-10
Points: 0

I would completely agree with that.

mthornton
Offline
Joined: 2003-06-10
Points: 0

There is no connection between this issue and the grey screen problem. On being restored, an application isn't necessarily held up until all its memory is reloaded. However when it next requires a full garbage collection, that will touch all of the heap and thus force it all to be read back in. This would still be the case with a chunked heap.
The simplest 'cure' for the grey screen problem is to use the switch that stops Windows from clearing a process's working set when it is minimised.
The more general problem is that garbage collection that provokes extensive virtual memory paging is expensive. I don't know what can be done about this (and perhaps it deserves a separate topic).

mthornton
Offline
Joined: 2003-06-10
Points: 0

> mthornton,
> You’re saying that Windows has an issue with giving a
> contiguous memory chunk of more than 1 GB? So you’re
> saying that on a 64-bit Windows machine, the OS would
> be able to provide a larger chunk more easily due to
> improved memory management? I’m not sure I agree that
> having a larger pointer will actually solve Windows’s
> many memory issues, but that’s another topic.

If you look at the address space map I provided you see that the total free space is 1.8GB and only about 1.3GB is in reasonably large pieces. Thus the largest practical heap achievable with a chunked heap would be about 1.3 or 1.4GB --- i.e. only about 300MB more than I can achieve now. How much engineering effort from Sun is it worth to achieve such a modest increase?

On 64 bit Windows it is not a matter of improved memory management so much as the address space simply being so much larger. Even Microsoft would have difficulty fragmenting that space to the extent that a request for several contiguous GB would fail (assuming sufficient physical memory / swap space).

> Then my retort is I want the JVM to no longer require
> a single contiguous chunk of memory.
Why when ridiculous amounts of contiguous address space are available on 64 bit systems? Contiguous address space is only a problem on 32 bit systems because the address space itself is small and then halved by the design of Windows and then further fragmented so we end up with little more than a quarter of what you might have expected from 32 bits.

You could also use a different 32 bit operating system (such as Solaris) and get perhaps 3GB or even more heap.

> Thus, it can
> grab as many 1GB chunks as it wants.
On Windows even with chunks the maximum will usually be rather less than 2GB. The /3GB switch is not usually applicable to desktop environments (I've forgotten which versions of the OS support this).

> Are you saying that you agree this is a problem, but
> that it's lower priority than other issues?
Exactly so. Given infinite resources then of course it ought to be fixed.

> Maybe this stems from
> the fact that all apps I write are desktop apps (not
> server).

My applications are also desktop apps. However large memory gobbling activities don't always fit comfortably in the same process as a graphical interface. A pragmatic solution to this is to split the application over two or more processes each of which can readily use 1GB or so. It isn't always practical to do this but it does work in my case. I can also invoke the secondary processes with the -server option, while the gui component uses the regular client JVM.
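A hedged sketch of the two-process split described above: the GUI process launches the memory-hungry work in a second JVM with its own -Xmx budget and the -server compiler. "WorkerMain" is a hypothetical entry point standing in for the real worker; everything else uses only standard APIs.

```java
import java.io.File;

public class WorkerLauncher {
    // Path to the java binary of the currently running JRE.
    static String javaBinary() {
        return new File(new File(System.getProperty("java.home"), "bin"), "java").getPath();
    }

    public static Process launchWorker(String inputFile) throws Exception {
        return new ProcessBuilder(
                javaBinary(), "-server", "-Xmx1g",
                "-cp", System.getProperty("java.class.path"),
                "WorkerMain", inputFile)          // hypothetical worker class
            .inheritIO()                          // share the parent's console
            .start();
    }
}
```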

mhunsicker
Offline
Joined: 2005-09-06
Points: 0

One of the reasons I wanted this memory limitation fixed is that I knew (or thought) it would require Sun to no longer require a single contiguous memory chunk. It is my understanding that this setup is the main cause of the grey screen. While Mustang got rid of the grey screen symptom, the problem is still there. Perhaps it’s worse now. Correct me if I’m wrong, but if a user switches to a java process that has been moved to virtual memory, the user will see the application, but it’s still not responsive while it has to load its entire memory from disk into main memory. At least the grey screen was (bad) feedback that the program was busy. Now a user clicks on things and nothing happens.

What other issues do you feel are more important than this?

Also, several people have suggested that I just bump up my Xmx setting to the maximum system memory. mthornton suggests that 64-bit OSes will solve the problem. If that’s the case, I go back to my original post title: why can’t Sun eliminate the requirement of specifying a maximum memory limit? What about adding a flag that means ‘use all the memory you can’?
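A hedged sketch of what such a ‘use all the memory you can’ behaviour could look like in user space today: a tiny launcher reads the machine's physical RAM and relaunches the real application with a computed -Xmx. It relies on the JDK-specific com.sun.management extension, the 3/4 ratio is arbitrary, and "AppMain" is a hypothetical main class standing in for the real application.

```java
import java.lang.management.ManagementFactory;

public class AutoXmxLauncher {
    static long physicalMemoryMB() {
        // HotSpot-specific extension of the standard OperatingSystemMXBean.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        return os.getTotalPhysicalMemorySize() / (1024 * 1024);
    }

    public static void main(String[] args) throws Exception {
        long xmxMB = physicalMemoryMB() * 3 / 4;   // leave a quarter for the OS
        new ProcessBuilder("java", "-Xmx" + xmxMB + "m", "AppMain")
                .inheritIO().start().waitFor();
    }
}
```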

alexlamsl
Offline
Joined: 2004-09-02
Points: 0

In my own experience (120GB SATA HDD, 1GB DDR400) I don't see swapping as much of a problem even with larger Java applications like NetBeans (which grows to >90MB of memory usage rather quickly when I use it), so from my point of view I cannot see an immediate consensus on the priority issue here. ;)

As for the flag that automatically maximises Xmx on any system, I do agree that it's a good idea - and since it looks like a quick patch to me (do kindly correct me if I'm wrong) it's not a bad idea to implement it after all.

mhunsicker
Offline
Joined: 2005-09-06
Points: 0

mthornton,
You’re saying that Windows has an issue with giving a contiguous memory chunk of more than 1 GB? So you’re saying that on a 64-bit Windows machine, the OS would be able to provide a larger chunk more easily due to improved memory management? I’m not sure I agree that having a larger pointer will actually solve Windows’s many memory issues, but that’s another topic.

Then my retort is I want the JVM to no longer require a single contiguous chunk of memory. Thus, it can grab as many 1GB chunks as it wants. I’ve been intentionally ignoring the technical side of this issue. Why? Because it’s for Sun to decide how to solve it. I understand this is a huge amount of work, but I feel it’s worth the effort.

Are you saying that you agree this is a problem, but that it’s lower priority than other issues? If so, I can respect that, but I disagree. Maybe this stems from the fact that all the apps I write are desktop apps (not server).

jchristi
Offline
Joined: 2005-02-09
Points: 0

Check this out re: requiring a contiguous memory region for the heap

http://www.unixville.com/~moazam/

mhunsicker
Offline
Joined: 2005-09-06
Points: 0

mthornton,
I’m not following you. Are you saying that if we use a 64-bit JVM, we could access more memory therefore this wouldn’t be a problem? If that’s what you’re saying, I don’t see how that fixes anything. You’ve just made it so you can specify a larger number for the maximum. I want to remove the maximum so the VM can allocate and deallocate memory as its needs dictate.

sm1
Offline
Joined: 2003-06-10
Points: 0

It is imperative that a JVM for the desktop be allowed to use the available memory on the host, and that the end user not have to touch the Java command line.

Is the JVM team working on this? If not, what can we do to help make them work on this?

I proposed a switch that, when used, would disable the preset memory limit for the JVM.

It may be difficult for the JVM staff to do, I don't know, but it must be done. This is a major limitation preventing the increased use of Java for desktop apps.

mthornton
Offline
Joined: 2003-06-10
Points: 0

On 32 bit Windows there are significant difficulties in getting a heap much bigger than about 1GB (regardless of how much physical RAM you have). With substantial work by the JVM team they might get it up to perhaps 1.3GB. If your application can get by with say 800MB, then just specify -Xmx800m for everyone --- in most cases it won't noticeably affect those who really only need 256MB.

Remember that -Xmx doesn't allocate that amount of memory, it merely reserves address space. It is -Xms that controls the amount of memory actually allocated at start up.
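The reserve-versus-allocate distinction is visible from inside a running program: maxMemory() reports the -Xmx reservation, while totalMemory() reports the heap actually committed so far. A minimal sketch (the printed values will vary with your JVM and flags):

```java
public class HeapReport {
    static long mb(long bytes) { return bytes / (1024 * 1024); }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max (address space reserved, -Xmx): " + mb(rt.maxMemory()) + " MB");
        System.out.println("total (memory committed so far):    " + mb(rt.totalMemory()) + " MB");
        System.out.println("free within the committed heap:     " + mb(rt.freeMemory()) + " MB");
    }
}
```

Started with -Xmx800m, maxMemory() will be near 800MB even though totalMemory() begins far smaller.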

If your users might need more than 800MB, then they are in a small and unusual group of users. Thus we can't expect extravagant efforts from Sun engineers, especially as in many cases the use of a 64 bit JVM offers a simpler practical solution.

So on a 32bit JVM, and assuming you are not memory mapping large amounts of data (which is problematic in any case) or using native code which does something similar, then just use the largest value for -Xmx that you can get away with (use an automatic trial and error approach to find out the value). On a 64 bit JVM request perhaps the minimum of the amount of physical RAM or say 2GB. If this is not enough then I venture to suggest that you will have other more pressing problems than determining suitable values for -Xmx.
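A hedged sketch of the "automatic trial and error" idea: launch "java -Xmx<n>m -version" with decreasing values until one starts. It assumes a "java" executable on the PATH, and the probed range and step size are arbitrary choices.

```java
import java.io.IOException;

public class XmxProbe {
    static String flag(int mb) { return "-Xmx" + mb + "m"; }

    // True if a JVM with this -Xmx can start at all.
    static boolean starts(int mb) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("java", flag(mb), "-version")
                .redirectErrorStream(true).start();
        p.getInputStream().readAllBytes();   // drain output so the child can exit
        return p.waitFor() == 0;
    }

    public static void main(String[] args) throws Exception {
        for (int mb = 2048; mb >= 256; mb -= 128) {
            if (starts(mb)) {
                System.out.println("Largest working value: " + flag(mb));
                return;
            }
        }
        System.out.println("No value in the probed range worked");
    }
}
```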

Personally, despite regularly processing data requiring very large heaps, I find this problem a minor irritation rather than a major limitation. Secondly the best solution for me does not require anything more from Sun.

alexlamsl
Offline
Joined: 2004-09-02
Points: 0

> On 32 bit Windows there are significant difficulties
> in getting a heap much bigger than about 1GB
> (regardless of how much physical RAM you have). With
> substantial work by the JVM team they might get it up
> to perhaps 1.3GB. If your application can get by with
> say 800MB, then just specify -Xmx800m for everyone
> --- in most cases it won't noticeably affect those who
> really only need 256MB.
>
> Remember that -Xmx doesn't allocate that amount of
> memory, it merely reserves address space. It is -Xms
> that controls the amount of memory actually allocated
> at start up.
>
> If your users might need more than 800MB, then they
> are in a small and unusual group of users. Thus we
> can't expect extravagant efforts from Sun engineers,
> especially as in many cases the use of a 64 bit JVM
> offers a simpler practical solution.
>
> So on a 32bit JVM, and assuming you are not memory
> mapping large amounts of data (which is problematic
> in any case) or using native code which does
> something similar, then just use the largest value
> for -Xmx that you can get away with (use an automatic
> trial and error approach to find out the value). On a
> 64 bit JVM request perhaps the minimum of the amount
> of physical RAM or say 2GB. If this is not enough
> then I venture to suggest that you will have other
> more pressing problems than determining suitable
> values for -Xmx.
>
> Personally, despite regularly processing data
> requiring very large heaps, I find this problem a
> minor irritation rather than a major limitation.
> Secondly the best solution for me does not require
> anything more from Sun.

point++ ;)

Even when I'm dealing with 400MB JPEGs I just need to give myself a large enough Xmx - and I can't see many desktop applications for normal end-users needing to process that kind of data on a regular basis, if ever.

alexlamsl
Offline
Joined: 2004-09-02
Points: 0

> alexlamsl,
> No, that’s not an acceptable compromise. All modern
> operating systems use virtual memory. This allows
> you to allocate more memory than there is RAM. I want
> my java application to have access to this.
>

Oh, if you are talking about that - if I remember right, Java cannot use more memory than there is RAM; I would really have loved it to, but it proved to be impossible when I tried last summer, due to some technicalities in the JVM, apparently :(

mthornton
Offline
Joined: 2003-06-10
Points: 0

> Oh if you are talking about that - if I remember
> right, Java cannot use more memory than there is RAM;

Of course it can use more memory than real RAM, it may run like treacle but it does work.

alexlamsl
Offline
Joined: 2004-09-02
Points: 0

really?!

I've got 1GB of RAM, but I can't set Xmx to 1GB. The JVM just refuses to even start up!

mthornton
Offline
Joined: 2003-06-10
Points: 0

You probably don't have enough contiguous address space to create a 1GB heap. You could add another GB of physical memory and still have the same problem.

alexlamsl
Offline
Joined: 2004-09-02
Points: 0

Would you mind elaborating a bit further on the topic? :)

Because I've tried on another machine with 4GB RAM, and this time the upper limit for Xmx is about 1.7GB, which I can understand as the limit of a 32-bit system for that...

mthornton
Offline
Joined: 2003-06-10
Points: 0

We have a utility which uses JNI to enumerate the usage of address space in the process. I have just run a test case and attached the output below. The heap starts at this entry:

At 10010000 size=128MB [6144KB committed, 122MB reserved] type=private protect=executeReadWrite
At 18010000 size=1073MB [FREE]
At 5b0a0000 size=28KB [28KB committed] type=image protect=executeWriteCopy
Module at 5b0a0000, size=28KB C:\WINDOWS\system32\umdmxfrm.dll

So in this case I have 128MB maximum of heap structures reserved by Java (of which only 6MB is actually allocated). Then there is 1073MB of free space which might be used for a bigger heap, and then we see a Windows system DLL at a surprisingly low address which sets the limit on the maximum Java heap I can have. It is possible to rebase these DLLs so that they are loaded at a higher address, but that is a hassle.

At 0 size=64KB [FREE]
At 10000 size=4096B [4096B committed] type=private protect=readWrite
At 11000 size=60KB [FREE]
At 20000 size=4096B [4096B committed] type=private protect=readWrite
At 21000 size=60KB [FREE]
At 30000 size=320KB [100KB committed, 220KB reserved] type=private protect=readWrite (2 allocations)
At 80000 size=12KB [12KB committed] type=mapped protect=readOnly
At 83000 size=52KB [FREE]
At 90000 size=1088KB [72KB committed, 1016KB reserved] type=private protect=readWrite (2 allocations)
At 1a0000 size=64KB [12KB committed, 52KB reserved] type=mapped protect=readWrite
At 1b0000 size=88KB [88KB committed] type=mapped protect=readOnly
Mapped from: \Device\HarddiskVolume2\WINDOWS\SYSTEM32\UNICODE.NLS
At 1c6000 size=40KB [FREE]
At 1d0000 size=244KB [244KB committed] type=mapped protect=readOnly
Mapped from: \Device\HarddiskVolume2\WINDOWS\SYSTEM32\locale.nls
At 20d000 size=12KB [FREE]
At 210000 size=260KB [260KB committed] type=mapped protect=readOnly
Mapped from: \Device\HarddiskVolume2\WINDOWS\SYSTEM32\SORTKEY.NLS
At 251000 size=60KB [FREE]
At 260000 size=24KB [24KB committed] type=mapped protect=readOnly
Mapped from: \Device\HarddiskVolume2\WINDOWS\SYSTEM32\sorttbls.nls
At 266000 size=40KB [FREE]
At 270000 size=12KB [12KB committed] type=mapped protect=readOnly
Mapped from: \Device\HarddiskVolume2\WINDOWS\SYSTEM32\CTYPE.NLS
At 273000 size=52KB [FREE]
At 280000 size=800KB [28KB committed, 772KB reserved] type=mapped protect=executeRead
At 348000 size=32KB [FREE]
At 350000 size=4096B [4096B committed] type=private protect=readWrite
At 351000 size=60KB [FREE]
At 360000 size=4096B [4096B committed] type=private protect=readWrite
At 361000 size=60KB [FREE]
At 370000 size=56KB [56KB committed] type=mapped protect=readWrite
At 37e000 size=8192B [FREE]
At 380000 size=16KB [16KB committed] type=mapped protect=readWrite
Mapped from: \Device\HarddiskVolume2\DOCUME~1\MTHORN~1\LOCALS~1\Temp\hsperfdata_mThornton\2804
At 384000 size=48KB [FREE]
At 390000 size=56KB [56KB committed] type=image protect=executeWriteCopy
Module at 390000, size=56KB C:\Apps\j2sdk1.4.2_04\jre\bin\verify.dll
At 39e000 size=8192B [FREE]
At 3a0000 size=64KB [8192B committed, 56KB reserved] type=private protect=readWrite
At 3b0000 size=100KB [100KB committed] type=image protect=executeWriteCopy
Module at 3b0000, size=100KB C:\Apps\j2sdk1.4.2_04\jre\bin\java.dll
At 3c9000 size=28KB [FREE]
At 3d0000 size=52KB [52KB committed] type=image protect=executeWriteCopy
Module at 3d0000, size=52KB C:\Apps\j2sdk1.4.2_04\jre\bin\zip.dll
At 3dd000 size=12KB [FREE]
At 3e0000 size=120KB [4096B committed, 116KB reserved] type=private protect=executeReadWrite
At 3fe000 size=8192B [FREE]
At 400000 size=24KB [24KB committed] type=image protect=executeWriteCopy
Module at 400000, size=24KB C:\Apps\j2sdk1.4.2_04\bin\java.exe
At 406000 size=40KB [FREE]
At 410000 size=1036KB [1036KB committed] type=mapped protect=readOnly
At 513000 size=52KB [FREE]
At 520000 size=3072KB [668KB committed, 2404KB reserved] type=mapped protect=executeRead
At 820000 size=512KB [4096B committed, 508KB reserved] type=private protect=readWrite
At 8a0000 size=772KB [32KB committed, 740KB reserved] type=private protect=executeReadWrite (2 allocations)
At 961000 size=60KB [FREE]
At 970000 size=1024KB [996KB committed, 28KB reserved] type=private protect=readWrite
At a70000 size=33MB [460KB committed, 32MB reserved] type=private protect=executeReadWrite (2 allocations)
At 2a91000 size=60KB [FREE]
At 2aa0000 size=1792KB [128KB committed, 1664KB reserved] type=private protect=readWrite (7 allocations)
At 2c60000 size=28KB [28KB committed] type=image protect=executeWriteCopy
Module at 2c60000, size=28KB C:\Apps\IntelliJ-4.5.4\bin\breakgen.dll
At 2c67000 size=36KB [FREE]
At 2c70000 size=1344KB [100KB committed, 1244KB reserved] type=private protect=readWrite (3 allocations)
At 2dc0000 size=60KB [60KB committed] type=image protect=executeWriteCopy
Module at 2dc0000, size=60KB C:\Apps\j2sdk1.4.2_04\jre\bin\net.dll
At 2dcf000 size=4096B [FREE]
At 2dd0000 size=2560KB [104KB committed, 2456KB reserved] type=private protect=readWrite (3 allocations)
At 3050000 size=48KB [48KB committed] type=image protect=executeWriteCopy
Module at 3050000, size=48KB C:\development\Projects\bin\jpsapi.dll
At 305c000 size=16KB [FREE]
At 3060000 size=64KB [16KB committed, 48KB reserved] type=private protect=readWrite
At 3070000 size=80MB [FREE]
At 8000000 size=1248KB [1248KB committed] type=image protect=executeWriteCopy
Module at 8000000, size=1248KB C:\Apps\j2sdk1.4.2_04\jre\bin\client\jvm.dll
At 8138000 size=127MB [FREE]
At 10000000 size=28KB [28KB committed] type=image protect=executeWriteCopy
Module at 10000000, size=28KB C:\Apps\j2sdk1.4.2_04\jre\bin\hpi.dll
At 10007000 size=36KB [FREE]
At 10010000 size=128MB [6144KB committed, 122MB reserved] type=private protect=executeReadWrite
At 18010000 size=1073MB [FREE]
At 5b0a0000 size=28KB [28KB committed] type=image protect=executeWriteCopy
Module at 5b0a0000, size=28KB C:\WINDOWS\system32\umdmxfrm.dll
At 5b0a7000 size=29MB [FREE]
At 5cd70000 size=28KB [28KB committed] type=image protect=executeWriteCopy
Module at 5cd70000, size=28KB C:\WINDOWS\system32\serwvdrv.dll
At 5cd77000 size=150MB [FREE]
At 662b0000 size=352KB [352KB committed] type=image protect=executeWriteCopy
Module at 662b0000, size=352KB C:\WINDOWS\system32\hnetcfg.dll
At 66308000 size=184MB [FREE]
At 71a50000 size=252KB [252KB committed] type=image protect=executeWriteCopy
Module at 71a50000, size=252KB C:\WINDOWS\system32\mswsock.dll
At 71a8f000 size=4096B [FREE]
At 71a90000 size=32KB [32KB committed] type=image protect=executeWriteCopy
Module at 71a90000, size=32KB C:\WINDOWS\System32\wshtcpip.dll
At 71a98000 size=32KB [FREE]
At 71aa0000 size=32KB [32KB committed] type=image protect=executeWriteCopy
Module at 71aa0000, size=32KB C:\WINDOWS\system32\WS2HELP.dll
At 71aa8000 size=32KB [FREE]
At 71ab0000 size=92KB [92KB committed] type=image protect=executeWriteCopy
Module at 71ab0000, size=92KB C:\WINDOWS\system32\WS2_32.dll
At 71ac7000 size=81MB [FREE]
At 76b40000 size=180KB [180KB committed] type=image protect=executeWriteCopy
Module at 76b40000, size=180KB C:\WINDOWS\system32\WINMM.dll
At 76b6d000 size=524KB [FREE]
At 76bf0000 size=44KB [44KB committed] type=image protect=executeWriteCopy
Module at 76bf0000, size=44KB C:\WINDOWS\system32\PSAPI.DLL
At 76bfb000 size=17MB [FREE]
At 77c10000 size=352KB [352KB committed] type=image protect=executeWriteCopy
Module at 77c10000, size=352KB C:\WINDOWS\system32\MSVCRT.dll
At 77c68000 size=864KB [FREE]
At 77d40000 size=1196KB [1196KB committed] type=image protect=executeWriteCopy (2 allocations)
Module at 77d40000, size=576KB C:\WINDOWS\system32\USER32.dll
Module at 77dd0000, size=620KB C:\WINDOWS\system32\ADVAPI32.dll
At 77e6b000 size=20KB [FREE]
At 77e70000 size=580KB [580KB committed] type=image protect=executeWriteCopy
Module at 77e70000, size=580KB C:\WINDOWS\system32\RPCRT4.dll
At 77f01000 size=60KB [FREE]
At 77f10000 size=284KB [284KB committed] type=image protect=executeWriteCopy
Module at 77f10000, size=284KB C:\WINDOWS\system32\GDI32.dll
At 77f57000 size=73MB [FREE]
At 7c800000 size=976KB [976KB committed] type=image protect=executeWriteCopy
Module at 7c800000, size=976KB C:\WINDOWS\system32\kernel32.dll
At 7c8f4000 size=48KB [FREE]
At 7c900000 size=704KB [704KB committed] type=image protect=executeWriteCopy
Module at 7c900000, size=704KB C:\WINDOWS\system32\ntdll.dll
At 7c9b0000 size=46MB [FREE]
At 7f6f0000 size=1024KB [28KB committed, 996KB reserved] type=mapped protect=executeRead
At 7f7f0000 size=7936KB [FREE]
At 7ffb0000 size=144KB [144KB committed] type=mapped protect=readOnly
At 7ffd4000 size=8192B [FREE]
At 7ffd6000 size=40KB [40KB committed] type=private protect=readWrite (10 allocations)
At 7ffe0000 size=64KB [4096B committed, 60KB reserved] type=private protect=readOnly
Top address: 7fff0000
TotalFree=1865MB, largest free=1073MB
Committed=18MB, reserved=166MB
Image=6776KB, Mapped=6852KB, Private=170MB

alexlamsl
Offline
Joined: 2004-09-02
Points: 0

Oh now I'm enlightened! :-O

Thanks very much for the thorough explanation :)

And I suppose this is not an issue that the Java team is trying to resolve at all?

mthornton
Offline
Joined: 2003-06-10
Points: 0

In my case allowing a non contiguous heap would buy me a little bit, but the real answer for me is to get rid of the native code I currently have (which itself competes for contiguous address space) and move to a 64 bit JVM.

Given that 'fixing' this problem doesn't achieve as much as you might hope (at least in 32bit JVM), I would rather they spend the time working on some of the other issues associated with really big heaps.

alexlamsl
Offline
Joined: 2004-09-02
Points: 0

True - after struggling on a bit last summer I started to understand the non-urgency of this issue as well ;)

IMHO current 64-bit CPUs aren't really that well-developed yet (applies to both Intel & AMD), so I think I will just give it some time before migrating.

champt0n
Offline
Joined: 2003-06-15
Points: 0

Some people seem to be missing the point on this issue. Developers may want the ability to set the memory limit. They are not the end users. Most users are not advanced users. They are not worried about micro managing memory. They just want their application to work.

How are any of the technical issues being brought up any worse than having the application crash and losing data? How can we as developers champion Java as being so great when a user can bring it to its knees simply by creating too big a file?

Any feature can be abused. And there are issues to overcome if Java were to handle memory dynamically. But none of these arguments mean squat to a user who just lost hours of work because of 'technical' issues and limitations.

mthornton
Offline
Joined: 2003-06-10
Points: 0

> losing data? How can we as developers champion Java
> as being so great when a user can bring it to its
> knees simply by creating too big a file?
>

Nothing gives you unlimited memory, there will always be a problem big enough to reach the limit however imposed. Therefore the application should be designed to gracefully handle the consequences of reaching that limit.
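A minimal sketch of handling the limit gracefully, as suggested above: catch OutOfMemoryError at the one boundary where a partial result can be discarded, free it, and report a clean failure instead of taking the whole application (and any unsaved work) down with it. The block sizes here are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

public class SafeLoad {
    // Returns the loaded blocks, or null if the heap limit was reached.
    static List<byte[]> tryLoad(int blocks) {
        List<byte[]> data = new ArrayList<>();
        try {
            for (int i = 0; i < blocks; i++) {
                data.add(new byte[1024 * 1024]);   // 1MB per block of "file"
            }
            return data;
        } catch (OutOfMemoryError e) {
            data.clear();   // release the partial result immediately
            return null;    // caller can show "file too large" and carry on
        }
    }

    public static void main(String[] args) {
        List<byte[]> doc = tryLoad(16);
        System.out.println(doc == null
                ? "Too large for the current heap"
                : "Loaded " + doc.size() + " blocks");
    }
}
```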

kcpeppe
Offline
Joined: 2003-06-15
Points: 0

> > it makes me sad that java doesn't release memory
> back
> > to the OS. :_(
>
>
> I've already said that isn't true.
> http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
>
> Look particularly at the -XX:MaxHeapFreeRatio
> By default this is 70%, which means if there is more
> than 70% free space in the heap, it will be reduced
> in size by releasing memory back to OS.

Maybe I've missed something but I don't see anything that says that memory is released back to the OS. What I do see is that the Java heap size is adjusted which fits with my experience that the JVM doesn't return memory to the OS. IIRC there are only a few special cases (mmap being one) where a process can get smaller.

alexlamsl
Offline
Joined: 2004-09-02
Points: 0

Well I've just tested his theory by setting -XX:MinHeapFreeRatio=10 and -XX:MaxHeapFreeRatio=20, and NB happily decreases its heap size upon GC :)

sm1
Offline
Joined: 2003-06-10
Points: 0

I completely agree with mhunsicker, and the need for having no predetermined max memory is much more important, if not a must have, for applications deployed to end users that are not Java developers or techies.

This new feature could be implemented with a JVM switch that enables no predetermined max memory.

The lack of such a feature in Java today leads me to suppose that there must be very few serious Java desktop applications for the general public out there. Btw, I've been developing in Java since 1996 but mostly for environments with fixed memory requirements. I now have to deploy a desktop app for the general public that badly needs such a feature (no predetermined max memory), otherwise lay persons will have to edit the java command line and change switches, a real issue.

The lack of such a feature, no predetermined max memory, is unacceptable after 10 years of existence.

ronaldyang
Offline
Joined: 2003-06-19
Points: 0

it makes me sad that java doesn't release memory back to the OS. :_(

mthornton
Offline
Joined: 2003-06-10
Points: 0

> it makes me sad that java doesn't release memory back
> to the OS. :_(

I've already said that isn't true.
http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html

Look particularly at the -XX:MaxHeapFreeRatio
By default this is 70%, which means that if there is more than 70% free space in the heap, it will be reduced in size by releasing memory back to the OS.

As far as I can tell these parameters have existed since at least 1.3.
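A sketch to observe the shrinking behaviour described above; run it with e.g. `java -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 ShrinkDemo`. Whether the committed heap actually falls back to the OS depends on the collector and JVM version in use, so this only prints the numbers for inspection rather than asserting a result.

```java
public class ShrinkDemo {
    static long committedMB() {
        return Runtime.getRuntime().totalMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        byte[][] blocks = new byte[64][];
        for (int i = 0; i < blocks.length; i++) {
            blocks[i] = new byte[1024 * 1024];   // commit ~64MB of heap
        }
        System.out.println("committed after allocation: " + committedMB() + " MB");
        blocks = null;   // drop every reference
        System.gc();     // request a full collection (only a hint)
        System.out.println("committed after GC:         " + committedMB() + " MB");
    }
}
```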

alexlamsl
Offline
Joined: 2004-09-02
Points: 0

Oh I see - I stand corrected.

Now I'll try and tune NetBeans using those parameters to see if I can get it to behave... ;)

olsonje
Offline
Joined: 2005-08-10
Points: 0

@alexlamsl

> Yes, I've got tons of RAM on my desktop as well - but
> I want myself, the user, to decide how much I'm going
> to allow that program that you have written to use.

See, that’s a totally bogus statement for 99.99% of all users out there. They don't CARE, as long as the program runs. Can I load up my 200-page Publisher file? Yep, sweet! Can I load up my 500-page thesis for grad school? Check! Can I load up my Java game? Yep! Wait, no, it just crashed because it threw an out-of-memory exception! POS!!

Users don't know how much memory things need. If you're lucky they read the boxes of the programs they purchase to see if they can run them, but most of them have no clue what those specs mean in the first place, and if it doesn't "just work", then it's not going to be around for long. Telling a user to edit this line in this file and increase it to this value, or add this to the shortcut, or whatever, is beyond expectations. Users will not tolerate that for long; they don't want the pain of dealing with those things. Why would they buy a program touting Java if their last 5 experiences with Java programs were completely crappy? They won't. They will stray away and never look back, and refer to it in very negative, harsh words that will only do harm to Java in the long run. It's like all the talk about Java being slow: yeah, in the old days it might have been a tad sluggish, but it's not really overly bad now... Yet look at all the techies out there who hold onto that initial impression without looking back. Checked out Slashdot lately to see what they, the viewers, think of Java? If not, get a reality check and go find some Java posts on there.

> There are times (and it's quite often) that "If it
> was bad enough that it caused me problems", I will
> simply end up having data losses (due to your program
> dragging the whole system), so it's irreversible
> damage right before I can "quit using it".

That can happen with or without the JVM having free use of memory as it's needed. Last time I checked I could still crash any program at random times and lose data. Part of running a system is knowing what you run. If you load up something that's just going to take as much memory as it can, then deal with it; as a most likely "power user" you know that can happen, yet you still ran it. Most folks equate a system getting sluggish or unresponsive to the system being old in the first place, but it's still a choice we consciously make each and every day.

An example: as a gamer I try to stay within a year or two of tech releases if possible, because I like to have the bleeding-edge games and have fun with them, yet I know that even my current system can't run this stuff right, nor can I expect my system to run a ton of things when my game is taking up gigs of memory just to do its thing. These are givens in today's society; people live with it.


iwadasn
Offline
Joined: 2004-11-09
Points: 0

Why not just add a setting that tells the VM to use as little memory as it can get away with? Then you can set the maximum to some astronomical value (1, 2, or even 100 GB). Beyond that sort of level you'd probably want to rethink your data structures anyway, so set it at the highest reasonable level where your underlying code would be able to cope, and then just count on the VM not to try to use all that memory if it doesn't have to.

That seems like the real solution to me. And for many programs, whose memory use is really not a function of their data set, the setting should be calculated appropriately and never touched again.
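For what it's worth, the gap between the ceiling and actual use can be observed from inside the VM: `Runtime.maxMemory()` reports the -Xmx ceiling, while `totalMemory()` reports the heap actually committed so far. A minimal probe along these lines (the class name HeapProbe is mine, not anything from the thread):

```java
// Sketch: observe how much of the -Xmx ceiling the JVM has actually
// committed. Run with e.g.:  java -Xmx800m HeapProbe
public class HeapProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();        // the -Xmx ceiling (reserved, not committed)
        long committed = rt.totalMemory();// heap actually committed so far
        long used = committed - rt.freeMemory();
        System.out.printf("max=%d MB committed=%d MB used=%d MB%n",
                max >> 20, committed >> 20, used >> 20);
    }
}
```

Running it with different -Xmx values shows that `max` tracks the flag, while on the Sun JVM the committed figure typically starts near the (much smaller) -Xms default rather than at the ceiling.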

olsonje
Offline
Joined: 2005-08-10
Points: 0

@mhunsicker
> I'm not following what's a use-case for wanting to
> limit memory. Yes, a program COULD use all your
> memory. So? Maybe that's what my program is supposed
> to do. That's why I have 2 GB in my machine: so my
> apps can use it.

That's exactly the case: we don't buy multiple gigs of RAM just to have it sit there and do nothing. If programs can't take advantage of it, there is no use for it beyond a point.

> I'm not against java helping the developer whenever
> possible, but forcing me to know up front my memory
> usage is just bad. What if you made a java-based Word
> processor? You're going to arbitrarily limit how long
> of a document the user can write? You have no way of
> knowing how much memory they will need.

This brings up an issue all its own. With all the articles on proper GC usage and tuning the correct amount of memory for your app's usage, can we afford to let the JVM just use all the memory the application wants? The second it starts eating gigs of RAM there are most likely going to be noticeable GC hits over the running life of the app. What would you tell your users then? This entire issue just sucks, honestly; it's a damned-if-you-do, damned-if-you-don't type deal, and there needs to be some way out of it.

I would _love_ not to have to worry about the amount of memory allocated to a program, but the cost is there if it just runs wild, because we pay for it in the overhead of the JVM that also provides us the platform we use daily. The flip side is looking at things like C/C++, where a program can just take and take and take and God only knows if the memory is ever going to be given back, especially with a shoddy developer; with the JVM that's less likely to be an issue, in theory.

I'm afraid there is no win with this.

mhunsicker
Offline
Joined: 2005-09-06
Points: 0

I just can't understand your position. Yes, ill-behaved programs can consume more memory than they should. But why should I not be allowed to use as much as I need? Honestly (and I'm not trying to be on the attack), what is a reason that a well-behaved (and stress-tested) program should not be allowed to use as much memory as it needs?

You talk about GC hits being a problem. OK, that's a technical issue. I'm not saying it's easy to solve. I'm asking: IF all the technical issues could be solved, why would I NOT want my application to have as much memory as it needs?

You keep coming back to the idea that allowing an app to use too much memory is always bad. I'm proposing that many apps use lots of memory because that's what they do. Would you not agree that some apps use lots of memory intentionally? Would you also agree that some apps have no way of knowing how much memory they will need?

And no, I'm not suggesting we bring back malloc. I just want my app to have the freedom to determine its memory usage at run-time (just like every other language that I know of).

mhunsicker
Offline
Joined: 2005-09-06
Points: 0

My last topic was to alexlamsl (in case you couldn't tell) :)

mthornton
Offline
Joined: 2003-06-10
Points: 0

> I agree this is an issue that needs to be solved.
> It's also really bad that Java currently can't free
> memory back to the OS without killing the VM.

What makes you think that? The current JVM [b]does[/b] release memory back to the OS.

alexlamsl
Offline
Joined: 2004-09-02
Points: 0

> What makes you think that? The current JVM
> [b]does[/b] release memory back to the OS.

Really? :-O

Now that you've asked - I get this impression directly from NetBeans' memory status bar; the maximum heap memory is always a monotonically increasing function of time.
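That impression is easy to test directly: allocate a chunk of heap, drop the reference, request a collection, and watch whether `Runtime.totalMemory()` (the committed heap) ever goes back down. Whether it shrinks depends on the collector in use and on flags like -XX:MaxHeapFreeRatio, so this sketch (class name mine) is a probe of behaviour, not a guarantee either way:

```java
// Probe: does the committed heap shrink after a large allocation
// becomes garbage? Behaviour varies by collector and heap-ratio flags.
public class ShrinkProbe {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        System.out.println("committed before:      " + (rt.totalMemory() >> 20) + " MB");

        byte[][] blocks = new byte[128][];
        for (int i = 0; i < blocks.length; i++) {
            blocks[i] = new byte[1 << 20];   // grow the heap by ~128 MB
        }
        System.out.println("committed after alloc: " + (rt.totalMemory() >> 20) + " MB");

        blocks = null;                       // make it all unreachable
        System.gc();                         // a hint, not a command
        Thread.sleep(1000);                  // give the collector a moment
        System.out.println("committed after gc:    " + (rt.totalMemory() >> 20) + " MB");
    }
}
```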

slasha
Offline
Joined: 2004-07-04
Points: 0

mthornton said:
> Why don't you try it and see what actually happens? As
> far as I can see the penalty for reserving 800m, when
> you in fact only use 50m, is quite small. As far as I
> know it works even when you don't have that much real
> memory --- I don't know if it checks at that point
> that the amount is less than available virtual memory.

Perhaps I was not clear enough. I agree that setting Xmx much larger than you will use does not actually impact (much, if at all) whether the VM grows to that size, or cause an increased likelihood of virtual memory usage. [As a side note, the VM does reserve the entire Xmx amount of address space, so there might be some impact from that, but I don't know what that impact might be.] What I was saying was that specifying a large Xmx simply lets the application grow to its needed size. If that size is large enough, virtual memory will then be used. Once virtual memory is used, Java (garbage collection, in particular) starts to perform very poorly. The original poster was asking for removal of the Xmx limitation, which does not really help his/her use case much if Java cannot efficiently deal with being swapped to virtual memory.

olsonje said:
> With 64bit machines just now really starting to reach
> the consumer level, you can't expect that to be a
> viable option at this point. In the future a few years
> from now, possible

Realize that the changes we are talking about here will not be in the JVM until 1.7 at the earliest. This is at least a couple of years away. I wish it could happen sooner, but experience tells me otherwise.

mthornton said:
> What sort of application, for the average home user,
> needs a heap size > 800m?

Lots more than I can think of, I'm sure. But here is a small sampling: word processors, image and video editing, databases (such as MS Access), Financial software (such as TurboTax). This doesn't even mention the biggest selling home application market of all: games. Sun is trying very hard to convince game makers to use Java and this memory limitation is one thing (albeit a smaller one, currently) that is preventing them from having much success.

slasha
Offline
Joined: 2004-07-04
Points: 0

Wow, quite a discussion. Let me explain my viewpoint... no, there is too much -- let me sum up.

mhunsicker said:
> I would like to see java dynamically allocate (and
> free) memory as its needs arise. Most of the
> applications I create work with files of varying
> size. Users can make large files or small files. They
> can also work with multiple files at a time. I have
> no way of knowing what’s a good maximum memory size.
> Some users can get by with 256 MB. Others require 1
> GB.
This is a very reasonable statement in my mind. There are several technical arguments made against it here, but it is basically a true statement of how Java should "Just Work". And here is the crux of my point: Sun has said that they want Java to become successful on the desktop. What mhunsicker is asking for is a NECESSITY if that goal is to be achieved. The average computer user doesn't have any idea about VM memory limits, maximum and minimum memory settings, etc. Expecting the user to know about and understand these things is fine if you only want the server market, where you have trained administrators. But for the desktop, i.e. off-the-shelf applications for use in people's homes, you cannot expect the users to be experts with computers.

alexlamsl said:
> I don't think eliminating Xmx is a good idea - it'll give
> Java applications the possibility to engulf enough memory
> to start to hinder other processes; this happens with
> non-Java applications, and is not a desirable behaviour
> as far as I'm concerned.

I agree that it can be useful (mostly during development) to limit the amount of RAM used by a Java application. I don't wish to remove this option, but rather would like a new option of "use whatever you can". If we ever want Java applications to be "first class citizens" of the desktop world, we must get rid of this -Xmx limitation.

olsonje said:
> This brings up an issue all to itself, with all the
> articles on proper GC usage and tuning the correct amount
> of memory for your app's usage, can we afford to let the
> JVM just use all the memory the application wants? The
> second it starts eatting gigs of ram there is going to
> most likely be noticable GC hits over the running life of
> the app. What would you tell your users then? This entire
> issue just sucks honestly, its a damned if you do damned
> if you don't type deal and there needs to be some way out
> of it.

I understand that there will be some performance degradation associated with using this option, which is another reason it [i]should be[/i] an option. But I dislike the argument that because it will be a bit slower, no one can have it, even if they need such an option. This also seems like an area that Sun could improve, if they spent enough time and effort on it.

mthornton said:
> I think this problem only really affects a small minority
> of applications. Specifically where the heap requirements
> may approach the amount of contiguous address space
> available to a client or where a combination of java heap
> plus varying memory requirements of native code in the
> same process approach this limit. Where this does not
> apply you can use something like -Xmx800m and your
> clients will able to work on some pretty big documents
> without concern.

I agree this problem affects a small minority of [b]current[/b] applications, because if you want to develop a useful desktop application today, you would be foolish to use Java. The question is, does Sun wish to remove these limitations (of which this is one of the biggest) so that Java can be considered a reasonable choice for such applications? Unfortunately, I can't just set "-Xmx800m" because my customer may not have 800m of RAM, and Java's current garbage collection strategies, as you later point out, perform very badly with respect to virtual memory. So, generally, it is in my best interest to keep my Java application from being swapped to disk.

mthornton said:
> The more general problem is that garbage collection that
> provokes extensive virtual memory paging is expensive. I
> don't know what can be done about this (and perhaps it
> deserves a separate topic).

This, I think, ends up being the primary issue that Sun must solve. Being able to use lots of memory doesn't help very much if the application proceeds to hang for 5 or 10 minutes because of a GC that must page memory in and out (thrash its memory). With better behavior when using virtual memory, and 64-bit VMs, this problem would be effectively solved by specifying a very large -Xmx value.

mthornton
Offline
Joined: 2003-06-10
Points: 0

> Unfortunately, I can't just set "-Xmx800m" because
> my customer may not have 800m of RAM, and Java's
> current garbage collection strategies, as you later
> point out, perform very badly with respect to virtual

Why don't you try it and see what actually happens? As far as I can see the penalty for reserving 800m, when you in fact only use 50m, is quite small. As far as I know it works even when you don't have that much real memory --- I don't know if it checks at that point that the amount is less than available virtual memory.

olsonje
Offline
Joined: 2005-08-10
Points: 0

>Why when ridiculous amounts of contiguous address space
>are available on 64 bit systems? Contiguous address
>space is only a problem on 32 bit systems because the
>address space itself is small and then halved by the
>design of Windows and then further fragmented so we end
>up with little more than a quarter of what you might
>have expected from 32 bits.

With 64-bit machines just now really starting to reach the consumer level, you can't expect that to be a viable option at this point. A few years from now, possibly; but currently, for the vast majority of folks who might interact with a random Java application, 32-bit is what we must plan for.

>You could also use a different 32 bit operating system
>(such as Solaris) and get perhaps 3GB or even more heap.

Yes, I could if I wanted to use a different OS. For the average user, there's not a chance of that happening. I'm more likely to get them to buy a Mac before I could get them to install and use Solaris daily. Again, it's the average home user I'm talking about.

mthornton
Offline
Joined: 2003-06-10
Points: 0

> Solaris daily. Again this is the average home user
> I'm talking about.
What sort of application, for the average home user, needs a heap size > 800m?

olsonje
Offline
Joined: 2005-08-10
Points: 0

> > Solaris daily. Again this is the average home user
> > I'm talking about.
> What sort of application, for the average home user,
> needs a heap size > 800m?

Any type of multimedia application could; lots of folks are starting to learn they can edit videos. Games could require that; last I saw, games now come with recommended requirements of 1 GB of RAM. Then there's anyone who does publishing for their own reasons, be it religious, school, or social events (like Girl Scouts, Cub Scouts, daycare, school plays, PETA, etc.). Depending on how these things work, they could all use a ton of memory.

mthornton
Offline
Joined: 2003-06-10
Points: 0

> > needs a heap size > 800m?
>
> Any type of multimedia application could, lots of
> folks are starting to learn they can edit videos.
> Games could require that, last I saw games have come
> recommended with 1g ram requirements now. Anyone who
> does publishing for their own reasons be it
> religious, school, social events(like girl scouts,
> cub scouts, daycare, school plays, peta, etc...).
> Depending on how these things work, they could all
> use a ton of memory.

There is a difference between how much an application could use (with lazy programming) and how much they
[i]need[/i] to use. Given the issues with large heaps, garbage collection and virtual memory, I think that designing general use applications of this size may be unwise.

olsonje
Offline
Joined: 2005-08-10
Points: 0

> There is a difference between how much an application
> could use (with lazy programming) and how much they
> [i]need[/i] to use. Given the issues with large
> heaps, garbage collection and virtual memory, I think
> that designing general use applications of this size
> may be unwise.

Games are a MAJOR reason folks have computers. As was stated above, Sun is trying its hardest to get game developers to use Java; in fact they have a Chief Gaming Officer (*1), people working in R&D for gaming technology (*2), and a very extensive game development community (*3). Games are a fact of life on computers, and any modern game uses hundreds of megs of RAM, even gigs if at all possible. 800m isn't squat for a game, especially in the coming years.

*1) http://blogs.sun.com/roller/page/ChrisM
*2) http://blogs.sun.com/gameguy
*3) http://www.javagaming.org/forums/index.php

iwadasn
Offline
Joined: 2004-11-09
Points: 0

True, but games know how much RAM they are expected to use. If they don't know the value, they can just set the limit high enough that it won't be a problem, and hope that the VM decides to be frugal.

olsonje
Offline
Joined: 2005-08-10
Points: 0

> true, but games know how much ram they are expected
> to use. If they don't know the value, they can just
> set the limit high enough that it won't be a problem,
> and hope that the VM decides to be frugal.

That's not exactly true at all. You might know a minimum that is acceptable, sure, and you could set that. But you can't just assign it some arbitrary max size and hope the machine supports it. The issue is that yes, you might be able to run a game with 512m, but if you can give it 2g it will run far better. You can't assume that 2g is available on all machines, though; doing so would be a fatal assumption for the most part.

kcpeppe
Offline
Joined: 2003-06-15
Points: 0

> I would like to see java dynamically allocate (and
> free) memory as its needs arise. Most of the
> applications I create work with files of varying
> size. Users can make large files or small files. They
> can also work with multiple files at a time. I have
> no way of knowing what’s a good maximum memory size.
> Some users can get by with 256 MB. Others require 1
> GB.
>
> I know this poses numerous technical problems for
> the JVM. But frankly, its embarrassing to have to
> deal with a customer who says my application threw an
> out of memory exception when they have 2 GB RAM on
> their machine, nothing else running, and my app was
> only consuming 512 MB.
>
> This is the single biggest issue with the JVM. And
> while there is work-around, it requires the user
> to crash, lose their work, and change an obscure
> setting that I don't want to expose to them.
>

I think you've raised a number of interesting questions that sort of get all tangled together. Let's see if we can decouple them.

First, I think it would be nice to be able to tell the JVM to consume all the memory it needs with, say, another obscure setting like -Xmx0. That said, if you do this you have to understand that you may be setting yourself up for other problems (GC pauses, for one).

Right now there is a workaround for this: just set -Xmx to the amount of RAM that you have. I don't know how you interact with your customers, but if this is a problem that you are running into, then maybe the install process should include a step that sets this value.
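An installer could script that step by asking the JDK itself for the physical RAM size. `getTotalPhysicalMemorySize()` lives on the `com.sun.management` extension of OperatingSystemMXBean, so this sketch is Sun/Oracle-JDK-specific; the class name and the 70% factor are my own choices for illustration, not anything from the thread:

```java
import java.lang.management.ManagementFactory;

// Sketch for an install-time step: derive an -Xmx flag from the
// machine's physical RAM. Uses the com.sun.management extension of
// OperatingSystemMXBean, which is specific to Sun/Oracle JVMs.
public class XmxFromRam {
    public static void main(String[] args) {
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                        ManagementFactory.getOperatingSystemMXBean();
        long ramBytes = os.getTotalPhysicalMemorySize();
        long xmxMb = (ramBytes * 70 / 100) >> 20;  // 70% of RAM, in MB
        // The installer would write this into the launcher script or shortcut.
        System.out.println("-Xmx" + xmxMb + "m");
    }
}
```

This still leaves mhunsicker's objection below intact: the value is computed once, so it goes stale if the user later adds RAM.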

On the GC issues: most applications will reach some steady state in the amount of memory that they consume, and they will stay there most of the time. Once in that state, GC should work very well. However, if you have a slow memory leak, the system will destabilize and you'll have to intervene; but at that point you have a bigger problem than long GC pauses. The move to larger heap spaces is driving research efforts to solve the pause-time issue, so that is going to go away in time.

However, the biggest issue that you raise is the ability to give memory back to the OS. Shrinking the size of an executable opens the door for segmentation violations. In order for this not to happen, something has to reorganize (compact) the executable to make sure that nothing is pointing to the memory that is being returned. Given how Java heap spaces are maintained in the executable, it's not so easy to return memory inexpensively and safely, and an OOME is much easier to solve than a segv.

mhunsicker
Offline
Joined: 2005-09-06
Points: 0

kcpeppe,
You're right, we could just set it to the maximum amount of memory at installation time. While that would probably solve the problem most of the time, what if the user later adds more memory? Specifically, what if they add it because they realize that the way they use my program requires more memory? Then I'm back to telling them about this setting.

Maybe here's where the hang-up is: I do not believe in exposing Java issues to my users. My users should have no idea what language I'm using. I have no idea what language my car's ECU is written in, and I don't want to know. So to me, telling the user how to change this setting is unacceptable and embarrassing. I say embarrassing because no other (non-Java) programs have to be told to use the memory that's inside the computer. The average non-programmer would probably think that's odd. To continue my car analogy, it's like buying a car that I have to tell to use all of the gas in the tank. Well, of course I want it to use all the gas.

Aside from the fact that I don't want the user to even know about those settings, it's very dangerous. We have fine-tuned most of those settings to work properly, and changing some of them will introduce problems. I don't want the user changing them, either intentionally or accidentally, and then screwing up my app. While some people know what they're doing, my users are not Java-savvy. They're more likely to screw something up.

So to reiterate: a user will have to crash, lose data, call our tech support, be told about this obscure setting that's actually hidden by our installer, then go change it. How is that better than there simply never having been a problem?

Again, I recognize there are technical GC issues. My whole point is that I want Sun to find a solution to them. There may be no perfect solution, but an occasional pause is better than a crash.

alexlamsl
Offline
Joined: 2004-09-02
Points: 0

What if Java had an adaptive default value for Xmx, and that value were sufficiently high (say, 60%~80% of your total RAM)? Would that be a good compromise?