
Huge page support on Linux does not work

sevenm
Joined: 2003-06-10

Hello,

After reading http://blogs.sun.com/roller/page/dagastine?entry=java_se_tuning_tip_large I was trying to set up my application to use huge pages on SuSE Linux, but it does not seem to work. Any ideas?

My environment:

apollo:~ # uname -a
Linux apollo 2.6.5-7.252-smp #1 SMP Tue Feb 14 11:11:04 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux
apollo:~ # cat /etc/SuSE-release
SUSE LINUX Enterprise Server 9 (x86_64)
VERSION = 9
PATCHLEVEL = 3
apollo:~ # cat /proc/meminfo | grep Huge
HugePages_Total:   150
HugePages_Free:    150
Hugepagesize:     2048 kB

The application runs under a regular user, but that user has no ulimit restrictions:

ulimit -a
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) unlimited
max memory size       (kbytes, -m) unlimited
open files                    (-n) 1024
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) unlimited
cpu time             (seconds, -t) unlimited
max user processes            (-u) 16382
virtual memory        (kbytes, -v) unlimited

However, when I start my app, I get:

Java HotSpot(TM) 64-Bit Server VM warning: Failed to reserve shared memory (errno = 22).
java version "1.6.0-rc"
Java(TM) SE Runtime Environment (build 1.6.0-rc-b95)
Java HotSpot(TM) 64-Bit Server VM (build 1.6.0-rc-b95, mixed mode)

The relevant command line arguments for my app are:

java -showversion -Xms128M -Xmx256M -Xss128K -Xmn64M -XX:+UseLargePages -cp ....

Regards,
Horia

dfoster
Joined: 2003-09-11

Your env output says:
Hugepagesize: 2048 kB
HugePages_Total: 150

Which means you have a total of 2048 KB * 150 = 300 MB, which (probably) means you don't have enough pages for the size of the JVM.

As far as I understand it, when you turn on huge page support, the huge page pool needs to be as big as the maximum amount of memory the JVM will consume. This means not only the heap but all the memory the JVM will use. You may want to try cranking up the number of pages.
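As a rough sketch of that sizing (the 192 MB non-heap overhead figure below is my own guess, not a measured number; the real overhead depends on permanent generation, code cache, and thread stack settings):

```shell
# Rough sizing for: java -Xms128M -Xmx256M ... -XX:+UseLargePages
# 256 MB of heap plus an assumed 192 MB of non-heap JVM overhead.
HEAP_MB=256
OVERHEAD_MB=192
PAGE_KB=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
PAGES=$(( (HEAP_MB + OVERHEAD_MB) * 1024 / PAGE_KB ))
echo "need at least $PAGES huge pages"

# Reserve them (as root), then verify the kernel actually granted them,
# since HugePages_Total can come up short on a fragmented system:
#   echo $PAGES > /proc/sys/vm/nr_hugepages
#   grep Huge /proc/meminfo
```

With the 2048 KB pages shown above, that works out to 224 pages, well past the 150 currently configured.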

The other problem you may be suffering from is that even though the pages are (supposed to be) reserved, they may not be contiguous. The current JVM requires that the memory allocated to it is contiguous. So whether or not you increase the number of pages, you should reboot and fire up your app right away, before the memory gets fragmented.

The bottom line is that huge pages work. It just might take a little fiddling to get going. Believe me, I know: it took me a couple of days to read all the literature and figure out all the right knobs to turn.
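One knob worth checking for that particular warning (this is my guess, not something from the JVM docs): errno 22 is EINVAL, and on Linux HotSpot reserves large pages through SysV shared memory, where shmget() fails with EINVAL when the requested segment is bigger than the kernel.shmmax limit. So shmmax has to cover the whole reservation:

```shell
# errno 22 is EINVAL, which shmget() returns (among other cases)
# when the requested segment size exceeds kernel.shmmax.
cat /proc/sys/kernel/shmmax

# Raise it (as root) to cover the reservation, e.g. 1 GB:
#   echo 1073741824 > /proc/sys/kernel/shmmax
```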

briand
Joined: 2005-07-11

> The other problem you may be suffering from is that even though the pages are (supposed to be) reserved
> they may not be contiguous. The current JVM requires that the memory allocated to it is contiguous. So
> whether or not you increase the number of pages, you should reboot and fire up your app right away,
> before the memory gets fragmented.

When we say that the JVM needs contiguous memory, we are really talking about contiguous virtual memory in the process address space. This often gets confused with contiguous physical memory, but they are two different concepts. Contiguous process virtual memory pages can easily be constructed of non-contiguous physical pages and the JVM has no clue that this happens (and rightfully so).

That's not to say that the fragmentation issue dfoster is alluding to is not important. In fact, it is, but it's a different issue from the JVM's need for contiguous process virtual address space.

As the OS satisfies virtual memory allocation requests, the physical memory managed by the OS can get fragmented such that a large page allocation, which requires contiguous physical memory, can't be satisfied. Some OSes can coalesce multiple contiguous small physical pages into a large page; others cannot. For those that can't coalesce, or for those that can but have too much physical memory fragmentation to coalesce successfully, a reboot is typically the best way to get back to an unfragmented state. As dfoster says, firing up your app as soon as possible after the reboot is the best way of guaranteeing that you get large pages.

It's important to note, though, that with enough memory pressure, some OSes will 'shred' large pages, converting them to standard-sized pages, to meet memory demands. So, if you are depending on the performance gains from large pages, you'll want to make sure there isn't a lot of competing memory demand on the system.

HTH
Brian

djjd
Joined: 2006-02-15

Horia,
Did you try running as root? Our testing has shown that root permissions have been needed when running on SuSE. Let us know if that's the case and we'll try to find the necessary configuration changes for SuSE.
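If root does work, one possible non-root workaround to investigate (I'm not sure it's available on the 2.6.5 SLES9 kernel; it appeared in later 2.6 kernels) is vm.hugetlb_shm_group, which grants one group permission to create huge-page-backed shared memory segments:

```shell
# All commands below need root. The gid 1000 is an assumed example;
# use the gid of a group your JVM user actually belongs to.
#   echo 1000 > /proc/sys/vm/hugetlb_shm_group
# The knob only exists on kernels that support it; check with:
#   ls /proc/sys/vm/hugetlb_shm_group
```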

-dave