
64-bit double math performance

9 replies
Joined: 2005-03-14

I have a science application that uses double math almost exclusively for some pretty heavy duty and long-running analyses.

I thought that by using a 64-bit OS and Java VM I would naturally see about a 2x performance boost, but this is NOT the case. Performance actually degrades a bit, which, as Sun's 64-bit FAQ explains, is due to the increased pointer size.

Why is there no performance increase for double math on the 64-bit VM? Doubles are 64 bits in size, and the underlying 64-bit machine has a 64-bit data path. I would imagine there is now one memory fetch per value rather than two.

What is going on here?
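To make the comparison concrete, here is a minimal micro-benchmark sketch one could run under both a 32-bit and a 64-bit JVM (the class name, kernel, and iteration counts are my own choices, not from the thread):

```java
// Micro-benchmark sketch: time a tight double-arithmetic loop.
// The kernel and loop sizes are arbitrary illustrative choices.
public class DoubleMathBench {
    static double kernel(int n) {
        double acc = 0.0;
        for (int i = 1; i <= n; i++) {
            acc += 1.0 / (i * (double) i);   // some double work per iteration
        }
        return acc;
    }

    public static void main(String[] args) {
        kernel(1_000_000);                   // warm up so the JIT compiles the loop
        long t0 = System.nanoTime();
        double r = kernel(100_000_000);
        long t1 = System.nanoTime();
        System.out.printf("result=%.6f time=%d ms%n", r, (t1 - t0) / 1_000_000);
    }
}
```

Running the same class under the 32-bit and the 64-bit java binary and comparing the reported times keeps everything else equal; warming up first keeps JIT compilation out of the measured region.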


Joined: 2004-03-04

Doubles are already 64-bit native in a 32-bit environment. You should expect to see some improvement for 64-bit integers, but not necessarily for floating point.

It also depends on what architecture you are using, but both SPARC and Opteron have native 64-bit floating point even in 32-bit mode.

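The point about 64-bit integers can be illustrated with a long-heavy kernel: on a 32-bit JVM each 64-bit shift and xor has to be synthesized from several 32-bit instructions, while a 64-bit JVM executes each as one native instruction. This sketch (an xorshift-style mixer, my own example rather than anything from the thread) is the kind of workload where a difference would show up:

```java
// Long-arithmetic kernel: 64-bit shifts and xors dominate the loop.
// On a 32-bit JVM each op needs several 32-bit instructions;
// a 64-bit JVM executes them as single native instructions.
public class LongMixBench {
    static long mix(long x) {            // xorshift-style 64-bit mixer
        x ^= x << 13;
        x ^= x >>> 7;
        x ^= x << 17;
        return x;
    }

    public static void main(String[] args) {
        long x = 0x9E3779B97F4A7C15L;    // arbitrary nonzero seed
        long t0 = System.nanoTime();
        for (int i = 0; i < 100_000_000; i++) {
            x = mix(x);
        }
        long t1 = System.nanoTime();
        System.out.println("x=" + x + " time=" + (t1 - t0) / 1_000_000 + " ms");
    }
}
```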

Joined: 2005-10-05

A CPU does not load more data when switching from 32- to 64-bit mode.
The memory path on the 'standard' PC has been 64 bits wide for a long time, and a read is done in whole cache lines, 64-128 bytes, as it has been on x86 for many years now.

x86 floating point has three operand sizes: 32, 64, and 80 bits. No change from 32- to 64-bit mode there.

It is to be expected that certain tasks run slower in 64-bit mode because the pointers are bigger. In general it is estimated that roughly 30% more memory is needed after switching to 64-bit mode for this reason, so the CPU caches need to read from RAM more often, and likewise the OS needs to page to disk more often.

That's why these memory allocation sizes differ:

String char length:   32-bit:    64-bit:
0                     40 bytes   64 bytes
8                     56 bytes   80 bytes

gustav trede
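The per-String numbers above can be checked roughly from inside Java itself. This sketch (my own, not from the thread) estimates the bytes retained per small String by diffing heap usage before and after allocating a batch; the exact figure depends on JVM version and bitness:

```java
// Rough per-object footprint estimate via heap-usage deltas.
// Results are approximate and vary with JVM version, bitness, and GC timing.
public class StringFootprint {
    static long used() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    static long bytesPerString(int n, int chars) {
        String[] keep = new String[n];             // keeps the Strings reachable
        System.gc();
        long before = used();
        for (int i = 0; i < n; i++) {
            keep[i] = new String(new char[chars]); // force a fresh instance each time
        }
        long after = used();
        if (keep[0] == null) throw new AssertionError(); // defeat dead-code elimination
        return (after - before) / n;
    }

    public static void main(String[] args) {
        System.out.println("approx bytes per 8-char String: " + bytesPerString(200_000, 8));
    }
}
```

Running this under a 32-bit and a 64-bit JVM should show figures in the same ballpark as the table above, with the 64-bit numbers noticeably larger.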

Joined: 2005-09-22


The first thing that came to my mind: are you using a 32-bit or a 64-bit JVM?

If you're using a 32-bit JVM, you won't take any advantage of your 64-bit CPU. One more thing: people usually upgrade to a 64-bit JVM+CPU to take advantage of the huge address space the architecture offers.


Joined: 2003-06-10

I would say the time it takes for your data to travel from memory to the computation unit is insignificant compared to the time it takes to actually perform the computation, which would explain the behavior you describe. Also, depending on the algorithm your application implements and on the size of your cache, frequently accessed data may be cached and therefore read very quickly in both cases, making no difference to the final duration of your procedure.
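That cache argument can be made visible with a small experiment (my own sketch, not from the thread): perform the same total number of double additions over a cache-resident array and over one far larger than any cache of that era, and compare the times:

```java
// Same arithmetic work over a cache-resident vs. a cache-busting array.
// Array sizes and pass counts are illustrative choices.
public class CacheEffect {
    static double sum(double[] a, int passes) {
        double s = 0.0;
        for (int p = 0; p < passes; p++)
            for (double v : a) s += v;
        return s;
    }

    static void time(String label, double[] a, int passes) {
        long t0 = System.nanoTime();
        double s = sum(a, passes);
        long t1 = System.nanoTime();
        System.out.printf("%s: sum=%.0f time=%d ms%n", label, s, (t1 - t0) / 1_000_000);
    }

    public static void main(String[] args) {
        double[] small = new double[4 * 1024];        // ~32 KB, fits in cache
        double[] big   = new double[4 * 1024 * 1024]; // ~32 MB, far beyond cache
        java.util.Arrays.fill(small, 1.0);
        java.util.Arrays.fill(big, 1.0);
        // Equal total element count, so the arithmetic work is identical.
        time("small", small, 1024);
        time("big",   big,   1);
    }
}
```

If the big array takes noticeably longer per element, the workload is memory-bound and a wider data path could matter; if the times are close, the computation itself dominates, as suggested above.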

Joined: 2007-01-10

I ran some benchmarks comparing the 64-bit and 32-bit JVMs on my Ubuntu Feisty 64-bit (with a Core 2 Duo), and I saw an improvement on arithmetic operations (+, -, *, /). I don't remember exactly, but I think it was 30% or 40%.

I'm quite surprised by your post.

Joined: 2004-01-07

> I ran some benchmarks comparing the 64-bit and 32-bit
> JVMs on my Ubuntu Feisty 64-bit (with a Core 2 Duo),
> and I saw an improvement on arithmetic operations
> (+, -, *, /). I don't remember exactly, but I think
> it was 30% or 40%.

Floating-point or integer arithmetic?

lg Clemens

Joined: 2004-01-07

I don't know which platform you are running on, but I guess it's AMD64.
The whole SSE2 stuff is the same in 32- and 64-bit mode, so there simply is no reason why it should be faster or slower (from the FP point of view).

lg Clemens