Posted by editor
on July 9, 2013 at 11:30 PM PDT
This past week one of our aging quad-core PCs (running Debian 6) ceased sending out video signals. Since the computer is fairly old, and since at any given moment several of our computers are idle, I decided to shuffle the computers that have working video, and possibly use the old quad as a headless machine.
The computer that partially failed was for both work and fun -- online video games were part of the fun. In recent months, we'd noticed that games with active graphics seemed not to work as well. Without really investigating, I attributed this to either changes in the Flash player, or changes in the game software.
Anyway, that machine could no longer be used for anything that required a monitor, so I switched its boot disk with the one in our other quad-core and booted it up. That machine was also running Debian 6, and though the hardware was somewhat different, it booted and put the correct images up on the monitor.
So, let the gaming begin, right? Well -- the performance was terrible, much worse than on the old quad-core. The game was almost unplayable. Yes, it could have been drivers, etc. But this was already taking too much time -- the game had to be played!
So, as an experiment, we tried the game on our fairly new, but quite low end, Windows machine. No quad-core, just a basic HP desktop running Windows 7. Lo and behold, the game worked great, better than it had in a very long time!
How could this be? The game runs best on the lowest-end computer? Could it be, perhaps, that the game was somehow tuned for Windows? That was possible, but surely not certain...
It was a sufficiently interesting problem that I started investigating the processors on the three machines. Here's what I found (note: I write this while on travel, so the numbers aren't exact):
- Quad-Core 1 (the one with the now-broken video capability) has 4 cores with a processing speed in the 2.3 GHz range;
- Quad-Core 2 (the replacement where the game performed terribly) has 4 cores operating at a significantly lower range;
- the Windows 7 machine identified itself as having a single processor running at a speed of about 3.1 GHz.
There's the answer, right? Or, at least part of it. The game uses a single processor, so it performs best on the machine that has the fastest individual processor.
So, what's this have to do with Lambda Expressions? Everything. Consider this same game, written in JavaFX, utilizing Java 8 lambda expressions for its computational routines. Assuming equal graphics speed, and no limitations on how many processors the game has access to, on which computer will the multithreaded version of the game perform best? Surely the quad-core with the highest processor speed (i.e., the most powerful computer in the house) -- which is as it should be (more powerful computer equals better performance).
Software developers were "saved" for decades by ever-increasing processor speeds, which meant ever-more-complex software could be developed using the same old single-thread techniques. But once the limits of how thin silicon could be sliced began to be approached, the steady doubling of processor speeds every few years was no longer possible.
Enter the multicore processor: the processor's overall power and processing capacity continues to increase, but the processing is divided among multiple processing cores.
But that's a problem for developers: the same old single-thread development practices don't take advantage of the multiple cores. And, those of us who have spent years working on multithreaded / parallelized software know how difficult it is to manage threads, make computation functions thread-safe, etc. Then there's the problem of debugging threading issues!
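To make the pain concrete, here's a minimal sketch of what manually parallelizing even a trivial computation looked like before Java 8: a hypothetical sum-of-squares workload split by hand across a fixed thread pool, with anonymous `Callable` classes, explicit chunking, and blocking `Future.get()` calls the developer must get right.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ManualParallelSum {
    // Sum the squares of 0..n-1, manually divided among worker threads.
    static long sumOfSquares(int n, int nThreads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        try {
            int chunk = (n + nThreads - 1) / nThreads; // ceiling division
            List<Future<Long>> futures = new ArrayList<>();
            for (int t = 0; t < nThreads; t++) {
                final int start = t * chunk;
                final int end = Math.min(n, start + chunk);
                // Pre-lambda style: an anonymous Callable for each chunk.
                futures.add(pool.submit(new Callable<Long>() {
                    @Override
                    public Long call() {
                        long sum = 0;
                        for (int i = start; i < end; i++) {
                            sum += (long) i * i;
                        }
                        return sum;
                    }
                }));
            }
            long total = 0;
            for (Future<Long> f : futures) {
                total += f.get(); // blocks; may throw, must be handled
            }
            return total;
        } finally {
            pool.shutdown(); // forgetting this leaks threads
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOfSquares(1000, 4));
    }
}
```

Every line of this scaffolding -- pool sizing, chunk boundaries, result collection, shutdown -- is an opportunity for the kind of threading bug that is notoriously hard to debug.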
But it's a multicore world now. How in the world are we going to retrain millions of software engineers about how to write thread-safe software?
This is the brilliance of Lambda Expressions: they make that retraining unnecessary, by providing Java developers with syntactic tools that indicate a given block of code may be run in parallel. The JVM itself then takes care of the parallelization.
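In practice, the syntax is a lambda expression handed to the Java 8 streams API; a single call to `parallel()` tells the runtime it may split the work across cores. Here's the same hypothetical sum-of-squares computation, with all of the thread-pool scaffolding gone:

```java
import java.util.stream.LongStream;

public class ParallelSum {
    // Sum the squares of 0..n-1 using a parallel stream.
    static long sumOfSquares(int n) {
        return LongStream.range(0, n)
                         .parallel()       // runtime may split work across cores
                         .map(i -> i * i)  // lambda: the per-element computation
                         .sum();           // results combined for us
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1000));
    }
}
```

The developer states *what* to compute; the runtime (via the common fork/join pool under the hood) decides how to divide it among the available cores.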
Can't get much better than that, can you?
-- Kevin Farnham (@kevin_farnham)