Posted by editor
on September 25, 2013 at 1:25 AM PDT
There are a great many sessions at JavaOne 2013 that feature JavaFX. But only one of them dared people to attend. That session was "No Guts, No Glory! A Deep Dive into the Internals of JavaFX [BOF5802]", presented by Steve Northover, Kevin Rushforth, and Richard Bair (all of Oracle). At first, I wondered if the challenge meant I'd arrive at the session only to find myself turned away because the room was already filled to capacity. But no such luck!
So, after I entered, as the three peered my way from the front of the room (the session was already under way), I ducked behind the last row of seats and, still maintaining a low profile, crept forward until I found a seat that was partially obscured from their view, but which still offered a vantage point from which I could conveniently observe the proceedings.
Indeed, this was a session which exposed (sometimes seemingly without intention) some of the historical guts of JavaFX, along with its current guts. There were several moments when one speaker strayed a bit too far into the past, and started to reveal secrets the team preferred to keep buried -- at which point another of the three would step in with a word or two to get the speaker refocused on the bullet points on the slide deck.
Seriously, though, this was a superb session for anyone who has followed JavaFX throughout its history (as in, at least back to the 1.x days), and who's wondered how in the world they get all those fancy dancing Dukes to come into being. Did you know, for example, that a mostly static screen -- save for a sine wave of constantly varying frequency and amplitude running from the upper left corner to the bottom right corner -- is a very problematic situation for JavaFX (and, undoubtedly, for other graphics rendering libraries) on a great many graphical output devices? That silly little wiggling line produces a dirty region -- an axis-aligned bounding box around the path -- that, because the line runs diagonally, spans the entire screen from upper left to lower right. And every time the amplitude or frequency of the line changes, the pixels for that entire huge portion of the screen must be recomputed, even though the rest of the screen isn't changing at all. This requires lots of memory. It's like having to recreate very big objects repeatedly, quickly...
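The arithmetic behind that is easy to sketch in a few lines of plain Java (my own back-of-the-envelope illustration, not JavaFX internals -- the class, method names, and screen size are all invented): with axis-aligned dirty regions, a thin wave drawn horizontally dirties only a narrow strip, while the very same wave drawn corner to corner dirties the bounding box of the whole diagonal, i.e. essentially the entire screen.

```java
// Hypothetical sketch of axis-aligned dirty-region cost (not JavaFX code).
public class DirtyRegionDemo {
    static final int W = 1920, H = 1080; // a hypothetical screen

    // Horizontal wave: the dirty strip is only 2 * amplitude pixels tall.
    static long horizontalWaveDirty(double amplitude) {
        return (long) W * Math.min(H, (int) Math.ceil(2 * amplitude));
    }

    // Corner-to-corner diagonal wave: the axis-aligned box around the
    // path spans the full screen (the amplitude only pushes the box
    // slightly past the corners, which clamping removes again).
    static long diagonalWaveDirty(double amplitude) {
        return (long) W * H;
    }

    public static void main(String[] args) {
        double amp = 20.0;
        System.out.println("horizontal dirty px: " + horizontalWaveDirty(amp)); // 76800
        System.out.println("diagonal dirty px:   " + diagonalWaveDirty(amp));   // 2073600
    }
}
```

For a 20-pixel wave on this hypothetical 1920x1080 screen, the diagonal orientation dirties 27 times as many pixels as the horizontal one -- every frame.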
And consider: for each new frame of the diagonal sine wave, you also have to redo the antialiasing computations in order to avoid a horrid jagged-pixel effect. Heaven forbid I throw in the added wrench of requiring each peak of my sine wave to be labeled with text that reads "Amplitude"! Antialias moving characters as well??? It's enough to make a JavaFX architect throw up his hands in dismay -- and move to the next slide.
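The antialiasing cost is easy to see with a toy coverage computation -- a hedged sketch of generic supersampling, not JavaFX's actual rasterizer (the class and method are mine): each pixel the edge crosses gets a gray level proportional to how much of the pixel lies on one side of the edge, and every time the edge moves, every boundary pixel's coverage must be re-estimated.

```java
// Toy supersampled coverage estimate (illustrative only, not JavaFX's rasterizer).
public class CoverageDemo {
    // Fraction of the unit pixel at (px, py) lying strictly below the
    // 45-degree edge y = x, estimated with an n-by-n grid of samples.
    static double coverage(int px, int py, int n) {
        int inside = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double x = px + (i + 0.5) / n; // sample center within the pixel
                double y = py + (j + 0.5) / n;
                if (y < x) inside++;
            }
        }
        return (double) inside / (n * n);
    }

    public static void main(String[] args) {
        // A pixel the edge passes straight through gets a partial gray level
        // (strict inequality excludes samples lying exactly on the edge):
        System.out.println(coverage(5, 5, 4));  // 0.375
        System.out.println(coverage(10, 2, 4)); // 1.0 (fully below the edge)
        System.out.println(coverage(2, 10, 4)); // 0.0 (fully above the edge)
    }
}
```

Multiply that inner loop by every boundary pixel of a full-screen diagonal wave, sixty times a second, and the architects' dismay starts to make sense.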
Then there's that damned stuff made of molecules that you have to cope with: hardware. If only the world were made of nothing but software! Then we wouldn't have to cope with problems such as the design of modern high-end graphics boards -- which are tuned for great performance in displaying the kind of graphics output by modern games, not dumb fluctuating diagonal sine waves.
At that moment, I decided that my very next experiment with JavaFX was going to be to make a simple dynamic XY plot of that labeled fluctuating diagonal sine wave... And I began to think of other devious complications -- "enhancements" to the design, so to speak...
Sensing this, the JavaFX team shifted their strategy, and moved to slides talking about the future. You see, today, JavaFX only utilizes two threads: a UI thread that works with FX nodes, and a Render thread that works with NG nodes (which need a name change, since "NG" stands for "New Graphics" and that name is old, not new). But, as we all know, many modern devices have many more than two cores. So, what if JavaFX was enhanced such that it could use all available cores? Then the graphics would render that much faster.
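That two-thread split can be caricatured in plain Java (the names Snapshot and runFrames are mine, not JavaFX internals -- this is a sketch of the general handoff pattern, assuming a synchronous pulse): the UI thread mutates scene state, then at each "pulse" hands an immutable snapshot across to the render thread, so the renderer never sees a half-updated scene.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.SynchronousQueue;

// Hypothetical caricature of the UI-thread / render-thread pulse (not JavaFX code).
public class TwoThreadPulse {
    // Immutable state handed across the thread boundary at each pulse.
    record Snapshot(double amplitude, double frequency) {}

    // The caller plays the "UI thread": it animates the wave and publishes
    // one snapshot per pulse; the "render thread" consumes them in order.
    static List<Snapshot> runFrames(int n) throws InterruptedException {
        SynchronousQueue<Snapshot> pulse = new SynchronousQueue<>();
        List<Snapshot> rendered = Collections.synchronizedList(new ArrayList<>());

        Thread render = new Thread(() -> {
            try {
                for (int f = 0; f < n; f++) {
                    rendered.add(pulse.take()); // blocks until the next pulse
                }
            } catch (InterruptedException ignored) { }
        });
        render.start();

        for (int i = 0; i < n; i++) {
            pulse.put(new Snapshot(10.0 + i, 2.0 * i)); // the "pulse" handoff
        }
        render.join();
        return rendered;
    }

    public static void main(String[] args) throws InterruptedException {
        for (Snapshot s : runFrames(3)) {
            System.out.println("render: amp=" + s.amplitude() + " freq=" + s.frequency());
        }
    }
}
```

Notice that only two threads ever touch the data -- which is exactly the limitation the team wants to get past.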
Bang! That idea immediately bumps into an OpenGL graphics card wall. You see, OpenGL strongly prefers a single incoming thread. So, just replicating the current model by a factor of N (where N might be a bit less than the number of cores divided by 2) -- that is, having N UI threads and N Render threads -- is virtually a non-starter on OpenGL devices.
So, how to efficiently draw my beautiful fluctuating labeled diagonal sine wave? One option is by modifying JavaFX to utilize command buffers. For JavaFX, this would mean creating multiple UI threads to utilize the available processor cores, then striping all the UI thread results back into a single data entity and converting it into a package handed off to a single Render thread, which arrives smilingly single-threadedly at the OpenGL door.
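A speculative sketch of that striping idea in plain Java (my own illustration, not JavaFX code -- the class, method, and command strings are all invented): N worker threads record drawing commands for their slice of the scene in parallel, the per-thread buffers are stitched back together in deterministic slice order regardless of which worker finished first, and a single thread replays the merged buffer, so OpenGL still sees exactly one submitting thread.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical command-buffer striping sketch (not JavaFX code).
public class CommandBufferSketch {

    // Record commands on N worker threads, then merge in slice order.
    static List<String> recordAndMerge(int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<List<String>>> recorded = new ArrayList<>();
        for (int w = 0; w < workers; w++) {
            final int slice = w;
            recorded.add(pool.submit(() -> List.of(
                    "draw slice " + slice + " geometry",
                    "draw slice " + slice + " text")));
        }
        // Stitching: iterate futures in submission order, so the merged
        // buffer is deterministic no matter how the workers interleave.
        List<String> merged = new ArrayList<>();
        for (Future<List<String>> f : recorded) merged.addAll(f.get());
        pool.shutdown();
        return merged;
    }

    public static void main(String[] args) throws Exception {
        // The single "render thread" replays the merged buffer in order.
        for (String cmd : recordAndMerge(4)) System.out.println("GL <- " + cmd);
    }
}
```

The recording parallelizes across cores; only the cheap replay stays single-threaded at the OpenGL door.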
This possibility convinced me that it was now safe for me to retreat. So I silently rose from my seat and walked back and out of the room -- safely shielded by the start of the Q&A portion of the BOF.
-- Kevin Farnham (@kevin_farnham)