Posted by sdo
on November 11, 2005 at 12:57 PM PST
The JavaOne Call for Papers is out, and I'm torn between talking about new, exciting performance issues and revisiting old (but somehow still-recurring) performance myths.
I was a little surprised to find the JavaOne 2006 Call For Papers in my email this week; wasn't JavaOne 2005 just last month? It can't be mid-November; it's been 60 degrees for weeks in New York.
If you're interested in presenting anything related to Java EE performance at JavaOne, I encourage you to submit an abstract. We did not have the largest selection of such talks last year, and I'd like to see a lot of performance talks this year.
By the same token, if there are performance-related topics you'd like to hear about at JavaOne, let us know.
I'm vacillating about what I'd like to talk about this year. On the one hand, I'd love people to hear about EJB 3.0 performance and enhancements we've done in grizzly. On the other hand, I've spent so much time this week dispelling performance myths and half-truths that I'm thinking a basic talk about performance may be what's called for.
I remember spending a lot of time nine years ago dispelling myths about Java performance; in those days, parts of Java were indeed slow. But many other things contributed to performance as well; I remember an article by a Microsoft marketing person describing an applet he was running, saying that the next step was to wait...and wait...and wait -- because, you see, Java is slow. Of course, he was waiting because he was downloading code over his 14K modem; his issue had nothing to do with Java's performance (even if that wasn't stellar at the time). But that was nine years ago, I remind myself.
So it was depressing to me this week to run into three instances of this sort of thing; apparently we haven't made that much progress in understanding performance. Two of the cases involved developers who ought to have learned better by now (and to be fair, they were willing to), and the third was yet more misanalysis of SPECjAppServer scores by BEA.
The BEA case is of course all about marketing, but it's still depressing to see such misanalysis. In particular, BEA rightly argues that you can't just look at total JOPS and tell anything about a SPECjAppServer 2004 submission, and they further posit that what's important is determining the software and hardware required for your requirements. Exactly so.
Why, then, does BEA next show a calculation of $Hardware/Operations? Didn't they just say that software was an equally important part of the equation? Did they leave out software dollars because they didn't want to draw attention to their licensing costs and tip the equation out of their favor?
Then, after saying that what's important is $Hardware/Operations, BEA performs a completely different calculation of #CPUs/Operations, as if all CPUs and all systems cost the same amount of money.
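To see why that substitution matters, here's a toy comparison. The numbers below are entirely made up for illustration (they come from no actual SPECjAppServer submission): two hypothetical systems where the $Hardware/Operations ratio and the #CPUs/Operations ratio point in opposite directions, precisely because CPUs on different systems don't cost the same.

```java
// Made-up numbers for two hypothetical systems; not real benchmark data.
public class PricePerformance {
    public static void main(String[] args) {
        // System A: 4 expensive CPUs, $40,000 hardware, 1,000 JOPS
        double aCost = 40000, aJops = 1000, aCpus = 4;
        // System B: 8 cheap CPUs, $30,000 hardware, 1,200 JOPS
        double bCost = 30000, bJops = 1200, bCpus = 8;

        // $Hardware/Operations: lower is better -- B wins here
        System.out.printf("A: $%.2f/JOPS, B: $%.2f/JOPS%n",
                aCost / aJops, bCost / bJops);   // 40.00 vs 25.00

        // #CPUs/Operations: by this metric A wins instead,
        // because the metric ignores what each CPU costs
        System.out.printf("A: %.4f CPUs/JOPS, B: %.4f CPUs/JOPS%n",
                aCpus / aJops, bCpus / bJops);   // 0.0040 vs 0.0067
    }
}
```

The two ratios only agree when every CPU costs the same, which is exactly the assumption the #CPUs/Operations calculation smuggles in.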
Mind-boggling performance analyses like this make me feel that at a fundamental level, performance is still a misunderstood quantity, and rather than talking about the progress we've made, it's time to step back and (re-)learn some fundamentals.