By now, you are hopefully well aware that GlassFish 3.1 has been released. Because the performance group has been a little quiet lately, you might be thinking there aren't many interesting performance features in this release. In fact, there are two key performance benefits: one that helps developers, and one that matters for anyone using GlassFish's new clustering and high-availability features.
For most of the year, I've been working on session replication code for SailFin. When I came back to work with the GlassFish performance team, I found that we had some pretty aggressive goals around performance, particularly considering that GlassFish V3 had a completely new architecture, was a major rewrite of major sections of code, and implements th
Avid readers of the GlassFish aliases know that users frequently ask why their server isn't responding, why it is slow, or how many requests are being worked on. The first thing we always say is: look at the jstack output.
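(The usual way to get that output is simply `jstack <pid>` against the running server. The same per-thread stack information is also available programmatically through the standard `ThreadMXBean` API; here's a minimal sketch that prints a jstack-like dump of the current JVM's own threads. The class name `StackDump` is just illustrative.)

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class StackDump {
    public static void main(String[] args) {
        // dumpAllThreads(false, false): skip lock/synchronizer details,
        // just grab name, state, and stack for every live thread.
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (ThreadInfo ti : mx.dumpAllThreads(false, false)) {
            System.out.printf("\"%s\" id=%d state=%s%n",
                    ti.getThreadName(), ti.getThreadId(), ti.getThreadState());
            for (StackTraceElement frame : ti.getStackTrace()) {
                System.out.println("    at " + frame);
            }
            System.out.println();
        }
    }
}
```

Unlike `jstack`, which attaches to another process, this only sees the JVM it runs in, but it's handy for exposing a thread dump through an admin servlet or a monitoring endpoint.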
Yesterday, I wrote that I'm often asked which X is faster (for a variety of X). The answer to that question always depends on your perspective.
As a performance engineer, I'm often asked which X is faster (for a variety of X). The answer to that question always depends on your perspective.
Recently, I've been reading an article entitled The Fallacy of Premature Optimization by Randall Hyde. I urge everyone to go read the full article, but I can't help summarizing some of it here -- it meshes so well with some of my conversations with developers over the past few years.
I've written several times before about how you have to measure performance to understand how you're doing -- and so here's my favorite performance stat of the day: New York 17, New England 14.
I spent last week working with a customer in Phoenix (only a few weeks before the Giants go there to beat the Patriots), and one of the things we wanted to test was how their application would work with the new in-memory replication feature of the appserver.
[NOTE: The code in this blog was revised 2/11/08 due to some errors on my part the first time, and some changes as it was integrated into Grizzly. And thanks to Erik Svensson for pointing out a few errors, it has been revised again on 2/13/08.]
When I reported our recent excellent SPECjAppServer 2004 scores, one GlassFish user responded:
I sure wish you guys were able to come up with a thorough write up about the SPEC Benchmark architecture, and the techniques you guys used to get the numbers you get and, more importantly, how those techniques might