By now, you are hopefully well aware that Glassfish 3.1 has been released. Because the performance group has been a little quiet lately, maybe you're thinking there aren't a lot of interesting performance features in this release. In fact, there are two key performance improvements: one that benefits developers, and one that is important for anyone using Glassfish's new clustering and high-...
Glassfish V3 is a .0 release of new code, a new architecture, and a new Java EE specification. Should we have high expectations about its performance?
We're frequently asked: what are my Glassfish/Sailfin threads doing? Here's how I figure it out.
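One quick way to get that answer from inside the JVM (the `jstack <pid>` command gives the same view from outside) is the standard `Thread.getAllStackTraces()` API. A minimal sketch, not the post's actual method:

```java
import java.util.Map;

public class ThreadDump {
    public static void main(String[] args) {
        // Dump every live thread's name, state, and stack trace --
        // a programmatic equivalent of running jstack on the process.
        Map<Thread, StackTraceElement[]> all = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> e : all.entrySet()) {
            Thread t = e.getKey();
            System.out.printf("\"%s\" state=%s%n", t.getName(), t.getState());
            for (StackTraceElement frame : e.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```

On an appserver you would grep this output for the request-processing threads to see whether they are RUNNABLE, BLOCKED on a lock, or parked waiting for work.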
If machine A runs a simple test faster than machine B, is machine A the faster machine for your needs?
Premature optimization is the root of all evil. Writing badly-performing code is even worse.
It's impossible to judge performance without measuring it.
JMeter leads us down a blind alley -- should we have known better?
Everything (almost) you wanted to know about tuning Glassfish without reading the manual.
Sun has published a SPECjAppServer 2004 submission that scales across a lot of hardware. Is it just a question of throwing hardware at the problem?
Glassfish V1 was a price performance leader with good enough performance. Good enough is no longer enough.
Java has two switch statements -- should you actually care?
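The "two switch statements" here are presumably the two bytecodes javac can emit for a `switch`: `tableswitch` (a constant-time jump table, used for dense case values) and `lookupswitch` (a sorted key table, used for sparse ones). A hedged sketch you can inspect yourself with `javap -c`:

```java
public class SwitchKinds {
    // Dense, contiguous case values: javac typically emits a
    // tableswitch -- an O(1) indexed jump.
    static String dense(int n) {
        switch (n) {
            case 0: return "zero";
            case 1: return "one";
            case 2: return "two";
            default: return "other";
        }
    }

    // Sparse case values: javac typically emits a lookupswitch --
    // a search through a sorted table of keys.
    static String sparse(int n) {
        switch (n) {
            case 10:      return "ten";
            case 1000:    return "thousand";
            case 1000000: return "million";
            default:      return "other";
        }
    }

    public static void main(String[] args) {
        // Compile, then run: javap -c SwitchKinds
        // to see which instruction each method uses.
        System.out.println(dense(1) + " / " + sparse(1000));
    }
}
```

Whether the difference is ever measurable in a real application is, of course, the question the post asks.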
Thread pools can typically be dynamically resized, but is that a feature you should take advantage of? In a word -- no.
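For reference, the resizing API does exist; in the JDK it's `ThreadPoolExecutor.setCorePoolSize()` and `setMaximumPoolSize()`. A minimal sketch of what "dynamically resizing" means (the argument above is that you usually shouldn't do this at runtime):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolResize {
    public static void main(String[] args) {
        // A pool with 4 core threads, growable to 8.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // Shrink it on the fly: these calls take effect immediately,
        // and idle threads beyond the new core size are retired.
        pool.setCorePoolSize(2);
        pool.setMaximumPoolSize(4);

        System.out.println("core=" + pool.getCorePoolSize()
                + " max=" + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```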
NIO can easily scale to thousands of users, but how do you accurately test if you're measuring 16,000 users?
ab is a popular tool for measuring appserver performance, but it is clearly the wrong tool for the job.
Glassfish continues to be the price-performance leader for SPECjAppServer 2004 application servers.
Sun posts the first-ever SPECjAppServer 2004 benchmark result using an open-source server.
Recent experience using the NetBeans profiler has let me overcome my usual inertia toward new tools and fully embrace NetBeans.
You'll never know what performs better until you test it under a variety of circumstances.
The JavaOne Call for Papers is out, and I'm torn between talking about new, exciting performance issues and revisiting old (but somehow still-recurring) performance myths.
When looking at benchmark results, make sure to look at what's important.