In this, our first interview on linux.java.net, Kevin Bedell talks with Bob Griswold, who is in charge of JRockit at BEA. Next week, we will have Kevin's interview with Sun's Calvin Austin. This interview was conducted late last week.
Kevin Bedell: I understand that BEA, with their JRockit JVM, has built
one of the premiere JVMs for the Linux platform. But before we get too
much into that, I'd like to ask you a little bit about the history of
BEA's involvement with Linux and with Java on Linux. How far back does
this go with BEA?
Bob Griswold: Java on Linux has been a serious thing for about two years
now. A little bit earlier than that, we had a lot of interest from
developers wanting to develop on Linux, but there weren't a whole lot of
production deployments on Linux until about 18 months ago. There was a lot
of interest before that, but about 18 months ago it started really gaining
momentum. It started with Wall Street, and now you'll find major Wall
Street houses replacing their RISC and Unix environments wholesale with
Linux in a very short period of time. So it's an adoption curve that's the
steepest I've seen for any technology.
And that's very encouraging to me, because the way BEA looks at this, Java
and Linux are really made for each other. Java runs great on Unix and,
yes, Java runs great on Windows, but I personally view the world as moving
to Windows and Linux. There will be two operating systems that matter in
five years and that's Windows and Linux.
If you look at what Microsoft is doing with their .NET platform, they're
really validating most of the major concepts of Java: they have the
managed runtime environment with their CLR, and server libraries with
their .NET framework, for example. It's really mimicking and validating
the strategies that server-side Java has been pursuing for the last four to five years.
So in that world, five years from now -- with Windows and Linux -- Java
has to win; it has to be the technology on Linux. Microsoft is not
about to make their .NET framework available for Linux, so any serious
enterprise server foundation running on Linux five years from now will be
on Java. BEA has known this for a while, and a bit over two years ago, we
put together a very deep relationship with Intel Corporation. As part of
this relationship, we acquired a very talented group of people in
Stockholm, Sweden, that produced the JRockit Java Virtual Machine. We
integrated that company into BEA, and now we have a very serious, very
competitive JVM on Linux. It really is the premiere JVM on Linux for a
number of reasons.
If you look at the various Java efforts on Linux over the last several
years, it really, in my mind, divides into two groups: there's the HotSpot
world, and then there are the open source guys trying to do open
source Java implementations. At the end of the day, JVMs are hard. They
are very difficult, very complex beasts. They are fundamentally an
operating system and a compiler mixed together. The JVM manages memory, it
manages threads, orders instructions, etc. -- everything you would think of
as operating system tasks. But it also compiles code, recompiles code,
and optimizes code while the application is running, and so it's really a
truly managed runtime environment. And to get it right and to get it high-performing -- well, there aren't too many people in the world that know
how to do that well. And BEA's very fortunate to have this group of people
in Stockholm. They're the world's experts in making Java fast on various
platforms. They've done a tremendous job. I think the record shows how
we've changed the economics of server-side Java. I mean, now, it's not like
an enterprise is looking at deciding between running a Unix/RISC
environment and an Intel/Linux environment and there's, you know, 10
percent cost savings -- we're talking orders of magnitude, ground-breaking
economic changes here. A corporation can get better performance on a
$20,000 set of Dell boxes than they could with $200,000 worth of
proprietary Unix machines. So with that sort of economics staring people
in the face, BEA and others have seen a tidal wave of adoption and shift
from these Unix environments to Linux, and I like to think JRockit really
played a big part in driving those economics.
KB: When did JRockit first appear?
BG: When we bought Appeal Virtual Machines in Stockholm, Sweden -- we
bought them just over two years ago; it was February 2002 -- they were on
version 3 of JRockit. But really it was more of an interesting project for
them. They didn't have a QA group, they didn't really do stress testing of
it. They basically wrote the code and put it up on the Internet for download.
So JRockit has existed for four to five years now. But BEA bought it and brought it
into BEA two years ago, and since then we've shipped four generations of
JRockit with increasing quality and stability; with increasing performance
and increasing manageability.
KB: So if I need to run a JDK 1.3 application, is that version still supported?
BG: Yes. We have a 1.3.1 and a 1.4.0, 1.4.1 and 1.4.2, of course, and
we're working on a 1.5 implementation as well. Everything from 1.3.1 on, we
have JRockit for.
KB: Well, thanks -- where are things at today? Are there holes in the
platform, or is everything I need there today?
BG: Well, I think the evidence of companies that have adopted Java on
Linux -- I could reel off a list of dozens of companies that are making
major investments in Linux -- that shows it's ready for prime time.
I would've said that there were holes in the stack until the 2.6 kernel
came out. The Linux 2.6 kernel fixed probably the biggest nagging problem
we had with Linux, which was the threading libraries. The Native POSIX
Thread Library (NPTL) is far more efficient, faster, and more
Linux-friendly than the old LinuxThreads. Essentially, it
was very expensive to create new threads and to context-switch in Linux
before NPTL came out.
Editor's note: for more information on this, please refer to the Red Hat white paper at people.redhat.com/drepper/nptl-design.pdf .
And it's not just 2.6 -- Red Hat Advanced Server 3.0 is a 2.4 kernel, but
they've back-ported the NPTL to the 2.4 kernel. SuSE Enterprise Server 9
is a true 2.6-kernel Linux distribution.
KB: So what are you seeing people using Linux and Java for today?
BG: Like I said, Wall Street is the place that sparked this whole
revolution, if you want to call it that. It's the big trading houses, the
Morgan Stanleys and Lehmans, the Bank of New Yorks -- the big financial
institutions running their core trading applications, derivatives, and the
other core systems for their businesses. They're now moving over
to Linux wholesale.
The other industries -- retail, health care, manufacturing -- have really
been following the financial services industry.
It's everything that people use Java for. Airline reservation systems,
online trading systems. All enterprise Java applications are now fair game
to move over to Linux.
KB: Other than providing an implementation of the Java API and a managed
runtime environment, what kind of features does JRockit provide that
might not be found in every JVM?
BG: I'm glad you asked that. When people think of a JVM, the first
thought that they typically have is "performance engine" -- a black box
that sits below an application that makes things run and can make things
run fast. We look at this really as way more than just a performance
engine; it's a true managed runtime environment. That's how the JVM was
originally designed back in the early 1990s, but I think a lot of people
have just glossed over the fact that it's a managed runtime.
So with JRockit, where we really distinguish ourselves from the other JVMs
is -- yes, we're very, very fast. We're faster than everyone else, and
that's for a bunch of reasons that I could get into.
But I think personally the most exciting part of what we're doing is the
manageability. We're really turning a "managed runtime environment" into a
"manageable runtime environment." And that's an important distinction
because -- yes, the JVM automatically manages memory and threads and
classloading, etc. -- but with JRockit you can actually break open the
black box and see what's happening inside Java at any time.
We have a management console that we've shipped with JRockit from the very
beginning that shows memory utilization in realtime. It has very, very low
overhead -- practically undetectable -- method profiling capabilities. You
can look at method invocation counts, time spent in methods, what objects
are on the heap at any time, how long is garbage collection taking, how
often does it happen -- that sort of thing. We have that in our management
console today and we've had it for two years.
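The kind of visibility he describes -- heap usage, collection counts, time spent in GC -- can be sketched with the standard java.lang.management API (introduced in J2SE 5.0, so later than the 1.3/1.4 era discussed here; JRockit's console used its own lower-overhead channel). This is an illustrative sketch, not the JRockit console itself:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapSnapshot {
    public static void main(String[] args) {
        // Heap usage at this instant, as a management console would sample it.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.println("heap used:      " + heap.getUsed() + " bytes");
        System.out.println("heap committed: " + heap.getCommitted() + " bytes");

        // Per-collector totals: how often GC has run and how long it took.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms total");
        }
    }
}
```

A console like JRockit's samples these kinds of counters continuously and streams them out with low overhead, rather than polling on demand as this sketch does.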
More exciting are some of the innovations that we've been introducing in
the last six to 12 months. The first is what we call the JRockit Runtime
Analyzer. JRockit has been designed from the very beginning to be
manageable and to be very, very fast. But one of the architectural
decisions that we made at the very beginning was that we do not have an
interpreter. We JIT compile everything we see, and then we're constantly
sampling what's happening inside the application, constantly
recompiling and inlining methods, and doing heavier optimization of
the things that matter in order to make things run fast.
In doing that we've built a very lightweight, extremely fast, and extremely
low-overhead way for exposing what's happening in the JVM. So using the
JRockit runtime analyzer on a running application in production, you can
ask JRockit to start recording a session and run the recording for 120
seconds, 30 minutes, 60 minutes, or several hours.
Basically what JRockit does is keep track of everything it's already
observing about Java and record it to a file. And when you read this file
into a tool we have available on our dev2dev web site, you get information
that you would find in a high-end Java profiling tool: time spent in
methods; a call graph; detailed statistics of what's on the Java heap at
any point in time; for every garbage collection, how many weak references,
soft references, and phantom references survive the collection; and what
the fragmentation of the Java heap is -- all of this is included in the recording.
It's like a Java flight-data recorder. If you're running an application in
production and something starts looking strange and you don't know what's
going on, you can ask JRockit to create this recording. And then when you
read this recording into JRockit Runtime Analyzer, it "unlocks the black
box" that people get frustrated about with JVMs.
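The reference statistics he mentions track how the collector treats objects that are only weakly reachable. A small sketch of the underlying behavior, using the standard java.lang.ref API (note that clearing is at the collector's discretion, so a single System.gc() is a request, not a guarantee):

```java
import java.lang.ref.WeakReference;

public class ReferenceSurvival {
    public static void main(String[] args) {
        // An object also held by a strong reference...
        Object strong = new Object();
        WeakReference<Object> stronglyBacked = new WeakReference<>(strong);
        // ...versus one reachable only through a weak reference.
        WeakReference<Object> weakOnly = new WeakReference<>(new Object());

        System.gc(); // request a collection; the JVM may or may not honor it

        // The strongly backed referent survives the collection...
        System.out.println("strongly backed survived: " + (stronglyBacked.get() != null));
        // ...while the weak-only referent is eligible to be cleared
        // (and typically is after a full collection).
        System.out.println("weak-only cleared: " + (weakOnly.get() == null));

        // Using 'strong' after the GC keeps its referent strongly reachable above.
        System.out.println("strong handle held: " + (strong != null));
    }
}
```

A profiler that reports how many weak, soft, and phantom references survive each collection is essentially counting which side of this divide every reference lands on.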
We're also doing some advanced work with Aspect-Oriented Programming now.
There are two things that JRockit does really well that other JVMs don't.
The first is this very cheap, inexpensive, low-overhead way of counting
things. That's what's behind the JRockit Runtime Analyzer and our
management console. We can count things, record them and stream them out,
or dump them to a file.
The other thing that JRockit does just as a matter of course is replace
methods and classes at runtime. We're constantly sampling the
application, doing heavy optimization on methods and classes and then
swapping out code while it's running. So taking advantage of that
characteristic of JRockit you can see all sorts of possibilities with
Aspect-Oriented Programming. We have on my team in Stockholm someone named
Jonas Bonér, who's the founder of AspectWerkz -- which is one of the
leading Aspect-Oriented frameworks. He works for me in Stockholm with the
rest of the JRockit team. And what we're putting together is a true
realtime, runtime Aspect-Oriented Programming framework.
We're going to show a demo at eWorld (BEA's annual user conference) where
we'll have a Java application running that has a bottleneck in it. First
we'll use the JRockit Runtime Analyzer to find the bottleneck -- the
particular method that's causing it. Then, using the AspectWerkz framework
and the JRockit console, we'll select a cache aspect that's been
pre-written for this application -- one that inserts a cache into that
particular method -- select the appropriate join point and pointcut that
tells the JVM where to "weave" that code in, and then say, "weave now,"
and JRockit will weave that cache code right into the running application
without taking the server down. And that's just the beginning. We're going
to keep on taking advantage of these capabilities and keep making the JVM
a "manageable runtime environment."
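The effect of that demo -- advice that wraps a cache around one method without touching its source -- can be approximated with a plain dynamic proxy. This is only a stand-in for the idea: real AspectWerkz/JRockit weaving rewrites bytecode in the running JVM, and the PriceService interface here is hypothetical:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class CacheAspectSketch {
    // Hypothetical service with an expensive method -- the "bottleneck."
    interface PriceService {
        int quote(String symbol);
    }

    public static void main(String[] args) {
        PriceService slow = symbol -> {
            System.out.println("computing quote for " + symbol);
            return symbol.length() * 10; // stand-in for an expensive calculation
        };

        // The "cache aspect": intercept every call and memoize by argument.
        Map<Object, Object> cache = new HashMap<>();
        InvocationHandler cacheAdvice = (proxy, method, methodArgs) -> {
            Object key = methodArgs[0];
            if (!cache.containsKey(key)) {
                cache.put(key, method.invoke(slow, methodArgs));
            }
            return cache.get(key);
        };

        PriceService cached = (PriceService) Proxy.newProxyInstance(
                PriceService.class.getClassLoader(),
                new Class<?>[] {PriceService.class},
                cacheAdvice);

        cached.quote("BEAS"); // first call computes the quote
        cached.quote("BEAS"); // second call is served from the cache
    }
}
```

The difference in the demo is that the weaving happens at runtime on the original class itself -- no proxy object, no restart -- which is what JRockit's ability to swap out methods and classes while the application runs makes possible.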
KB: This is probably a good place to talk about the future of Java and
Linux and JVMs. What do you see happening a few years out? What kinds of
capabilities will we have, and what will be the compelling reasons to use
Java and Linux at that time?
BG: Like I said before, the decision that CIOs and IT managers will have
in a few years time is, "Do I go with Microsoft, or do I go with Java?"
Microsoft is pursuing a vision that's very similar to Java with this
managed runtime environment. They are promoting "managed code" over
"unmanaged code" now, in a big way. Longhorn is going to have a CLR right
smack dab in the middle of the operating system. So what I think
you're going to see on the Java side is a very similar picture. The JVM,
as the managed or manageable runtime environment for Java, will become
more and more integrated with the operating system. So, rather than having
the JVM manage memory and manage threads and then pass it on to the
operating system to independently manage memory and manage threads, you're
going to see a lot more cooperation between Linux and the JVM. And we're
working very closely with all the major Linux distributions to do exactly
that. So that's number one.
The second thing is, you're going to find a whole lot more facilities
built on this managed runtime that will solve customers' problems.
Profiling information and OA&M (Operations, Administration, and Maintenance)
tasks will take advantage of the Java Virtual Machine. Aspect-Oriented
Programming is also going to start hitting us in a big way, and the JVM is
indispensable for real Aspect-Oriented Programming.