
Re: Performance tuning and trying to make Glassfish use available resources

vetler

Hi again,

We've increased the ORB message fragment size, and this has given us
higher throughput. We see that there are only two socket connections
between the front end and back end servers, so we assume that all EJB
traffic is multiplexed over them - is that correct? Is there any way to
make it use multiple connections? Perhaps that would increase performance.
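
For reference, this is roughly how the fragment size can be changed
with asadmin; the dotted name and the value below are taken from a
default 3.1.1 domain, so treat it as a sketch rather than the exact
commands we ran:

  # Inspect the current ORB settings (message-fragment-size,
  # max-connections, use-thread-pool-ids, ...).
  asadmin get "configs.config.server-config.iiop-service.orb.*"

  # Raise the fragment size (example value); a domain restart may be
  # needed before the ORB picks it up.
  asadmin set configs.config.server-config.iiop-service.orb.message-fragment-size=4096
  asadmin restart-domain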

I've found http://java.net/jira/browse/GLASSFISH-952 and
http://java.net/jira/browse/GLASSFISH-4074, which indicate that this
will be possible in GlassFish 4.0, so I assume it's not possible in 3.1.1?

Regards,
Vetle

On Thu, Jun 21, 2012 at 1:14 PM, Vetle Roeim wrote:
> Hi,
>
> We're performance tuning an application running on Glassfish, and are
> trying to make sense of the values we get from the monitoring. Our
> application is a Java EE 6 application, running a front end part with
> JSF on one server and a back end part with EJBs on a different server,
> and we're getting very high response times on the front end server.
>
> When we measure the performance, we see that the load is high. This
> should indicate that something is being blocked, but the CPU
> utilization is low and there is hardly any disk I/O. The database on a
> third server is also doing fine. Somehow, something is causing the
> servers not to be utilized fully, and we're trying to find out what
> buttons we have to push to make it actually use our servers. We've
> ruled out incoming HTTP connections, since we turned up the number of
> max incoming connections, and got worse performance. The problem seems
> to be that the work doesn't get executed fast enough, even though we
> have plenty of free CPU and network/disk I/O.
>
> So, we've turned on monitoring and are trying to find out where the problem is.
>
> In particular, we're wondering about the relationship between
> server.thread-pool.orb.threadpool.thread-pool-1.currentbusythreads-count
> and server.thread-pool.orb.threadpool.thread-pool-1.numberofworkitemsinqueue-current.
> On both the front end and the back end servers,
> currentbusythreads-count is consistently low (around 15 - 17), and
> numberofworkitemsinqueue-current is consistently high (400 - 500), even
> though the number of available threads is very high on the back end
> server (server.thread-pool.orb.threadpool.thread-pool-1.numberofavailablethreads-count
> is around 900).
>
> The way we're reading this is that work gets queued even though
> there are plenty of threads available.
>
> Are we reading this the right way? Why is
> numberofworkitemsinqueue-current so high?
> Are there other values we should be monitoring?
> Any other ideas?
>
> We're using Glassfish 3.1.1, JDK 1.6 and Linux.
>
> Regards,
> Vetle

--
vr

vetler

Hi,

We're performance tuning an application running on Glassfish, and are
trying to make sense of the values we get from the monitoring. Our
application is a Java EE 6 application, running a front end part with
JSF on one server and a back end part with EJBs on a different server,
and we're getting very high response times on the front end server.

When we measure the performance, we see that the load is high. This
should indicate that something is being blocked, but the CPU
utilization is low and there is hardly any disk I/O. The database on a
third server is also doing fine. Somehow, something is causing the
servers not to be utilized fully, and we're trying to find out what
buttons we have to push to make it actually use our servers. We've
ruled out incoming HTTP connections, since we turned up the number of
max incoming connections, and got worse performance. The problem seems
to be that the work doesn't get executed fast enough, even though we
have plenty of free CPU and network/disk I/O.
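
For the incoming HTTP side, the listener's monitoring subtree can be
dumped the same way as the ORB pool further down; the listener name
below is assumed to be the default http-listener-1, so adjust it if
yours differs:

  # Dump the network statistics for the default HTTP listener
  # (requires the corresponding monitoring level to be raised first).
  asadmin get -m "server.network.http-listener-1.*"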

So, we've turned on monitoring and are trying to find out where the problem is.
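
In case the details matter, this is roughly how the monitoring levels
can be raised (the module names are the standard GlassFish 3.1 ones;
HIGH is just an example level):

  # Raise monitoring levels for the pieces discussed here; other
  # modules are left at their defaults.
  asadmin set server.monitoring-service.module-monitoring-levels.thread-pool=HIGH
  asadmin set server.monitoring-service.module-monitoring-levels.orb=HIGH
  asadmin set server.monitoring-service.module-monitoring-levels.http-service=HIGH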

In particular, we're wondering about the relationship between
server.thread-pool.orb.threadpool.thread-pool-1.currentbusythreads-count
and server.thread-pool.orb.threadpool.thread-pool-1.numberofworkitemsinqueue-current.
On both the front end and the back end servers,
currentbusythreads-count is consistently low (around 15 - 17), and
numberofworkitemsinqueue-current is consistently high (400 - 500), even
though the number of available threads is very high on the back end
server (server.thread-pool.orb.threadpool.thread-pool-1.numberofavailablethreads-count
is around 900).
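
The counters above come straight out of the monitoring tree; one way
to dump the whole ORB pool subtree is the wildcard form of get -m
(quoted so the shell doesn't expand the *):

  # Dump every statistic of the ORB thread pool on one instance;
  # currentbusythreads-count, numberofworkitemsinqueue-current and
  # numberofavailablethreads-count are the ones compared above.
  asadmin get -m "server.thread-pool.orb.threadpool.thread-pool-1.*"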

The way we're reading this is that work gets queued even though
there are plenty of threads available.
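
One thing worth cross-checking against those numbers is the configured
limits of thread-pool-1 itself; a sketch, assuming the stock dotted
names from a 3.1.1 domain.xml:

  # Show the pool's configured bounds (min/max pool size, max queue
  # size, idle timeout) to compare with the monitored values.
  asadmin get "configs.config.server-config.thread-pools.thread-pool.thread-pool-1.*"

If the configured maximum is far above the 15 - 17 busy threads we
see, the limit itself is presumably not what causes the queueing.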

Are we reading this the right way? Why is
numberofworkitemsinqueue-current so high?
Are there other values we should be monitoring?
Any other ideas?

We're using Glassfish 3.1.1, JDK 1.6 and Linux.

Regards,
Vetle