
GlassFish v3 jdbc In-use connections equal max-pool-size

13 replies
fericit_bostan
Joined: 2010-06-09

I'm using GlassFish v3.0.1 in our production environment. We have several applications deployed, each using their own datasource. We are encountering an issue where we seem to run out of database connections. We constantly seem to hit the following error:
java.sql.SQLException: Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time. Cannot allocate more connections.
To try to identify the issue, I enabled leak detection by setting the Leak Timeout to 180 seconds and enabled Leak Reclaim. This resulted in stack traces being written to the log file stating:
A potential connection leak detected for connection pool
While this would tell me about the leaked connection, it did not help identify the offender, because the leaks appear in different datasources/applications each time. Each application uses its own datasource and in some cases a different persistence technology (one uses JPA with EclipseLink; others use Hibernate with Spring). This leads me to believe that there may be an issue with GlassFish and the way it is managing the connections.

I have configured each connection pool with the following characteristics:
Wrap JDBC Objects: enabled
Pooling: enabled
Leak Timeout: 180
Leak Reclaim: enabled
Creation Retry Attempts: 6
Retry Interval: 10
Associate With Thread: enabled
Max Connection Usage: 1
Connection Validation: enabled
Validation Method: auto-commit
Transaction Isolation: read-committed
Isolation Level: guaranteed
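For reference, these console settings map onto `asadmin set` commands roughly as follows (a sketch only: the pool name `MyPool` is a placeholder, and the attribute names are my recollection of the `jdbc-connection-pool` element in v3, so verify them against your domain.xml):

```shell
# Placeholder pool name "MyPool" -- substitute your actual pool
asadmin set resources.jdbc-connection-pool.MyPool.wrap-jdbc-objects=true
asadmin set resources.jdbc-connection-pool.MyPool.pooling=true
asadmin set resources.jdbc-connection-pool.MyPool.connection-leak-timeout-in-seconds=180
asadmin set resources.jdbc-connection-pool.MyPool.connection-leak-reclaim=true
asadmin set resources.jdbc-connection-pool.MyPool.connection-creation-retry-attempts=6
asadmin set resources.jdbc-connection-pool.MyPool.connection-creation-retry-interval-in-seconds=10
asadmin set resources.jdbc-connection-pool.MyPool.associate-with-thread=true
asadmin set resources.jdbc-connection-pool.MyPool.max-connection-usage-count=1
asadmin set resources.jdbc-connection-pool.MyPool.is-connection-validation-required=true
```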

Each time a stack trace is printed pertaining to a leaked connection, the one thing that seems constant is the method from which the exception was thrown in the stack trace:

The stack trace of the thread is provided below:
com.sun.enterprise.resource.pool.ConnectionPool.setResourceStateToBusy(ConnectionPool.java:319)
com.sun.enterprise.resource.pool.ConnectionPool.getResourceFromPool(ConnectionPool.java:694)
com.sun.enterprise.resource.pool.ConnectionPool.getUnenlistedResource(ConnectionPool.java:572)
com.sun.enterprise.resource.pool.AssocWithThreadResourcePool.getUnenlistedResource(AssocWithThreadResourcePool.java:164)
com.sun.enterprise.resource.pool.ConnectionPool.internalGetResource(ConnectionPool.java:467)
com.sun.enterprise.resource.pool.ConnectionPool.getResource(ConnectionPool.java:369)
com.sun.enterprise.resource.pool.PoolManagerImpl.getResourceFromPool(PoolManagerImpl.java:226)
com.sun.enterprise.resource.pool.PoolManagerImpl.getResource(PoolManagerImpl.java:150)
com.sun.enterprise.connectors.ConnectionManagerImpl.getResource(ConnectionManagerImpl.java:327)
com.sun.enterprise.connectors.ConnectionManagerImpl.internalGetConnection(ConnectionManagerImpl.java:290)
com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:227)
com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:159)
com.sun.enterprise.connectors.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:154)
com.sun.gjc.spi.base.DataSource.getConnection(DataSource.java:105)

It appears that there may be a deadlock situation occurring in ConnectionPool.setResourceStateToBusy.

Currently the only way to resolve this issue is to bounce the GlassFish instance, as I eventually run out of connections, but this is not a viable workaround in a production environment. Are there any configuration changes that I can make that will prevent this from occurring? Is this a known issue that will be addressed in a patch or an upcoming release?

deniz-777
Joined: 2013-09-04

I have a similar issue with GlassFish 3.1.2.2.

adinath
Joined: 2008-07-22

Hi,
I have encountered this issue in my dev environment with GF v3.0.1; we have not gone live yet. I have done code analysis to verify that it is extremely unlikely that the app is leaking connections; for one thing, we always close the connection in a finally block.
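For reference, the close-in-finally discipline described above looks like this (a sketch: the stubbed `Connection` is a dynamic proxy used here only so the pattern can be exercised without a real database):

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.atomic.AtomicBoolean;

public class CloseInFinally {

    // The discipline: acquire, use, and always return the connection,
    // even when the work throws.
    static void doWork(Connection con) throws Exception {
        try {
            // ... createStatement / executeQuery would go here ...
            throw new RuntimeException("simulated query failure");
        } finally {
            con.close(); // always hands the connection back to the pool
        }
    }

    // Stub Connection via a dynamic proxy; only close() is recorded.
    static Connection stub(AtomicBoolean closed) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                (proxy, method, args) -> {
                    if (method.getName().equals("close")) closed.set(true);
                    return null;
                });
    }

    public static void main(String[] args) throws Exception {
        AtomicBoolean closed = new AtomicBoolean(false);
        try {
            doWork(stub(closed));
        } catch (RuntimeException expected) {
            // the simulated failure propagates, but close() still ran
        }
        System.out.println("connection closed despite failure: " + closed.get());
        // prints: connection closed despite failure: true
    }
}
```

On Java 7+, a try-with-resources block (`try (Connection con = ds.getConnection()) { ... }`) achieves the same guarantee with less ceremony.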
And then last night I found this entry in the GF JIRA:
http://java.net/jira/browse/GLASSFISH-12442
I think the only way to get a fix for this right now is to use a GF v3.1 promoted build. I am planning to do this, since I have other serious connection-related issues I am trying to resolve.
Adi

jr158900
Joined: 2005-04-13

Hi,

The common stack trace that you are seeing is the application server's call trace. The application's call trace that acquired the connection will also be printed in server.log. It identifies the caller that acquired the connection but did not return it to the pool within the specified timeout period (the connection-leak-timeout-in-seconds attribute of jdbc-connection-pool).

For more information about leak tracing and reclaim:
http://blogs.sun.com/kshitiz/entry/connection_leak_tracing

If connection-leak-reclaim is enabled, the connection will be returned to the pool (it may be killed, to prevent the leaked client from misusing it, and replaced by a new connection).

Note: the application may be caching the connection, in which case it will not be returned to the pool within the specified period. You may also have to tune the pool settings if the maximum load on your system is more than what the pool can offer (refer to max-pool-size).

Also, you have set max-connection-usage to 1, which means a connection will be destroyed (and replaced with a new one) after being used only once; this will prove to be a costly operation.

You can try the following:

1) disable max-connection-usage
2) disable associate-with-thread
3) disable connection-leak-reclaim
4) enable connection-leak-timeout

Find the callers that did not return the connection. If those callers are not closing the connection, the application needs to be fixed. If not, check the load on your system and tune the pool settings (max-pool-size, pool-resize-quantity, max-wait-time-in-milliseconds) accordingly.
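The four steps above would look roughly like this with `asadmin` (a sketch; `MyPool` is a placeholder pool name, and the attribute names should be checked against your domain.xml):

```shell
# 1) disable max-connection-usage (0 = unlimited reuse)
asadmin set resources.jdbc-connection-pool.MyPool.max-connection-usage-count=0
# 2) disable associate-with-thread
asadmin set resources.jdbc-connection-pool.MyPool.associate-with-thread=false
# 3) disable connection-leak-reclaim
asadmin set resources.jdbc-connection-pool.MyPool.connection-leak-reclaim=false
# 4) enable leak tracing (0 disables; any positive value enables)
asadmin set resources.jdbc-connection-pool.MyPool.connection-leak-timeout-in-seconds=180
```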

Thanks,
-Jagadish

On Tue, 2010-11-23 at 12:44 -0800, forums@java.net wrote:

fericit_bostan
Joined: 2010-06-09

Thanks for the reply. Perhaps you missed my statement that I have several applications running in the GlassFish container, each one using its own DataSource. But let's talk about only two of them. Each one is using a different persistence technology: one uses Hibernate and the other EclipseLink JPA. While I would normally agree with you that somewhere in the application a connection is not being returned to the pool, in this situation I have both connection pools showing leaks. So unless both Hibernate and EclipseLink are leaking connections, I highly doubt that the issue is with the applications. The EclipseLink application, for example, is using pure JPA and allowing the container to manage everything, including the transactions. There is no direct JDBC Connection access at all, so I highly doubt that it is leaking connections.
Each time a leak is detected, it appears to be associated with a simple SELECT statement. The statement is different each time a leak is detected, so there is no one spot in the code that is causing this issue. However, the same GlassFish code always appears in the stack trace: setResourceStateToBusy(ConnectionPool.java:319)
I enabled max-connection-usage in an effort to diagnose the issue and see if it would make a difference. No surprise: it didn't.
If I disable associate-with-thread, I exhaust the connection pool much more rapidly.
If I disable connection-leak-reclaim, then I run out of connections very quickly and must bounce the GlassFish instance to resolve the issue.
I have already enabled connection-leak-timeout. It is currently configured for 180 seconds, which far exceeds how long a connection should be in use.
As I stated, the application is returning connections to the pool. It appears the issue is with the GlassFish connection pool and not my applications. I currently have the connection pool configured for 100 connections, which should be more than adequate for our application and usage, but the pool is still exhausted in a matter of hours.
This would not be the first time that I have seen issues with GlassFish connection pools. There are other reports of people encountering the same issue when performing regression testing against GlassFish. It seems that there may be a deadlock in the leak-detection code, not an application issue.

jr158900
Joined: 2005-04-13

> If I disable connection-leak-reclaim then I run out of connections very quickly and I must bounce the GlassFish instance to resolve the issue.
If you are sure that connections are returned, then it is possible that connections are being cached by Hibernate/EclipseLink for more than 180 seconds. I do not expect connections to be leaked by GlassFish, as that would have been easily seen by others.
You said that the server does not respond after some time. Did you see any deadlock, with leak tracking and reclaim enabled, when the server stopped responding? (You can capture "jstack" output of the GlassFish process and post it.)
Can you enable connection pool monitoring and get the number of connections acquired, released, free, and in use?
http://blogs.sun.com/JagadishPrasath/entry/monitoring_jdbc_connection_po...
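Enabling pool monitoring can also be done from the CLI. Roughly (a sketch: the dotted name for the monitoring level and the statistic names such as numconnacquired are as I recall them for v3; verify with `asadmin list` and `asadmin get --monitor`):

```shell
# Raise the JDBC connection pool monitoring level
asadmin set server.monitoring-service.module-monitoring-levels.jdbc-connection-pool=HIGH
# Dump all monitorable statistics for the pool (numconnacquired,
# numconnreleased, numconnfree, numconnused, numpotentialconnleak, ...)
asadmin get --monitor "server.resources.MyPool.*"
```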

fericit_bostan
Joined: 2010-06-09

Whom can I contact to get support for GlassFish? At least then I might get a response to my questions.

alexismp
Joined: 2005-01-06

If you go to http://oracle.com/goto/glassfish you'll see a "Sales Chat Live" link at the top right.
That would probably be the best way to reach a representative to discuss GlassFish support.

fericit_bostan
Joined: 2010-06-09

I'm still hitting this issue consistently. Almost every day I am forced to bounce our GlassFish instance to ensure that the connection pools are cleaned up; otherwise we run out of connections and the applications become inaccessible.
If this were an application issue, I would not expect to see the following in the stack trace: com.sun.enterprise.resource.pool.ConnectionPool.setResourceStateToBusy(ConnectionPool.java:319)
Has anyone else seen a similar issue? Is anyone running GlassFish 3.0.1 in a production environment?

eligetiv
Joined: 2008-01-18

Have you found any solution to your problem? I am also seeing the same kind of issue, except that I am running on a VM, if that matters at all.

fericit_bostan
Joined: 2010-06-09

Sadly, no. I have not found a resolution to this issue. I know that Hibernate can cache connections, and of course that is the easy answer being offered. But as I outlined above, I have applications using Hibernate that are leaking connections as well as applications using EclipseLink. So am I to believe that both of these technologies are leaking connections rather than GlassFish? There is a larger community of Hibernate users than GlassFish users, and they would have seen such an issue long before GlassFish ever encountered it.
It clearly points the finger at GlassFish as the offender. I can only hope that this issue is resolved with the v3.1 release of GlassFish.

fericit_bostan
Joined: 2010-06-09

I'm still hitting this issue. It is so annoying. I've finally resorted to scheduling a cron job to bounce the GlassFish instance each day at midnight. I've enabled Leak Detection, which shows the stack trace in the log file but does not help to determine the problem. (I believe this to be a GlassFish issue: a deadlock occurring in their connection monitor.)
I enabled the Monitor on the Connection Pool, and while it does display statistics about pool usage, it does not help to identify the issue. According to the monitor, it found a total of 4 potential connections leaked from the pool. But that does not explain why my connection pool has exhausted all of its connections. My pool is configured for a minimum of 8 connections and a maximum of 120. The total number of connections created shows 2398, with a total of 2278 destroyed. Doing the math, 120 connections are currently allocated. Why so many? According to the Monitor there are 114 free connections in the pool. (If they are free, why not release them so the number can go back down?) But that is irrelevant. The number of connections in use shows a total of 6, not 120. So why is the pool reporting that in-use connections equal max connections?
My connection pool is configured to reclaim a connection if it is deemed a leak after 180 seconds. So if my application is indeed leaking connections, they should be reclaimed by GlassFish and made available to the pool. I have Max Connection Usage configured to 20, so after 20 uses a connection should be closed and thrown away, and a new connection created. So as I understand it, if a connection is leaked, it will be reclaimed; if it exceeds 20 uses, it will be destroyed. The monitor shows 1191 connections acquired and 1187 released, which accounts for the 4 potential leaked connections. But what happened to all the other allocated connections? And why did the allocated connections equal the max pool size?
There is clearly an issue with the way GlassFish is handling the connection pool, and not with the application. It would be very helpful if someone from Oracle would comment on the issue rather than sweeping it under the carpet.
I've attached screenshots of my Monitor's output as well as the configuration of my connection pool.
dl.dropbox.com/u/601181/Intrax/GlassFish/images/Glassfish%20Monitoring.jpeg

dl.dropbox.com/u/601181/Intrax/GlassFish/images/DataSource%20-%20General.jpeg

dl.dropbox.com/u/601181/Intrax/GlassFish/images/DataSource%20-%20Advanced.jpeg

jr158900
Joined: 2005-04-13

> And why did the allocated connections equal the max pool size?
Either the load on the system is such that the pool is operating at max-pool-size, or, since associate-with-thread is enabled, the connections are associated with threads and hence the pool is operating at max pool size (120).

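The effect of per-thread association can be sketched with a toy pool: once every worker thread has pinned a connection, a newcomer finds the pool exhausted even though the pinned connections are idle. (This is an illustration of the semantics only, not GlassFish's actual implementation; the Semaphore stands in for physical connections.)

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;

public class AssocWithThreadDemo {
    static final Semaphore pool = new Semaphore(2);         // max-pool-size = 2
    static final ThreadLocal<Boolean> pinned = new ThreadLocal<>();

    // With associate-with-thread semantics, the first connection a thread
    // acquires stays pinned to that thread instead of going back to the pool.
    static boolean acquire() {
        if (Boolean.TRUE.equals(pinned.get())) return true; // reuse pinned conn
        boolean got = pool.tryAcquire();
        if (got) pinned.set(true);
        return got;
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch holding = new CountDownLatch(2);
        CountDownLatch done = new CountDownLatch(1);
        for (int i = 0; i < 2; i++) {
            new Thread(() -> {
                acquire();                                  // pins one connection
                holding.countDown();
                try { done.await(); } catch (InterruptedException ignored) { }
            }).start();
        }
        holding.await();
        // A newcomer (the main thread) now finds the pool exhausted even
        // though both pinned connections are idle.
        System.out.println("newcomer got a connection: " + acquire());
        // prints: newcomer got a connection: false
        done.countDown();
    }
}
```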
Note :
There were two issues fixed in 3.1 related to associate-with-thread
http://java.net/jira/browse/GLASSFISH-11297
http://java.net/jira/browse/GLASSFISH-12495

I see the following options:
1) Since these fixes are available in 3.1 (and not in 3.0.1), can you disable associate-with-thread, restart GlassFish, and try again?
[I remember you stating that switching off assoc-with-thread did not make a difference, but from your recent post it looks like it is ON; can you try again with assoc-with-thread turned off?]
2) Can you try your application on the latest 3.1 promoted build (b-37)?
Download link: http://dlc.sun.com.edgesuite.net/glassfish/3.1/promoted/
3) Post a reproducible test case; that will be useful for figuring out what's going on.

pgiblox
Joined: 2008-08-21

You are not alone. I've run 3.0.1 in production for several months. Just a few days ago we started getting the same issue. GlassFish is not returning connections to the pool, and we eventually have no more free connections, so I have to restart the GlassFish server. This happens to us every 2-3 hours, so it is VERY annoying. Also interesting: it seems to affect both of our application servers, and neither is running in a cluster.
I don't have exactly the same problem, though. If I enable connection-leak tracking, I receive NO additional stack traces. According to a GF article I read, there should be a stack trace with 'leak' somewhere in it; grepping our logs shows no such trace.
I am going to post my exact issue in another topic and link to/from this thread. It is odd that both of our systems would start doing this around the same date/time.
-Paul