GlassFish v3 JDBC: In-use connections equal max-pool-size
I'm using GlassFish v3.0.1 in our production environment. We have several applications deployed, each using its own datasource. We keep running out of database connections and constantly hit the following error:
java.sql.SQLException: Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time. Cannot allocate more connections.
To try to identify the issue I enabled leak detection by setting the Leak Timeout to 180 seconds and turning on Leak Reclaim. This resulted in stack traces being written to the log file stating:
A potential connection leak detected for connection pool
While this told me about the leaked connection, it did not help identify the offender, because the leaks appear in a different datasource / application each time. Each application uses its own datasource and, in some cases, a different persistence technology (one uses JPA with EclipseLink, the others use Hibernate with Spring). This leads me to believe that there may be an issue with GlassFish and the way it is managing the connections.
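For reference, the classic pattern the leak detector is designed to catch is a code path that can return without calling Connection.close(). The sketch below uses made-up names (CustomerDao, a customer table), and our applications actually go through JPA / Hibernate rather than hand-written JDBC, but it shows what such a leak and its try/finally fix look like:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class CustomerDao {

        private final DataSource dataSource; // normally injected via @Resource

        public CustomerDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public int countCustomers() throws SQLException {
            Connection con = dataSource.getConnection();
            PreparedStatement ps = null;
            ResultSet rs = null;
            try {
                ps = con.prepareStatement("SELECT COUNT(*) FROM customer");
                rs = ps.executeQuery();
                rs.next();
                return rs.getInt(1);
            } finally {
                // Without this finally block, an exception above would leak the
                // connection: it stays "in use" and counts against max-pool-size.
                if (rs != null) { try { rs.close(); } catch (SQLException ignored) { } }
                if (ps != null) { try { ps.close(); } catch (SQLException ignored) { } }
                con.close(); // returns the connection to the GlassFish pool
            }
        }
    }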
I have configured each connection pool with the following settings (the asadmin equivalents are shown after the list):
Wrap JDBC Objects: enabled
Leak Timeout: 180
Leak Reclaim: enabled
Creation Retry Attempts: 6
Retry Interval: 10
Associate With Thread: enabled
Max Connection Usage: 1
Connection Validation: enabled
Validation Method: auto-commit
Transaction Isolation: read-committed
Isolation Level: guaranteed
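For completeness, these settings should correspond to the following asadmin commands (MyPool stands in for each real pool name; the dotted names come from the jdbc-connection-pool attributes in domain.xml):

    asadmin set domain.resources.jdbc-connection-pool.MyPool.wrap-jdbc-objects=true
    asadmin set domain.resources.jdbc-connection-pool.MyPool.connection-leak-timeout-in-seconds=180
    asadmin set domain.resources.jdbc-connection-pool.MyPool.connection-leak-reclaim=true
    asadmin set domain.resources.jdbc-connection-pool.MyPool.connection-creation-retry-attempts=6
    asadmin set domain.resources.jdbc-connection-pool.MyPool.connection-creation-retry-interval-in-seconds=10
    asadmin set domain.resources.jdbc-connection-pool.MyPool.associate-with-thread=true
    asadmin set domain.resources.jdbc-connection-pool.MyPool.max-connection-usage-count=1
    asadmin set domain.resources.jdbc-connection-pool.MyPool.is-connection-validation-required=true
    asadmin set domain.resources.jdbc-connection-pool.MyPool.connection-validation-method=auto-commit
    asadmin set domain.resources.jdbc-connection-pool.MyPool.transaction-isolation-level=read-committed
    asadmin set domain.resources.jdbc-connection-pool.MyPool.is-isolation-level-guaranteed=true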
Each time a stack trace is printed for a leaked connection, the one thing that stays constant is the method from which the exception was thrown. The stack trace of the thread is provided below:
It appears that there may be a deadlock situation occurring in ConnectionPool.setResourceStateToBusy.
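To confirm that the pool really is being exhausted, I have been raising the monitoring level and polling the pool statistics. MyPool is again a placeholder for the actual pool name, and the exact statistic names may vary between releases:

    # Raise the monitoring level for JDBC connection pools
    asadmin set server.monitoring-service.module-monitoring-levels.jdbc-connection-pool=HIGH

    # Dump all statistics for the pool, including the number of connections in use
    asadmin get --monitor "server.resources.MyPool.*"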
Currently the only way to resolve this issue is to bounce the GlassFish instance, as I eventually run out of connections, but this is not a viable work-around in a production environment. Are there any configuration changes that I can make that will prevent this from occurring? Is this a known issue that will be addressed in a patch or an upcoming release?