I'm using JPA (Hibernate as provider), Glassfish and MySQL. Everything works great in development, but when I deploy the app to a test server and let it run (largely idle) overnight, I'm usually greeted with this in the morning:
[#|2011-03-09T15:06:00.229+0000|INFO|glassfish3.0.1|javax.enterprise.system.std.com.sun.enterprise.v3.services.impl|_ThreadID=23;_ThreadName=Thread-1;|ERROR [http-thread-pool-8080-(1)] (JDBCTransaction.java:91) - JDBC begin failed
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 41,936,868 milliseconds ago. The last packet sent successfully to the server was 41,936,868 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:532)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1118)
at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3321)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1940)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2113)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2562)
at com.mysql.jdbc.ConnectionImpl.setAutoCommit(ConnectionImpl.java:4956)
at org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:87)
at org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1473)
at org.hibernate.ejb.TransactionImpl.begin(TransactionImpl.java:60)
I tried using the following in my persistence.xml, but it didn't help:
<property name="hibernate.c3p0.min_size" value="5"/>
<property name="hibernate.c3p0.max_size" value="20"/>
<property name="hibernate.c3p0.idleTestPeriod" value="30"/>
<property name="hibernate.c3p0.timeout" value="0"/>
<property name="hibernate.c3p0.max_statements" value="0"/>
So that's the C3P0 configuration; it's entirely possible I'm missing the part that actually tells Hibernate "hey, use C3P0".
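For what it's worth, the part that tells Hibernate to use C3P0 is the connection provider. A minimal sketch of the extra persistence.xml property, assuming Hibernate 3.x with the C3P0 (and hibernate-c3p0) jars on the classpath; the Hibernate 4 class name appears in a later answer:
<property name="hibernate.connection.provider_class" value="org.hibernate.connection.C3P0ConnectionProvider"/>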
I'm about to try the suggestion that's right there in the error message: add autoReconnect=true to my JDBC URL, but this is really starting to feel like cargo-cult development at this point. I would appreciate some guidance on the proper way to address this issue. It's hard to debug, because the test cycle is effectively "run it overnight, see what happens in the morning".
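For reference, autoReconnect is just a query parameter on the Connector/J URL; a minimal sketch, with a made-up host and database name:
jdbc:mysql://localhost:3306/mydb?autoReconnect=true
Note that the Connector/J documentation itself cautions against relying on autoReconnect and steers you toward validating connections in the pool instead.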
I should probably mention how I'm actually using connections in my app. I have a custom Servlet Filter that intercepts all requests. It creates an EntityManager, stores it in a ThreadLocal, and the filter closes it in a catch/finally block. All my entities obtain a reference to the EntityManager from the ThreadLocal.
It's entirely possible that my filter is at fault, but as it only seems to happen after idle periods, I suspect something else is wrong. I do intend to move to Seam/Weld when I have a chance to catch my breath, but for now I'm relying on this filter.
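For context, a minimal sketch of the kind of filter described above; the class name, persistence-unit name, and helper method are illustrative, not the actual code:

import java.io.IOException;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.servlet.*;

// Hypothetical filter illustrating the ThreadLocal-EntityManager-per-request pattern.
public class EntityManagerFilter implements Filter {

    private static final ThreadLocal<EntityManager> CURRENT = new ThreadLocal<EntityManager>();
    private EntityManagerFactory emf;

    // Entities would call this to get the current request's EntityManager.
    public static EntityManager current() {
        return CURRENT.get();
    }

    public void init(FilterConfig config) {
        emf = Persistence.createEntityManagerFactory("myPU"); // "myPU" is a placeholder
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        EntityManager em = emf.createEntityManager();
        CURRENT.set(em);
        try {
            chain.doFilter(req, res);
        } finally {
            CURRENT.remove(); // don't leak the EM to the next request on this pooled thread
            em.close();       // returns the underlying JDBC connection to the pool
        }
    }

    public void destroy() {
        emf.close();
    }
}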
Edit: here's the TL;DR solution:
- use your container's connection pool, if you can (thanks, @partenon)
- make sure your connection pool uses connection validation (thanks, @matt b)
In my case, I had to go into the Glassfish console under Resources/JDBC/Connection Pools, Advanced Tab, and then enable Connection Validation. This was really the crucial step. You also probably want to set Validate At Most Once to something reasonable, say 100 seconds. If you're using C3P0 or similar, make sure you configure idle_test_period and preferredTestQuery.
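A minimal sketch of those two C3P0 properties in persistence.xml; the values are only examples:
<property name="hibernate.c3p0.idle_test_period" value="300"/>
<property name="hibernate.c3p0.preferredTestQuery" value="SELECT 1"/>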
Whatever you end up doing, it's important to test out your changes to see if they have the desired effect. To make the timeout happen faster in MySQL, you can temporarily set the wait_timeout to something low like 30 seconds by editing my.cnf. This was a tremendous help in debugging this problem, as it allowed me to test changes in seconds, rather than hours.
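For example, assuming you can edit the server configuration, a temporary entry like this in my.cnf (followed by a restart) shrinks the idle window; running SET GLOBAL wait_timeout = 30; from a MySQL client has the same effect for new connections without a restart:
[mysqld]
wait_timeout = 30
Remember to restore the original value once you're done testing.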
I think the real question is: why are you using an external connection pooling mechanism instead of Glassfish's own pooling? Your app server is better suited to provide this kind of service to your application. "External" connection pooling mechanisms are better suited for standalone applications, not for in-container applications.
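To illustrate, a hedged sketch of a persistence.xml that delegates pooling to a container-managed data source; the unit name and JNDI name are placeholders:
<persistence-unit name="myPU" transaction-type="JTA">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>jdbc/MyPool</jta-data-source>
</persistence-unit>
The pool itself (size, validation, timeouts) is then configured once in the Glassfish admin console instead of through hibernate.c3p0.* properties.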
You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
Just a shot in the dark, but have you taken a look at setting the autoReconnect=true property in your JDBC driver? Or consider disabling the server-side setting for client connection timeouts.
I think the property to set the connection test period is idle_test_period, not idleTestPeriod, as per the C3P0 documentation here. So you should be using:
<property name="hibernate.c3p0.idle_test_period" value="30"/>
instead.
I was getting the same problem, and it took time to figure out the solution.
I use Hibernate 4.0.1 and MySQL 5.1 (no Spring framework) and was facing this issue. First, make sure the C3P0 jars are properly on the classpath; they are essential.
I used these properties in hibernate.cfg.xml:
<property name="hibernate.c3p0.validate">true</property>
<property name="hibernate.connection.provider_class">org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider</property>
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.max_statements">50</property>
<property name="hibernate.c3p0.preferredTestQuery">SELECT 1;</property>
<property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
<property name="hibernate.c3p0.idle_test_period">10</property>
<property name="hibernate.c3p0.acquireRetryAttempts">5</property>
<property name="hibernate.c3p0.acquireRetryDelay">200</property>
<property name="hibernate.c3p0.timeout">40</property>
But it was of no use, because C3P0 was still picking up its default properties rather than the ones I set in hibernate.cfg.xml; you can check this in the logs. So I searched many websites for the right solution and finally came up with this: remove the C3P0 properties from hibernate.cfg.xml, create a c3p0-config.xml in the root of the classpath (alongside hibernate.cfg.xml), and set the properties as follows:
<c3p0-config>
  <default-config>
    <property name="automaticTestTable">con_test</property>
    <property name="checkoutTimeout">40</property>
    <property name="idleConnectionTestPeriod">10</property>
    <property name="initialPoolSize">10</property>
    <property name="maxPoolSize">20</property>
    <property name="minPoolSize">5</property>
    <property name="maxStatements">50</property>
    <property name="preferredTestQuery">SELECT 1;</property>
    <property name="acquireRetryAttempts">5</property>
    <property name="acquireRetryDelay">200</property>
    <property name="maxIdleTime">30</property>
  </default-config>
</c3p0-config>
But if you run it like this, the ORM still obtains a plain JDBC connection rather than one from the C3P0 pool, because we also need to add these properties to hibernate.cfg.xml:
<property name="hibernate.c3p0.validate">true</property>
<property name="hibernate.connection.provider_class">org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider</property>
Now everything works fine (at least it worked fine for me) and the issue is solved.
Check the following for references:
http://www.mchange.com/projects/c3p0/index.html#configuring_connection_testing
https://community.jboss.org/wiki/HowToConfigureTheC3P0ConnectionPool
I hope this solves your problem.