Out of heap space with tomcat 6 dbcp and oracle 11 thin driver

I am running a web application on Tomcat 6, using Tomcat DBCP to manage JDBC connections. The application runs against MySQL and SQL Server, and we recently started testing against Oracle. Unlike with MySQL and SQL Server, once we switched to the Oracle thin driver our application's memory usage began growing without bound.

The application runs on a Windows 2008 server against an Oracle 11g install. Tomcat runs as a service with a maximum memory of 2 GB, -XX:MaxPermSize=1024m, and a max thread stack size of 1024k. The DBCP settings have a max active of 20 and a max idle of 10. Left to run for a while, it will consume all 2 GB and start reporting:
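
For reference, here is a minimal sketch of how those pool limits would look on a programmatically configured DBCP data source (the URL, credentials, and host are placeholders, not our actual configuration; in Tomcat the same settings would normally live in context.xml):

    import org.apache.commons.dbcp.BasicDataSource;

    // Hypothetical equivalent of the pool settings described above.
    BasicDataSource ds = new BasicDataSource();
    ds.setDriverClassName("oracle.jdbc.OracleDriver");
    ds.setUrl("jdbc:oracle:thin:@dbhost:1521:ORCL"); // placeholder host/SID
    ds.setUsername("appuser"); // placeholder credentials
    ds.setPassword("secret");
    ds.setMaxActive(20); // at most 20 connections handed out at once
    ds.setMaxIdle(10);   // at most 10 idle connections kept in the pool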

java.lang.OutOfMemoryError: Java heap space

Our usage of JDBC is quite rudimentary. We get a connection from the JDBC datasource, perform our queries or updates, and call close() on the result, statement and connection (if each exists).
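
For illustration, this is roughly the pattern in question (the query, table, and output are placeholders):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.sql.DataSource;

    // Sketch of the "get, use, close" pattern described above.
    void queryIds(DataSource ds) throws Exception { // ds obtained from Tomcat's JNDI
        Connection conn = ds.getConnection();
        Statement stmt = null;
        ResultSet rs = null;
        try {
            stmt = conn.createStatement();
            rs = stmt.executeQuery("SELECT id FROM some_table"); // placeholder query
            while (rs.next()) {
                System.out.println(rs.getLong("id")); // stand-in for real processing
            }
        } finally {
            // Close each resource if it exists; closing the connection
            // returns it to the DBCP pool rather than dropping it.
            if (rs != null) try { rs.close(); } catch (Exception ignored) {}
            if (stmt != null) try { stmt.close(); } catch (Exception ignored) {}
            conn.close();
        }
    }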

When running with the MySQL 5 driver, or the JTDS driver, we can run in under 1 gig of memory. The only difference is the Oracle driver.

What can I do to stop this?

Update (March 30, 2011): I have added responses to the comments below. Can anyone help?

Here are the answers to the comments:

The database is not on the same server as Tomcat. The server hosting Tomcat has 8 gigs of physical memory.

I do not open and close a physical connection for each use. I am using Tomcat's DBCP, and I do call close() after each use, which returns the connection to the pool; the pool is set to a max active of 20 and a max idle of 10.

The version of the thin driver I am running is 11.2.0.2.0.

As for the -Xmx, I am running this as a service, with an "Initial memory pool" of 1024MB, a "Maximum memory pool" of 2048MB, and a "Thread stack size" of 1024KB.
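
(For anyone running Tomcat from a command line rather than as a service, those service settings correspond roughly to the standard HotSpot flags:

    -Xms1024m -Xmx2048m -Xss1024k

together with the -XX:MaxPermSize=1024m already mentioned above.)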

I don't have a stack trace - there is no error until the out of memory error (with no stack trace).


This answer might be very late, but is posted to aid others who might encounter a similar situation.

Whenever an OutOfMemoryError (OOME) is encountered in an application using JDBC, obtaining a heap dump is usually the most direct route to the root cause. In certain scenarios an OOME is thrown without any underlying leak: the heap itself is simply exhausted. Such a situation may be encountered when executing a JDBC Statement returns a very large result set.
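
For example, standard HotSpot options can capture such a dump automatically at the moment the error is thrown (the dump path below is just an example):

    -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\dumps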

The objects the Oracle JDBC driver creates when building the result set are apparently quite heavy, even though they are short-lived (most applications do not retain references to the result set after extracting the desired data). Returning a large amount of data (a large number of rows or columns) is bound to trigger this scenario.

If this is the case, the only options available for resolution are:

  • Increase the amount of memory available to the Java heap.
  • Rewrite the query if possible to return a smaller dataset (a related fetch-size knob is sketched below).
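
On the second point: the driver's per-statement row buffers scale with the fetch size, so keeping the fetch size modest is another way to shrink these short-lived allocations, at the cost of more round trips to the database. A minimal sketch, with placeholder names:

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    // conn is an open java.sql.Connection from the pool.
    PreparedStatement ps = conn.prepareStatement(
            "SELECT wide_col FROM big_table"); // placeholder query
    ps.setFetchSize(10); // a modest fetch size keeps the driver's row buffers small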

There could be other underlying issues, but those have been treated in enough detail in other questions - for instance, failing to close the result set, statement, and logical connection objects.


Vineet's answer is good. Here is a great link explaining the caching strategy of the 11g drivers and why this answer makes sense: http://www.oracle.com/technetwork/database/enterprise-edition/memory.pdf

If the result set of your queries is very large, then it is most likely the JDBC driver's buffer caching strategy that is chewing up your entire heap. The only other option I would add to Vineet's answer is tuning the maximum cached buffer size. If that value is set too low, though, it will hurt the performance of your query lookups.
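
As a sketch of how such a cap might be passed through DBCP, assuming the oracle.jdbc.maxCachedBufferSize connection property discussed in the white paper above (the value here is only an example; see the paper for how the driver interprets it):

    import org.apache.commons.dbcp.BasicDataSource;

    // Forward the buffer-cache cap to the Oracle driver as a
    // connection property. The value is an example only; the white
    // paper linked above explains how the driver interprets it.
    BasicDataSource ds = new BasicDataSource();
    ds.addConnectionProperty("oracle.jdbc.maxCachedBufferSize", "17");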
