
Is JDBC a big memory hog?

devze.com · 2022-12-11 14:22 · Source: web

I need to repeatedly read data from a database in my Java code, and therefore adopted JDBC. However, it seems to me that using JDBC takes up a lot of memory. I tried to be careful about closing the objects created for JDBC (ResultSet, Statement), but it still seems to hog a lot of memory, especially compared to reading input from a text file. Does anybody know the best way to reduce memory consumption? Thanks.


JDBC itself does not eat up much memory. It caches some metadata, but the vast majority of memory is usually taken up by the query results. JDBC is just a standard interface for accessing your data; it is not much of an implementation in its own right. That's why you need JDBC drivers that implement the specifics for each database product.

That JDBC is not resource-intensive is supported by the fact that it runs even on mobile devices with Java ME, a very resource-limited environment.

So while you might appreciate the easier handling of data with frameworks like Hibernate or JPA, you should not worry about JDBC consuming any amount of resources worth mentioning in the context of the overall application.

As they say: Don't try to optimize things that are not a problem :)


JDBC is not a memory hog. The data it returns can be a huge memory hog.

One common problem with JDBC is that result sets, which can contain huge amounts of data (and are not always memory-optimized), are not handled or closed correctly. To prevent memory leaks from ResultSet objects lying around, the developer must take care to release them before moving on. (Since Java handles most memory cleanup automatically, this is a blind spot for many developers, so it isn't surprising.)

Consider using this code. It uses nested try/finally blocks to definitively close the result set:

try {
    Statement stmt = conn.createStatement();
    try {
        ResultSet rs = stmt.executeQuery("some sql");
        try {
            // ResultSet processing goes here
        } finally {
            rs.close();
        }
    } finally {
        stmt.close();
    }
} catch (SQLException ex) {
    // exception processing for any problems
}

This guarantees that the result set is closed - even if an exception is thrown.
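Since Java 7, the same guarantee can be written more concisely with try-with-resources, which closes the resources automatically in reverse order of creation. A minimal sketch, assuming `conn` is an existing java.sql.Connection and the SQL is a placeholder:

```java
// try-with-resources (Java 7+): stmt and rs are closed automatically,
// in reverse order of creation, even if an exception is thrown.
// "conn" is an existing java.sql.Connection; the SQL is a placeholder.
try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("some sql")) {
    while (rs.next()) {
        // ResultSet processing goes here
    }
} catch (SQLException ex) {
    // exception processing for any problems
}
```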


It is very unlikely that 'JDBC' is a memory hog. JDBC is just a pipe. The amount of memory used will depend on what database you are talking to and, more importantly, what sort of queries you are running, and whether you are using cursors correctly.

You might look at Hibernate for help in managing memory usage when working with large amounts of data.


If you ever encounter this problem, then it's probably your own code that is the memory hog. This can happen if you haul the complete database contents into Java's memory. That's not JDBC's fault.

To reduce memory consumption, it's best to write SQL queries that are as specific as possible. Query only the data you actually need, nothing more.

If you really need all the data of a table, then JDBC is probably the wrong tool. Decent DB servers ship with dedicated import/export tools for that. Give those a look first.
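As a sketch of what "as specific as possible" means in practice, select only the columns and rows you need with a parameterized query rather than pulling whole tables into memory. The table and column names here are made up, and `conn` is assumed to be an existing java.sql.Connection:

```java
// Project only the needed columns and filter on the server side.
// "orders", "customer_id", and "total" are hypothetical names.
String sql = "SELECT id, total FROM orders WHERE customer_id = ?";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setLong(1, customerId); // customerId defined elsewhere
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // work with rs.getLong("id"), rs.getBigDecimal("total")
        }
    }
}
```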


I have to agree what others have said here, that JDBC itself doesn't use a large memory footprint. However if you're concerned about a particular JDBC operation using a lot of memory you could use either JConsole or a profiler to see how much heap is used during said method execution.


I would like to add an important point here, after running into memory problems myself:

I was reading large datasets from a MySQL database (about 600,000 rows of 20 columns) and kept running out of heap space. I thought I could fix it by changing the fetch size, but setting the fetch size on the PreparedStatement did nothing.

What I discovered was that the MySQL JDBC driver I was using (version 5.1.15) doesn't honor fetch sizes. In fact, for every MySQL query it loads the entire ResultSet into memory. However, if you set the fetch size to Integer.MIN_VALUE, the driver will load one row at a time from the MySQL server, with one caveat: you cannot execute any other statements on the connection until the ResultSet is closed.

It's documented here:

http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html
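In code, the streaming mode described above looks roughly like this. Note this is Connector/J-specific behavior, not part of the JDBC spec; the table name is made up and `conn` is an existing java.sql.Connection:

```java
// Connector/J streams rows one at a time only when the statement is
// forward-only, read-only, and the fetch size is Integer.MIN_VALUE.
try (Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                           ResultSet.CONCUR_READ_ONLY)) {
    stmt.setFetchSize(Integer.MIN_VALUE);
    try (ResultSet rs = stmt.executeQuery("SELECT id, data FROM big_table")) {
        while (rs.next()) {
            // process one row at a time; the full result set is
            // never held in memory
        }
    } // no other statement may run on this connection until rs closes
}
```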

That being said, if you have control over your MySQL server, you can set the 'useCursorFetch' connection property to true and the server will return 'defaultFetchSize' rows at a time instead of all of them. This works as of MySQL 5.0.2. The developer claims it's an experimental hack though, so be warned: http://forums.mysql.com/read.php?39,137457,137457#msg-137457

Anyway, the moral of the story is to check the JDBC driver you're using for any peculiarities.


It's more a factor of the data you are loading than of the JDBC library. Since JDBC calls can involve a lot of data, I'd check that you don't have a lot of objects sitting around in a collection that isn't being released, surviving long enough to be promoted to the old generation of the heap.


One case where JDBC can use a lot of memory is when you run queries that return many rows and scrolling result sets are not supported by your JDBC driver. This causes all rows to be retrieved from the server and potentially loaded into RAM at once.

The solution is to split your queries into smaller batches, or enable scrolling result sets if possible.
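A sketch of the batching approach, using keyset pagination; the `events` table, the batch size, and the LIMIT syntax (MySQL/PostgreSQL) are assumptions. Each query holds at most one batch of rows in memory:

```java
// Fetch in batches of 1000, using the last seen id as the cursor.
// Keyset pagination (WHERE id > ?) avoids the cost of large OFFSETs.
long lastId = 0;
boolean more = true;
while (more) {
    more = false;
    try (PreparedStatement ps = conn.prepareStatement(
             "SELECT id, payload FROM events WHERE id > ? "
             + "ORDER BY id LIMIT 1000")) {
        ps.setLong(1, lastId);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                lastId = rs.getLong("id");
                more = true; // a non-empty batch means keep going
                // process rs.getString("payload") here
            }
        }
    }
}
```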
