
Optimizing for garbage collection of large collections

开发者 https://www.devze.com 2023-02-15 10:10 (source: web)
I am reading from a database a large collection of this type: List<Rows<Long,String,ByteBuffer>>


I then read the data from this list of rows one by one and copy it into container objects. Should I set each individual row in the list to null as I finish reading it, or should I de-reference the whole list only at the end, so that the rows can be garbage collected?

Since each row is quite big, consisting of large strings/blobs/text content etc., I am trying to optimize for garbage collection. I hope this is not what's called premature optimization!?
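The "null as you go" option in the question can be sketched as follows. The `Row` record here is a hypothetical stand-in for the database driver's `Rows<Long,String,ByteBuffer>` type, and `drain` is an illustrative name, not an API from the question:

```java
import java.util.ArrayList;
import java.util.List;

public class RowDrain {
    // Hypothetical stand-in for the driver's Rows<Long,String,ByteBuffer> type.
    record Row(long id, String key, byte[] blob) {}

    // Copies what is needed out of each row into a container, then nulls the
    // list slot immediately, so each Row becomes unreachable (and eligible
    // for collection) as soon as it has been consumed.
    static List<String> drain(List<Row> rows) {
        List<String> containers = new ArrayList<>(rows.size());
        for (int i = 0; i < rows.size(); i++) {
            Row r = rows.get(i);
            containers.add(r.key()); // copy the needed data into the container
            rows.set(i, null);       // de-reference the row in place
        }
        return containers;
    }
}
```

The alternative is simply to let the whole list go out of scope after the loop, in which case all rows become unreachable at once.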


If you haven't measured your program's performance, then it's a premature optimization.

(Not every optimization performed before measuring is premature, but this kind of micro-optimization is.)


I would suggest dereferencing them. This is not premature optimization because, unlike time, the amount of memory available to your program for accomplishing its task is not as much under your control.


As larsmans said, this is the very definition of premature optimization. However, questions like these often pop up, and rather than forgetting about them I like to add profiling points right away (wrapped by an on/off switch, like Logger.isEnabled()) and then move on. Look at http://netbeans.org/features/java/profiler.html for an easy profiling tool/setup.
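A minimal sketch of such a switch-guarded profiling point: the system-property name `app.profiling` and the helper name `timed` are illustrative choices, not from any particular library:

```java
public class ProfilePoint {
    // On/off switch in the spirit of Logger.isEnabled(); reads the (assumed)
    // "app.profiling" system property, which defaults to false.
    static final boolean PROFILING = Boolean.getBoolean("app.profiling");

    // Runs the work; returns elapsed nanoseconds when profiling is enabled,
    // or -1 when the switch is off (so the timing code costs almost nothing).
    static long timed(Runnable work) {
        if (!PROFILING) {
            work.run();
            return -1L;
        }
        long t0 = System.nanoTime();
        work.run();
        return System.nanoTime() - t0;
    }
}
```

You would then wrap the row-draining loop in `timed(...)` and log the result only when the switch is on.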


As larsmans has mentioned, there is the disadvantage of complexity.

But there may also be a performance disadvantage - nulling a reference involves writing to memory, and in a modern garbage-collected environment, writing to memory is not necessarily simply a store. There may also be some book-keeping for the benefit of the collector - look up 'write barrier' and 'card marking' in the context of garbage collection. Writing also has effects on processor caches; on a multiprocessor system, it will cause cache coherency traffic between processors, which consumes bandwidth.

Now, I don't think any of these effects are huge. But you should be aware that writes to memory are not always as cheap as you might think. That's why you have to profile before you optimise, and then profile again afterwards!

