I'm making a Java application that uses the Slick library to load images. However, on some computers, I get this error when trying to run the program:
Exception in thread "main" java.lang.OutOfMemoryError
at sun.misc.Unsafe.allocateMemory(Native Method)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:99)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
at org.lwjgl.BufferUtils.createByteBuffer(BufferUtils.java:60)
at org.newdawn.slick.opengl.PNGImageData.loadImage(PNGImageData.java:692)
at org.newdawn.slick.opengl.CompositeImageData.loadImage(CompositeImageData.java:62)
at org.newdawn.slick.opengl.CompositeImageData.loadImage(CompositeImageData.java:43)
My VM options are:
-Djava.library.path=lib -Xms1024M -Xmx1024M -XX:PermSize=256M -XX:MaxPermSize=256M
The program loads a few large images (1024 x 768 resolution) at the beginning.
Any help to solve this problem would be greatly appreciated.
The OutOfMemoryError simply indicates that the JVM has run out of memory. The first line of the stack trace isn't really relevant here; it's just "by coincidence" the exact point at which the JVM ran out of memory, after all garbage collection attempts were in vain.
There are basically two solutions to this:
- Give the JVM more memory.
- Fix memory leaks and/or allocate less memory in the code (i.e. make the code more memory efficient; don't hold on to memory-expensive resources like large byte[] arrays for too long, and so on).
Point 1 is easy to do, but it's not always the solution if there is apparently a memory leak in the code. Point 2 is best nailed down with the help of a Java profiler.
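As a quick sanity check for point 1, you can print the heap the JVM actually received, to confirm the -Xms/-Xmx settings are taking effect on the machines that fail. This is only a minimal sketch; the class name HeapCheck is made up:

```java
// Minimal sketch: print the heap the JVM actually got, to verify the
// -Xms/-Xmx settings took effect on the machines where the error occurs.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free heap:  " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}
```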
Depending on the exact scenario, it can be a manifestation of a JVM bug present since 2005.
If you look at the code of the java.nio.DirectByteBuffer class, you will see that it creates a thread to deallocate the requested memory. If your program uses lots of instances of this class (indirectly, via IOUtils for example), you might get an OOME even if you have enough memory; it's just that the thread did not get a chance to free the memory yet.
This is a nasty one.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6296278
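If that is what you are hitting, the usual mitigation is to allocate fewer direct buffers in the first place, for example by reusing one. The following is only an illustrative sketch under the assumption that your own code (not Slick's internals) controls the allocation; all names are made up:

```java
import java.nio.ByteBuffer;

// Illustrative sketch (not Slick's actual code): reuse one direct buffer
// instead of allocating a fresh one per image, so native memory is not
// requested faster than the deallocating thread can release it.
// Note: not thread-safe as written.
public class BufferReuseExample {
    // Assumed size: big enough for the largest image you expect to decode.
    private static final ByteBuffer SHARED = ByteBuffer.allocateDirect(4 * 1024 * 1024);

    static ByteBuffer bufferFor(int requiredBytes) {
        if (requiredBytes > SHARED.capacity()) {
            throw new IllegalArgumentException("image too large for shared buffer");
        }
        SHARED.clear();              // reset position/limit for the next caller
        SHARED.limit(requiredBytes); // expose only the bytes actually needed
        return SHARED;
    }
}
```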
Found the problem: I was trying to load a 6144x6144 PNG into my program.
After resizing the image to a 256x256 TGA, the program loads fine without the error.
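That size would explain it: assuming the PNG is decoded as RGBA, a 6144x6144 image needs roughly 6144 × 6144 × 4 bytes ≈ 144 MB in a single buffer. If you want to guard against this in the future, a hypothetical helper (names made up, using plain ImageIO rather than Slick's loader) can read just the image header and estimate the decoded size before loading:

```java
import java.io.File;
import java.io.IOException;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

// Hypothetical helper (not part of Slick): read only the image header to
// estimate how much buffer space the decoded image will need.
public class ImageSizeCheck {
    static long estimatedDecodedBytes(File file) throws IOException {
        ImageInputStream in = ImageIO.createImageInputStream(file);
        if (in == null) {
            throw new IOException("cannot open " + file);
        }
        try {
            Iterator<ImageReader> readers = ImageIO.getImageReaders(in);
            if (!readers.hasNext()) {
                throw new IOException("unsupported image format: " + file);
            }
            ImageReader reader = readers.next();
            try {
                reader.setInput(in);
                // 4 bytes per pixel assumes RGBA; adjust for other pixel formats.
                return (long) reader.getWidth(0) * reader.getHeight(0) * 4;
            } finally {
                reader.dispose();
            }
        } finally {
            in.close();
        }
    }
}
```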
@BalusC is mostly correct about the cause and solutions.
However, it is possible that the immediate cause of the OOMEs is that on some computers the JVM is unable to expand the heap to the size specified by the -XX options. For instance, if the amount of memory requested exceeds the remaining available physical memory + swap space, the OS will refuse the JVM's request to expand the heap. This might explain why the application works on some machines and not others ... with the same VM options, and (I guess) processor architectures and JVM versions.
If this is the problem, the OP will need to add more physical memory or increase the system's swap space.
To expand on @BalusC's second solution, the OP may need to change the application so that it does not eagerly load all of the images at startup. Rather, it could load them lazily and use a cache with weak references to ensure that the GC can discard them if memory is tight. However, if it is critical that all images are preloaded, then the OP has no choice but to figure out how to give the JVM a bigger heap; see above.
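A rough illustration of the lazy-loading idea follows; the class and method names are invented, and it assumes Slick's org.newdawn.slick.Image(String) constructor does the actual decoding:

```java
import java.util.HashMap;
import java.util.Map;
import org.newdawn.slick.Image;
import org.newdawn.slick.SlickException;

// Illustrative sketch only: load each image the first time it is requested
// instead of loading everything at startup, spreading out the memory demand.
public class LazyImages {
    private final Map<String, Image> loaded = new HashMap<String, Image>();

    public Image get(String path) throws SlickException {
        Image img = loaded.get(path);
        if (img == null) {
            img = new Image(path); // Slick decodes the file here, on demand
            loaded.put(path, img);
        }
        return img;
    }
}
```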
When you get this specific exception, with a stack trace ending in Unsafe.allocateMemory(Native Method), it means that the OS denied a request to allocate more memory; exceptions that indicate you've hit your JVM's limit on Java heap memory would have a different stack trace.
So the available fixes are: save some memory somewhere in your application, buy more physical RAM (or a larger drive, to have more room for swap files), or check whether you can configure your operating system to allow more memory allocation, e.g. by increasing swap space.
If you're allocating regular Java objects (and not native memory buffers or very large arrays of primitives) when this happens, I would only recommend rewriting your code to save memory or getting a machine with more RAM, rather than relying on swap: since the garbage collector needs to walk every reachable object on the heap to mark it as live, even objects you rarely reference have to be paged back into memory regularly by the JVM, so using a lot of swap for small Java objects gets quite expensive.
For the case of many large images, I would recommend using only soft references to hold on to images you think you might need again, which allows the JVM to drop them from memory if memory is needed, but encourages it to hold them in memory if practical, to avoid the performance issues of regularly re-loading them.
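A minimal sketch of that idea, using plain ImageIO and invented names rather than Slick's own loader:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;
import javax.imageio.ImageIO;

// Illustrative sketch: hold decoded images only through SoftReferences, so
// the garbage collector may drop them under memory pressure; re-load from
// disk when that happens.
public class SoftImageCache {
    private final Map<String, SoftReference<BufferedImage>> cache =
            new HashMap<String, SoftReference<BufferedImage>>();

    public BufferedImage get(String path) throws IOException {
        SoftReference<BufferedImage> ref = cache.get(path);
        BufferedImage img = (ref != null) ? ref.get() : null;
        if (img == null) {                       // never loaded, or reclaimed by GC
            img = ImageIO.read(new File(path));  // re-load from disk
            cache.put(path, new SoftReference<BufferedImage>(img));
        }
        return img;
    }
}
```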