An application of ours has recently started sporadically crashing with the message "java.lang.OutOfMemoryError: requested 8589934608 bytes for Chunk::new. Out of swap space?".
I've looked around on the net, and everywhere the suggestions are limited to:
- revert to a previous version of Java
- fiddle with the memory settings
- use client instead of server mode
Reverting to a previous version implies that the new Java has a bug, but I haven't seen any indication of that. Memory isn't an issue at all; the server has 32GB available, Xmx is set to 20GB, and Xms to 10GB. I can't see the JVM running out of the remaining 12GB (less the amount given to the handful of other processes on the machine). And we're stuck with server mode due to the nature of the application and environment.
When I look at the memory and CPU usage for the application, I see constant memory usage for the whole day, but then suddenly, right before it dies, CPU usage goes up to 100% and memory usage goes from X, to X + 2GB, to X + 4GB, to (sometimes) X + 8GB, to JVM death. It would appear that there may be a cycle of repeated array resizing going on inside the JIT compilation.
I've now seen the error occur with the above 8GB request and also with 16GB requests. Every time, the method being compiled when this happens is the same one. It is a simple method with non-nested loops and no recursion, and it calls methods on objects that simply return static or instance member fields, with little computation.
So I have 2 questions:
- Does anybody have any suggestions?
- Can I test whether there is a problem compiling this specific method in a test environment, without running the whole application, by invoking the JIT compiler directly? Or should I start up the application and tell it to compile methods after a much smaller call count (like 2) to force it to compile the method almost immediately, instead of at a random point in the day? (See the sketch just below.)
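For example, something like this should force near-immediate compilation and log each method as it gets compiled (a sketch, assuming a HotSpot JVM; the jar and class names are placeholders):

java -XX:CompileThreshold=2 -XX:+PrintCompilation -cp app.jar com.mycompany.Main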
@StephenC
The JVM is 1.6.0_20 (previously 1.6.0_0), running on Solaris. I know it's the compilation that is causing the problem, for a couple of reasons:
- ps, in the seconds leading up to the crash, shows that a Java thread whose id corresponds to the compiler thread (from jstack) is taking up 100% of the CPU time.
- jstack shows the issue is in JavaThread "CompilerThread1" daemon [_thread_in_native, id=34, ...].
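For anyone repeating this kind of diagnosis, this is roughly how the two outputs get matched up (a sketch; the pid 12345 is a placeholder, and depending on the tool the thread id may be printed in decimal, as id=34, or in hex, as nid=0x22):

prstat -mL -p 12345    # Solaris: per-thread (LWP) CPU usage for the Java process
jstack 12345           # thread dump; find the thread whose id matches the hot LWP
printf '%x\n' 34       # convert a decimal LWP id to hex if the dump shows nid=0x...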
The method mentioned in the jstack output is always the same one, and it is one we wrote. If you look at sample jstack output you will know what I mean, but for obvious reasons I can't provide code samples or filenames. I will say that it is a very simple method: essentially a handful of null checks, 2 for loops that do equality checks and possibly assign values, and some simple method calls afterwards. All in all, maybe 40 lines of code.
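To give a sense of its shape without posting the real thing, a purely hypothetical method with the same general structure (every name here is invented; this is not our code) might look like:

class Example {
    static class Item {
        static final int STALE_ID = -1;  // static member field returned directly
        int id;                          // instance member field
        boolean seen;
        int getId() { return id; }       // trivial accessor, no computation
    }

    static void reconcile(Item[] current, Item[] incoming) {
        if (current == null || incoming == null || current.length == 0) {
            return;                                        // a handful of null checks
        }
        for (int i = 0; i < current.length; i++) {         // first loop: equality checks,
            if (current[i].getId() == Item.STALE_ID) {     // possibly assigning values
                current[i].id = 0;
            }
        }
        for (int j = 0; j < incoming.length; j++) {        // second, non-nested loop
            if (incoming[j].getId() == current[j % current.length].getId()) {
                incoming[j].seen = true;
            }
        }
    }
}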
This issue has happened twice in 2 weeks, even though the application runs every day and is restarted daily. In addition, the application wasn't under heavy load on either occasion.
You can exclude a particular method from being JIT'ed by creating a file called .hotspot_compiler and putting it in your application's working directory. Simply add an entry to the file in the following format:
exclude com/amir/SomeClass someMethod
And the console output from the compiler will look like:
### Excluding compile: com.amir.SomeClass::someMethod
For more information, read this. If you're not sure what your application's working directory is, use the
-XX:CompileCommandFile=/my/excludefile/location/.hotspot_compiler
flag in your Java start script or command line.
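Put together, the launch would look something like this (the path and jar name are placeholders):

java -XX:CompileCommandFile=/my/excludefile/location/.hotspot_compiler -jar myapp.jar

You can also pass a single exclusion directly on the command line instead of using a file:

java -XX:CompileCommand=exclude,com/amir/SomeClass,someMethod -jar myapp.jar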
Alternatively, if you're not sure it's the JIT compiler's fault and want to see if you can reproduce the problem without any JIT'ing, run your Java process with -Xint.
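For example: java -Xint -jar myapp.jar (the jar name is a placeholder). Expect a significant slowdown, since everything will run interpreted.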
Okay, I did a quick search and found a thread on the Sun Java forums that discusses this. Hope it helps.
Here is another entry on Oracle's forum describing a similar sporadic crash. There is one answer where someone solved the problem by reconfiguring the GC's survivor ratio.
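For reference, that kind of tuning is done with flags along these lines (the values are illustrative only, not the ones from that thread):

java -XX:SurvivorRatio=8 -XX:+PrintGCDetails -jar myapp.jar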