Quick question
Does the jmap heap dump include only the old generation, or also the young generation?
Long explanation
I have two heap dumps (jmap -heap:format=b 9999):
- one of my server with no load (no HTTP requests)
- one while it was working at about 50% CPU, under high load (benchmarking)
Now, the first dump shows a heap size bigger than the second (which I thought was weird).
Could that be because the young generation (at high load) is changing often, since the garbage collector runs often (yes, the JVM is almost full)? The old generation is 99% full, and I've noticed the young generation space usage varies a lot.
So that would mean I took the second dump right after the GC had done its job, which is why its size is smaller. Am I right?
Additional information:
Java args:
-XX:+UseParallelGC -XX:+AggressiveHeap
-Xms2048m -Xmx4096m -XX:NewSize=64m
-XX:PermSize=64m -XX:MaxPermSize=512m
Quick Answer
Both - The Heap is made up of the Young and Old Generation, so when you take a heap dump the contents include both. The stats in the heap dump should be separated. Try removing the binary portion of your command and look at it in plain text. You will see a summary of your configuration, and then a breakdown of each generation. On the flip side, a -histo would just show all objects on the heap with no distinction.
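For example, assuming the same process id 9999 from the question, the plain-text summary can be obtained with something like
jmap -heap 9999
which prints the GC configuration plus the capacity and usage of each generation.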
Long Answer
It could be that a garbage collection had just finished for the 2nd process. Or the opposite: the first process may not have had a full collection in a while and was sitting at higher memory. Was this application/server just restarted when you took the capture? If you look at an idle process using a tool like jvisualvm, you will see the memory allocation graphs move up and down even though your process isn't doing any work. This is just the JVM doing its own thing.
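jvisualvm ships with the Sun/Oracle JDK, so you can attach it to the running server directly; assuming your VisualVM version supports the --openpid option, and reusing pid 9999 from the question:
jvisualvm --openpid 9999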
Typically your Full GC should kick off well before it reaches a 99% mark in the Old Gen. The JVM will decide when to run the full GC. Your Young Gen will fluctuate a lot, as this is where objects are created/removed the fastest. There will be many partial GCs done to clean out the young gen before a full GC gets run. The difference between the two is the pause in your JVM activity: a partial GC will not hurt your application, but a full GC will stop your application while it runs. So you will want to minimize those the best you can.
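If you want to watch those generations fluctuate from a terminal rather than a GUI, jstat (also bundled with the JDK) can sample them; a quick sketch, again assuming pid 9999, printing space utilization and GC counts every second:
jstat -gcutil 9999 1000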
If you are looking for memory leaks, or just profiling to see how your application's GC is working, I would recommend using the startup flags to print the garbage collection stats.
-XX:+PrintGCDetails -verbose:gc -Xloggc:/log/path/gc.log
Run your program for a while and then load the captured log into a tool to help visualize the results. I personally use the Garbage Collection and Memory Visualizer offered in the IBM Support Assistant Workbench. It will provide you with a summary of the captured garbage collection stats as well as a dynamic graph which you can use to see how the memory in your application has been behaving. This will not give you what objects were on your heap.
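As an illustration, the logging flags simply go on the launch command alongside your existing options (yourapp.jar here is just a placeholder for however you start the server):
java -XX:+UseParallelGC -XX:+AggressiveHeap -Xms2048m -Xmx4096m -XX:NewSize=64m -XX:PermSize=64m -XX:MaxPermSize=512m -XX:+PrintGCDetails -verbose:gc -Xloggc:/log/path/gc.log -jar yourapp.jar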
At problem times in your application, if you can live with a pause, I would modify your jmap command to the following:
jmap -dump:format=b,file=/file/location/dump.hprof <pid>
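If you don't already know the pid, jps (also part of the JDK) will list the running JVMs; a typical sequence, reusing pid 9999 from the question, might look like:
jps -l
jmap -dump:format=b,file=/file/location/dump.hprof 9999
If the problem ends in an OutOfMemoryError, adding -XX:+HeapDumpOnOutOfMemoryError to the startup flags will write the same kind of .hprof automatically at the moment of failure.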
Using a tool like MAT (the Eclipse Memory Analyzer), you will be able to see all of the objects, leak suspects, and various other stats about the generations and the references held on the heap.
Edit For Tuning discussion
Based on your start-up parameters, there are a few things you can try:
- Set your -Xms equal to the same value as your -Xmx. By doing this the JVM doesn't have to spend time allocating more heap. If you expect your application to take 4gb, then give it all right away.
- Depending on the number of processors on the system you are running this application on, you can set the flag for -XX:ParallelGCThreads=##.
- I haven't tried this one, but the documentation shows a parameter for -XX:+UseParallelOldGC which shows some promise to reduce the time of old GC collection.
- Try changing your new generation size to 1/4 of the heap (1024 instead of 64). This could be forcing too much to the old generation. The default size is around 30%; you have your application configured to use around 2% for young gen. Though you have no max size configured, I worry that too much data would be moved to old gen because the new gen is too small to handle all of the requests. Thus you have to perform more full (paused) GCs in order to clean up the memory. (A combined example of the adjusted flags is shown after this list.)
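Putting points 1 and 4 together, the sizing flags might look roughly like this, with your remaining flags left as they are (and -XX:+UseParallelOldGC added if you also try point 3); treat the exact sizes as starting points to benchmark, not final values:
-Xms4096m -Xmx4096m -XX:NewSize=1024m -XX:MaxNewSize=1024m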
Overall, I do believe that if you are allocating this much memory this fast, then it could be a generational problem where memory is promoted to the old generation prematurely. If a tweak to your new gen size doesn't work, then you need to either add more heap via the -Xmx configuration, or take a step back and find exactly what is holding onto the memory. The MAT tool which we discussed earlier can show you the references holding onto the objects in memory. I would recommend trying bullet points 1 & 4 first. It will be trial and error for you to get the right values.