Java: Do all processes run under the same JVM?

We have a Linux server that starts around 20 Java programs. These programs are all identical, except that each uses a different port. The programs run fine. However, after a while all 20 programs crash at exactly the same time. Each program is allocated 2 GB of memory by starting it up like this:

java -jar -Xmx2000m

However, as far as we know, these programs do not get anywhere near using that amount of memory. The entire system has 4 GB of memory.

So, the question is: could one Java program be responsible for crashing the 19 other ones? Is the VM shared, so that when it crashes, ALL Java programs crash? Is there a log file I could possibly check for a reason why Java crashed? The Java output did not show any error.

EDIT: The strange thing is that this happened after a longer time, around 3 hours. These 20 processes had been running for quite some time before suddenly ALL crashing at the same time. And why do they ALL crash, if the Java runtime starts its own process for each program?


It is difficult to say precisely why the Java processes are all exiting at the same time. It is not even clear why they are dying. To diagnose the problem, I would:

  • turn on GC logging; e.g. add the "-verbose:gc" command-line option,

  • make sure that the application catches and logs exceptions that might be killing the 'main' thread (see the sketch after this list), and

  • look in the processes' current directories to see if they are leaving crash dumps.
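
For the second point, a minimal sketch of what that catch-and-log wrapper could look like. The class and method names here are made up for illustration; this is not the asker's actual program:

    public class Main {
        public static void main(String[] args) {
            // Log anything that escapes any thread; otherwise the JVM can die silently.
            Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
                public void uncaughtException(Thread t, Throwable e) {
                    System.err.println("Uncaught exception in thread " + t.getName());
                    e.printStackTrace();
                }
            });
            try {
                runServer(args); // hypothetical: the application's real entry point
            } catch (Throwable e) {
                e.printStackTrace(); // make sure the reason reaches the output/log
                System.exit(1);
            }
        }

        private static void runServer(String[] args) {
            // application code would go here
        }
    }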

But independent of that, when you run java -jar -Xmx2000m ... 20 times, you are starting 20 OS processes, each of which could use in excess of 2 GB of virtual memory. On a machine with 4 GB of physical memory, this is simply crazy. Even if you have enough swap space (40 GB or more) to support that much virtual memory, the chances are that you will cause virtual-memory thrashing before the heaps get anywhere near that big. When the system starts thrashing, system performance will drop through the floor.

To avoid this, you need to make sure that the total virtual memory requirements of all active processes on the system are not much more than the physical memory you have. Reduce the number of Java processes, reduce their maximum heap sizes, or both. (And bear in mind that a JVM uses non-heap memory as well.)
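
For example (the heap size and jar name below are purely illustrative), 20 processes capped at 150 MB of heap each come to roughly 3 GB, which leaves some headroom for non-heap memory and the rest of the system:

java -Xmx150m -jar yourapp.jar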

Another thing that you should consider is rearchitecting your system so that it uses multi-threading within one JVM rather than multiple JVMs. If implemented correctly, you should be able to get more throughput in a multi-threaded architecture (see the sketch after this list) because:

  • it avoids having multiple copies of the code and common data structures, so you have relatively more memory for useful heap objects,

  • the threads in a multi-threaded architecture can share caches of previously fetched objects, previously computed values, and so on, and

  • GC is more efficient with a single large heap than with lots of smaller heaps.
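
Since the 20 programs differ only in their port, a single JVM could accept connections on all the ports and fan the work out to a shared thread pool. A minimal sketch, in which the class name, port range, and handle method are all made up for illustration:

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class MultiPortServer {
        public static void main(String[] args) throws IOException {
            final ExecutorService pool = Executors.newCachedThreadPool();
            // One acceptor per port, all inside a single JVM and a single heap.
            for (int port = 9000; port < 9020; port++) {
                final ServerSocket server = new ServerSocket(port);
                pool.execute(new Runnable() {
                    public void run() {
                        while (true) {
                            try {
                                final Socket client = server.accept();
                                pool.execute(new Runnable() {
                                    public void run() {
                                        handle(client); // hypothetical request handler
                                    }
                                });
                            } catch (IOException e) {
                                e.printStackTrace();
                                return;
                            }
                        }
                    }
                });
            }
        }

        private static void handle(Socket client) {
            // application logic; remember to close the socket when done
        }
    }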


You obviously can't really allocate 2 GB of memory to each of 20 processes when the system only has 4 GB. Even though my crystal ball is not able to access your log files right now, I would assume that the entire system runs out of memory, causing most of the running processes to crash at nearly the same time.


No. Each invocation of the java launcher starts a new runtime environment, so if you execute it 20 times, you will have 20 JVMs running, each of which is allowed up to 2 GB of memory. Since you only have 4 GB in total, it's quite possible that you're running out of memory simply by trying to allocate too much.
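
You can confirm this on the server itself; for instance, the jps tool that ships with Sun's JDK prints one line per running JVM:

jps -lv

With your 20 programs running, you should see 20 entries, each with its own process id (a plain ps listing would show 20 separate java processes as well).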


Each Java program will have its own process. Code in memory will be shared (that is, the VM binaries), but that should not be a problem.

You should really check whether the Java applications are dying due to memory problems. On Linux, if you ask the OS for memory, it will tell you that it is OK, but it will not actually reserve it. It is only when you use that memory that the OS actually commits it to your process.
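
A small sketch of that reserve-versus-touch difference, assuming a HotSpot-style JVM on Linux without -XX:+AlwaysPreTouch (the class name is made up, the sizes are arbitrary, and you'd need to run it with a heap limit above 1 GB, e.g. the question's -Xmx2000m):

    public class TouchDemo {
        public static void main(String[] args) throws InterruptedException {
            // Reserving ~1 GB: resident size typically stays small because
            // the kernel hands out zero-filled pages lazily.
            byte[] big = new byte[1000 * 1024 * 1024];
            System.out.println("allocated; check RSS with ps/top now");
            Thread.sleep(10000);
            // Writing forces the kernel to actually back every page.
            java.util.Arrays.fill(big, (byte) 1);
            System.out.println("touched; RSS should now be around 1 GB");
            Thread.sleep(10000);
        }
    }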

It might be the case that all of the processes request 2 GB, for a total of 40 GB of memory, and the OS just allows it. If the processes behave in the same way and keep growing in memory size, at some point each of them will hit about 200 MB (I am disregarding other processes and such, just to simplify the numbers: 4 GB divided by 20 processes), and trying to use more memory will get a process killed by the kernel.

Now, I would not expect this to kill all of your applications at once: after the first application is killed, its memory is reclaimed by the OS and becomes available to the other processes, so they should die one by one, and at least one or two should still be alive at the end. In some rare circumstances it might happen that, while the first process is still being cleaned up (core being dumped, and all the other OS-related work), the rest of the processes hit the same problem and all die before the memory from the first one actually becomes available. But I would not count on that being a high-probability event.


Note that starting Sun's JVM with -Xmx2000m only limits the heap to 2 GB. There are other memory areas that have to be sized independently.
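
For example, on Sun's HotSpot JVM the permanent generation, per-thread stacks, and NIO direct buffers all have their own flags. The values and jar name below are illustrative, and the exact flags vary between JVM versions:

java -Xmx2000m -XX:MaxPermSize=128m -Xss512k -XX:MaxDirectMemorySize=64m -jar yourapp.jar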

It might indeed be possible that all of these processes encounter an out-of-memory condition simultaneously, and that they all stop as a result.

There are JVMs other than Sun's that allow for different memory sizing parameters; perhaps you'd prefer an alternative JVM.

Different Java programs log their output differently, so it isn't really possible to tell you where a log might be without having more knowledge about the program.

There may be some kind of output lying around that might give you an idea of why the applications crashed. I'd look for it by listing the directory from which the applications were launched.


java -Xmx only determines the maximum heap size. To make the JVMs actually grab 2 GB immediately, you'd have to use -Xms as well. Both are legal together, i.e. java -Xmx2000m -Xms2000m. Type "java -X" for more details.

If your JVMs are running out of memory, they should be reporting an OutOfMemoryError. Run them with their output redirected to a log. You can also set them to write a heap dump when they die, as there's probably something else going on here.
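
With Sun's JVM, something along these lines captures both; the jar and log names are placeholders:

java -Xmx2000m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -jar yourapp.jar > app.log 2>&1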


Sounds like the work of the dreaded Linux OOM killer (no, this isn't a joke). As others have mentioned, you are probably over-allocating the memory on the box, and the OOM killer tends to target processes with a large resident memory footprint (like a JVM).
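
If the OOM killer did strike, the kernel will have logged it. A quick way to check (the exact message text varies by kernel version):

dmesg | grep -i "out of memory"

The same messages usually also end up in /var/log/messages or /var/log/syslog.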
