What is the Azul "Zing" platform?
Visiting the Azul site (link) turned into a marketing horror - and after wading through every little bit of it, I still don't have a clue. Does anyone have any experience with it? What are the requirements for an application to be "Zing"-ed? (Zing-able?) If, for example, I have an application that loads an object graph into memory and constantly traverses huge chunks of it (so most of it is "warm" - I can't push parts of it out to slower data stores) - can Azul help me? (I already know Terracotta BigMemory can't...)
I want to clarify - I'm looking for feedback from someone who actually "zingified" their product and put it on the Azul VM successfully (or saw that it doesn't work).
Ran.
[Edit 1 - added page link] [Edit 2 - experience wanted]
Remember what Azul used to do: make customized multicore Java appliances. An Azul machine might have 60 or 100 cores, and there were all sorts of clever tricks to take advantage of the parallelism (the one that impressed me was the optimistic locking: a thread that was supposed to acquire a lock just assumed it had the lock and went forward, and if it turned out later that, no, it was supposed to have blocked, it somehow unwound all its changes, went back, and waited).
The problem is, of course, that custom hardware is a graveyard. Azul had spent all this time making software for hardware no one would buy. So, as a corporation, they imitated their own product: they backed up, unwound their changes, and ported all their cleverness (the optimistic locking, the hypervisor, other stuff) from custom hardware to commodity multicore machines, so instead of paying $100,000 for an 80-core machine, you can spend $20,000 for 10 eight-core machines in a cloud*.
[ * All numbers surgically extracted from my anatomy. ]
Is it a good idea? Does it work? I don't know, but I hope so. I met all the Azul guys at the 2003 JavaOne and they really impressed me.
I used to read research papers on Garbage Collection, for fun (I feel much better now, thanks for asking). A common thread through them was, "These algorithms would be faster/feasible if we had hardware support for write barriers".
There is a read-write lock problem with GC. You can't figure out what's garbage if the app keeps moving pointers around while you're trying to take inventory. One trick people have tried over and over is changing how pointer writes work so the changes are tracked. This is called a Write Barrier, because you cannot write without doing the bookkeeping. It allows the app and the GC to run at the same time, but in many cases it turned out to make the app run too slowly.
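For a concrete (and deliberately simplified) picture of that bookkeeping, here is a sketch of a card-marking-style write barrier in plain Java. The CardTable class, the card size, and the method names are my own illustration and are not how Zing or any particular JVM implements it; the point is only that every pointer store pays a small extra cost so the collector can later rescan just the touched regions instead of stopping the world.

```java
// Conceptual card-marking write barrier - an illustration, not Zing's mechanism.
// Every reference store also marks which fixed-size "card" of the heap was
// touched, so a concurrent GC can rescan only the dirty cards afterwards.
final class CardTable {
    private static final int CARD_SIZE = 512;          // bytes covered per card (assumed)
    private final byte[] cards;                        // one dirty flag per card

    CardTable(long heapSizeBytes) {
        cards = new byte[(int) (heapSizeBytes / CARD_SIZE) + 1];
    }

    // The "barrier": invoked on every reference store, e.g. obj.field = value.
    void onReferenceStore(long objAddress) {
        cards[(int) (objAddress / CARD_SIZE)] = 1;     // mark that card dirty
    }

    // The collector consults this while tracing concurrently with the app.
    boolean isDirty(int cardIndex) {
        return cards[cardIndex] != 0;
    }

    // Usage sketch: the mutator marks a card on every store; the GC checks it later.
    public static void main(String[] args) {
        CardTable table = new CardTable(1L << 20);       // pretend 1 MiB heap
        table.onReferenceStore(4096);                    // app wrote a reference at "address" 4096
        System.out.println(table.isDirty(4096 / 512));   // true: GC must rescan that card
    }
}
```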
Intel had to solve a similar Write Barrier problem to make Virtualization work smoothly - how do I run an app that's doing Virtual Memory inside an OS that's already doing Virtual Memory? Zing reportedly uses these hardware features to make the JVM into a literal Virtual Machine and leverages them to make GC fast. The faster the GC, the bigger the Heap you can manage.
We're currently running Zing on our big 256GB RAM machines. This is very new to us at the moment, and we're confident that things will get better.
Currently our system is much slower than it used to be. BUT these are extremely early days, and we can already tell you that the Zing support is proving to be excellent. The use of ZVision is already giving us clues to our slowdown.
We are already able to make use of the extra RAM, but we need to update our Linux kernel to make use of more than 16 cores.
We got the same initial slowness when running Red Hat Enterprise Linux. Now we're running the KVM under Ubuntu Server 10.04. So far we see no difference (which is a big cost saving).
As we get more experience over the next week I'll pass on our findings.
In a nutshell - it is a "special" JVM aimed at high performance: instead of using the Sun JVM, you use Zing, without any code changes. So, in theory, all applications are "Zing-able" (see the sketch below). I can't tell you whether the claims of improved performance are true, though.
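To underline the "no code changes" point, here is a tiny, hypothetical Java snippet (the class name is mine) that an unmodified application could use to report which JVM it happens to be running on. Switching from the Sun/HotSpot JVM to Zing is done by launching with a different java binary, not by editing the application; the standard java.vm.* system properties simply reflect whichever runtime was used.

```java
// Hypothetical check, not required by Zing: the same unmodified application
// prints which JVM it is running on, illustrating that swapping the runtime
// underneath it is a deployment change rather than a code change.
public class WhichJvm {
    public static void main(String[] args) {
        System.out.println("VM name:    " + System.getProperty("java.vm.name"));
        System.out.println("VM vendor:  " + System.getProperty("java.vm.vendor"));
        System.out.println("VM version: " + System.getProperty("java.vm.version"));
    }
}
```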