My question is regarding the performance of Java versus compiled code (for example C++/Fortran/assembly) in high-performance numerical applications. I know this is a contentious topic, but I am looking for specific answers/examples. Also community wiki. I have asked similar questions before, but I think I put them too broadly and did not get the answers I was looking for.
Double-precision matrix-matrix multiplication, commonly known as DGEMM in the BLAS library, is able to achieve nearly 100 percent of peak CPU performance (in terms of floating-point operations per second).
There are several factors which allow achieving that performance:
cache blocking, to achieve maximum memory locality
loop unrolling to minimize control overhead
vector instructions, such as SSE
memory prefetching
guarantee no memory aliasing
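To make the first point concrete, here is a minimal sketch of cache blocking in Java (the class name and the block size B = 64 are my own illustrative choices, not taken from any benchmark; a real DGEMM kernel would also unroll and vectorize the inner loop):

```java
// Minimal sketch of a cache-blocked matrix multiply on row-major 1D arrays.
// B is a tunable block size, chosen so sub-blocks of A, B, C fit in cache.
public class BlockedGemm {
    static final int B = 64; // assumed block size; tune per cache hierarchy

    // Computes C += A * B for n x n matrices stored row-major in 1D arrays.
    static void multiply(double[] a, double[] b, double[] c, int n) {
        for (int ii = 0; ii < n; ii += B)
            for (int kk = 0; kk < n; kk += B)
                for (int jj = 0; jj < n; jj += B)
                    // Work on one block: all accesses stay within B x B tiles,
                    // which is what gives the memory locality mentioned above.
                    for (int i = ii; i < Math.min(ii + B, n); i++)
                        for (int k = kk; k < Math.min(kk + B, n); k++) {
                            double aik = a[i * n + k]; // hoisted scalar
                            for (int j = jj; j < Math.min(jj + B, n); j++)
                                c[i * n + j] += aik * b[k * n + j];
                        }
    }
}
```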
I have seen lots of benchmarks using assembly, C++, Fortran, Atlas, and vendor BLAS (typical cases are matrices of dimension 512 and above). On the other hand, I have heard that in principle byte-compiled languages/implementations such as Java can be as fast, or nearly as fast, as machine-compiled languages. However, I have not seen definite benchmarks showing that this is so. On the contrary, it seems (from my own research) that byte-compiled languages are much slower.
Do you have good matrix-matrix multiplication benchmarks for Java/C#? Is a just-in-time compiler (an actual implementation, not a hypothetical one) able to produce instructions which satisfy the points I have listed?
Thanks
With regards to performance: every CPU has a peak performance, determined by the number of instructions the processor can execute per second. For example, a modern 2 GHz Intel CPU can perform 8 billion double-precision adds/multiplies a second, giving 8 GFLOPS peak performance. Matrix-matrix multiplication is one of the algorithms able to achieve nearly that peak, the main reason being its high ratio of compute to memory operations (N^3 operations over N^2 data). The numbers I am interested in are on the order of N > 500.
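That ratio is easy to check empirically. The sketch below (class and method names are my own, not from any cited benchmark) times a naive triple-loop multiply and reports achieved GFLOP/s, using the standard 2*N^3 flop count (one multiply plus one add per inner iteration); comparing the result against the CPU's peak shows how far a given implementation falls short:

```java
import java.util.Random;

// Rough benchmark sketch: achieved GFLOP/s of a naive triple-loop multiply.
// A dense N x N multiply performs 2*N^3 floating-point operations over
// roughly 3*N^2 doubles of data, hence the favorable compute/memory ratio.
public class FlopsBench {
    public static double measureGflops(int n) {
        double[] a = new double[n * n], b = new double[n * n], c = new double[n * n];
        Random rnd = new Random(42);
        for (int i = 0; i < n * n; i++) { a[i] = rnd.nextDouble(); b[i] = rnd.nextDouble(); }
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++) {
                double aik = a[i * n + k];
                for (int j = 0; j < n; j++)
                    c[i * n + j] += aik * b[k * n + j];
            }
        long t1 = System.nanoTime();
        double flops = 2.0 * n * n * n;
        return flops / (t1 - t0); // flops per nanosecond == GFLOP/s
    }

    public static void main(String[] args) {
        // Warm up the JIT before trusting the number for N >= 500.
        measureGflops(128);
        System.out.printf("N=512: %.2f GFLOP/s%n", measureGflops(512));
    }
}
```

Note that JVM numbers are only meaningful after JIT warm-up, which is why the main method runs a throwaway iteration first.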
With regards to implementation: higher-level details such as blocking are handled at the source-code level. Lower-level optimization is handled by the compiler, perhaps with compiler hints regarding alignment/aliasing. A byte-compiled implementation can be written using the blocking approach as well, so in principle the source-code details of a decent implementation will be very similar.
A comparison of VC++/.NET 3.5/Mono 2.2 in a pure matrix multiplication scenario:
Source

Mono with Mono.Simd goes a long way toward closing the performance gap with the hand-optimized C++ here, but the C++ version is still clearly the fastest. Mono is at 2.6 now and might be closer, and I would expect that if .NET ever gets something like Mono.Simd, it could be very competitive, as there is not much difference between .NET and the sequential C++ here.
All the factors you specify are probably achieved by manual memory/code optimization for your specific task. But a JIT compiler does not have enough information about your domain to make the code as optimal as you would by hand, and can apply only general optimization rules. As a result it will be slower than C/C++ matrix-manipulation code (but it can utilize 100% of the CPU, if you want it to :)
Addressing the SSE issue: Java has used SSE instructions since J2SE 1.4.2.
In a pure math scenario (calculating the 3D coordinates of 25 types of algebraic surfaces), C++ beats Java by a factor of about 2.5.
Java cannot compete with C in matrix multiplication; one reason is that it checks on each array access whether the array bounds are exceeded. Furthermore, Java's math is slow; it does not use the processor's sin() and cos() instructions.
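On the bounds-check point, one common mitigation (a sketch of my own, not from the answer above) is to avoid `double[][]`, which is an array of row references with a dereference plus bounds check per dimension, in favor of a flat `double[]` with manual index arithmetic; simple loops over a single array are also the case where the JIT is most likely to hoist or eliminate the checks:

```java
// Sketch: double[][] vs. a flattened double[] for an n x n matrix.
public class FlatArrays {
    // a[i][j] style: each access dereferences the row array and performs
    // two separate bounds checks (one per dimension).
    static double sum2d(double[][] a, int n) {
        double s = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                s += a[i][j];
        return s;
    }

    // Flattened style: one contiguous block, one bounds check per access,
    // and a loop shape the JIT can analyze (and often check-eliminate).
    static double sumFlat(double[] a, int n) {
        double s = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                s += a[i * n + j];
        return s;
    }
}
```

Whether the JIT actually removes the checks depends on the JVM version and loop shape, so this should be verified with a profiler rather than assumed.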