Can managed code perform computations as fast as unmanaged?


I've been interested in different chess engines lately. There are many open and closed source projects in this field. They are all (most of them, anyway) written in C/C++. This is kind of an obvious thing - you have a computationally intensive task, you use C/C++ so you get both portability and speed. This seems like a no-brainer.

However, I would like to question that idea. When .NET first appeared, there were many people saying that the .NET idea would not work because .NET programs were doomed to be super-slow. In reality this did not happen. Somebody did a good job with the VM, JIT, etc., and we have decent performance for most tasks now. But not all. Microsoft never promised that .NET would be suitable for all tasks and admitted that for some tasks you would still need C/C++.

Going back to the question of computationally heavy tasks - is there a way to write a .NET program so that it does not perform computations considerably worse than unmanaged code using the same algorithms? I'd be happy with a "constant" speed loss, but anything worse than that is going to be a problem.

What do you think? Can managed code come close to unmanaged code in computation speed, or is unmanaged code the only viable answer? If we can, how? If we can't, why?

Update: a lot of good feedback here. I'm going to accept the most up-voted answer.


"Can" it? Yes, of course. Even without unsafe/unverifiable code, well-optimized .NET code can outperform native code. Case in point, Dr. Jon Harrop's answer on this thread: F# performance in scientific computing

Will it? Usually, no, unless you go way out of your way to avoid allocations.


.NET is not super-slow, but nor is it in the same realm as a native language. The speed differential is something you could easily suck up for a business app that prefers safety and shorter development cycles. If you're not using every cycle on the CPU, then it doesn't matter how many you use, and the fact is that many or even most apps simply don't need that kind of performance. However, when you do need that kind of performance, .NET won't offer it.

More importantly, it's not controllable enough. In C++, you destroy every resource and manage every allocation yourself. This is a big burden when you really don't want to have to do it - but when you need the added performance of fine-tuning every allocation, it's impossible to beat.

Another thing to consider is the compiler. Yes, the JIT has access to more information about both the program and the target CPU. However, it has to re-compile from scratch every time, and under far, far greater time constraints than a C++ compiler, which inherently limits what it's capable of. The CLR semantics, like heap allocation for every object every time, also fundamentally limit its performance. Managed GC allocation is plenty fast, but it is no match for stack allocation, and more importantly, stack de-allocation.
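To make the allocation point concrete, here is a minimal sketch (the type names are hypothetical): a value type used as a local involves no GC work at all, while a reference type pays for a heap allocation on every `new`.

```csharp
// Hypothetical types, purely to illustrate value-type vs reference-type allocation.
struct PointStruct { public int X, Y; }   // value type: a local lives on the stack
class PointClass   { public int X, Y; }   // reference type: every 'new' allocates on the GC heap

static class AllocationSketch
{
    static long SumStructs(int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
        {
            var p = new PointStruct { X = i, Y = i }; // no heap allocation, nothing to collect
            sum += p.X + p.Y;
        }
        return sum;
    }

    static long SumClasses(int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
        {
            var p = new PointClass { X = i, Y = i };  // one GC allocation per iteration
            sum += p.X + p.Y;
        }
        return sum;
    }
}
```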

Edit: Of course, the fact that .NET ships with a different memory-management paradigm from (most) native languages means that for an application to which garbage collection is particularly suited, .NET code may run faster than native code. This, however, has nothing to do with managed code versus native code; it's just picking the right algorithm for the job, and it doesn't mean that an equivalent GC algorithm used from native code wouldn't be just as fast.


The simple answer is no. To some commenters below: it will be slower most of the time, not always. Computationally intensive applications where every millisecond counts will still be written in unmanaged languages such as C and C++. The GC causes significant pauses when collecting.

For example, almost nobody writes 3D engines in C# or XNA. There are some, but nothing comes close to CryEngine or Unreal.


Short answer: yes. Long answer: yes, with enough work.

There are high-frequency trading applications written in managed C# .NET. Very few other applications ever approach the time-criticality that a trading engine requires. The overall concept is that you develop software so efficient that the garbage collector never needs to run for anything beyond generation 0 objects. If at any point the garbage collector kicks in above that, you get a massive (in computing terms of time) pause lasting dozens or hundreds of milliseconds, which would be unacceptable.
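As a rough sketch of that discipline (the pre-allocation and buffer size here are illustrative assumptions, not from any particular trading system): pre-allocate all state up front, keep the hot path allocation-free, and verify afterwards that no higher-generation collections happened while it ran.

```csharp
using System;
using System.Runtime;

static class GcDisciplineSketch
{
    static void Main()
    {
        // Ask the runtime to avoid blocking gen-2 collections where it can (.NET 4.5+).
        GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

        // Pre-allocate everything before the hot path; the steady state allocates nothing.
        var buffer = new byte[16 * 1024 * 1024];

        int gen2Before = GC.CollectionCount(2);

        // Hot path: touch only pre-allocated state, no 'new' anywhere in here.
        for (int i = 0; i < buffer.Length; i++)
            buffer[i] ^= 0x5A;

        int gen2After = GC.CollectionCount(2);
        Console.WriteLine($"Gen-2 collections during hot path: {gen2After - gen2Before}");
    }
}
```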


You can use unsafe code and pointers to get "raw" memory access, which can give you a significant speed boost at the cost of more responsibility on your side (remember to pin your objects). At that point, you're just shifting bytes.
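A minimal sketch of that technique, assuming the project is compiled with /unsafe: `fixed` pins the array so the GC can't move it while the pointer is live, and the pointer walk skips per-access bounds checks.

```csharp
static class UnsafeSumSketch
{
    // Compile with /unsafe. Sums an int[] through a raw pointer.
    static unsafe long SumUnsafe(int[] data)
    {
        long sum = 0;
        fixed (int* p = data)            // pin: the GC won't move 'data' while 'p' is in scope
        {
            int* cur = p;
            int* end = p + data.Length;
            while (cur < end)
                sum += *cur++;           // raw access, no per-element bounds check
        }
        return sum;
    }
}
```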

Garbage collection pressure can also become an issue, but there are tactics around that as well (object pooling).
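A minimal object-pool sketch (this `Pool<T>` is a hypothetical helper, not a BCL type): objects are rented and returned instead of being left for the collector, so steady-state code allocates nothing.

```csharp
using System.Collections.Generic;

// Hypothetical minimal pool: Rent() instead of 'new', Return() instead of
// dropping the object for the GC to collect later.
sealed class Pool<T> where T : new()
{
    private readonly Stack<T> _items = new Stack<T>();

    public T Rent() => _items.Count > 0 ? _items.Pop() : new T();

    public void Return(T item) => _items.Push(item);
}
```

A real pool also needs to reset returned objects and consider thread safety; for the array case, modern .NET ships System.Buffers.ArrayPool&lt;T&gt;.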


This seems like an overly broad question.

There are some gratuitous hints to throw around: yes, you can use unsafe, unchecked, arrays of structs, and most importantly C++/CLI.
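On the arrays-of-structs point, a small illustration (types hypothetical): an array of structs is one contiguous heap object with the elements stored inline, whereas an array of a class type is an array of references plus one scattered heap object per element.

```csharp
struct Vec3 { public float X, Y, Z; }           // value type: stored inline in the array

static class StructArraySketch
{
    static float SumX(Vec3[] points)            // one allocation total, cache-friendly traversal
    {
        float sum = 0f;
        for (int i = 0; i < points.Length; i++) // the JIT can usually elide bounds checks in this pattern
            sum += points[i].X;
        return sum;
    }

    // 'unchecked' suppresses integer overflow checks where a project turns them on;
    // note that C# arithmetic is already unchecked by default.
    static int Mix(int a, int b) => unchecked(a * 31 + b);
}
```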

There is never going to be a match for C++'s inlining, compile-time template expansion (and the optimizations that come with it), etc.

But the bottom line is: it depends on the problem. What counts as "computations", anyway? Mono has nifty extensions for using SIMD instructions; on Win32 you'd have to go native to get those. Interop is cheating.
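For flavor, here is what SIMD-style code looks like with System.Numerics.Vector&lt;T&gt;, the later cross-platform API in the same spirit as Mono's extensions (availability depends on your runtime; this is a sketch, not the Mono.Simd API itself):

```csharp
using System.Numerics;

static class SimdSumSketch
{
    // Sums a float[] in Vector<float>.Count-wide chunks, with a scalar tail.
    static float SimdSum(float[] data)
    {
        var acc = Vector<float>.Zero;
        int width = Vector<float>.Count;        // e.g. 8 floats with AVX
        int i = 0;
        for (; i <= data.Length - width; i += width)
            acc += new Vector<float>(data, i);  // one SIMD load + add per chunk

        float sum = 0f;
        for (int lane = 0; lane < width; lane++)
            sum += acc[lane];                   // horizontal add of the accumulator
        for (; i < data.Length; i++)
            sum += data[i];                     // leftover elements, done scalar
        return sum;
    }
}
```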

In my experience, though, porting toy projects (such as parsers and a chess engine) results in at least an order-of-magnitude speed difference, no matter how much you optimize the .NET side of things. I reckon this has to do mainly with heap management and the service routines (System.String, System.IO).

There can be big pitfalls in .NET (overusing LINQ, lambdas, accidentally relying on Enum.HasFlag to perform like a bitwise operation, etc.). YMMV; choose your weapons carefully.
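The Enum.HasFlag pitfall in particular is easy to demonstrate: on older runtimes the call boxes both operands on every invocation, while a plain bitwise test compiles down to an AND and a compare.

```csharp
using System;

[Flags]
enum Permissions { None = 0, Read = 1, Write = 2, Execute = 4 }

static class FlagCheckSketch
{
    // Convenient, but historically boxed both operands on every call.
    static bool CanWriteSlow(Permissions p) => p.HasFlag(Permissions.Write);

    // The bitwise version: no allocation, just an AND and a comparison.
    static bool CanWriteFast(Permissions p) => (p & Permissions.Write) != 0;
}
```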


In general, managed code will have at least some speed loss compared to natively compiled code, proportional to the size of your code. This loss comes in when the VM first JIT-compiles your code. Assuming the JIT compiler is just as good as a normal compiler, after that the code will perform the same.

However, depending on how it's written, it's even possible that the JIT compiler will perform better than a normal compiler. The JIT compiler knows many more things about the target platform than an ahead-of-time compiler does: it knows which code is "hot", it can cache results for provably pure functions, it knows which instruction-set extensions the target platform supports, and so on, whereas a normal compiler, depending on how specialized you (and your intended deployment) allow it to be, may not be able to optimize nearly as well.

Honestly, it completely depends on your application. Especially with something as algorithmic as a chess engine, it's possible that code in a JIT-compiled language, following expected semantics and regular patterns, may run faster than the equivalent in C/C++.

Write something and test it out!
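In that spirit, a minimal timing sketch with System.Diagnostics.Stopwatch (a real harness such as BenchmarkDotNet handles warm-up and statistics properly; the Work method here is a stand-in for your own code):

```csharp
using System;
using System.Diagnostics;

static class BenchSketch
{
    static void Main()
    {
        const int N = 10_000_000;
        long result = 0;

        // Warm up so the JIT has compiled the hot path before we time it.
        for (int i = 0; i < 1_000; i++)
            result += Work(100);

        var sw = Stopwatch.StartNew();
        result += Work(N);
        sw.Stop();

        Console.WriteLine($"Work({N}) took {sw.ElapsedMilliseconds} ms (checksum {result})");
    }

    static long Work(int n)       // stand-in for the computation you care about
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += i % 7;
        return sum;
    }
}
```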

