Why are some programming languages faster than others?

I know that ASM is basically the fastest one can get, but is what makes HLLs slower than ASM the abstraction? What I mean by abstraction is that for instance in C++ you have a class, data needs to be stored about what is stored in the class, what it derives from, private/public accessors, and other things. When this code is compiled, is there actual assembly code that does work to figure out information about the class? Like CPython is built upon C so there is even more abstraction and instructions to be run at run-time than C. Is any of what I am saying true? I think I have answered my own question but I would like to get an answer from a more experienced individual other than myself.
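For example, here is a made-up snippet of the kind of abstraction I mean: does the compiled program do any runtime work for the class name, the private/public boundary, or the fact that Circle derives from Shape?

    // Made-up illustration: which parts of this class machinery cost anything at runtime?
    class Shape {
    public:
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    class Circle : public Shape {
    public:
        explicit Circle(double r) : radius_(r) {}
        double area() const override { return 3.14159 * radius_ * radius_; }
    private:
        double radius_;      // the only actual data, plus whatever the compiler adds
    };

    int main() {
        Circle c(2.0);
        const Shape& s = c;
        return s.area() > 0 ? 0 : 1;
    }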

EDIT: I understand that Python is interpreted, but wouldn't it still be slower than C if it were compiled?


That's a broad question.

Fundamentally, compiled languages get translated into machine instructions (opcodes) just as ASM does (ASM is a layer of abstraction, too). A good compiler is actually likely to outperform an average ASM coder's result, because it can examine a large stretch of code and apply optimizations that most programmers could not do by hand (ordering instructions for optimal execution, etc.).

In that regard, all compiled languages are created "equal". However, some are more equal than others. How well the compiled code performs depends fundamentally on how good the compiler is, and much less on the specific language. Certain features such as virtual methods incur a performance penalty (last time I checked, virtual methods were implemented using a table of function pointers, though my knowledge may be dated here).
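To illustrate the virtual-method point with a rough sketch (made-up types; exact object layout and code generation vary by compiler): a virtual call is reached through a hidden per-object pointer into that table of function pointers, while a non-virtual call can be resolved, and usually inlined, at compile time.

    #include <cstdio>

    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;     // dispatched through the vtable at runtime
    };

    struct Circle : Shape {
        double r;
        explicit Circle(double radius) : r(radius) {}
        double area() const override { return 3.14159265 * r * r; }
    };

    struct PlainCircle {
        double r;
        double area() const { return 3.14159265 * r * r; }   // resolved at compile time
    };

    int main() {
        Circle c(2.0);
        const Shape& s = c;          // each Circle carries a hidden vtable pointer
        PlainCircle p{2.0};
        // s.area(): load the vtable pointer, load the function pointer, indirect call.
        // p.area(): direct call that the optimizer typically inlines to a few multiplies.
        std::printf("%f %f\n", s.area(), p.area());
        std::printf("%zu vs %zu bytes per object\n", sizeof(Circle), sizeof(PlainCircle));
        return 0;
    }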

Interpreted languages examine the human-readable source as the program executes, which essentially means the equivalent of both the compile stage and the execution stage happens at runtime. They will therefore almost always be somewhat slower than a compiled counterpart. Smart implementations interpret parts of the code incrementally as they execute (to avoid interpreting branches that are never hit) and cache the result, so that a given portion of code is only interpreted once.
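Here is a crude sketch of why that costs time (a toy, not how any real interpreter works): the "program" stays as text, so every run re-parses and re-dispatches each line, work that a compiler would have done once, before the program ever ran.

    #include <cstdio>
    #include <string>
    #include <vector>

    // Toy interpreter: each "instruction" is a line of text like "+ 5" or "* 2".
    double run(const std::vector<std::string>& program, double x) {
        for (const std::string& line : program) {
            char op = 0;
            double value = 0.0;
            // Parsing happens on every execution, not once up front.
            if (std::sscanf(line.c_str(), "%c %lf", &op, &value) != 2) continue;
            if (op == '+') x += value;
            else if (op == '*') x *= value;
        }
        return x;
    }

    int main() {
        std::vector<std::string> program = {"+ 5", "* 2"};
        double x = 1.0;
        for (int i = 0; i < 10; ++i)     // the parse/dispatch cost is paid every time through
            x = run(program, x);
        std::printf("%g\n", x);
        return 0;
    }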

There's a middle ground as well, in which human-readable language is translated into pseudo-code (sometimes called P-code or byte code). The purpose of this is to have a compact representation of the code that is fast to interpret, yet portable across many operating systems (you still need a program to interpret the P-code on each platform). Java falls into this category.
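Here is the same toy program in P-code form (a made-up two-opcode instruction set, nothing like real Java bytecode): the text is translated once into compact numeric opcodes, and each platform only needs the small loop that executes them.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    enum Op : unsigned char { ADD = 0, MUL = 1, HALT = 2 };   // invented opcodes

    // The portable part: the same byte sequence runs anywhere this loop exists.
    double execute(const std::vector<unsigned char>& code,
                   const std::vector<double>& consts, double x) {
        std::size_t pc = 0;
        while (pc < code.size()) {
            switch (code[pc]) {
                case ADD:  x += consts[code[pc + 1]]; pc += 2; break;
                case MUL:  x *= consts[code[pc + 1]]; pc += 2; break;
                case HALT: return x;
            }
        }
        return x;
    }

    int main() {
        // "Compiled" form of the toy program: add consts[0], then multiply by consts[1].
        std::vector<unsigned char> code = {ADD, 0, MUL, 1, HALT};
        std::vector<double> consts = {5.0, 2.0};
        std::printf("%g\n", execute(code, consts, 1.0));   // (1 + 5) * 2 = 12
        return 0;
    }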


Actually, your premise isn't necessarily true.

Many would say that a good optimizing compiler can outperform hand-coded assembly.

Others might say that just-in-time compilers like those for Java and .NET can take advantage of runtime heuristics and hence outperform any statically compiled code.

Compiled or interpreted, I assure you there is not necessarily any correlation between how high-level a language is and its runtime efficiency. Very high-level languages can produce extremely efficient code.

IMHO ...


As a rule of thumb, the more abstract (and usually the more convenient for the programmer) a language is, the slower it will run. A C compiler generates assembly code for a specific machine, which is why the result is so system-dependent. Languages like Java run in a virtual machine, which is itself a compiled program, but that extra layer of abstraction generally slows things down.

But that's not to say there aren't exceptions. As paulsm4 said, high-level languages can end up more efficient than low-level ones, because their compilers and runtimes can exploit patterns and runtime information in ways that are hard to match by hand (I don't know the details).


When you talk about the speed of a language, the first thing to ask is whether it's compiled or interpreted. An interpreted language will typically run one to two orders of magnitude slower than a compiled one.

But that may not matter. An interpreted language may have other advantages, and what you want to do with it may not require blinding speed; even if it had that speed, you wouldn't notice.

For example, command-line shell languages are all interpreted (to my knowledge), and that's fine, because they just execute one time-consuming operating system command followed by the next. Cycles shaved in getting between commands would never be noticed.

Even in fast compiled languages, programs that just tie together one library call followed by another and another, with just a little raw data manipulation in between, are getting little benefit from the speed of the language, because all the time is being spent down in the basement.

Where language speed matters is in that basement stuff. If you're only writing higher-level code, the speed of the compiled code matters little. What matters a lot is whether your code calls subordinate routines more than it really needs to. The compiler can't help you with that. Here's an example of how to fix that kind of problem.
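For instance (a made-up sketch with invented names): a subordinate routine whose answer cannot change inside the loop, yet is called on every pass. The fix is simply to call it once and reuse the result.

    #include <cstdio>
    #include <string>
    #include <vector>

    // Stand-in for an expensive subordinate routine (imagine it reads a config file);
    // the trivial body here is only so the sketch compiles.
    double lookup_tax_rate(const std::string& region) {
        return region == "CA" ? 0.0725 : 0.05;
    }

    double total_before(const std::vector<double>& prices, const std::string& region) {
        double total = 0.0;
        for (double p : prices)
            total += p * (1.0 + lookup_tax_rate(region));   // called N times for one answer
        return total;
    }

    double total_after(const std::vector<double>& prices, const std::string& region) {
        const double rate = lookup_tax_rate(region);        // hoisted: called exactly once
        double total = 0.0;
        for (double p : prices)
            total += p * (1.0 + rate);
        return total;
    }

    int main() {
        std::vector<double> prices = {10.0, 20.0, 30.0};
        std::printf("%f %f\n", total_before(prices, "CA"), total_after(prices, "CA"));
        return 0;
    }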

