In my interpreter, code like the following
x=(y+4)*z
echo x
parses and "optimizes" down to four single operations performed by the interpreter, pretty much assembly-like (a runnable sketch of this follows the list below):
add 4 to y
multiply <last operation result> with z
set x to <last operation result>
echo x
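For concreteness, here is a minimal Python sketch (not my actual interpreter) of how such an accumulator-style instruction list could be represented and executed; the opcode names ADD, MUL, STORE and ECHO are invented for illustration:

def run(program, variables):
    acc = None                     # the single "last operation result" register
    for op, arg in program:
        if op == "ADD":            # acc = variable + constant
            name, const = arg
            acc = variables[name] + const
        elif op == "MUL":          # acc = acc * variable
            acc = acc * variables[arg]
        elif op == "STORE":        # variable = acc
            variables[arg] = acc
        elif op == "ECHO":         # print a variable
            print(variables[arg])

# x = (y + 4) * z; echo x
program = [
    ("ADD", ("y", 4)),
    ("MUL", "z"),
    ("STORE", "x"),
    ("ECHO", "x"),
]
run(program, {"y": 3, "z": 5})     # prints 35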
In modern interpreters (for example CPython, Ruby, PHP), how simple are the "opcodes" that are ultimately executed by the interpreter?
Could I achieve better performance by keeping the interpreter's structures and commands more complex and high-level? That would surely be a lot harder, wouldn't it?
In Python's case, you can have it tell you the bytecode for a given function with the dis module.
from dis import dis

def foo():
    x = (y + 4) * z
    print x

dis(foo)
gives you:
  2           0 LOAD_GLOBAL              0 (y)
              3 LOAD_CONST               1 (4)
              6 BINARY_ADD
              7 LOAD_GLOBAL              1 (z)
             10 BINARY_MULTIPLY
             11 STORE_FAST               0 (x)

  3          14 LOAD_FAST                0 (x)
             17 PRINT_ITEM
             18 PRINT_NEWLINE
             19 LOAD_CONST               0 (None)
             22 RETURN_VALUE
Some of that is extraneous (e.g. the LOAD_CONST and RETURN_VALUE at the end are for the implicit return None in foo()), but Python appears to push y and 4 onto the stack, add them, push z, multiply, and write the result to x. Then it pushes x and prints it.
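To make that stack discipline concrete, here is a minimal sketch of a stack machine evaluating an equivalent opcode list; it is only an illustration (plain Python tuples, not real bytecode), not CPython's actual eval loop:

def eval_stack(code, globs, fast):
    stack = []
    for op, arg in code:
        if op == "LOAD_GLOBAL":
            stack.append(globs[arg])
        elif op == "LOAD_CONST":
            stack.append(arg)
        elif op == "BINARY_ADD":
            b = stack.pop(); a = stack.pop(); stack.append(a + b)
        elif op == "BINARY_MULTIPLY":
            b = stack.pop(); a = stack.pop(); stack.append(a * b)
        elif op == "STORE_FAST":
            fast[arg] = stack.pop()
        elif op == "LOAD_FAST":
            stack.append(fast[arg])
        elif op == "PRINT_ITEM":
            print(stack.pop())

code = [
    ("LOAD_GLOBAL", "y"), ("LOAD_CONST", 4), ("BINARY_ADD", None),
    ("LOAD_GLOBAL", "z"), ("BINARY_MULTIPLY", None),
    ("STORE_FAST", "x"), ("LOAD_FAST", "x"), ("PRINT_ITEM", None),
]
eval_stack(code, {"y": 3, "z": 5}, {})   # prints 35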
- About the last result: effectively, you have created a register machine with a single register, "last operation result". That single register serializes every instruction, which is a blocker for parallelism (see the three-address sketch after this list).
- Eval/assign-style opcodes are usually one layer lower than yours. Take a look at Python's opcodes.
- Higher-level commands could yield more performance, because they let you spend more time inside the (hopefully fast) interpreter per dispatch. But they can also be a pain, because you will keep needing yet another high-level opcode for this and that.
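Here is the three-address sketch referred to above: the same expression using multiple virtual registers (t0 and t1 are invented names) instead of a single "last result" register, so instructions that touch different registers are no longer forced into one serial chain:

three_address = [
    ("ADD", "t0", "y", 4),      # t0 = y + 4
    ("MUL", "t1", "t0", "z"),   # t1 = t0 * z
    ("STORE", "x", "t1"),       # x  = t1
    ("ECHO", "x"),
]

def run3(prog, variables):
    regs = {}
    def val(v):                 # resolve a temporary register, a variable, or a constant
        if isinstance(v, str):
            return regs[v] if v in regs else variables[v]
        return v
    for ins in prog:
        if ins[0] == "ADD":
            _, dst, a, b = ins
            regs[dst] = val(a) + val(b)
        elif ins[0] == "MUL":
            _, dst, a, b = ins
            regs[dst] = val(a) * val(b)
        elif ins[0] == "STORE":
            _, name, src = ins
            variables[name] = val(src)
        elif ins[0] == "ECHO":
            print(variables[ins[1]])

run3(three_address, {"y": 3, "z": 5})   # prints 35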
Try it and see :) It really depends on lots of factors you didn't provide (and the area is so huge that if you provided enough of them, they would already contain a few obvious answers). One major factor is whether (and how) you are going to implement certain language features, or, to put it another way, whether you are going to make them first-class, for example:
- Structured programming. What I mean is: would the opcodes contain a notion of "function call" or "function call with N arguments", or will a call be a blind push, push, push, call, ret sequence? (See the dis example after this list.)
- Lambdas (in particular, function bodies defined inside other functions). This forces a closure decision: whether functions should "capture" variables from the enclosing scope and, if yes, how.
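As a concrete reference point, CPython answers both questions in its bytecode: calls get a dedicated opcode that carries the argument count, and captured variables get dedicated closure opcodes. The exact opcode names vary by Python version (on Python 2 expect CALL_FUNCTION, LOAD_CLOSURE, MAKE_CLOSURE and LOAD_DEREF in the output); you can inspect it yourself with the same dis trick as above:

from dis import dis

def outer(a, b):
    n = 10
    def inner(x):
        return x + n            # n is captured from the enclosing scope
    return inner(a) + inner(b)  # each call compiles to a "call with 1 argument" opcode

dis(outer)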
Try modeling your opcodes so that they mimic the internal workings of your interpreter. This page has an article about how .NET turns regexes into an interpreted language: the regex is first compiled to an intermediate language, and that intermediate code is then interpreted. The intermediate code looks very much like the internal data structures of a regex engine.
A rule of thumb: if there are repeating patterns in your bytecode (e.g. a common sequence emitted for every GC-controlled heap allocation), there should be a dedicated high-level operation for each such pattern. A sketch of this follows below.
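As a hedged sketch of that rule (all opcode names here are invented), a small peephole pass can spot a fixed sequence and fuse it into one dedicated opcode, sometimes called a superinstruction, which the interpreter then executes in a single dispatch:

PATTERN = [("LOAD_VAR",), ("LOAD_CONST",), ("ADD",), ("STORE_VAR",)]

def fuse(code):
    # Replace every LOAD_VAR v; LOAD_CONST k; ADD; STORE_VAR d run
    # with a single fused ADD_CONST_TO_VAR opcode.
    out, i = [], 0
    while i < len(code):
        window = [op[:1] for op in code[i:i + 4]]
        if window == PATTERN:
            (_, src), (_, k), _, (_, dst) = code[i:i + 4]
            out.append(("ADD_CONST_TO_VAR", dst, src, k))
            i += 4
        else:
            out.append(code[i])
            i += 1
    return out

code = [
    ("LOAD_VAR", "y"), ("LOAD_CONST", 4), ("ADD",), ("STORE_VAR", "x"),
    ("ECHO", "x"),
]
print(fuse(code))   # [('ADD_CONST_TO_VAR', 'x', 'y', 4), ('ECHO', 'x')]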
Anyway, nowadays, with .NET, the JVM and LLVM available, it's really cheap and easy to plug in a proper JIT compiler if you're seriously interested in the performance of your interpreter.