From everything I have read, it seems the only thing mentioned that prevents JavaScript from being compile-once, run-everywhere (or at least, run-per-browser) is the silly eval() statement. Is there anything else about JavaScript that makes this an impossible feat?
Instead of wasting all this time on every single computer and every browser, JIT-compiling and downloading multiple files of source code, I would love to see a browser run JavaScript using fewer bytes, fewer connections, and blazing-fast, optimized code.
How can this be done?
It's not really wasting that much time, though. JavaScript engines spend far more time running the JavaScript than parsing it. Sure, in theory you could compile JavaScript into machine code; it would be sort of like embedding the JavaScript engine along with the script itself, like a self-extracting ZIP file or something. I don't see the advantage of this, though.
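As a rough analogy (in CPython, not a JS engine, so take it as illustrative only), the sketch below times parsing/compiling a script separately from actually running it; the script and loop size are made up:

```python
import time

# A deliberately loop-heavy script, so "running" clearly dominates "parsing".
source = """
total = 0
for i in range(5_000_000):
    total += i * i
"""

t0 = time.perf_counter()
code = compile(source, "<demo>", "exec")   # parse + bytecode-compile only
t1 = time.perf_counter()
exec(code, {})                             # actually run the compiled code
t2 = time.perf_counter()

print(f"compile: {t1 - t0:.4f} s")         # typically a tiny fraction of a second
print(f"run:     {t2 - t1:.4f} s")         # typically orders of magnitude longer
```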
This question is flawed on several levels.
- If you mean any binary format (say, bytecode), then you'd need all the JS implementations out there to man up, get together and settle on a standard. Not gonna happen. And even if they did, it's not guaranteed that you'd get any improvement in file size over minified JS, let alone a significant one. Comparing the size of some .py source files with the size of the cached .pyc bytecode files I have at hand shows that the bytecode of CPython, at least, is usually larger than the source files. And the source files aren't even minified! (See the size-comparison sketch after this list.)
- If you mean native executables... well, it's hard to compile JS to native code AOT, especially without building a whole implementation while doing so, but it might be possible in theory. It wouldn't give much of a performance gain, though, as you can't eliminate the dynamism (at least not statically; a JIT compiler can do much better), so you still have to pay for it. It also has numerous downsides that render it wholly worthless:
- The resulting binary is platform-specific. So either you'd have to send all possible combinations of binaries (which is wasteful and requires additional special handling by the browsers), or you'd have to do platform sniffing and send whatever binary you think might fit - which would screw over anyone not willing to give out quite a bit of private information for free (by sending them a wrong or totally worthless executable).
- You probably lose sandboxing. Native code is for the most part a huge black box. Running it in the browser would also mean the browser would go and run executable code it gets from some untrusted source - an invitation for malware. Unless of course you'd integrate a whole top-of-the-line anti-malware suite into every browser. Do I really need to comment further?
- Unless you create a standard for all implementations that allows the compiled JS to access all the built-ins etc. of the implementation, or otherwise create some way of sharing code, you end up with lots of redundancy (statically linking a whole JS implementation into each binary). The result will probably be way larger than compressed JS.
- You still need some standard interface for the DOM integration (unless of course you want to go from architectures * OSs executables to architectures * OSs * browsers executables).
- Not even mentioning that all browsers would have to support this, either in a standard way or each on their own. It has been hard (and long) enough convincing all of them to support most of HTML, CSS and JS in source form. Or you'd need to introduce some standard way of detecting support for this.
- Also (as already indicated), for dynamic languages a JIT can perform as well as or better than a static compiler. So at best (and quite improbably, given some of the previous points) you'd save a bit of time on the download (which can already be made quite small using minification, gzipping, caching, etc. - see the gzip sketch after this list) and on parsing (which isn't the big performance killer anyway). The actual run time wouldn't improve significantly either.
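As a rough check of the bytecode-size point above, here is a minimal Python sketch; the module it writes is a made-up stand-in, and in practice you'd point it at any real .py file and its cached .pyc:

```python
import os
import py_compile

# Write a small stand-in module so the example is self-contained;
# in practice, compare any real .py file against its cached .pyc.
src = "size_demo.py"
with open(src, "w") as f:
    f.write("def add(a, b):\n    return a + b\n\nprint(add(2, 3))\n")

pyc = py_compile.compile(src, cfile=src + "c", doraise=True)

print("source  :", os.path.getsize(src), "bytes")
print("bytecode:", os.path.getsize(pyc), "bytes")
# On typical modules the .pyc is no smaller than the (unminified) source,
# which is the point: a bytecode format isn't automatically a size win.
```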
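And to illustrate how small plain JS already gets over the wire (the last point above), here is a minimal sketch that gzips a made-up stand-in for a minified bundle, the way an HTTP server would before sending it:

```python
import gzip

# Made-up stand-in for a minified bundle; real code is less repetitive,
# but minified JS still typically gzips down to a fraction of its size.
minified_js = (b"function add(a,b){return a+b}"
               b"function mul(a,b){return a*b}"
               b"var s=0;for(var i=0;i<1e6;i++){s=add(s,mul(i,i))}") * 40

compressed = gzip.compress(minified_js, compresslevel=9)
print("minified:", len(minified_js), "bytes")
print("gzipped :", len(compressed), "bytes")
# Browsers decompress gzip/deflate responses transparently, so the second
# number is roughly what actually crosses the network.
```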