I mean, in terms of SQL queries, are they compiled or interpreted at a low level? How does it work internally: is a SQL statement interpreted or compiled?
It typically works like this:
SQL String ---[Optimizer]---> Execution Plan ---[Execution]---> Result
I personally like to see the optimizer (query planner) as something very similar to a compiler: it transforms the SQL statement into something that is more easily executable. However, the result is not natively executable on the chip. This "compilation" is rather expensive, just like compiling C++ code. That's the part where different execution variants are evaluated: the join order, which index to use, and so on. It's good practice to avoid this step whenever possible by using bind parameters.
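A minimal sketch of why bind parameters help, assuming PostgreSQL's PREPARE/EXECUTE syntax and a hypothetical employees table:

    -- Parse and optimize once; $1 is a bind parameter placeholder.
    PREPARE emp_by_dept (int) AS
        SELECT employee_id, last_name
        FROM   employees
        WHERE  department_id = $1;

    -- Re-executions can skip the expensive optimization step.
    EXECUTE emp_by_dept(10);
    EXECUTE emp_by_dept(20);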
The execution plan is then taken up for execution by the database. At that point the strategy is already fixed; the execution just carries it out. This part is, in a sense, interpreting the execution plan, not the SQL.
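To see the fixed strategy that the executor then interprets, most databases let you display the plan. A sketch, assuming PostgreSQL's EXPLAIN and hypothetical employees/departments tables:

    -- Shows the chosen plan (join order, index usage, scan types)
    -- without actually running the query.
    EXPLAIN
    SELECT e.last_name, d.department_name
    FROM   employees   e
    JOIN   departments d ON d.department_id = e.department_id
    WHERE  e.salary > 50000;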
After all, it is somewhat similar to Java or .NET, where compilation turns the source code into a binary form that can be interpreted more easily. If we ignore JIT for this argument, the execution of a Java program is interpreting this meta-code.
I have used this approach to explain the benefit of bind parameters for (Oracle) performance in my free eBook "Use The Index, Luke".
In modern SQL environments it is a staged approach: at a certain point in the workflow you decide whether to re-use an existing compiled block, or to run all stages again because a certain combination of arguments would yield a better plan.
I think it is a trade-off between (re-)compile time and the execution time of the (compiled-to-executable) result. Depending on the complexity of the query, a recompile that applies the specifics of the given arguments at runtime may not be worth the effort if the execution time of the existing code is already low because of a foreseeable minimum resource consumption (e.g. read two rows and return).
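To illustrate one side of that trade-off, assuming SQL Server's OPTION (RECOMPILE) hint and a hypothetical orders table, you can force a fresh plan for the actual argument values on every execution instead of reusing a cached one:

    -- Hypothetical query; OPTION (RECOMPILE) discards any cached plan
    -- and re-optimizes using the current value of @status.
    DECLARE @status varchar(20) = 'SHIPPED';

    SELECT order_id, order_total
    FROM   orders
    WHERE  status = @status
    OPTION (RECOMPILE);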
With higher query complexity and estimated resource consumption (many huge tables involved, a crucial index choice, a possible table scan), the granularity of your statistics comes into play: if you have selectivities, outliers, range selectivities, average field sizes, physical map sizes, etc., the optimizer may come to very different conclusions for different sets of arguments.
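For example, assuming PostgreSQL and the same hypothetical orders table, you can refresh and inspect the statistics the planner bases those conclusions on:

    -- Recollect column-level statistics (histograms, distinct counts)
    -- that the planner uses to estimate selectivities.
    ANALYZE orders;

    -- Inspect what the planner now knows about the table's columns.
    SELECT attname, n_distinct, most_common_vals
    FROM   pg_stats
    WHERE  tablename = 'orders';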
Calculating the best plan for a 25-join statement with 10+ variable arguments may take its time and resources. If the result is faster and more efficient than the one-plan-fits-all version, it is worth the effort, especially if the given set of arguments may contain game changers and the query will be re-executed frequently.
Finally, your mileage may vary with each vendor ;)