SQL Optimizer for Large DB Table

https://www.devze.com 2023-04-08 22:42 Source: web

I have a table with millions of rows that I need to join when doing selects. The response time is not good; how can I improve it? I have tried adding indexes to the columns I select by. Is there a tool I can use to optimize the SQL, or how can I diagnose the bottlenecks in the SQL and improve it? Any suggestion would be very appreciated. I am using Oracle Server 10g and ASP.NET as my client. Is there any other kind of indexing helpful on tables with millions of rows?


You should probably start with EXPLAIN PLAN.

Use the EXPLAIN PLAN statement to determine the execution plan Oracle Database follows to execute a specified SQL statement. This statement inserts a row describing each step of the execution plan into a specified table. You can also issue the EXPLAIN PLAN statement as part of the SQL trace facility.

This statement also determines the cost of executing the statement. If any domain indexes are defined on the table, then user-defined CPU and I/O costs will also be inserted.

Then edit your question, and post the SQL statement and the output of EXPLAIN PLAN.
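As a sketch, the basic workflow looks like this (the table and column names are hypothetical placeholders, not from your query):

```sql
-- Generate the execution plan for the slow statement.
-- my_orders / my_customers / customer_id are placeholder names.
EXPLAIN PLAN FOR
SELECT o.order_id, c.name
FROM   my_orders o
JOIN   my_customers c ON c.customer_id = o.customer_id
WHERE  o.status = 'OPEN';

-- Display the plan that was just written to PLAN_TABLE
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

`DBMS_XPLAN.DISPLAY` formats the most recent plan in `PLAN_TABLE` for you, which is usually easier to read than querying the table by hand.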

Later . . .

I'm not going to be much help to you on that query. 269 lines, at least 29 SELECTs, parallel queries, remote databases, outer joins (old style), and so on.

The best advice I can give you is

  • get more information from EXPLAIN PLAN, and
  • simplify the problem.

The plan table has more columns than are commonly posted. The columns COST, CARDINALITY, BYTES, and TIME might be useful in prioritizing your tuning effort.
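If you want those extra columns, you can query `PLAN_TABLE` directly after running `EXPLAIN PLAN` (a sketch; assumes the default plan table name):

```sql
-- Pull the costing columns from PLAN_TABLE for the last explained statement
SELECT id, operation, options, object_name,
       cost, cardinality, bytes, time
FROM   plan_table
ORDER  BY id;
```

Sorting the steps with the highest COST or CARDINALITY to the top is a quick way to decide where to spend tuning effort first.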

You've got 10 full table scans in that query. ("TABLE ACCESS FULL" in the query plan.) That's usually a bad sign; full table scans often take a relatively long time to run. It's not always a bad sign. A full scan of a tiny table might be faster than an index scan.

Start by getting EXPLAIN PLAN output for each of the 29 SELECT statements in your query. If any of them show a full table scan, you can probably improve their performance with suitable indexes. (Oracle supports many different kinds of indexes. Don't overlook opportunities for multi-column indexes.) In any case, EXPLAIN PLAN output will help you identify the slowest of the 29 SELECTs.
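As a hypothetical example of the multi-column case: if a SELECT joins on one column and filters on another, a single composite index can cover both and eliminate a full table scan (names here are placeholders):

```sql
-- Composite index covering both the join column and the filter column
CREATE INDEX my_orders_cust_status_ix
    ON my_orders (customer_id, status);
```

Re-run EXPLAIN PLAN afterward to confirm the "TABLE ACCESS FULL" step has been replaced by an index access.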


There are potentially thousands of problems that may exist with this query and plan, only a local expert can truly help you.

But for what it's worth, the first thing I noticed is that only 1/5th of your plan uses parallelism. Normally you want to have all of the steps run in parallel or none of them run in parallel.

If your query only returns a small amount of data, the overhead of parallelism probably isn't worth it. It may take a few extra seconds for Oracle to set up the parallel processes, coordinate them, and perform additional steps to optimize the plan (e.g. increased dynamic sampling). A serial index read would probably work much better than a parallel full table scan. You may need to change the DEGREE of a table, or use a NOPARALLEL hint.
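Both options above look roughly like this (table and column names are hypothetical; in 10g the documented hint spelling is NO_PARALLEL, with NOPARALLEL kept as a legacy alias):

```sql
-- Force serial execution for one statement via a hint
SELECT /*+ NO_PARALLEL(o) */ o.order_id
FROM   my_orders o
WHERE  o.customer_id = 42;

-- Or change the table's default degree of parallelism
ALTER TABLE my_orders NOPARALLEL;
```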

If your query returns a large amount of data, you'll probably want to use as many parallel hash joins as possible to join everything efficiently. For really large queries, the worst performance usually occurs when Oracle underestimates the cardinality and uses nested loops and indexes. Look at your cardinalities and find the first step in the plan where the estimate is drastically lower than the actual row count; that will get you closer to the problem.
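One way to compare estimated versus actual row counts per step (a sketch; the query is a placeholder for your own statement) is to run it once with the GATHER_PLAN_STATISTICS hint and then format the cursor's runtime statistics:

```sql
-- Run the statement once with runtime statistics collection enabled
SELECT /*+ GATHER_PLAN_STATISTICS */ o.order_id
FROM   my_orders o
JOIN   my_customers c ON c.customer_id = o.customer_id;

-- Then compare E-Rows (estimated) to A-Rows (actual) for each plan step
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

The first step where A-Rows is orders of magnitude larger than E-Rows is usually where the optimizer went wrong.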

