I have a table with about 1 billion rows. I'm the sole user of the database, so there's no contention on locks, etc. I noticed that when I run something like this:
DECLARE
CURSOR cur IS SELECT col FROM table where rownum < N;
BEGIN
OPEN cur;
LOOP
dbms_output.put_line('blah');
END LOOP;
CLOSE cur;
END;
there is a lag between the time when I hit enter and the time the output begins to flow in. If N is small then it's insignificant. For large N (or no WHERE clause) this lag is on the order of hours.
I'm new to Oracle, as you can tell, and I assumed that a cursor just keeps a pointer into the table which it advances on every iteration of the loop. So I didn't expect a lag proportional to the size of the table being iterated over. Is this wrong? Do cursors load the entire query result before iterating over it?
Is there a way to iterate over a table row by row without an initial overhead?
What you are seeing is that the output from DBMS_OUTPUT.PUT_LINE is not displayed until the program has finished. It doesn't tell you anything about how fast the query returned a first row. (I assume you intended to actually fetch data in your example).
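To illustrate the buffering, here is a small sketch (run in SQL*Plus or SQL Developer; it assumes you have EXECUTE on DBMS_LOCK for the sleep call, and the five-row loop is just an illustration):

```sql
SET SERVEROUTPUT ON
BEGIN
  FOR i IN 1 .. 5 LOOP
    dbms_output.put_line('row ' || i);
    dbms_lock.sleep(1);  -- simulate one second of work per row
  END LOOP;
END;
/
```

Even though each line is written one second apart, all five lines appear together only after the block finishes: DBMS_OUTPUT writes to a buffer that the client reads and displays when control returns to it, so it cannot be used to judge when the first row was fetched.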
There are many ways you can monitor a session, one is like this:
DECLARE
CURSOR cur IS SELECT col FROM table;
l_col table.col%TYPE;
BEGIN
OPEN cur;
LOOP
FETCH cur INTO l_col;
EXIT WHEN cur%NOTFOUND;
dbms_application_info.set_module('TEST', l_col);
END LOOP;
CLOSE cur;
END;
While that is running, from another session run:
select action from v$session where module='TEST';
You will see that the value of ACTION keeps changing as the cursor fetches rows.
I also like to monitor v$session_longops, where Oracle records operations it deems "long operations" (roughly, those expected to run for more than a few seconds):
select message, time_remaining from v$session_longops where time_remaining > 0;
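A slightly fuller variant of that query can show progress as a percentage; the extra columns (sid, sofar, totalwork) are standard in v$session_longops, though totalwork can be 0 for some operations, so this sketch guards the division:

```sql
SELECT sid, message, sofar, totalwork,
       ROUND(sofar / NULLIF(totalwork, 0) * 100, 1) AS pct_done
FROM   v$session_longops
WHERE  time_remaining > 0;
```

Re-running this every few seconds while the cursor loop executes lets you watch sofar climb, confirming that rows are being fetched long before any DBMS_OUTPUT text appears.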