I'm currently running an academic benchmark, TPC-H, and I have some big tables that I want to load using both direct path (which worked great and was very fast) and conventional path.
For that, I used this batch script:
    REM Read (table name, row count) pairs and run one conventional-path load per table.
    for /F "tokens=1,2" %%A in (table4_records.txt) do (
        sqlldr userid='tpch/tpch' control=%%A.ctl rows=%%B bindsize=? readsize=? SILENT=HEADER log=bulkload_logs\sf4\bulk_%%A%1.log
    )
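For context, the loop assumes table4_records.txt holds one whitespace-separated (table name, row count) pair per line; something like the following, matching the row counts listed further down (the file contents here are my reconstruction):

    lineitem 23996604
    orders 6000000
    ...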
The problem is that, no matter what values I give to the BINDSIZE and/or READSIZE options, it always commits every 65,534 rows. I already pass %%B as ROWS, which is exactly the number of rows in each table.
In direct path load, I just used ROWS and the commit really was issued only after the whole table was loaded.
I want to do something like that with the conventional load path. I know it isn't faster, but that's the point.
Could you please tell me how I can set the parameters correctly so that I can:
1- load as much data at a time as possible;
2- commit less frequently, preferably only once at the end of each table's load.

Here are the tables' names and row counts (lineitem is the biggest, at approx. 3 GB on disk):

    lineitem 23996604
    orders 6000000
    partsupp 3200000
    part 800000
    customer 600000
    supplier 40000
    nation 25
    region 5

You won't get a 3 GB file loaded via conventional path with a single commit at the end. From the Oracle docs:
"When SQL*Loader sends the Oracle database an INSERT command, the entire array is inserted at one time. After the rows in the bind array are inserted, a COMMIT statement is issued."
"In the conventional path method, the bind array is limited by the size of the read buffer. Therefore, the advantage of a larger read buffer is that more data can be read before a commit operation is required."
Maximise READSIZE and BINDSIZE until SQL*Loader tells you that you've hit the maximum for your platform.
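As a rough sketch of what that looks like (the 20 MB values below are arbitrary placeholders, not recommendations; the real ceiling is platform-dependent):

    sqlldr userid='tpch/tpch' control=lineitem.ctl rows=65534 bindsize=20971520 readsize=20971520 SILENT=HEADER log=bulkload_logs\sf4\bulk_lineitem.log

SQL*Loader notes in its log output when it has to adjust these values downward, which is how you know you've hit the limit.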
Ooops:
It turns out the maximum value for ROWS in conventional path is exactly 65534, so I could keep increasing my bind array size forever and the commit interval would never change. LOL
Sorry, I've just found it in Oracle's documentation.
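To put that cap in numbers: for lineitem, 23996604 rows at 65534 rows per bind array means at least ceil(23996604 / 65534) = 367 commits during the load, regardless of how BINDSIZE and READSIZE are set.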
Burleson's site has a possible way to do this.
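I can't vouch for what that page specifically suggests, but one technique that does give a single commit at the end is to drop SQL*Loader altogether and load through an external table with one INSERT ... SELECT. A minimal sketch for the small region table, assuming a directory object tpch_dir pointing at the flat files (names here are illustrative, not from the original post):

    -- Hypothetical setup: tpch_dir must point at the directory holding region.tbl.
    CREATE TABLE region_ext (
      r_regionkey  NUMBER,
      r_name       CHAR(25),
      r_comment    VARCHAR2(152)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY tpch_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY '|'
      )
      LOCATION ('region.tbl')
    )
    REJECT LIMIT UNLIMITED;

    -- One conventional-path statement, so the only commit is the explicit one at the end.
    INSERT INTO region SELECT * FROM region_ext;
    COMMIT;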