I am planning to store about 5 TB of data in a table. Following is the default script I have generated to create the table:
CREATE TABLE measurements
(
    measurementtime DATE NOT NULL,
    height          NUMBER,
    offset          NUMBER
)
PCTFREE 10
PCTUSED
INITRANS 1
MAXTRANS 255
TABLESPACE mytablespace
STORAGE (
    INITIAL 262144
    NEXT
    PCTINCREASE
    MINEXTENTS 1
    MAXEXTENTS 2147483645
    FREELIST GROUPS 0
    FREELISTS 0
);
Can you please tell me if I need to modify any of these parameters?
It's difficult to give good advice without knowing the entire system. The ideas below are guesses based on your script and on your previous question, "How do I design a table which will store very large data?" But don't believe everything you read. You'll need to do a lot of testing. You can easily spend many days trying to optimize a table like this.
PCTFREE 0: If you have 5 TB and no indexes, I assume you won't be doing any updates. If that's true, you can change PCTFREE to 0 and save 10% of your space.
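For illustration, a minimal sketch of the same table with PCTFREE 0, assuming the rows are written once and never updated:

CREATE TABLE measurements
(
    measurementtime DATE NOT NULL,
    height          NUMBER,
    offset          NUMBER
)
PCTFREE 0
TABLESPACE mytablespace;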
NOLOGGING: If you don't need your data to be recoverable, and if your database is in archivelog mode, then you may want to add NOLOGGING. NOLOGGING with APPEND inserts will not generate redo. (This may be unnecessary if your tablespace is already set to NOLOGGING.)
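A sketch of how NOLOGGING and a direct-path (APPEND) insert would fit together; staging_measurements is a hypothetical source table, not something from your system:

ALTER TABLE measurements NOLOGGING;

-- Direct-path insert: generates minimal redo when the table is NOLOGGING
-- and the database is in archivelog mode.
INSERT /*+ APPEND */ INTO measurements (measurementtime, height, offset)
SELECT measurementtime, height, offset
FROM staging_measurements;

-- The table can't be queried in this session until the direct-path insert is committed.
COMMIT;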
PARALLEL: You'll definitely want to use parallelism with this table. Since this table is likely unusual for your system, it's probably better to define parallelism at the statement or session level. But if you won't have control over all of the statements run against this table, you may want to consider defining the degree here to make sure that statements don't run in serial.
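For example (the degree of 8 below is only a placeholder, not a recommendation):

-- Statement level: hint an individual query or DML statement.
SELECT /*+ PARALLEL(m, 8) */ COUNT(*) FROM measurements m;

-- Session level: force a degree for everything in this session.
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
ALTER SESSION FORCE PARALLEL DML PARALLEL 8;

-- Table level: only if you can't control the statements that hit the table.
ALTER TABLE measurements PARALLEL 8;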
Remove unknown options: I think you should remove all of the options that you didn't specifically set or don't understand. If you used a tool like TOAD or DBMS_METADATA to generate the script, it will list every option, but usually you should leave most of that out and let Oracle use whatever it likes.
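If DBMS_METADATA generated the script, you can ask it to leave out the segment and storage clauses when it regenerates the DDL, something like:

BEGIN
    -- Suppress segment attributes (PCTFREE, INITRANS, tablespace, etc.) and storage clauses.
    DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SEGMENT_ATTRIBUTES', FALSE);
    DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', FALSE);
END;
/

SELECT DBMS_METADATA.GET_DDL('TABLE', 'MEASUREMENTS') FROM dual;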
Compression/Partitioning: As Gary mentioned, partitioning may be very useful. But in your previous question you mentioned storing up to 200 TB and only 5 TB per day. Are you using a poor man's partitioning, renaming and recreating the table every day? If this is just a day's worth of data, then compression should be very helpful with measurementtime.
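As a sketch only, daily interval partitioning on measurementtime combined with basic compression; this assumes 11g or later, and the boundary date is just a placeholder:

CREATE TABLE measurements
(
    measurementtime DATE NOT NULL,
    height          NUMBER,
    offset          NUMBER
)
TABLESPACE mytablespace
COMPRESS
PARTITION BY RANGE (measurementtime)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(
    -- Placeholder boundary; Oracle creates a new partition per day automatically.
    PARTITION p_first VALUES LESS THAN (DATE '2011-01-01')
);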
Block size: I'm not sure what the settings of "mytablespace" are, but since you didn't mention anything about it I'm guessing you're using the standard block size. It might be worth using a larger block size for this table so you can get even better compression (since compression is done per block, the more data in a block, the more Oracle can compress). I'm reluctant to suggest this, because 99% of the time when people change the block size it doesn't do what they think it should. But with such a large amount of data it might be worth considering.
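If you do test a larger block size, the table would need its own tablespace and a matching buffer cache; a hypothetical sketch where the file name and sizes are placeholders:

-- A 32K buffer cache must exist first (SCOPE = BOTH assumes an spfile).
ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE = BOTH;

CREATE TABLESPACE mytablespace_32k
    DATAFILE '/u01/oradata/mydb/mytablespace_32k_01.dbf' SIZE 10G AUTOEXTEND ON
    BLOCKSIZE 32K;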