
Setting workarea_size_policy to manual vs automatic

devze.com https://www.devze.com 2023-03-05 04:21 Source: web
I am working on a data warehouse system which was upgraded about a year ago to Oracle 10g (now 10.2.0.5).

The database is set up with workarea_size_policy=auto and pga_aggregate_target=1G. Most of the ETL process is written in PL/SQL and this code generally sets workarea_size_policy=manual and sets the SORT_AREA_SIZE and HASH_AREA_SIZE for particular sessions when building specific parts of the warehouse.
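The overrides in the ETL code presumably look something like the following sketch. The specific byte values shown here are hypothetical; nobody knows what the real code uses.

```sql
BEGIN
  -- Switch this session from automatic to manual work-area sizing,
  -- then fix the per-operation sort and hash memory (values are
  -- illustrative only, not the actual ETL settings)
  EXECUTE IMMEDIATE 'ALTER SESSION SET workarea_size_policy = MANUAL';
  EXECUTE IMMEDIATE 'ALTER SESSION SET sort_area_size = 104857600';  -- 100 MB
  EXECUTE IMMEDIATE 'ALTER SESSION SET hash_area_size = 209715200';  -- 200 MB
END;
/
```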

The values chosen for the SORT_AREA_SIZE and HASH_AREA_SIZE are different for different parts of the build. These sizes are probably based on the expected amount of data that will be processed in each area.

The problem I am having is that this code is starting to cause a number of ORA-600 errors to occur. It is making me wonder if we should even be overriding the automatic settings at all.

The code that sets the manual settings was written many years ago by a developer who is no longer here. It was probably originally written for Oracle 8 with an amendment for Oracle 9 to set the workarea_size_policy to manual. No one really knows how the values used for HASH_AREA_SIZE and SORT_AREA_SIZE were found. They could be completely inappropriate for all I know.

After that long preamble, I've got a few questions.

  1. How do I know when (if ever) I should be overriding the automatic settings with workarea_size_policy=manual?
  2. How do I find appropriate values for HASH_AREA_SIZE, SORT_AREA_SIZE, etc?
  3. How do I benchmark that particular settings are actually providing any sort of benefit?

I'm aware that this is a pretty broad question but help would be appreciated.


I suggest you comment out the manual settings and do a test run only with automatic (dynamic) settings, like PGA_AGGREGATE_TARGET. Management of Sort and Hash memory areas has improved a lot since Oracle 8!
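Reverting a session to automatic management is a one-line change. Once the policy is AUTO, the SORT_AREA_SIZE and HASH_AREA_SIZE values are ignored and each work area is sized dynamically out of the pool governed by PGA_AGGREGATE_TARGET:

```sql
-- Put the session back under automatic PGA management;
-- sort_area_size / hash_area_size no longer apply after this
ALTER SESSION SET workarea_size_policy = AUTO;
```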

It's hard to predetermine the memory requirements of your procedures, so the best approach is to test them with representative volumes of data and see how they perform.
You can then create an AWR report covering the timeframe of the execution of the procedures. There's a section in the report named PGA Memory Advisory. That will tell you if you need more memory assigned to PGA_AGGREGATE_TARGET, based on your current data volumes.
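The same advisory data that feeds the AWR section is exposed in the V$PGA_TARGET_ADVICE view, so you can also query it directly after the test run. Something along these lines (a sketch using the view's standard columns):

```sql
-- Estimated cache hit % and over-allocation count for a range of
-- hypothetical PGA_AGGREGATE_TARGET sizes (factor 1 = current setting)
SELECT ROUND(pga_target_for_estimate / 1024 / 1024) AS target_mb,
       pga_target_factor,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
FROM   v$pga_target_advice
ORDER  BY pga_target_factor;
```

Look for the smallest target where the estimated hit percentage plateaus and ESTD_OVERALLOC_COUNT is zero.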

See sample here:

[Screenshot: PGA Memory Advisory section of an AWR report]

In this case you can clearly see that there's no need to go over the current 103 MB assigned, and you could actually stay at 52 MB without impacting the application.

Depending on the volumes we're talking about, if you can't assign more memory, some Sort or Hash operations might spill to a TEMPORARY tablespace, so make sure you have a properly sized one, possibly spread across as many disks / volumes as possible (see the SAME, Stripe And Mirror Everything, configuration).
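You can check how often work areas are spilling by looking at the workarea execution statistics in V$SYSSTAT. Optimal executions ran entirely in memory; one-pass and multi-pass executions had to use temp space:

```sql
-- Counts of in-memory (optimal) vs. spilled (onepass/multipass)
-- work-area executions since instance startup
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'workarea executions%';
```

A growing multipass count is the strongest sign that the available PGA memory is too small for the workload.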

