SYSAUX Growing rapidly!!! What can be done

Recently I have been working on cleaning up the SYSAUX tablespace for a few clients, so I thought I would put down my steps, which might be helpful to some of you out there.

Why does the SYSAUX tablespace grow much larger than expected?

There could be a number of potential reasons:

1. ASH data has grown too large (SM/AWR)
2. High retention period
3. Segment Advisor data has grown too large
4. Increase in older versions of optimizer statistics (SM/OPTSTAT)
5. Bugs Bugs Bugs!!!!!
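
Before digging into the occupants, it helps to baseline how big SYSAUX currently is and how much of it is free. A minimal sketch using the documented DBA_DATA_FILES and DBA_FREE_SPACE views:

select round(sum(bytes)/1024/1024/1024, 2) allocated_gb
  from dba_data_files
 where tablespace_name = 'SYSAUX';

select round(sum(bytes)/1024/1024/1024, 2) free_gb
  from dba_free_space
 where tablespace_name = 'SYSAUX';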

How do we identify SYSAUX space usage?

There are basically two ways to identify it that I know of:

1. Running @?/rdbms/admin/awrinfo.sql --> detailed info like schema breakdown, SYSAUX occupants' space usage, etc.
2. Querying V$SYSAUX_OCCUPANTS for the occupants and DBA_SEGMENTS for the top segments:

COLUMN "Space Used (GB)" FORMAT 999.99
COLUMN "Move Procedure" FORMAT A40

SELECT occupant_name "Item",
       space_usage_kbytes/1048576 "Space Used (GB)",
       schema_name "Schema",
       move_procedure "Move Procedure"
  FROM v$sysaux_occupants
 ORDER BY space_usage_kbytes DESC;

col owner for a6
col segment_name for a50
select * from
  (select owner, segment_name||'~'||partition_name segment_name, bytes/(1024*1024) size_m
     from dba_segments
    where tablespace_name = 'SYSAUX'
    order by blocks desc)
 where rownum < 11;

In my case, the below two were occupying most of the space:


SM/AWR -- Refers to the Automatic Workload Repository. Data in this section is retained for a certain amount of time (default 8 days). The setting can be checked through DBA_HIST_WR_CONTROL.

SM/OPTSTAT -- Stores older versions of optimizer statistics. The setting can be checked through dbms_stats.get_stats_history_retention. This is not a part of AWR and is not controlled by the AWR retention.
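
Both retention settings are easy to check; a quick sketch against the documented DBA_HIST_WR_CONTROL view and DBMS_STATS API:

-- AWR snapshot interval and retention (default retention 8 days)
select dbid, snap_interval, retention from dba_hist_wr_control;

-- Optimizer statistics history retention (default 31 days)
select dbms_stats.get_stats_history_retention from dual;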

When looking at the top segments, I saw WRH$_ACTIVE_SESSION_HISTORY occupying most of the space. Sometimes AWR tables are not purged according to the settings in sys.wrm$_wr_control.

As per Oracle :-

Oracle decides what rows need to be purged based on the retention policy. There is a special mechanism which is used in the case of the large AWR tables where we store the snapshot data in partitions. One method of purging data from these tables is by removing partitions that only contain rows that have exceeded the retention criteria. During the nightly purge task, we only drop the partition if all the data in the partition has expired. If the partition contains at least one row which, according to the retention policy, shouldn't be removed, then the partition won't be dropped and as such the table will contain old data.

If partition splits do not occur (for whatever reason), then we can end up with a situation where we have to wait for the latest entries to expire before the partition that they sit in can be removed. This can mean that some of the older entries can be retained significantly past their expiry date. The result of this is that the data is not purged as expected.
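
One common way rows end up pinned past their expiry is an AWR baseline, since baselined snapshots are never purged. Worth a quick look via the documented DBA_HIST_BASELINE view:

select baseline_name, start_snap_id, end_snap_id
  from dba_hist_baseline;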

Diagnose and Reduce the Used Space of SYSAUX

Once the major occupants and top segments are identified as discussed above, we can start with the steps to rectify it.

With SM/AWR occupying most of the space, I think we can follow three methods. In this blog I will be posting only one of the methods 🙂

To check Orphaned ASH rows :-

SELECT COUNT(1) Orphaned_ASH_Rows
  FROM wrh$_active_session_history a
 WHERE NOT EXISTS
  (SELECT 1
     FROM wrm$_snapshot
    WHERE snap_id         = a.snap_id
      AND dbid            = a.dbid
      AND instance_number = a.instance_number);

Check the minimum snap_id in the ASH table and then compare it to the minimum snap_id in dba_hist_snapshot.

select min(snap_id) from WRH$_ACTIVE_SESSION_HISTORY;
select min(snap_id) from dba_hist_snapshot;

Example :-

select min(snap_id),MAX(snap_id) from dba_hist_snapshot;

MIN(SNAP_ID) MAX(SNAP_ID)
------------ ------------
       17754        18523

select min(snap_id),MAX(snap_id) from WRH$_ACTIVE_SESSION_HISTORY;

MIN(SNAP_ID) MAX(SNAP_ID)
------------ ------------
           1        18523

As per the retention period, we should have data from snap_id 17754 till 18523, but the WRH$_ACTIVE_SESSION_HISTORY table has data from snap_id 1.
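
To quantify the excess, a quick count of the ASH rows older than the oldest retained snapshot (17754 being the min(snap_id) from the example above):

select count(*) from WRH$_ACTIVE_SESSION_HISTORY where snap_id < 17754;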

From Oracle MOS Doc :-

A potential solution to this issue is to manually split the partitions of the partitioned AWR objects so that there is more chance of the split partition being purged. You will still have to wait for all the rows in the new partitions to reach their retention time, but with split partitions there is more chance of this happening. You can manually split the partitions using the following undocumented command:

alter session set "_swrf_test_action" = 72;

select table_name, count(*) from dba_tab_partitions where table_name like 'WRH$%' and table_owner = 'SYS'
group by table_name order by 1;

TABLE_NAME                                           COUNT(*)
-------------------------------------------------- ----------
WRH$_ACTIVE_SESSION_HISTORY                                 2
WRH$_DB_CACHE_ADVICE                                        2
WRH$_DLM_MISC                                               2
WRH$_EVENT_HISTOGRAM                                        2
WRH$_FILESTATXS                                            11
WRH$_INST_CACHE_TRANSFER                                    2
WRH$_INTERCONNECT_PINGS                                     2
...
25 rows selected.

SQL>  alter session set "_swrf_test_action"=72; 

Session altered.

SQL>  select table_name,partition_name from dba_tab_partitions where table_name = 'WRH$_ACTIVE_SESSION_HISTORY';

TABLE_NAME                      PARTITION_NAME
------------------------------  ------------------------------
WRH$_ACTIVE_SESSION_HISTORY     WRH$_ACTIVE_1798927129_0
WRH$_ACTIVE_SESSION_HISTORY     WRH$_ACTIVE_1798927129_18531  --> New partition created

col table_name for a80
select table_name, count(*) from dba_tab_partitions where table_name like 'WRH$%' and table_owner = 'SYS' group by table_name order by 1;

TABLE_NAME                                   COUNT(*)
------------------------------------------- ----------
WRH$_ACTIVE_SESSION_HISTORY                     3
WRH$_DB_CACHE_ADVICE                            3
WRH$_DLM_MISC                                   3
WRH$_EVENT_HISTOGRAM                            3

...
25 rows selected.

In the above example, WRH$_ACTIVE_1798927129_18531 is the new partition created, where 1798927129 is the DBID and 18531 is the max(snap_id) when it was partitioned. So now we can start dropping the snapshot range, which in my case is from 1 to 17753, as 17754 is the min(snap_id) in dba_hist_snapshot.

SQL> EXEC dbms_workload_repository.drop_snapshot_range(1,17753,1798927129);

It can generate a good amount of redo and undo. So keep monitoring the undo tablespace and make sure you have sufficient space.
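
On a big range like this one, the drop can also be done in smaller batches to keep each transaction's undo footprint down. A sketch using the documented dbms_workload_repository.drop_snapshot_range API; the batch size of 1000 snapshots and the loop bounds are just illustrative for the 1..17753 range:

BEGIN
  FOR i IN 0..17 LOOP
    -- each call purges roughly 1000 snapshots of the 1..17753 range
    dbms_workload_repository.drop_snapshot_range(
      low_snap_id  => 1 + i*1000,
      high_snap_id => LEAST(17753, (i+1)*1000),
      dbid         => 1798927129);
  END LOOP;
END;
/

While the purge runs, undo consumption can be watched through the documented V$TRANSACTION view:

select s.sid, s.serial#, t.used_ublk, t.used_urec
  from v$transaction t
  join v$session s on s.taddr = t.addr;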

So, what happens when we run the above :-

SQL> @sqlid ft7m07stk3dws
old   9:        sql_id = ('&1')
new   9:        sql_id = ('ft7m07stk3dws')

SQL_ID           HASH_VALUE SQL_TEXT
--------------- ----------- ----------------------------------------------------------------------
ft7m07stk3dws     857847704 delete from WRH$_SYSTEM_EVENT tab where (:beg_snap <= tab.snap_id
                            and tab.snap_id <= :end_snap and dbid = :dbid) and not exists
                            (select 1 from WRM$_BASELINE b where (tab.dbid = b.dbid) and
                            (tab.snap_id >= b.start_snap_id) and (tab.snap_id <= b.end_snap_id))

SQL> @sqlid 854knbb15976z
old   9:        sql_id = ('&1')
new   9:        sql_id = ('854knbb15976z')

SQL_ID           HASH_VALUE SQL_TEXT
--------------- ----------- ----------------------------------------------------------------------
854knbb15976z    3260325087 delete from WRH$_SQLSTAT tab where (:beg_snap <= tab.snap_id
                            and tab.snap_id <= :end_snap and dbid = :dbid) and not exists
                            (select 1 from WRM$_BASELINE b where (tab.dbid = b.dbid) and
                            (tab.snap_id >= b.start_snap_id) and (tab.snap_id <= b.end_snap_id))

So, internally Oracle runs delete commands, which cause high redo and undo generation 🙂

Once the procedure is completed successfully, check the min(snap_id) in WRH$_ACTIVE_SESSION_HISTORY and perform a shrink space cascade.
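
Re-running the earlier check should now show the minimum snap_id sitting inside the retention window:

select min(snap_id) from WRH$_ACTIVE_SESSION_HISTORY;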

select owner, segment_name, round(sum(bytes/1024/1024),2) MB, tablespace_name
  from dba_segments
 where segment_name = upper('WRH$_ACTIVE_SESSION_HISTORY')
 group by owner, segment_name, tablespace_name;

OWNER       SEGMENT_NAME                      MB           TABLESPACE_NAME
-------  ---------------------------------- -----------  -------------------
SYS        WRH$_ACTIVE_SESSION_HISTORY        3538.06          SYSAUX

SQL> alter table WRH$_ACTIVE_SESSION_HISTORY shrink space cascade;

Table altered.

OWNER       SEGMENT_NAME                      MB           TABLESPACE_NAME
-------  ---------------------------------- -----------  -------------------
SYS        WRH$_ACTIVE_SESSION_HISTORY        46.75          SYSAUX

In a similar fashion, the other WRH$ tables can be shrunk to free up space in SYSAUX.
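
To save some typing, here is a small sketch that generates the shrink statements for the remaining AWR tables (review the output before running it; if a table throws ORA-10636, enable row movement on it first):

select 'alter table '||owner||'.'||table_name||' shrink space cascade;' cmd
  from dba_tables
 where owner = 'SYS'
   and table_name like 'WRH$%';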

Hope this helps!!!

Reference :-

WRH$_ACTIVE_SESSION_HISTORY Does Not Get Purged Based Upon the Retention Policy (Doc ID 387914.1)
Suggestions if Your SYSAUX Tablespace Grows Rapidly or Too Large (Doc ID 1292724.1)
