Primary on FileSystem and Standby on ASM

For one of our clients, the standby server went down. We had another standby server which had been kept down for more than a month. The decision was taken to start that server and apply an incremental SCN-based backup to the standby database.

The standby was on ASM and the primary on a filesystem. The incremental backup was started from the SCN reported by the query below (run on the standby):

select min(fhscn) from x$kcvfh;
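With that starting SCN, the incremental backup on the primary looks something like the following (a sketch; the SCN, format string, and tag are illustrative, not the actual values used here):

RMAN> backup incremental from scn 1234567890 database
      format '/backup/incr_standby_%U' tag 'FOR_STANDBY';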

Once the backup completed, it was transferred to the standby; the standby was mounted (using the old controlfile), the backups were cataloged, and recovery was performed using ‘recover database noredo’.

The recovery was still running when it was handed over to me. After it completed, I restored the latest controlfile from the primary and mounted the standby. At this point the controlfile carried the filesystem paths as they were on the primary side. The next step was to register everything we had on the standby side:

[oracle@oracle3:~ (db)]$ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Jan 8 04:57:01 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ABCD (DBID=1463580380, not open)

RMAN> catalog  start with '+DATA/adbc_oracle3/DATAFILE/';

searching for all files that match the pattern +DATA/adbc_oracle3/DATAFILE/

List of Files Unknown to the Database
=====================================
File Name: +data/adbc_oracle3/DATAFILE/TEST.256.844395985
File Name: +data/adbc_oracle3/DATAFILE/TEST.257.844397067
.................
.................
File Name: +data/adbc_oracle3/DATAFILE/TEST.416.865953683

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +data/adbc_oracle3/DATAFILE/TEST.256.844395985
File Name: +data/adbc_oracle3/DATAFILE/TEST.257.844397067
................
...............
File Name: +data/adbc_oracle3/DATAFILE/TEST.416.865953683

RMAN> report schema;

RMAN-06139: WARNING: control file is not current for REPORT SCHEMA
Report of database schema for database with db_unique_name ABCD_ORACLE3

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    0        SYSTEM               ***     /oradata/abcd_oracle2/datafile/o1_mf_system_825xkscr_.dbf
2    0        SYSAUX               ***     /oradata/abcd_oracle2/datafile/o1_mf_sysaux_825y451r_.dbf
3    0        TEST                 ***     /oradata/abcd_oracle2/datafile/o1_mf_test_825s84mw_.dbf
4    0        TEST                 ***     /oradata2/abcd_oracle2/datafile/o1_mf_test_8dr1v332_.dbf
................
................
................
147  0        TEST                   ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_b8k8hcrh_.dbf
148  0        TEST                   ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_b8k8hdhf_.dbf
149  0        TEST                   ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_b8k8hf6o_.dbf
150  0        TEST                   ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_b8k8hg1j_.dbf
151  0        TEST                   ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_bb318bhs_.dbf
152  0        TEST                   ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_bb318cff_.dbf
153  0        TEST_INDEX             ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_index_bb318pmy_.dbf
154  0        TEST_NOLOGGING         ***     /oradata3/abcd_oracle2/datafile/o1_mf_test_nolog_bbm2s7vk_.dbf
155  0        TESTINDEX             ***     /oradata3/abcd_oracle2/datafile/o1_mf_test_index_bbm2z7nv_.dbf
156  0        PERFSTAT             ***     /oradata3/abcd_oracle2/datafile/o1_mf_perfstat_bbm312pf_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
3    15104    TEMP                 32767       /oradata4/abcd_oracle2/datafile/o1_mf_temp_b633ppbr_.tmp
4    25600    TEMP                 32767       /oradata4/abcd_oracle2/datafile/o1_mf_temp_b633ppcf_.tmp

After the catalog completed, it was time to switch the database to copy, which failed with the below error:

RMAN> switch database to copy;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch to copy command at 01/08/2015 05:00:25
RMAN-06571: datafile 151 does not have recoverable copy

Some analysis showed it was due to a missing datafile on the standby. The datafile had been created on the primary after the standby went down, and the recovery using the incremental backup had been done with the older controlfile, which had no information about the new datafiles.

Datafiles 151 – 156 were missing, so I cataloged the backup pieces again (since the controlfile had been restored) and started restoring the datafiles:

RMAN> restore datafile 151;

Starting restore at 08-JAN-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00151 to /oradata4/ABCD_ORACLE2/datafile/o1_mf_ct_bb318bhs_.dbf
channel ORA_DISK_1: reading from backup piece +BACKUP/ABCD_ORACLE3/backupset/restore/incr_standby_5sps58rd_1_1

channel ORA_DISK_1: piece handle=+BACKUP/ABCD_ORACLE3/backupset/restore/incr_standby_5sps58rd_1_1 tag=TAG20150107T192825
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:19:05
Finished restore at 08-JAN-15

RMAN>

After the restoration completed, report schema showed

RMAN> report schema;

RMAN-06139: WARNING: control file is not current for REPORT SCHEMA
Report of database schema for database with db_unique_name adbc_oracle3

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    0        SYSTEM               ***     /oradata/ABCD_ORACLE2/datafile/o1_mf_system_825xkscr_.dbf
2    0        SYSAUX               ***     /oradata/ABCD_ORACLE2/datafile/o1_mf_sysaux_825y451r_.dbf
.................
151  30720    TEST                  ***     +DATA/adbc_oracle3/datafile/test.417.868424659 
152  30720    TEST                  ***     +DATA/adbc_oracle3/datafile/test.418.868424659 
................

Tried to perform “switch database to copy”, which again failed with the same error “RMAN-06571: datafile 151 does not have recoverable copy”. At this point I thought to use “switch datafile to copy”, for which I generated dynamic SQL from the primary and ran it on the standby.

Generated the switch command SQL from the Primary :-

SQL> select 'switch datafile '||file#||' to copy;' from v$datafile;
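Instead of pasting the output into vi by hand, it can be spooled straight into the script (a sketch; assumes the same swtch_copy.rman filename):

SQL> set heading off feedback off pagesize 0
SQL> spool swtch_copy.rman
SQL> select 'switch datafile '||file#||' to copy;' from v$datafile;
SQL> spool off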

[oracle@oracle3:~ (db)]$ vi swtch_copy.rman
[oracle@oracle3:~ (db)]$ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Jan 8 06:09:46 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ABCD (DBID=1463580380, not open)

RMAN> @swtch_copy.rman

RMAN> switch datafile 2 to copy;
using target database control file instead of recovery catalog
datafile 2 switched to datafile copy "+DATA/adbc_oracle3/datafile/sysaux.277.844418721"

............................
...........................
...........................
RMAN> switch datafile 149 to copy;
datafile 149 switched to datafile copy "+DATA/adbc_oracle3/datafile/ct.415.865953681"

RMAN> switch datafile 156 to copy;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch to copy command at 01/08/2015 06:28:54
RMAN-06571: datafile 156 does not have recoverable copy

RMAN>
RMAN> **end-of-file**

Performed ‘recover database noredo’ again, and this time it was pretty quick; then I tried ‘recover standby database’:

[oracle@oracle3:~/working/anand (abcd)]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Thu Jan 8 06:31:38 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> recover standby database;
ORA-00279: change 38328244436 generated at 01/07/2015 19:28:32 needed for thread 1
ORA-00289: suggestion : +FRA
ORA-00280: change 38328244436 for thread 1 is in sequence #98501


Specify log: {=suggested | filename | AUTO | CANCEL}
^C
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '+DATA/adbc_oracle3/datafile/system.357.844454911'


Made a few changes to the DGMGRL parameters and enabled the standby configuration. After a few hours, the standby was in sync with the primary. Stopped the MRP, dropped the standby redo logfiles (as they still pointed to the filesystem) and recreated them on ASM, then opened the standby in read-only mode and restarted the MRP.
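Dropping and recreating the standby redo logs on ASM amounts to statements like these, run with the MRP stopped (a sketch; the group number, size, and +FRA disk group are assumptions):

SQL> alter database drop standby logfile group 11;
SQL> alter database add standby logfile thread 1 group 11 ('+FRA') size 512m;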

ORA-00600: [ktbdchk1: bad dscn]

Last week I had performed a switchover activity on a database on version 11.2.0.3. The switchover was performed using the DGMGRL “switchover to standby” command. After some time we started receiving “ORA-00600: [ktbdchk1: bad dscn]” on the primary database.

Tue Dec 16 10:33:26 2014
Errors in file /ora_software/diag/rdbms/db02_dbv/dbv/trace/db02_ora_16271.trc  (incident=434103):
ORA-00600: internal error code, arguments: [ktbdchk1: bad dscn], [], [], [], [], [], [], [], [], [], [], []
Incident details in: /ora_software/diag/rdbms/db02_dbv/dbv/incident/incdir_434103/db02_ora_16271_i434103.trc

The trace file showed

*** ACTION NAME:() 2014-12-16 10:33:26.857

Dump continued from file: /ora_software/diag/rdbms/db02_dbv/dbv/trace/db02_ora_16271.trc
ORA-00600: internal error code, arguments: [ktbdchk1: bad dscn], [], [], [], [], [], [], [], [], [], [], []

========= Dump for incident 434103 (ORA 600 [ktbdchk1: bad dscn]) ========
----- Beginning of Customized Incident Dump(s) -----
[ktbdchk] -- ktbgcl4 -- bad dscn
dependent scn: 0x0008.f197f24e recent scn: 0x0008.c7313a4c current scn: 0x0008.c7313a4c
----- End of Customized Incident Dump(s) -----
*** 2014-12-16 10:33:26.961
dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
----- Current SQL Statement for this session (sql_id=dmrpvzvbbsuy5) -----
INSERT INTO MSG_OPEN( SITE_ID, CLIENT_ID, CAMP_ID, MESSAGE_ID, ID, USER_ID, ADDRESS, DATE_OPENED) VALUES( :B7 , :B6 , :B5 , :B4 , LOG_SEQ.NEXTVAL, :B3 , :B2 , :B1 )
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0x32276ee18       611  package body TP.EM_PKG
0x31cbac388         3  anonymous block
----- Call Stack Trace -----
calling              call     entry                argument values in hex
location             type     point                (? means dubious value)
-------------------- -------- -------------------- ----------------------------
skdstdst()+36        call     kgdsdst()            000000000 ? 000000000 ?
                                                   7FFFFDA13028 ? 000000001 ?
                                                   000000001 ? 000000002 ?
ksedst1()+98         call     skdstdst()           000000000 ? 000000000 ?
                                                   7FFFFDA13028 ? 000000001 ?
                                                   000000000 ? 000000002 ?
ksedst()+34          call     ksedst1()            000000000 ? 000000001 ?
                                                   7FFFFDA13028 ? 000000001 ?
                                                   000000000 ? 000000002 ?
dbkedDefDump()+2741  call     ksedst()             000000000 ? 000000001 ?
                                                   7FFFFDA13028 ? 000000001 ?
                                                   000000000 ? 000000002 ?
........................

Searching for the issue on Metalink pointed to the below Document:-

ALERT Description and fix for Bug 8895202: ORA-1555 / ORA-600 [ktbdchk1: bad dscn] ORA-600 [2663] in Physical Standby after switchover (Doc ID 1608167.1)

As per Metalink:

In a Data Guard environment with Physical Standby (including Active Data Guard), invalid SCNs can be introduced in index blocks after a switchover.

Symptoms: ORA-1555 / ORA-600 [ktbdchk1: bad dscn] / ktbGetDependentScn / dependent SCN violations, as the block ITL has a higher commit SCN than the block SCN. DBVERIFY reports the next error when the fix of Bug 7517208 is present (reference Note 7517208.8 for interim patches): “itl[] has higher commit scn(aaa.bbb) than block scn (xx.yy) / Page failed with check code 6056”. There is NO DATA CORRUPTION in the block.

To Resolve

The fix of Bug 8895202 is the workaround.

Although the fix of Bug 8895202 is included in patchset 11.2.0.2 and later, the fix needs to be enabled by setting parameter _ktb_debug_flags = 8.

SQL> alter system set "_ktb_debug_flags"=8 scope=both sid='*';

System altered.

SQL> exit

If you are using an Oracle version lower than 11.2.0.2, then rebuilding the affected indexes is the option, as we did for one of our clients on version 11.1.0.7.
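For the rebuild route, each affected index is rebuilt on the primary with something like this (a sketch; the owner and index name are made up for illustration):

SQL> alter index scott.idx_msg_open rebuild online;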

One thing to note:

In rare cases blocks healed by this fix may cause queries to fail with an ORA-600 [ktbgcl1_KTUCLOMINSCN_1] as described in Note 13513004.8 / Bug 13513004.

For more detail, see Metalink Doc 1608167.1.

Upgrading to 11.2.0.4 – Dictionary Views Performing Poorly

Just a quick blog post on things you might see after upgrading to 11.2.0.4. We recently upgraded a database from 11.2.0.3 to 11.2.0.4, and queries on some data dictionary views ran too slowly:

1. Performance of queries on dba_free_space degraded
2. Performance of queries involving dba_segments was slow

DEV01> select ceil(sum(b.bytes)/1024/1024) b from sys.dba_free_space b;

Elapsed: 01:31:45.78

Searching MOS pointed to these Doc IDs :-

Insert Statement Based Upon a Query Against DBA_SEGMENTS is Slow After Applying 11.2.0.4 Patchset (Doc ID 1915426.1)

Query Against DBA_FREE_SPACE is Slow After Applying 11.2.0.4 (Doc ID 1904677.1)
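If memory serves, the workarounds in those notes come down to purging the recyclebin and refreshing dictionary and fixed-object statistics; verify against the notes themselves before running anything like this:

SQL> purge dba_recyclebin;
SQL> exec dbms_stats.gather_fixed_objects_stats;
SQL> exec dbms_stats.gather_dictionary_stats;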

DEV01> select ceil(sum(b.bytes)/1024/1024) b from sys.dba_free_space b;

Elapsed: 00:00:01.38

ORA-00600 – [kfnsInstanceReg00]

An interesting case of ORA-600 :)

I was paged for a standby lag and started looking into the issue. The DB version was 11.1.0.7.0. The standby was lagging behind by 36 archives:

SQL> @mrp

   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK# DELAY_MINS
---------- --------- ------------ ---------- ---------- ---------- ----------
         1 MRP0      WAIT_FOR_LOG          1     568330          0          0 


   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK# DELAY_MINS
---------- --------- ------------ ---------- ---------- ---------- ----------
         1 RFS       IDLE                  0          0          0          0
         1 RFS       IDLE                  0          0          0          0
         1 RFS       WRITING               1     568330          1          0 
         1 RFS       IDLE                  0          0          0          0
         1 RFS       IDLE                  1     568366     309529          0

After some time I checked back again, and it was still on the same sequence#. Status of MRP and RFS:

SQL> @mrp

   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK# DELAY_MINS
---------- --------- ------------ ---------- ---------- ---------- ----------
         1 MRP0      WAIT_FOR_LOG          1     568330          0          0


   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK# DELAY_MINS
---------- --------- ------------ ---------- ---------- ---------- ----------
         1 RFS       IDLE                  0          0          0          0
         1 RFS       IDLE                  0          0          0          0
         1 RFS       WRITING               1     568330          1          0 
         1 RFS       IDLE                  0          0          0          0
         1 RFS       IDLE                  1     568366     795006          0

No error was reported in the alert log or in v$archive_dest_status. Archive log generation on the primary was usual, like any other day; nothing had changed. Seq# 568330 was present on the primary with a size of 497MB, whereas the same seq# on the standby was 513MB. I am not sure why!

So I thought to copy the archive from the primary, stop the MRP, perform manual recovery using ‘recover standby database’ (as the other 35 archives were present on disk), and then start real-time apply.

Once the copy completed, I stopped the MRP and started the manual recovery:

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> select name,open_mode,database_role from v$database;

NAME      OPEN_MODE  DATABASE_ROLE
--------- ---------- ----------------
TEST      MOUNTED    PHYSICAL STANDBY

SQL> recover standby database;
ORA-00279: change 69956910076 generated at 10/26/2014 21:46:36 needed for
thread 1
ORA-00289: suggestion :
/oraclebackup/fra/TEST/archivelog/2014_10_26/o1_mf_1_568330_b4vdl546_.arc
ORA-00280: change 69956910076 for thread 1 is in sequence #568330


Specify log: {=suggested | filename | AUTO | CANCEL}
AUTO
ORA-16145: archival for thread# 1 sequence# 568330 in progress 
SQL> @mrp

no rows selected


   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK# DELAY_MINS
---------- --------- ------------ ---------- ---------- ---------- ----------
         1 RFS       IDLE                  0          0          0          0
         1 RFS       IDLE                  0          0          0          0
         1 RFS       WRITING               1     568330          1          0 
         1 RFS       IDLE                  0          0          0          0
         1 RFS       IDLE                  1     568372     563688          0

At this point I thought to bounce the standby and then perform the recovery. I issued shutdown immediate and it hung. Almost 10 minutes later, the alert log showed:

Active call for process 45660 user 'oracle' program 'oracle@test.com'
SHUTDOWN: waiting for active calls to complete.

I checked PID 45660 and, as it showed LOCAL=NO, did a kill -9:

[oracle@test:~/app/oracle (test)]$ ps -ef | grep 45660
oracle   41463 16050  0 00:43 pts/0    00:00:00 grep 45660
oracle   45660     1  1 Oct23 ?        01:09:22 oracletest (LOCAL=NO)
[oracle@test:~/app/oracle (test)]$ kill -9 45660 
[oracle@test:~/app/oracle (test)] ps -ef | grep 45660
oracle   42181 16050  0 00:44 pts/0    00:00:00 grep 45660
oracle   45660     1  1 Oct23 ?        01:09:22 oracletest (LOCAL=NO)

kill -9 didn’t kill the process. As it was a standby, I thought to go ahead and abort the instance, and this is what happened when I tried startup mount:

SQL> startup mount
ORACLE instance started.

Total System Global Area 6.1466E+10 bytes
Fixed Size                  2174600 bytes
Variable Size            2.0535E+10 bytes
Database Buffers         4.0802E+10 bytes
Redo Buffers              126640128 bytes

ORA-03113: end-of-file on communication channel
Process ID: 47331
Session ID: 995 Serial number: 3

Alert log showed

Mon Oct 27 00:48:09 2014
ALTER DATABASE MOUNT
Mon Oct 27 00:53:10 2014
System State dumped to trace file /home/oracle/app/oracle/diag/rdbms/test_TEST_db/test/trace/test_ckpt_47252.trc
Mon Oct 27 00:53:11 2014
Errors in file /home/oracle/app/oracle/diag/rdbms/test_TEST_db/test/trace/test_asmb_47260.trc (incident=491160):
ORA-00600: internal error code, arguments: [600], [ORA_NPI_ERROR], [ORA-00600: internal error code, arguments: [kfnsInstanceReg00], [test:test_TEST_DB], [30000], [], [], [], [], [], [], [], [], []
], [], [], [], [], [], [], [], [], []
Incident details in: /home/oracle/app/oracle/diag/rdbms/test_TEST_db/test/incident/incdir_491160/test_asmb_47260_i491160.trc
Errors in file /home/oracle/app/oracle/diag/rdbms/test_TEST_db/test/trace/test_asmb_47260.trc:
ORA-15064: communication failure with ASM instance

ORA-00600: internal error code, arguments: [600], [ORA_NPI_ERROR], [ORA-00600: internal error code, arguments: [kfnsInstanceReg00], [test:test_TEST_DB], [30000], [], [], [], [], [], [], [], [], []
], [], [], [], [], [], [], [], [], []
Mon Oct 27 00:53:12 2014
Mon Oct 27 00:53:12 2014
Sweep Incident[491160]: completed
Mon Oct 27 00:53:12 2014
Trace dumping is performing id=[cdmp_20141027005312]
Errors in file /home/oracle/app/oracle/diag/rdbms/test_TEST_db/prod/trace/test_ckpt_47252.trc:
ORA-15083: failed to communicate with ASMB background process
CKPT (ospid: 47252): terminating the instance due to error 15083
Mon Oct 27 00:53:20 2014
Instance terminated by CKPT, pid = 47252

The CKPT trace showed


15083: Timeout waiting for ASMB to be ready (state=1)
ASMB is ALIVE
—– Abridged Call Stack Trace —–

SYSTEM STATE (level=10, with short stacks)
————
System global information:
processes: base 0xe90ea0088, size 900, cleanup 0xea0ec6598
allocation: free sessions 0xeb8f53d68, free calls (nil)
control alloc errors: 0 (process), 0 (session), 0 (call)
PMON latch cleanup depth: 0
seconds since PMON's last scan for dead processes: 1
system statistics:
22 logons cumulative
19 logons current
5 opened cursors cumulative
1 opened cursors current
0 user commits
0 user rollbacks
51 user calls
19 recursive calls
0 recursive cpu usage
0 session logical reads
0 session stored procedure space
0 CPU used when call started
0 CPU used by this session
1 DB time

Looking at the ASM instance alert log

System State dumped to trace file /home/oracle/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_33024.trc
Errors in file /home/oracle/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_33024.trc:

more /home/oracle/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_33024.trc


*** ACTION NAME:() 2014-10-27 02:18:39.197

kfnmsg_wait: KSR failure reaping message (0xffffff83)

*** 2014-10-27 02:18:39.197
Communication failure sending message to UFGs (0xffffff83)
kfnmsg_wait: opcode=35 msg=0x6b3f0928
2014-10-27 02:18:39.197762 :kfnmsg_Dump(): kfnmsg_SendtoUFGs: (kfnmsg) op=35 (KFNMS_GROUP_RLS) (gnum) [1.0]
CommunicationFailure 0xffffff83 after 300s

The server was hosting 2 standby DB instances and 1 ASM instance. We decided to bring down the other standby DB instance, then the ASM instance, and then finally restart everything. The shutdown immediate for the 2nd standby instance hung for a long time, so we aborted it. Even after ASM was brought down, we could still see the below 5 PIDs:

$ ps -ef | grep oracle
oracle 12032 16050 0 02:44 pts/0 00:00:00 grep arc
oracle 23135 1 0 Oct18 ? 00:02:15 ora_arc5_test
oracle 25390 1 0 Oct18 ? 00:01:19 ora_arc1_test2
oracle 25392 1 0 Oct18 ? 00:00:50 ora_arc2_test2

oracle 45660 1 1 Oct23 ? 01:09:22 oracleprod (LOCAL=NO)

Even after all the instances on the server were down, we could still see a few archiver PIDs and a PID with LOCAL=NO. kill -9 again didn’t kill them.

Started the ASM instance, and then, when I started the standby instance TEST, ASM crashed:

NOTE: check client alert log.
NOTE: Trace records dumped in trace file /home/oracle/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_57380.trc
Errors in file /home/oracle/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_57380.trc (incident=280159):
ORA-00600: internal error code, arguments: [kfupsRelease01], [], [], [], [], [], [], [], [], [], [], []
Incident details in: /home/oracle/app/oracle/diag/asm/+asm/+ASM/incident/incdir_280159/+ASM_ora_57380_i280159.trc

At this point we escalated to the client for a server reboot. After the server was rebooted, all the instances came up properly. Started the MRP and the standby came in sync.

You can also refer to the below Metalink documents:

Database startup Reports ORA-00600:[kfnsInstanceReg00] (Doc ID 1552422.1)
Bug 6934636 – Hang possible for foreground processes in ASM instance (Doc ID 6934636.8)
ORA-600 [kfnsInstanceReg00] and long waits on “ASM FILE METADATA OPERATION” (Doc ID 1480320.1)

Sangam 2014

I am back from the AIOUG meet “SANGAM 14 – Meeting of Minds” and it was a wonderful experience. Had a really nice time meeting some old friends and making a few new ones :). I think I should mention here that I finally met Amit Bansal, the mind behind http://askdba.org/. It was fun meeting him :)

It was a 3-day conference:

1st day :- Optimizer Master Class by Tom Kyte. It was a full-day seminar. Learned some new optimizer features of 12c. We were all very thankful to Tom for coming to India for the 3rd time to present at Sangam.

2nd day :- The day started with “What You Need To Know About Oracle Database In-Memory Option” by Maria Colgan. If I had to describe it in one word, I would say “Awesome!!!!”. I loved every minute of it; it was well presented and very informative. The rest of the day I moved from room to room, attending some good sessions on 12c and optimization.

3rd day :- I wasn’t too serious on the 3rd day :D. Mostly spent time meeting people and discussing Oracle. What I loved on the 3rd day was an hour-long session, “Time to Reinvent Yourself – Through Learning, Leading, and Failing” by Dr. Rajdeep Manwani. He shared his life experiences and some truths about human nature. It was amazing.

Overall, it was nice to be part of Sangam 14 and learn some really new things, mostly on 12c. Thanks to all the speakers and the organizing committee for the effort and valuable time they put in.

Thanks all!!!!!

Purging SYSAUX

In continuation of my previous post, “SYSAUX Growing Rapidly”, here I wanted to present a second method of purging SYSAUX.

Basically, I tried to perform the steps mentioned in the previous post, and drop_snapshot_range was taking too long (> 24hrs) and was still running on the test DB. Again, WRH$_ACTIVE_SESSION_HISTORY was at the top of the list, occupying most of the SYSAUX space.

SYS01> EXEC dbms_workload_repository.drop_snapshot_range(25155,26155,3179571572);


From Another session after some time 

SYS01> @asw

USERNAME		      SID    SERIAL# SPID	EVENT			       LAST_CALL_ET  WAIT_TIME SECONDS_IN_WAIT STATE		   SQL_ID	 PLAN_HASH_VALUE
------------------------- ------- ---------- ---------- ------------------------------ ------------ ---------- --------------- ------------------- ------------- ---------------
SYS			     7654	8641 47879	db file sequential read 		 28	    -1		     0 WAITED SHORT TIME   fqq01wmb4hgt8       763705880

SYS01> @orax fqq01wmb4hgt8
old   7:       sql_id  = '&&1'
new   7:       sql_id  = 'fqq01wmb4hgt8'

SQL_ID
-------------
SQL_FULLTEXT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
fqq01wmb4hgt8
delete from WRH$_FILESTATXS tab where (:beg_snap <= tab.snap_id and	    tab.snap_id = b.start_snap_id) and			  (tab.snap_id <= b.end_snap_id))

SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('fqq01wmb4hgt8',NULL,'typical +peeked_binds allstats last'))

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID	fqq01wmb4hgt8, child number 0
-------------------------------------
delete from WRH$_FILESTATXS tab where (:beg_snap <= tab.snap_id and
    tab.snap_id =
b.start_snap_id) and			      (tab.snap_id <=
b.end_snap_id))

Plan hash value: 763705880

---------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation			 | Name 	      | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | Pstart| Pstop |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT		 |		      |        |       |   325K(100)|	       |       |       |       |       |	  |
|   1 |  DELETE 			 | WRH$_FILESTATXS    |        |       |	    |	       |       |       |       |       |	  |
|*  2 |   FILTER			 |		      |        |       |	    |	       |       |       |       |       |	  |
|   3 |    MERGE JOIN ANTI		 |		      |    494M|    23G|   325K  (1)| 01:05:08 |       |       |       |       |	  |
|   4 |     PARTITION RANGE ITERATOR	 |		      |    494M|  8957M|   325K  (1)| 01:05:08 |   KEY |   KEY |       |       |	  |
|*  5 |      INDEX RANGE SCAN		 | WRH$_FILESTATXS_PK |    494M|  8957M|   325K  (1)| 01:05:08 |   KEY |   KEY |       |       |	  |
|*  6 |     FILTER			 |		      |        |       |	    |	       |       |       |       |       |	  |
|*  7 |      SORT JOIN			 |		      |      1 |    33 |     2	(50)| 00:00:01 |       |       | 73728 | 73728 |	  |
|*  8 |       TABLE ACCESS BY INDEX ROWID| WRM$_BASELINE      |      1 |    33 |     1	 (0)| 00:00:01 |       |       |       |       |	  |
|*  9 |        INDEX RANGE SCAN 	 | WRM$_BASELINE_PK   |      1 |       |     1	 (0)| 00:00:01 |       |       |       |       |	  |
---------------------------------------------------------------------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------

   1 - (NUMBER): 0
   2 - (NUMBER): 95781
   3 - (NUMBER): 3179571572

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(:BEG_SNAP=:BEG_SNAP AND "TAB"."SNAP_ID"="B"."START_SNAP_ID" AND "TAB"."SNAP_ID"=:BEG_SNAP AND "B"."START_SNAP_ID"<=:END_SNAP))
   9 - access("B"."DBID"=:DBID)

SYS01> col name format a10;
col VALUE_STRING format a30;
select name, position, datatype_string, was_captured, value_string,
anydata.accesstimestamp(value_anydata) from v$sql_bind_capture where sql_id = '&sqlid';
SYS01>   2  Enter value for sqlid: fqq01wmb4hgt8
old   2: anydata.accesstimestamp(value_anydata) from v$sql_bind_capture where sql_id = '&sqlid'
new   2: anydata.accesstimestamp(value_anydata) from v$sql_bind_capture where sql_id = 'fqq01wmb4hgt8'

NAME	     POSITION DATATYPE_STRING						   WAS VALUE_STRING
---------- ---------- ------------------------------------------------------------ --- ------------------------------
ANYDATA.ACCESSTIMESTAMP(VALUE_ANYDATA)
---------------------------------------------------------------------------
:BEG_SNAP	    1 NUMBER							   YES 0


:END_SNAP	    2 NUMBER							   YES 95781


:DBID		    3 NUMBER							   YES 3179571572

Interestingly, looking at the bind values shows value_string 0 and 95781 for BEG_SNAP and END_SNAP respectively, even though the input range for the drop snapshot was between 25155 and 26155.

The database was then refreshed by the client (so my session was no more), and I decided not to take the drop_snapshot_range approach. After going through a few blogs and MOS documents, we had 2 approaches :-

1. “Recreate the AWR tables as in MOS note 782974.1”, which would basically drop all the WRH$* tables and then recreate them. The AWR tables contain a wealth of important performance data, which can be very useful in performance-tuning trend analysis and in comparing performance between two separate periods of time. Hence, recreating AWR should, I believe, be the last resort. The activity needs to be done in startup restrict mode, so it requires downtime.

If you do plan to go forward with it, I would recommend exporting the AWR snapshot data using @?/rdbms/admin/awrextr.sql and keeping the dump. In the future it can be used by simply importing it into some other repository DB to get the AWR data back.

2. Simply delete the orphaned rows from the WRH$_ACTIVE_SESSION_HISTORY table and perform a shrink space cascade.
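The shrink part of that approach would be along these lines (a sketch; row movement must be enabled before the table can be shrunk):

SYS01> alter table wrh$_active_session_history enable row movement;
SYS01> alter table wrh$_active_session_history shrink space cascade;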

I went ahead with the 2nd approach and performed the below steps (Note :- the DB was a single-instance DB):


SYS01> SELECT COUNT(1) Orphaned_ASH_Rows
FROM wrh$_active_session_history a
WHERE NOT EXISTS
  (SELECT 1
  FROM wrm$_snapshot
  WHERE snap_id       = a.snap_id
  AND dbid            = a.dbid
  AND instance_number = a.instance_number
  );

ORPHANED_ASH_ROWS
-----------------
        301206452

SYS01> alter table wrh$_active_session_history parallel 4;

Table altered.

SYS01> alter session force parallel dml;

Session altered.

SYS01> DELETE /*+ PARALLEL(a,4) */
FROM wrh$_active_session_history a
WHERE NOT EXISTS
  (SELECT 1
  FROM wrm$_snapshot
  WHERE snap_id       = a.snap_id
  AND dbid            = a.dbid
  AND instance_number = a.instance_number
  );


From another session :-

SYS01> @asw

USERNAME		      SID    SERIAL# SPID	EVENT			       LAST_CALL_ET  WAIT_TIME SECONDS_IN_WAIT STATE
------------------------- ------- ---------- ---------- ------------------------------ ------------ ---------- --------------- -------------------
SQL_ID	      PLAN_HASH_VALUE
------------- ---------------
SYS			      921	1329 107213	db file sequential read 		 60	     0		     0 WAITING
144bpj4qg68m1	   2217072169

SYS			     1227	 889 107215	db file sequential read 		 60	     0		     0 WAITING
144bpj4qg68m1	   2217072169

SYS			     9181	3277 107211	db file sequential read 		 60	     1		     0 WAITED KNOWN TIME
144bpj4qg68m1	   2217072169

SYS			     3370	 455 107727	SQL*Net message to client		  0	    -1		     0 WAITED SHORT TIME
8tfjp8cd2xtd1	    193683216

SYS			     1840	 809 107217	PX Deq Credit: need buffer		 60	     0		     0 WAITING
144bpj4qg68m1	   2217072169

SYS			     8875	3889 107209	db file sequential read 		 60	     1		     0 WAITED KNOWN TIME
144bpj4qg68m1	   2217072169

SYS			     8266	3139 90257	PX Deq: Execute Reply	 60	     0		    60 WAITING
144bpj4qg68m1	   2217072169


SYS01> @parallel_sess

Username     QC/Slave SlaveSet SID					Slave INS STATE    WAIT_EVENT			  QC SID QC INS Req. DOP Actual DOP
------------ -------- -------- ---------------------------------------- --------- -------- ------------------------------ ------ ------ -------- ----------
SYS	     QC 	       8266					1	  WAIT	   PX Deq: Execute Reply	  8266
 - p000      (Slave)  1        8875					1	  WAIT	   db file sequential read	  8266	 1	       4	  4
 - p003      (Slave)  1        1227					1	  WAIT	   db file sequential read	  8266	 1	       4	  4
 - p001      (Slave)  1        9181					1	  WAIT	   db file sequential read	  8266	 1	       4	  4
 - p002      (Slave)  1        921					1	  WAIT	   db file sequential read	  8266	 1	       4	  4
 - p004      (Slave)  2        1840					1	  WAIT	   PX Deq Credit: send blkd	  8266	 1	       4	  4
 - p007      (Slave)  2        2757					1	  WAIT	   PX Deq: Execution Msg	  8266	 1	       4	  4
 - p006      (Slave)  2        2450					1	  WAIT	   PX Deq: Execution Msg	  8266	 1	       4	  4
 - p005      (Slave)  2        2147					1	  WAIT	   PX Deq: Execution Msg	  8266	 1	       4	  4

SYS01> SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('144bpj4qg68m1',NULL,'typical +peeked_binds allstats last'))

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID	144bpj4qg68m1, child number 0
-------------------------------------
DELETE /*+ PARALLEL(a,4) */ FROM wrh$_active_session_history a WHERE
NOT EXISTS   (SELECT 1	 FROM wrm$_snapshot   WHERE snap_id	  =
a.snap_id   AND dbid		= a.dbid   AND instance_number =
a.instance_number   )

Plan hash value: 2217072169

-----------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation			 | Name 			  | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | Pstart| Pstop |	TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT		 |				  |	   |	   |   106K(100)|	   |	   |	   |	    |	   |		|
|   1 |  PX COORDINATOR 		 |				  |	   |	   |		|	   |	   |	   |	    |	   |		|
|   2 |   PX SEND QC (RANDOM)		 | :TQ10001			  |    298M|	11G|   106K  (1)| 00:21:14 |	   |	   |  Q1,01 | P->S | QC (RAND)	|
|   3 |    DELETE			 | WRH$_ACTIVE_SESSION_HISTORY	  |	   |	   |		|	   |	   |	   |  Q1,01 | PCWP |		|
|   4 |     PX RECEIVE			 |				  |    298M|	11G|   106K  (1)| 00:21:14 |	   |	   |  Q1,01 | PCWP |		|
|   5 |      PX SEND HASH (BLOCK ADDRESS)| :TQ10000			  |    298M|	11G|   106K  (1)| 00:21:14 |	   |	   |  Q1,00 | P->P | HASH (BLOCK|
|   6 |       NESTED LOOPS ANTI 	 |				  |    298M|	11G|   106K  (1)| 00:21:14 |	   |	   |  Q1,00 | PCWP |		|
|   7 |        PX PARTITION RANGE ALL	 |				  |    298M|  7404M|   106K  (1)| 00:21:14 |	 1 |	 3 |  Q1,00 | PCWC |		|
|   8 | 	INDEX FULL SCAN 	 | WRH$_ACTIVE_SESSION_HISTORY_PK |    298M|  7404M|   106K  (1)| 00:21:14 |	 1 |	 3 |  Q1,00 | PCWP |		|
|*  9 |        INDEX UNIQUE SCAN	 | WRM$_SNAPSHOT_PK		  |	 1 |	15 |	 0   (0)|	   |	   |	   |  Q1,00 | PCWP |		|
-----------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   9 - access("DBID"="A"."DBID" AND "SNAP_ID"="A"."SNAP_ID" AND "INSTANCE_NUMBER"="A"."INSTANCE_NUMBER")

The deletion of the 301206452 orphaned rows completed with an elapsed time of 12:59:38.44.


301206452 rows deleted.

Elapsed: 12:59:38.44

SYS01> alter table wrh$_active_session_history noparallel ;

Table altered.

SYS01> select degree from dba_tables where table_name=upper('wrh$_active_session_history');

DEGREE
----------
         1

SYS01> SELECT COUNT(1) Orphaned_ASH_Rows FROM wrh$_active_session_history a
WHERE NOT EXISTS
  (SELECT 1
  FROM wrm$_snapshot
  WHERE snap_id       = a.snap_id
  AND dbid            = a.dbid
  AND instance_number = a.instance_number
  );

ORPHANED_ASH_ROWS
-----------------
	   309984

SYS01> DELETE /*+ PARALLEL(a,4) */ FROM wrh$_active_session_history a
WHERE NOT EXISTS
  (SELECT 1
  FROM wrm$_snapshot
  WHERE snap_id       = a.snap_id
  AND dbid            = a.dbid
  AND instance_number = a.instance_number
  );  

309984 rows deleted.

Elapsed: 00:00:19.08
SYS01> commit;

Commit complete.

Elapsed: 00:00:00.07

SYS01> SELECT COUNT(1) Orphaned_ASH_Rows FROM wrh$_active_session_history a
WHERE NOT EXISTS
  (SELECT 1
  FROM wrm$_snapshot
  WHERE snap_id       = a.snap_id
  AND dbid            = a.dbid
  AND instance_number = a.instance_number
  );

ORPHANED_ASH_ROWS
-----------------
		0

SYS01> alter table wrh$_active_session_history shrink space cascade;

Table altered.

Elapsed: 06:47:21.36

Before this activity SM/AWR was occupying 339 GB, which reduced to 209 GB. SM/OPTSTAT was also occupying 143 GB, and after confirmation from the client I purged the optimizer statistics history, as it was a test DB.

SYS01> exec DBMS_STATS.PURGE_STATS(DBMS_STATS.PURGE_ALL);

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.54

SYS01> COLUMN "Item" FORMAT A25
 COLUMN "Space Used (GB)" FORMAT 999.99
 COLUMN "Schema" FORMAT A25
 COLUMN "Move Procedure" FORMAT A40

    SELECT  occupant_name "Item",
    space_usage_kbytes/1048576 "Space Used (GB)",
    schema_name "Schema",
    move_procedure "Move Procedure"
    FROM v$sysaux_occupants
    WHERE occupant_name in  ('SM/AWR','SM/OPTSTAT')
    ORDER BY 1
    /

Item                      Space Used (GB) Schema                    Move Procedure
------------------------- --------------- ------------------------- ----------------------------------------
SM/AWR                             209.16 SYS
SM/OPTSTAT                          19.72 SYS

Savings in SYSAUX


TABLESPACE_NAME                  TSP_SIZE USED_SPACE FREE_SPACE   PCT_FREE
------------------------------ ---------- ---------- ---------- ----------
SYSAUX                             505856     496310       9546       1.89  --> Before Size

SYSAUX                             505856     237833     268023      52.98  --> After Size

Hope this helps :)

SYSAUX Growing rapidly!!! What can be done

Recently I have been working on cleaning up the SYSAUX tablespace for a few clients, so I thought I would put down my steps, which might be helpful to some of you out there.

Why does the SYSAUX tablespace grow much larger than expected?

There could be a number of potential reasons:

1. ASH data has grown too large (SM/AWR)
2. High Retention Period
3. Segment Advisor has grown too large
4. Increase in older version of Optimizer Statistics (SM/OPTSTAT)
5. Bugs Bugs Bugs!!!!!

How do we identify the SYSAUX space Usage?

There are basically 2 ways to identify the usage that I know of:

1. Running @?/rdbms/admin/awrinfo.sql --> gives detailed info like the schema breakdown, SYSAUX occupants' space usage, etc.

2.

COLUMN "Item" FORMAT A25
COLUMN "Space Used (GB)" FORMAT 999.99
COLUMN "Schema" FORMAT A25
COLUMN "Move Procedure" FORMAT A40

SELECT  occupant_name "Item",
   space_usage_kbytes/1048576 "Space Used (GB)",
   schema_name "Schema",
   move_procedure "Move Procedure"
   FROM v$sysaux_occupants
   ORDER BY 2
   /

 col owner for a6
 col segment_name for a50
  select * from
 (select owner,segment_name||'~'||partition_name segment_name,bytes/(1024*1024) size_m
 from dba_segments
 where tablespace_name = 'SYSAUX' ORDER BY BLOCKS desc) where rownum < 11;

In my case, the below 2 were occupying most of the space :-

1. SM/AWR
2. SM/OPTSTAT

SM/AWR — Refers to the Automatic Workload Repository. Data in this section is retained for a certain amount of time (default 8 days). The setting can be checked through DBA_HIST_WR_CONTROL.

SM/OPTSTAT — Stores older versions of optimizer statistics. The setting can be checked through dbms_stats.get_stats_history_retention. This is not a part of AWR and is not controlled by the AWR retention.
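Both retention settings can be checked (and, if needed, lowered) with a couple of quick queries; a sketch, assuming you are connected as a DBA user:

```sql
-- AWR snapshot interval and retention (default retention is 8 days)
select snap_interval, retention from dba_hist_wr_control;

-- Optimizer statistics history retention (default 31 days)
select dbms_stats.get_stats_history_retention from dual;

-- Example only: lower the statistics history retention to 15 days
exec dbms_stats.alter_stats_history_retention(15);
```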

When looking at the top segments, I saw WRH$_ACTIVE_SESSION_HISTORY occupying most of the space. Sometimes the AWR tables are not purged according to the settings in sys.wrm$_wr_control.

As per Oracle :-

Oracle decides what rows need to be purged based on the retention policy. There is a special mechanism which is used in the case of the large AWR tables where we store the snapshot data in partitions. One method of purging data from these tables is by removing partitions that only contain rows that have exceeded the retention criteria. During the nightly purge task, we only drop the partition if all the data in the partition has expired. If the partition contains at least one row which, according to the retention policy shouldn’t be removed, then the partition won’t be dropped and as such the table will contain old data.

If partition splits do not occur (for whatever reason), then we can end up with a situation where we have to wait for the latest entries to expire before the partition that they sit in can be removed. This can mean that some of the older entries can be retained significantly past their expiry date. The result of this is that the data is not purged as expected.

Diagnose and Reduce Used Space of SYSAUX.

Once the major occupants and top segments are identified as discussed above, we can start with the steps to rectify the growth.

With SM/AWR occupying most of the space, I think we can follow 3 methods. In this blog I will be posting only one of them :)

To check Orphaned ASH rows :-

 SELECT COUNT(1) Orphaned_ASH_Rows FROM wrh$_active_session_history a
  WHERE NOT EXISTS
  (SELECT 1
  FROM wrm$_snapshot
  WHERE snap_id       = a.snap_id
  AND dbid            = a.dbid
  AND instance_number = a.instance_number
  );

Check the minimum snap_id in the ASH table and then compare it to the minimum snap_id in dba_hist_snapshot.

select min(snap_id) from WRH$_ACTIVE_SESSION_HISTORY;
select min(snap_id) from dba_hist_snapshot;

Example :-

select min(snap_id),MAX(snap_id) from dba_hist_snapshot;

MIN(SNAP_ID) MAX(SNAP_ID)
------------ ------------
       17754        18523

select min(snap_id),MAX(snap_id) from WRH$_ACTIVE_SESSION_HISTORY;

MIN(SNAP_ID) MAX(SNAP_ID)
------------ ------------
           1        18523

As per the retention period, we should have data from snap_id 17754 till 18523, but the WRH$_ACTIVE_SESSION_HISTORY table has data from snap_id 1.

From Oracle MOS Doc :-

A potential solution to this issue is to manually split the partitions of the partitioned AWR objects so that there is more chance of the split partitions being purged. You will still have to wait for all the rows in the new partitions to reach their retention time, but with split partitions there is more chance of this happening. You can manually split the partitions using the following undocumented command:

alter session set "_swrf_test_action" = 72;



select table_name, count(*) from dba_tab_partitions where table_name like 'WRH$%' and table_owner = 'SYS'
group by table_name order by 1;

TABLE_NAME                                           COUNT(*)
-------------------------------------------------- ----------
WRH$_ACTIVE_SESSION_HISTORY                                 2
WRH$_DB_CACHE_ADVICE                                        2
WRH$_DLM_MISC                                               2
WRH$_EVENT_HISTOGRAM                                        2
WRH$_FILESTATXS                                            11
WRH$_INST_CACHE_TRANSFER                                    2
WRH$_INTERCONNECT_PINGS                                     2
........................
25 rows selected.

SQL>  alter session set "_swrf_test_action"=72; 

Session altered.

SQL>  select table_name,partition_name from dba_tab_partitions where table_name = 'WRH$_ACTIVE_SESSION_HISTORY';

TABLE_NAME                                         PARTITION_NAME
------------------------------  -------------------------------------------------------
WRH$_ACTIVE_SESSION_HISTORY                        WRH$_ACTIVE_1798927129_0
WRH$_ACTIVE_SESSION_HISTORY                        WRH$_ACTIVE_1798927129_18531  --> New Partition created 
WRH$_ACTIVE_SESSION_HISTORY                        WRH$_ACTIVE_SES_MXDB_MXSN

col table_name for a80
select table_name, count(*) from dba_tab_partitions where table_name like 'WRH$%' and table_owner = 'SYS' group by table_name order by 1

TABLE_NAME                                   COUNT(*)
------------------------------------------- ----------
WRH$_ACTIVE_SESSION_HISTORY                     3
WRH$_DB_CACHE_ADVICE                            3
WRH$_DLM_MISC                                   3
WRH$_EVENT_HISTOGRAM                            3
......................

25 rows selected.

In the above example, WRH$_ACTIVE_1798927129_18531 is the new partition created, where 1798927129 is the DBID and 18531 is the max(snap_id) when it was partitioned. So now we can start dropping the snapshot range, which in my case is from 1 to 17753, as 17754 is the min(snap_id) in dba_hist_snapshot.

SQL> EXEC dbms_workload_repository.drop_snapshot_range(1,17753,1798927129);

It can generate a good amount of redo and use undo, so keep monitoring the undo tablespace and make sure you have sufficient space.
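To keep an eye on undo while the purge runs, queries like these (a sketch) can be polled from another session:

```sql
-- Undo blocks/records consumed by active transactions
select s.sid, s.serial#, t.used_ublk, t.used_urec
  from v$transaction t, v$session s
 where t.addr = s.taddr;

-- Space breakdown of the undo tablespace by extent status
select tablespace_name, status, round(sum(bytes)/1024/1024) mb
  from dba_undo_extents
 group by tablespace_name, status;
```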

So, what happens when we run the above :-

SQL> @sqlid ft7m07stk3dws
old   9:        sql_id = ('&1')
new   9:        sql_id = ('ft7m07stk3dws')

SQL_ID                                  HASH_VALUE SQL_TEXT
--------------------------------------- ---------- ------------------------------------------------------------------------------------------------------------------------------------------------------
ft7m07stk3dws                            857847704 delete from WRH$_SYSTEM_EVENT tab where (:beg_snap <= tab.snap_id and
                                                   tab.snap_id <= :end_snap and dbid = :dbid) and not exists (select 1 from
                                                   WRM$_BASELINE b where (tab.dbid = b.dbid) and (tab.snap_id >= b.start_snap_id)
                                                   and (tab.snap_id <= b.end_snap_id))

SQL> @sqlid 854knbb15976z
old   9:        sql_id = ('&1')
new   9:        sql_id = ('854knbb15976z')

SQL_ID                                  HASH_VALUE SQL_TEXT
--------------------------------------- ---------- ------------------------------------------------------------------------------------------------------------------------------------------------------
854knbb15976z                           3260325087 delete from WRH$_SQLSTAT tab where (:beg_snap <= tab.snap_id and
                                                   tab.snap_id <= :end_snap and dbid = :dbid) and not exists (select 1 from
                                                   WRM$_BASELINE b where (tab.dbid = b.dbid) and (tab.snap_id >= b.start_snap_id)
                                                   and (tab.snap_id <= b.end_snap_id))

So internally Oracle runs DELETE commands, which cause high redo and undo generation :)

Once the procedure completes successfully, check the min(snap_id) in WRH$_ACTIVE_SESSION_HISTORY and perform a shrink space cascade.


select owner,segment_name,round(sum(bytes/1024/1024),2) MB, tablespace_name from dba_segments where segment_name = upper('WRH$_ACTIVE_SESSION_HISTORY') group by owner,segment_name,tablespace_name;

OWNER       SEGMENT_NAME                      MB           TABLESPACE_NAME
-------  ---------------------------------- -----------  -------------------
SYS        WRH$_ACTIVE_SESSION_HISTORY        3538.06          SYSAUX

SQL> alter table WRH$_ACTIVE_SESSION_HISTORY shrink space cascade;

Table altered.


OWNER       SEGMENT_NAME                      MB           TABLESPACE_NAME
-------  ---------------------------------- -----------  -------------------
SYS        WRH$_ACTIVE_SESSION_HISTORY        46.75          SYSAUX

In a similar fashion, other WRH$ tables can be shrunk to free up space in SYSAUX.
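For example, a pass over WRH$_EVENT_HISTOGRAM (a hypothetical choice; pick whichever WRH$ table your top-segments query flags) would look like this. Note that shrink requires row movement to be enabled on the table; check dba_tables.row_movement first if unsure:

```sql
-- Sketch only: enable row movement if it is not already on, then shrink
alter table wrh$_event_histogram enable row movement;
alter table wrh$_event_histogram shrink space cascade;
```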

Hope this helps!!!

References :-

WRH$_ACTIVE_SESSION_HISTORY Does Not Get Purged Based Upon the Retention Policy (Doc ID 387914.1)
Suggestions if Your SYSAUX Tablespace Grows Rapidly or Too Large (Doc ID 1292724.1)