ORA-00353: log corruption near block change time – 11.2.0.3

Recently I came across the “ORA-00353: log corruption near block change time” error.

Errors in file /oracle/app/diag/rdbms/test/test/trace/test_ms00_54284.trc  (incident=1490189):
ORA-00353: log corruption near block  change  time
ORA-00353: log corruption near block 4631374 change 263470473806052 time 04/16/2015 06:53:16
ORA-00312: online log 43 thread 3: '+REDO02/test/onlinelog/group_43.258.875922665'
ORA-00312: online log 43 thread 3: '+REDO01/test/onlinelog/group_43.258.875922633'
Incident details in: /oracle/app/diag/rdbms/test/test/incident/incdir_1490189/test_ms00_54284_i1490189.trc
Thu Apr 16 08:14:42 2015
Sweep [inc][1490188]: completed
Thu Apr 16 08:14:55 2015
LOGMINER: WARNING: error 353 encountered, failed to read corrupt logfile +REDO01/test/onlinelog/group_43.258.875922633

As per “ORA-354 ORA-353 and ORA-312 on Redo Log Group members (Doc ID 332672.1)”:

Possible corruption in Online Redo Log File Members in a Redo Log Group.

If Oracle detects a corruption when archiving a log member, it will try to read the same redo block from the second member of the group. If the block is found corrupt in all the members of the redo log group, archiving will not proceed.

I started validating the online redo log file members and, luckily, none of them were corrupt. In case you find corrupt members, please refer to the above-mentioned Doc ID 332672.1 for the solution.

As a precaution, I validated all the online redo log files to check for corruption; if a redo log file is corrupt, the validation should produce an error.

SQL> alter system dump logfile '+REDO01/test/onlinelog/group_43.258.875922633' validate;

System altered.

Elapsed: 00:01:21.89

SQL> alter system dump logfile '+REDO02/test/onlinelog/group_43.258.875922665' validate;

System altered.

Elapsed: 00:01:10.61

Interestingly, I noticed the trace file “test_ms00_54284.trc” was 49 GB in size and continuously growing. The file was being written to continuously by a GoldenGate process.

The issue seemed similar to the one mentioned in

Bug 13840711 – ORA-353 in Standby / Streams Data Capture or ORA-272 in PRIMARY: Redo log corruption by ASYNC redo shipping (Doc ID 13840711.8)

Corrupt redo blocks are read from in memory log buffer (not from disk) by ASYNC redo shipping or hotmining clients.

ORA-353 may be reported by the process reading the redo log block from in memory log buffer (e.g. Logminer Reader – MSnn background process, Streams data capture).

Looking at the GoldenGate status showed:

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
JAGENT      RUNNING
EXTRACT     RUNNING     DPUMP       00:00:00      00:00:03
EXTRACT     RUNNING     EXTL        02:05:36      00:00:07
REPLICAT    RUNNING     REPL        00:00:00      00:00:06

Meanwhile we received another error; this time the extract process, which had failed reading from the online redo log file, failed reading from the archived log file too (see the errors below). I checked the standby database, and it was in sync with the primary.
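To cross-check that the standby was keeping up, a query along these lines can be run on the primary (a minimal sketch; dest_id 2 is assumed to be the standby destination):

select thread#,
       max(sequence#) last_archived,
       max(case when applied = 'YES' then sequence# end) last_applied
from v$archived_log
where dest_id = 2
group by thread#;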

Errors in file /oracle/app/diag/rdbms/test/test/trace/test_ms00_54284.trc (incident=1490190):
ORA-00353: log corruption near block 4631374 change 605163911323 time 01/06/1988 07:26:12
ORA-00334: archived log: ‘+FLASH01/test/archivelog/2015_04_16/thread_3_seq_9370.7790.877158359′

Incident details in: /oracle/app/diag/rdbms/test/test/incident/incdir_1490190/test_ms00_54284_i1490190.trc
Thu Apr 16 09:19:13 2015
Dumping diagnostic data in directory=[cdmp_20150416091913], requested by (instance=1, osid=54284 (MS00)), summary=[incident=1490190].
Thu Apr 16 09:19:27 2015
Errors in file /oracle/app/diag/rdbms/test/test/trace/test_ms00_54284.trc (incident=1490191):
ORA-00356: inconsistent lengths in change description
ORA-00353: log corruption near block 4631374 change 605163911323 time 01/06/1988 07:26:12
ORA-00334: archived log: ‘+FLASH01/test/archivelog/2015_04_16/thread_3_seq_9370.7790.877158359′
Incident details in: /oracle/app/diag/rdbms/test/test/incident/incdir_1490191/test_ms00_54284_i1490191.trc

At this point, the client stopped all the GoldenGate processes, deleted the trace file and started the processes again. I started monitoring the extract process:

 send extract EXTL status

Sending STATUS request to EXTRACT EXTL ...


EXTRACT EXTL (PID 102390)
  Current status: In recovery[1]: Processing data

  Current read position:
  Redo thread #: 3
  Sequence #: 9351
  RBA: 4023516192
  Timestamp: 2015-04-16 03:37:35.000000
  SCN: 140.3838686301
  Current write position:
  Sequence #: 1637
  RBA: 1551
  Timestamp: 2015-04-16 09:31:56.423132
  Extract Trail: /oracle2/app/product/ogg12.1.2/dirdat/capture/sd

I monitored the extract, and it was successfully able to read and extract from sequence 9370, on which it had earlier failed.

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
JAGENT      RUNNING
EXTRACT     RUNNING     DPUMP       00:00:01      00:00:07
EXTRACT     RUNNING     EXTL        00:00:01      00:00:00
REPLICAT    RUNNING     REPL        00:00:00      00:00:06

send extract EXTL status

Sending STATUS request to EXTRACT EXTL ...


EXTRACT EXTL (PID 102390)
  Current status: Recovery complete: Processing data

  Current read position:
  Redo thread #: 3
  Sequence #: 9395
  RBA: 1721029512
  Timestamp: 2015-04-16 12:08:00.000000
  SCN: 140.3917853576
  Current write position:
  Sequence #: 1641
  RBA: 997030325
  Timestamp: 2015-04-16 12:07:57.056673
  Extract Trail: /oracle2/app/product/ogg12.1.2/dirdat/capture/sd

lag extract EXTL

Sending GETLAG request to EXTRACT EXTL ...
Last record lag 0 seconds.

lag extract DPUMP

Sending GETLAG request to EXTRACT DPUMP ...
Last record lag 2 seconds.
At EOF, no more records to process.

The current DB version is 11.2.0.3.13. As mentioned in the doc, the fix for bug 13840711 is first included in the 11.2.0.3.14 (Apr 2015) Database Patch Set Update (DB PSU), and we plan to apply it in the coming weeks.
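For reference, the PSU level already applied can be checked from the registry history (a simple sketch):

select action_time, action, version, id, comments
from dba_registry_history
order by action_time;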

ORA-01628: max # extents (32765) reached for rollback segment _SYSSMUxxx

Recently I came across the “ORA-01628: max # extents (32765)” error. The database had the resumable_timeout parameter set, and I received a page for a session in resumable state. Looking into the alert log showed:

statement in resumable session ‘User USER_E3(375), Session 9028, Instance 1′ was suspended due to
ORA-01628: max # extents (32765) reached for rollback segment _SYSSMU375_247595031$
Wed Apr 08 02:41:42 2015

Looking at the details for SID 9028, an INSERT statement was running and the session was waiting on “statement suspended, wait error to be cleared”, as the resumable_timeout parameter was set.
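Suspended resumable statements can also be listed from DBA_RESUMABLE; a minimal sketch of that kind of check:

-- resumable statements currently suspended, with the error that suspended them
select session_id, instance_id, status, name, error_number, error_msg, suspend_time
from dba_resumable
where status = 'SUSPENDED';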

PRD04> @undo_stats
 
Undo Stats for Automatic Undo Management for Last 3 Hours
 
BEGIN_TIME         END_TIME           ACTIVEBLKS UNEXPIREDBLKS EXPIREDBLKS TUNED_UNDORETENTION   TXNCOUNT MAXQUERYLEN NOSPACEERRCNT
------------------ ------------------ ---------- ------------- ----------- ------------------- ---------- ----------- -------------
07-APR-15          08-APR-15              207496      22622976     6746320               46384     670317       11587             0
08-APR-15          08-APR-15              204488      22562776     6797160               46215     736330       10327             0
08-APR-15          08-APR-15              119832      22329656     7295296               46251     645728       10934             0
08-APR-15          08-APR-15              171992      22368920     7304336               46010     532932       11539             0
08-APR-15          08-APR-15              109968      22591672     7245352               46057     649534       12145             0
08-APR-15          08-APR-15              113016      22505688     7349552               45925     679114       12751             0
08-APR-15          08-APR-15              218048      22355184     7317048               46147     665002       13357             0
08-APR-15          08-APR-15              242368      22534704     7360800               46188     695974       13962             0
08-APR-15          08-APR-15              298368      22489016     7546968               45427     622305        5092             0
08-APR-15          08-APR-15               98792      22417864     7941104               44771     587418         702             0
08-APR-15          08-APR-15              151888      22744560     7860176               43790     703158        1308             0
08-APR-15          08-APR-15              353272      22606848     8085272               42483     670949        1913             0
08-APR-15          08-APR-15              679376      21995016     8654552               40691     662446        1245             0
08-APR-15          08-APR-15             1172864      21419400     8860792               39531     780894        1851             0
08-APR-15          08-APR-15             1093784      21696040     8664424               36463          0        2457             0
08-APR-15          08-APR-15             1374528      21315992     8761640               33362     733162        3062             0
08-APR-15          08-APR-15             1433128      21166848     8852944               31892     755982        3668           158
08-APR-15          08-APR-15             1454984      21040312     8960960               31767     692704        4274           293
08-APR-15          08-APR-15             1454984      21040312     8960960               31596     266300        4577           108
 
19 rows selected.

Transactions Using Rollback:
 
RBS        OS User         DB User         SID   Schema     Object Name          Type       Used RBS Blocks  # of Records
---------- --------------- --------------- ----- ---------- -------------------- ---------- ---------------- ------------------
...................
...................
_SYSSMU375 root            USER_E3         9028  ADMIN_E TESTS             TABLE PART 385255           23794634
_SYSSMU375 root            USER_E3         9028  ADMIN_E TESTS             TABLE      385255           23794634
...................
...................

26 rows selected.

 
PRD04> show parameter undo
 
NAME                                 TYPE       VALUE
------------------------------------ ---------- ------------------------------
undo_management                      string     AUTO
undo_retention                       integer    7200
undo_tablespace                      string     UNDOTBS4

PRD04> @undo_extent_stats
 
STATUS                                 MB       PERC
------------------------------ ---------- ----------
ACTIVE                              11741          3
EXPIRED                            225670         49
UNEXPIRED                          170579         37

Looking at the sessions with active transactions:

PRD04> SELECT s.sid, s.serial#, s.username, u.segment_name, count(u.extent_id) "Extent Count", t.used_ublk, t.used_urec, s.program
FROM v$session s, v$transaction t, dba_undo_extents u
WHERE s.taddr = t.addr and u.segment_name like '_SYSSMU'||t.xidusn||'_%$' and u.status = 'ACTIVE'
GROUP BY s.sid, s.serial#, s.username, u.segment_name, t.used_ublk, t.used_urec, s.program
ORDER BY t.used_ublk desc, t.used_urec desc, s.sid, s.serial#, s.username, s.program;
 
    SID    SERIAL# USERNAME               SEGMENT_NAME                   Extent Count  USED_UBLK  USED_UREC PROGRAM
------- ---------- ---------------------- ------------------------------ ------------ ---------- ---------- ------------------------------------------------
   9028      15075 USER_E3                _SYSSMU375_247595031$                 32726     389285   24043132 php@script01.example.com
   2317      38231 USER_E2                _SYSSMU344_2152184027$                   11       8303     509953 php@script06.example.com
   8930      34713 USER_E3                _SYSSMU366_2128258416$                    6       4930     401890 php@script02.example.com
   5469       3267 USER_E4                _SYSSMU362_25371301$                      3       1704     116802 php@script02.example.com
..........
..........

13 rows selected

As per the Metalink Doc “Troubleshooting ORA-1628 – max # extents (32765) reached for rollback segment (Doc ID 1580182.1)”:

With automatic undo, you have no control over the extent sizes, nor do you have the ability to shrink them. This all happens automatically. The system automatically decides on the extent size, however it will generally start allocating larger extents if an undo segment is extending a lot. The large number of extents in the rollback segment is likely due to fragmentation in the undo tablespace: Oracle can probably only allocate extents of 64k due to fragmentation, so it is very probable to hit the max extents issue.

The maximum number of extents for undo segments is limited to 32K, and a long/large running transaction can exhaust this limit by adding new extents (if the next extent of the current one is not an expired one) and will finally receive ORA-1628.

So, after a transaction extends an undo segment up to this limit and gets ORA-1628, future transactions will not be allowed to bind to that undo segment until it is shrunk (you may see that the number of extents decreases).

So, the two major causes of the ORA-1628 issue are a very large transaction or undo tablespace fragmentation.

In the case of a large transaction, the fix is to split the large transaction into smaller ones (e.g. commit more frequently).

In the case of undo tablespace fragmentation, the fix is to recreate the undo tablespace (this is also the recommended solution of Bug 10330444 and Bug 10229998, which were filed for the same issue and closed as not a bug); see the sketch below.
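For the fragmentation case, a minimal sketch of the recreation approach (tablespace names, file spec and sizes are hypothetical; check for active transactions bound to the old undo segments first):

create undo tablespace UNDOTBS5 datafile '+DATA' size 10g autoextend on next 1g maxsize 30g;
alter system set undo_tablespace = 'UNDOTBS5' scope=both;
-- drop the old undo tablespace only once no active transactions are still using it
drop tablespace UNDOTBS4 including contents and datafiles;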

The UNDO tablespace still had around 10 GB free, so adding space to UNDO wouldn’t have resolved the issue. We informed the client’s dev team and it was taken care of.

Below are a few SQL queries related to UNDO which I think can be useful while diagnosing such issues.


col "Parameter" format A32
col "Session Value" format A32
col "Instance Value" format A32
select a.inst_id, a.ksppinm "Parameter",
b.ksppstvl "Session Value",
c.ksppstvl "Instance Value"
from x$ksppi a, x$ksppcv b, x$ksppsv c
where a.indx = b.indx and a.indx = c.indx
and a.inst_id=b.inst_id and b.inst_id=c.inst_id
and a.ksppinm in ('_smu_debug_mode', '_highthreshold_undoretention')
order by 2;

   INST_ID Parameter			    Session Value		     Instance Value
---------- -------------------------------- -------------------------------- --------------------------------
	 1 _highthreshold_undoretention     4294967294			     4294967294
	 1 _smu_debug_mode		    0				     0


select segment_name,
   round(nvl(sum(act),0)/(1024*1024*1024),3 ) "ACT GB BYTES",
   round(nvl(sum(unexp),0)/(1024*1024*1024),3) "UNEXP GB BYTES",
   round(nvl(sum(exp),0)/(1024*1024*1024),3) "EXP GB BYTES",
   NO_OF_EXTENTS
   from ( select segment_name, nvl(sum(bytes),0) act,00 unexp, 00 exp, count(*) NO_OF_EXTENTS
   from DBA_UNDO_EXTENTS
   where status='ACTIVE' and tablespace_name = 'UNDOTBS4'
   group by segment_name
   union
   select segment_name,00 act, nvl(sum(bytes),0) unexp, 00 exp , count(*) NO_OF_EXTENTS
   from DBA_UNDO_EXTENTS
   where status='UNEXPIRED' and tablespace_name = 'UNDOTBS4'
   group by segment_name
   union
   select segment_name, 00 act, 00 unexp, nvl(sum(bytes),0) exp, count(*) NO_OF_EXTENTS
   from DBA_UNDO_EXTENTS
   where status='EXPIRED' and tablespace_name = 'UNDOTBS4'
   group by segment_name
   ) group by segment_name, NO_OF_EXTENTS having NO_OF_EXTENTS >= 30 order by 5 desc;

SEGMENT_NAME		       ACT GB BYTES UNEXP GB BYTES EXP GB BYTES NO_OF_EXTENTS
------------------------------ ------------ -------------- ------------ -------------
_SYSSMU375_247595031$			  0	    10.082	      0 	32765
_SYSSMU313_3664260743$			  0	     1.209	      0 	 2925
_SYSSMU316_1833322353$			  0		 0	   .607 	 2299
_SYSSMU367_1976067076$			  0		 0	   .788 	 2003
_SYSSMU577_2158293664$			  0		 0	  1.186 	 1710
_SYSSMU503_170622332$			  0		 0	   .604 	 1617
_SYSSMU330_231237204$			  0	      .899	      0 	 1523
_SYSSMU336_3955560249$			  0	     1.375	      0 	 1240
_SYSSMU5059_1468117272$ 		  0		.5	      0 	 1089
_SYSSMU306_1485117688$			  0	    12.145	      0 	  964

We can now take a look at the extent details for the specific undo segment which caused the error. The undo segment was primarily made up of tiny (64 KB) extents:

PRD04> break on report
PRD04> compute sum label Total of Extent_Count Extent_MB on report
PRD04> col Extent_MB format 999,999.00
PRD04> SELECT segment_name, bytes/1024 "Extent_Size_KB", count(extent_id) "Extent_Count", bytes * count(extent_id) / power(1024, 2) "Extent_MB" FROM dba_undo_extents WHERE segment_name = '_SYSSMU375_247595031$' group by segment_name, bytes order by 1, 3 desc;

SEGMENT_NAME		       Extent_Size_KB Extent_Count   Extent_MB
------------------------------ -------------- ------------ -----------
_SYSSMU375_247595031$			   64	     31858    1,991.13
_SYSSMU375_247595031$			 8192	       493    3,944.00
_SYSSMU375_247595031$			 4096	       178	712.00
_SYSSMU375_247595031$			 2048		23	 46.00
_SYSSMU375_247595031$			 1024		22	 22.00
_SYSSMU375_247595031$			16384		22	352.00
_SYSSMU375_247595031$			65536		17    1,088.00
_SYSSMU375_247595031$			 3072		12	 36.00
_SYSSMU375_247595031$			24576		12	288.00
_SYSSMU375_247595031$			12288		10	120.00
_SYSSMU375_247595031$			49152		 8	384.00
_SYSSMU375_247595031$			 1152		 7	  7.88
_SYSSMU375_247595031$			 1088		 6	  6.38
_SYSSMU375_247595031$			 3136		 6	 18.38
_SYSSMU375_247595031$			32768		 5	160.00
_SYSSMU375_247595031$			57344		 5	280.00
_SYSSMU375_247595031$			20480		 5	100.00
_SYSSMU375_247595031$			28672		 5	140.00
_SYSSMU375_247595031$			40960		 4	160.00
_SYSSMU375_247595031$			 1216		 4	  4.75
_SYSSMU375_247595031$			 6144		 4	 24.00
_SYSSMU375_247595031$			 3264		 3	  9.56
_SYSSMU375_247595031$			 3200		 3	  9.38
_SYSSMU375_247595031$			 5248		 2	 10.25
_SYSSMU375_247595031$			 2240		 2	  4.38
_SYSSMU375_247595031$			 3776		 2	  7.38
_SYSSMU375_247595031$			 2368		 2	  4.63
_SYSSMU375_247595031$			 2112		 2	  4.13
_SYSSMU375_247595031$			 5120		 2	 10.00
_SYSSMU375_247595031$			 1344		 2	  2.63
_SYSSMU375_247595031$			 3328		 2	  6.50
_SYSSMU375_247595031$			 2752		 2	  5.38
_SYSSMU375_247595031$			 3392		 2	  6.63
_SYSSMU375_247595031$			 2432		 2	  4.75
_SYSSMU375_247595031$			 7616		 1	  7.44
_SYSSMU375_247595031$			36864		 1	 36.00
_SYSSMU375_247595031$			 3584		 1	  3.50
_SYSSMU375_247595031$			 4032		 1	  3.94
_SYSSMU375_247595031$			 2304		 1	  2.25
_SYSSMU375_247595031$			 7872		 1	  7.69
_SYSSMU375_247595031$			 1280		 1	  1.25
_SYSSMU375_247595031$			11776		 1	 11.50
_SYSSMU375_247595031$			 5440		 1	  5.31
_SYSSMU375_247595031$			 7360		 1	  7.19
_SYSSMU375_247595031$			11264		 1	 11.00
_SYSSMU375_247595031$			 3712		 1	  3.63
_SYSSMU375_247595031$			35520		 1	 34.69
_SYSSMU375_247595031$			14336		 1	 14.00
_SYSSMU375_247595031$			59648		 1	 58.25
_SYSSMU375_247595031$			 6400		 1	  6.25
_SYSSMU375_247595031$			 2560		 1	  2.50
_SYSSMU375_247595031$			 3456		 1	  3.38
_SYSSMU375_247595031$			37888		 1	 37.00
_SYSSMU375_247595031$			 9536		 1	  9.31
_SYSSMU375_247595031$			 6336		 1	  6.19
_SYSSMU375_247595031$			15168		 1	 14.81
_SYSSMU375_247595031$			14400		 1	 14.06
_SYSSMU375_247595031$			 3520		 1	  3.44
_SYSSMU375_247595031$			 6528		 1	  6.38
_SYSSMU375_247595031$			19520		 1	 19.06
_SYSSMU375_247595031$			 1600		 1	  1.56
_SYSSMU375_247595031$			 2176		 1	  2.13
_SYSSMU375_247595031$			 6272		 1	  6.13
_SYSSMU375_247595031$			 9344		 1	  9.13
_SYSSMU375_247595031$			 5376		 1	  5.25
					      ------------ -----------
Total						     32765   10,324.25

65 rows selected.

ORA-17629 ORA-20079

Yesterday I received a page for “ORA-17629: Cannot connect to the remote database server” reported in the primary database alert log.

Fri Apr 10 06:47:41 2015
Errors in file /oracle/app/diag/rdbms/testdb/testdb/trace/testdb_ora_161991.trc:
ORA-17627:
ORA-17629: Cannot connect to the remote database server
Errors in file /oracle/app/diag/rdbms/testdb/testdb/trace/testdb_ora_161991.trc:
ORA-17629: Cannot connect to the remote database server
ORA-17627:
ORA-17629: Cannot connect to the remote database server
Fri Apr 10 06:48:22 2015

The error trace file shows

*** 2015-04-10 06:47:13.005
*** SESSION ID:(3024.45081) 2015-04-10 06:47:13.005

*** CLIENT ID:() 2015-04-10 06:47:13.005
*** SERVICE NAME:(testdb) 2015-04-10 06:47:13.005
*** MODULE NAME:(oracle@oracle2.testdbds.com (TNS V1-V3)) 2015-04-10 06:47:13.005

*** ACTION NAME:() 2015-04-10 06:47:13.005

krbmsrvgrcf:Expected db_unique_name: testdb
krbmsrvgrcf:mydbuname = :testdb:
krbmsrvgrcf:Instance with connect id testdbSBY requested controlfile.
krbmsrvgrcf:Filespec: /oracle/app/product/11.2_1/dbs/snappcf_testdb.file

*** 2015-04-10 06:47:41.467
OCI error val is 184283056 and errmsg is ”
ORA-17627:
ORA-17629: Cannot connect to the remote database server
******************** WARNING ***************************
The errors during Server autobackup are not fatal, as it
is attempted after sucessful completion of the command.
However, it is recomended to take an RMAN control file
backup as soon as possible because the Autobackup failed
with the following error:
ORA-17629: Cannot connect to the remote database server
ORA-17627:
ORA-17629: Cannot connect to the remote database server
******************** END OF WARNING *******************
ORA-17629: Cannot connect to the remote database server
ORA-17627:
ORA-17629: Cannot connect to the remote database server

The trace file mentions the SID, SERIAL# and the module name. The module shows “oracle@oracle2.testdbds.com”, which is the standby database server.

I thought I would check the session details from v$active_session_history; the session was coming in from the standby database and was definitely related to RMAN.
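The @ash_sid script used below is a custom one; assuming it simply reads v$active_session_history for the given SID and serial#, a minimal equivalent would be:

select sample_time, sql_id, seq#, event, p1, p2, p3, wait_class, module
from v$active_session_history
where session_id = 3024
and session_serial# = 45081
order by sample_time;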

SQL> @ash_sid
Enter value for sid: 3024
Enter value for serial: 45081
old  17: where    session_id = &sid and session_serial# = &serial
new  17: where    session_id = 3024 and session_serial# = 45081

                                                                         SQL                                                                                        Dur   CPU DB Time
Sid,Serial<Blk  STIME    Module               SqlId:ChildNo    SqlExecId Start CPH S PRXJBZ Sta    SEQ# EVENT                            P1:P2:P3   WCLASS           ms    ms      ms
--------------- -------- -------------------- ---------------- --------- ----- ------------ ---- ------ ------------------------ ------------------ ----------- ------- ----- -------
3024,45081      06:47:13 oracle@oracle2.testdbds.com                           NNN N NNNNN  Wait    129 control file single writ 9999999:  1024:  9 System I/O      105     8     113
                06:47:14                                                       NNN N NNNNN  Wait    269 control file single writ 9999999:  5504:  9 System I/O
                06:47:15                                                       NNN N NNNNN  Wait    395 control file single writ 9999999:  9536:  9 System I/O
                06:47:16                                                       NNN N NNNNN  Wait    473 control file single writ 9999999: 12032:  9 System I/O
                06:47:17                                                       NNN N NNNNN  Wait    560 control file sequential  9999999: 14848:  9 System I/O
                06:47:18                                                       NNN N NNNNN  Wait    673 control file single writ 9999999: 18432:  9 System I/O
                06:47:19                                                       NNN N NNNNN  Wait    768 control file sequential  9999999: 21504:  9 System I/O
                06:47:20                                                       NNN N NNNNN  Wait    863 control file single writ 9999999: 24512:  9 System I/O
                06:47:21                                                       NNN N NNNNN  Wait    959 control file single writ 9999999: 27584:  9 System I/O
                06:47:22                                                       NNN N NNNNN  cpu    1044                          9999999: 30336:  9
                06:47:23                                                       NNN N NNNNN  cpu    1132                          9999999: 33152:  9
                06:47:24                                                       NNN N NNNNN  Wait   1220 control file sequential  9999999: 35968:  9 System I/O
                06:47:25                                                       NNN N NNNNN  Wait   1318 control file sequential  9999999: 39104:  9 System I/O
                06:47:26                                                       NNN N NNNNN  Wait   1405 control file single writ 9999999: 41856:  9 System I/O
                06:47:27                                                       NNN N NNNNN  Wait   1489 control file single writ 9999999: 44544:  9 System I/O
                06:47:28                                                       NNN N NNNNN  Wait   1567 control file single writ 9999999: 47040:  9 System I/O
                06:47:29                                                       NNN N NNNNN  Wait   1649 control file single writ 9999999: 49664:  9 System I/O
                06:47:30                                                       NNN N NNNNN  Wait   1733 control file single writ 9999999: 52352:  9 System I/O
                06:47:31                                                       NNN N NNNNN  cpu    1817                          9999999: 55040:  9
                06:47:32                                                       NNN N NNNNN  Wait   1895 control file single writ 9999999: 57536:  9 System I/O
                06:47:33                                                       NNN N NNNNN  Wait   1965 control file single writ 9999999: 59776:  9 System I/O
                06:47:34                                                       NNN N NNNNN  Wait   2041 control file single writ 9999999: 62208:  9 System I/O
                06:47:35                                                       NNN N NNNNN  Wait   2113 control file single writ 9999999: 64512:  9 System I/O
                06:47:36                                                       NNN N NNNNN  Wait   2179 control file single writ 9999999: 66624:  9 System I/O
                06:47:37                                                       NNN N NNNNN  Wait   2238 control file sequential  9999999: 68544:  9 System I/O
                06:47:38                                                       NNN N NNNNN  Wait   2307 control file single writ 9999999: 70720:  9 System I/O
                06:47:39                                                       NNN N NNNNN  Wait   2369 control file single writ 9999999: 72704:  9 System I/O
                06:47:40                                                       NNN N NNNNN  Wait   2444 control file sequential  9999999: 75136:  9 System I/O


28 rows selected.

Looking into MOS I found “RMAN-06820 ORA-17629 During Backup at Standby Site (Doc ID 1616074.1)”. It stated something interesting which I did not know, so I thought I would mention it here:

Change in 11.2.0.4 onward

Per ‘unpublished’ Bug 8740124, as of 11.2.0.4, we now include the current standby redo log as part of an RMAN archivelog backup at the standby site. This is achieved by forcing a log switch at the primary site.

I looked at the archive log backup log file on the standby database and no error was reported there. Then I started looking into the backup script and saw that, after the backup completes, a resync function is called which connects to the rcat database and performs a catalog resync.

cle2.testdbs logs]$ more testdb_backup_testdb_resync_201504100617_Fri.log

Recovery Manager: Release 11.2.0.3.0 – Production on Fri Apr 10 06:46:55 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

RMAN>
echo set on

RMAN>

RMAN> connect target *
connected to target database: testdb (DBID=3948365078)

RMAN> connect catalog *
connected to recovery catalog database

RMAN> resync catalog;
starting partial resync of recovery catalog
ORA-20079: full resync from primary database is not done

doing automatic resync from primary
resyncing from database with DB_UNIQUE_NAME testdb
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of resync command on default channel at 04/10/2015 06:47:13
ORA-17629: Cannot connect to the remote database server

ORA-17628: Oracle error 17629 returned by remote Oracle server

RMAN> exit;

Recovery Manager complete.

I tried “show all for db_unique_name testdb;” from the standby:

RMAN> show all for db_unique_name testdb;

ORA-20079: full resync from primary database is not done

doing automatic resync from primary
resyncing from database with DB_UNIQUE_NAME testdb
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of show command at 04/10/2015 07:53:40
RMAN-03014: implicit resync of recovery catalog failed
RMAN-03009: failure of partial resync command on default channel at 04/10/2015 07:53:40
ORA-17629: Cannot connect to the remote database server
ORA-17628: Oracle error 17629 returned by remote Oracle server

I connected to the primary database and ran the same command:

[oracle@oracle1 anand]$ rman target /

Recovery Manager: Release 11.2.0.3.0 – Production on Fri Apr 10 07:55:43 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: testdb (DBID=3948365078)

RMAN> connect catalog rcat/xxxxxxxx@rcat

connected to recovery catalog database

RMAN> show all for db_unique_name testdb;

starting full resync of recovery catalog

full resync complete
RMAN configuration parameters for database with db_unique_name testdb are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 10 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
CONFIGURE DEVICE TYPE DISK PARALLELISM 8 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/mnt/oracle-backup/testdb/%U';
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES192';
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD FALSE;
CONFIGURE DB_UNIQUE_NAME 'testdbSBY' CONNECT IDENTIFIER 'testdbSBY';
CONFIGURE DB_UNIQUE_NAME 'testdb' CONNECT IDENTIFIER 'testdb';
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/app/product/11.2_1/dbs/snappcf_testdb.file';

RMAN>
RMAN> exit

The full resync completed on the primary database, and then I performed “resync catalog” on the standby, which completed successfully.

RMAN> resync catalog
2> ;

starting partial resync of recovery catalog
partial resync complete

RMAN> exit

Primary on FileSystem and Standby on ASM

For one of the clients, the standby server went down. We had another standby server which had been kept down for more than a month. The decision was taken to start that server and apply an incremental SCN-based backup to the standby database.

The standby was on ASM and the primary on a filesystem. The incremental backup was started from the SCN reported by the query below:

select min(fhscn) from x$kcvfh;
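A sketch of the corresponding incremental backup taken on the primary (the SCN, format string and tag are placeholders):

RMAN> backup incremental from scn 1234567890 database format '/backup/restore/incr_standby_%U' tag 'incr_standby';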

Once the backup completed, it was transferred to the standby, the standby was mounted (using the old controlfile), the backups were cataloged, and recovery was performed using ‘recover database noredo’.
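In RMAN terms that step looked roughly like this (the backup piece location is an example):

RMAN> catalog start with '+BACKUP/ABCD_ORACLE3/backupset/restore/';
RMAN> recover database noredo;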

The recovery was in progress when it was handed over to me. After the recovery completed, I restored the latest controlfile from the primary and mounted the standby with it. At this point the controlfile contained the filesystem paths as they were on the primary side. The next step was to catalog everything we had on the standby side:

[oracle@oracle3:~ (db)]$ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Jan 8 04:57:01 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ABCD (DBID=1463580380, not open)

RMAN> catalog  start with '+DATA/adbc_oracle3/DATAFILE/';

searching for all files that match the pattern +DATA/adbc_oracle3/DATAFILE/

List of Files Unknown to the Database
=====================================
File Name: +data/adbc_oracle3/DATAFILE/TEST.256.844395985
File Name: +data/adbc_oracle3/DATAFILE/TEST.257.844397067
.................
.................
File Name: +data/adbc_oracle3/DATAFILE/TEST.416.865953683

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +data/adbc_oracle3/DATAFILE/TEST.256.844395985
File Name: +data/adbc_oracle3/DATAFILE/TEST.257.844397067
................
...............
File Name: +data/adbc_oracle3/DATAFILE/TEST.416.865953683

RMAN> report schema;

RMAN-06139: WARNING: control file is not current for REPORT SCHEMA
Report of database schema for database with db_unique_name ABCD_ORACLE3

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    0        SYSTEM               ***     /oradata/abcd_oracle2/datafile/o1_mf_system_825xkscr_.dbf
2    0        SYSAUX               ***     /oradata/abcd_oracle2/datafile/o1_mf_sysaux_825y451r_.dbf
3    0        TEST                 ***     /oradata/abcd_oracle2/datafile/o1_mf_test_825s84mw_.dbf
4    0        TEST                 ***     /oradata2/abcd_oracle2/datafile/o1_mf_test_8dr1v332_.dbf
................
................
................
147  0        TEST                   ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_b8k8hcrh_.dbf
148  0        TEST                   ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_b8k8hdhf_.dbf
149  0        TEST                   ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_b8k8hf6o_.dbf
150  0        TEST                   ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_b8k8hg1j_.dbf
151  0        TEST                   ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_bb318bhs_.dbf
152  0        TEST                   ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_bb318cff_.dbf
153  0        TEST_INDEX             ***     /oradata4/abcd_oracle2/datafile/o1_mf_test_index_bb318pmy_.dbf
154  0        TEST_NOLOGGING         ***     /oradata3/abcd_oracle2/datafile/o1_mf_test_nolog_bbm2s7vk_.dbf
155  0        TESTINDEX             ***     /oradata3/abcd_oracle2/datafile/o1_mf_test_index_bbm2z7nv_.dbf
156  0        PERFSTAT             ***     /oradata3/abcd_oracle2/datafile/o1_mf_perfstat_bbm312pf_.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
3    15104    TEMP                 32767       /oradata4/abcd_oracle2/datafile/o1_mf_temp_b633ppbr_.tmp
4    25600    TEMP                 32767       /oradata4/abcd_oracle2/datafile/o1_mf_temp_b633ppcf_.tmp

After the catalog completed, it was time to switch the database to copy, and it failed with the error below:

RMAN> switch database to copy;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch to copy command at 01/08/2015 05:00:25
RMAN-06571: datafile 151 does not have recoverable copy

After some analysis, I found it was due to missing datafiles on the standby. The datafiles had been created on the primary after the standby went down, and the recovery using the incremental backup had been done with the older controlfile, which had no information about the new datafiles.

Datafiles 151 – 156 were missing, so I cataloged the backup pieces again (since the controlfile had been restored) and started restoring the datafiles:

RMAN> restore datafile 151;

Starting restore at 08-JAN-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00151 to /oradata4/ABCD_ORACLE2/datafile/o1_mf_ct_bb318bhs_.dbf
channel ORA_DISK_1: reading from backup piece +BACKUP/ABCD_ORACLE3/backupset/restore/incr_standby_5sps58rd_1_1

channel ORA_DISK_1: piece handle=+BACKUP/ABCD_ORACLE3/backupset/restore/incr_standby_5sps58rd_1_1 tag=TAG20150107T192825
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:19:05
Finished restore at 08-JAN-15

RMAN>

After the restore completed, report schema showed:

RMAN> report schema;

RMAN-06139: WARNING: control file is not current for REPORT SCHEMA
Report of database schema for database with db_unique_name adbc_oracle3

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    0        SYSTEM               ***     /oradata/ABCD_ORACLE2/datafile/o1_mf_system_825xkscr_.dbf
2    0        SYSAUX               ***     /oradata/ABCD_ORACLE2/datafile/o1_mf_sysaux_825y451r_.dbf
.................
151  30720    TEST                  ***     +DATA/adbc_oracle3/datafile/test.417.868424659 
152  30720    TEST                  ***     +DATA/adbc_oracle3/datafile/test.418.868424659 
................

I tried to perform “switch database to copy”, which again failed with the same error “RMAN-06571: datafile 151 does not have recoverable copy”. At this point I thought to use “switch datafile to copy”, for which I generated the commands dynamically from the primary and ran them on the standby.

Generated the switch commands from the primary:

SQL> select 'switch datafile '||file#||' to copy;' from v$datafile;
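To turn that output into a script file, a SQL*Plus spool along these lines works (a sketch; the exact method used isn't shown):

set heading off feedback off pagesize 0 linesize 200
spool swtch_copy.rman
select 'switch datafile '||file#||' to copy;' from v$datafile;
spool off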

[oracle@oracle3:~ (db)]$ vi swtch_copy.rman
[oracle@oracle3:~ (db)]$ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Thu Jan 8 06:09:46 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ABCD (DBID=1463580380, not open)

RMAN> @swtch_copy.rman

RMAN> switch datafile 2 to copy;
using target database control file instead of recovery catalog
datafile 2 switched to datafile copy "+DATA/adbc_oracle3/datafile/sysaux.277.844418721"

............................
...........................
...........................
RMAN> switch datafile 149 to copy;
datafile 149 switched to datafile copy "+DATA/adbc_oracle3/datafile/ct.415.865953681"

RMAN> switch datafile 156 to copy;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch to copy command at 01/08/2015 06:28:54
RMAN-06571: datafile 156 does not have recoverable copy

RMAN>
RMAN> **end-of-file**

I performed ‘recover database noredo’ again, and this time it was pretty quick; then I tried to recover the standby database:

[oracle@oracle3:~/working/anand (abcd)]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Thu Jan 8 06:31:38 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> recover standby database;
ORA-00279: change 38328244436 generated at 01/07/2015 19:28:32 needed for thread 1
ORA-00289: suggestion : +FRA
ORA-00280: change 38328244436 for thread 1 is in sequence #98501


Specify log: {=suggested | filename | AUTO | CANCEL}
^C
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '+DATA/adbc_oracle3/datafile/system.357.844454911'


I made a few changes to the DGMGRL parameters and enabled the standby configuration. After a few hours, the standby was in sync with the primary. I then stopped the MRP, dropped the standby redo log files (as they still pointed to the filesystem) and recreated them on ASM, opened the standby in read-only mode, and started the MRP.
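The standby redo log recreation itself boils down to statements like the following (group number, thread, disk group and size are hypothetical; the size should match the online redo logs):

alter database drop standby logfile group 11;
alter database add standby logfile thread 1 group 11 ('+REDO01') size 512m;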

ORA-00600: [ktbdchk1: bad dscn]

Last week I performed a switchover of a database on version 11.2.0.3. The switchover was performed using the DGMGRL “switchover to standby” command. After some time we started receiving “ORA-00600: [ktbdchk1: bad dscn]” on the primary database.

Tue Dec 16 10:33:26 2014
Errors in file /ora_software/diag/rdbms/db02_dbv/dbv/trace/db02_ora_16271.trc  (incident=434103):
ORA-00600: internal error code, arguments: [ktbdchk1: bad dscn], [], [], [], [], [], [], [], [], [], [], []
Incident details in: /ora_software/diag/rdbms/db02_dbv/dbv/incident/incdir_434103/db02_ora_16271_i434103.trc

The trace file showed

*** ACTION NAME:() 2014-12-16 10:33:26.857

Dump continued from file: /ora_software/diag/rdbms/db02_dbv/dbv/trace/db02_ora_16271.trc
ORA-00600: internal error code, arguments: [ktbdchk1: bad dscn], [], [], [], [], [], [], [], [], [], [], []

========= Dump for incident 434103 (ORA 600 [ktbdchk1: bad dscn]) ========
----- Beginning of Customized Incident Dump(s) -----
[ktbdchk] -- ktbgcl4 -- bad dscn
dependent scn: 0x0008.f197f24e recent scn: 0x0008.c7313a4c current scn: 0x0008.c7313a4c
----- End of Customized Incident Dump(s) -----
*** 2014-12-16 10:33:26.961
dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
----- Current SQL Statement for this session (sql_id=dmrpvzvbbsuy5) -----
INSERT INTO MSG_OPEN( SITE_ID, CLIENT_ID, CAMP_ID, MESSAGE_ID, ID, USER_ID, ADDRESS, DATE_OPENED) VALUES( :B7 , :B6 , :B5 , :B4 , LOG_SEQ.NEXTVAL, :B3 , :B2 , :B1 )
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0x32276ee18       611  package body TP.EM_PKG
0x31cbac388         3  anonymous block
----- Call Stack Trace -----
calling              call     entry                argument values in hex
location             type     point                (? means dubious value)
-------------------- -------- -------------------- ----------------------------
skdstdst()+36        call     kgdsdst()            000000000 ? 000000000 ?
                                                   7FFFFDA13028 ? 000000001 ?
                                                   000000001 ? 000000002 ?
ksedst1()+98         call     skdstdst()           000000000 ? 000000000 ?
                                                   7FFFFDA13028 ? 000000001 ?
                                                   000000000 ? 000000002 ?
ksedst()+34          call     ksedst1()            000000000 ? 000000001 ?
                                                   7FFFFDA13028 ? 000000001 ?
                                                   000000000 ? 000000002 ?
dbkedDefDump()+2741  call     ksedst()             000000000 ? 000000001 ?
                                                   7FFFFDA13028 ? 000000001 ?
                                                   000000000 ? 000000002 ?
........................

Searching for the issue on Metalink pointed to the document below:

ALERT Description and fix for Bug 8895202: ORA-1555 / ORA-600 [ktbdchk1: bad dscn] ORA-600 [2663] in Physical Standby after switchover (Doc ID 1608167.1)

As per Metalink:

In a Data Guard environment with Physical Standby (including Active Data Guard), invalid SCNs can be introduced in index blocks after a switchover.

Symptoms: ORA-1555 / ORA-600 [ktbdchk1: bad dscn] / ktbGetDependentScn / dependent SCN violations, as the block ITL has a higher commit SCN than the block SCN. DBVERIFY reports the following error when the fix of Bug 7517208 is present (reference Note 7517208.8 for interim patches):

itl[] has higher commit scn(aaa.bbb) than block scn (xx.yy)
Page failed with check code 6056

There is NO DATA CORRUPTION in the block.

To Resolve

The fix of Bug 8895202 is the workaround.

Although the fix of Bug 8895202 is included in patchset 11.2.0.2 and later, the fix needs to be enabled by setting parameter _ktb_debug_flags = 8.

SQL> alter system set "_ktb_debug_flags"=8 scope=both sid='*';

System altered.

SQL> exit

If you are using an Oracle version lower than 11.2.0.2, then rebuilding the affected index is the option, as we did for one of the clients on version 11.1.0.7.
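The rebuild itself is the usual statement (object name is hypothetical):

alter index APP_OWNER.APP_INDEX_IX rebuild online;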

One thing to note is:

In rare cases blocks healed by this fix may cause queries to fail with an ORA-600 [ktbgcl1_KTUCLOMINSCN_1] as described in Note 13513004.8 / Bug 13513004.

For more details, refer to Metalink Doc 1608167.1.

Upgrading to 11.2.0.4 – Dictionary Views Performing Poorly

Just a quick blog post on things you might see after upgrading to 11.2.0.4. We recently upgraded a database from 11.2.0.3 to 11.2.0.4, and queries on some data dictionary views ran very slowly.

1. Performance of queries on dba_free_space degraded
2. Performance of queries involving dba_segments is slow

DEV01> select ceil(sum(b.bytes)/1024/1024) b from sys.dba_free_space b;

Elapsed: 01:31:45.78

Searching MOS pointed to these Doc IDs:

Insert Statement Based Upon a Query Against DBA_SEGMENTS is Slow After Applying 11.2.0.4 Patchset (Doc ID 1915426.1)

Query Against DBA_FREE_SPACE is Slow After Applying 11.2.0.4 (Doc ID 1904677.1)
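These kinds of post-upgrade dictionary slowdowns are commonly addressed by refreshing dictionary and fixed object statistics (a sketch below; the notes have the exact recommendations, which for DBA_FREE_SPACE may also involve the recyclebin):

exec dbms_stats.gather_fixed_objects_stats;
exec dbms_stats.gather_dictionary_stats;

After following the relevant note, the same query came back in about a second: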

DEV01> select ceil(sum(b.bytes)/1024/1024) b from sys.dba_free_space b;

Elapsed: 00:00:01.38

ORA-00600 – [kfnsInstanceReg00]

An interesting case of ORA-600 :)

I was paged for standby lag and started looking into the issue. The DB version was 11.1.0.7.0. The standby was lagging behind by 36 archives:

SQL> @mrp

   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK# DELAY_MINS
---------- --------- ------------ ---------- ---------- ---------- ----------
         1 MRP0      WAIT_FOR_LOG          1     568330          0          0 


   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK# DELAY_MINS
---------- --------- ------------ ---------- ---------- ---------- ----------
         1 RFS       IDLE                  0          0          0          0
         1 RFS       IDLE                  0          0          0          0
         1 RFS       WRITING               1     568330          1          0 
         1 RFS       IDLE                  0          0          0          0
         1 RFS       IDLE                  1     568366     309529          0
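(@mrp is a custom script; assuming it reads gv$managed_standby, a minimal equivalent query would be:)

select inst_id, process, status, thread#, sequence#, block#, delay_mins
from gv$managed_standby
where process in ('MRP0', 'RFS')
order by process, inst_id;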

After some time I checked back again, and it was still on the same sequence#. Status of MRP and RFS:

SQL> @mrp

   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK# DELAY_MINS
---------- --------- ------------ ---------- ---------- ---------- ----------
         1 MRP0      WAIT_FOR_LOG          1     568330          0          0


   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK# DELAY_MINS
---------- --------- ------------ ---------- ---------- ---------- ----------
         1 RFS       IDLE                  0          0          0          0
         1 RFS       IDLE                  0          0          0          0
         1 RFS       WRITING               1     568330          1          0 
         1 RFS       IDLE                  0          0          0          0
         1 RFS       IDLE                  1     568366     795006          0

No error was reported in the alert log or in v$archive_dest_status. Archive log generation on the primary was usual, like any other day; nothing had changed. Seq# 568330 was present on the primary with a size of 497 MB, whereas the same seq# on the standby was 513 MB. I am not sure why!

So, I thought to copy the archive from the primary, stop the MRP, perform manual recovery using ‘recover standby database’ (as the other 35 archives were present on disk), and then start real-time apply.

Once the copy completed, I stopped the MRP and started manual recovery:

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> select name,open_mode,database_role from v$database;

NAME      OPEN_MODE  DATABASE_ROLE
--------- ---------- ----------------
TEST      MOUNTED    PHYSICAL STANDBY

SQL> recover standby database;
ORA-00279: change 69956910076 generated at 10/26/2014 21:46:36 needed for
thread 1
ORA-00289: suggestion :
/oraclebackup/fra/TEST/archivelog/2014_10_26/o1_mf_1_568330_b4vdl546_.arc
ORA-00280: change 69956910076 for thread 1 is in sequence #568330


Specify log: {=suggested | filename | AUTO | CANCEL}
AUTO
ORA-16145: archival for thread# 1 sequence# 568330 in progress 
SQL> @mrp

no rows selected


   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK# DELAY_MINS
---------- --------- ------------ ---------- ---------- ---------- ----------
         1 RFS       IDLE                  0          0          0          0
         1 RFS       IDLE                  0          0          0          0
         1 RFS       WRITING               1     568330          1          0 
         1 RFS       IDLE                  0          0          0          0
         1 RFS       IDLE                  1     568372     563688          0

At this point I thought to bounce the standby and then perform recovery. I issued shutdown immediate and it hung. Almost 10 minutes later, the alert log showed:

Active call for process 45660 user 'oracle' program 'oracle@test.com'
SHUTDOWN: waiting for active calls to complete.

I checked PID 45660 and, as it showed LOCAL=NO, issued kill -9:

[oracle@test:~/app/oracle (test)]$ ps -ef | grep 45660
oracle   41463 16050  0 00:43 pts/0    00:00:00 grep 45660
oracle   45660     1  1 Oct23 ?        01:09:22 oracletest (LOCAL=NO)
[oracle@test:~/app/oracle (test)]$ kill -9 45660 
[oracle@test:~/app/oracle (test)] ps -ef | grep 45660
oracle   42181 16050  0 00:44 pts/0    00:00:00 grep 45660
oracle   45660     1  1 Oct23 ?        01:09:22 oracletest (LOCAL=NO)

kill -9 didn’t kill the process. As it was a standby, I thought to go ahead and abort the instance, and this is what happened when I tried startup mount:

SQL> startup mount
ORACLE instance started.

Total System Global Area 6.1466E+10 bytes
Fixed Size                  2174600 bytes
Variable Size            2.0535E+10 bytes
Database Buffers         4.0802E+10 bytes
Redo Buffers              126640128 bytes

ORA-03113: end-of-file on communication channel
Process ID: 47331
Session ID: 995 Serial number: 3

Alert log showed

Mon Oct 27 00:48:09 2014
ALTER DATABASE MOUNT
Mon Oct 27 00:53:10 2014
System State dumped to trace file /home/oracle/app/oracle/diag/rdbms/test_TEST_db/test/trace/test_ckpt_47252.trc
Mon Oct 27 00:53:11 2014
Errors in file /home/oracle/app/oracle/diag/rdbms/test_TEST_db/test/trace/test_asmb_47260.trc (incident=491160):
ORA-00600: internal error code, arguments: [600], [ORA_NPI_ERROR], [ORA-00600: internal error code, arguments: [kfnsInstanceReg00], [test:test_TEST_DB], [30000], [], [], [], [], [], [], [], [], []
], [], [], [], [], [], [], [], [], []
Incident details in: /home/oracle/app/oracle/diag/rdbms/test_TEST_db/test/incident/incdir_491160/test_asmb_47260_i491160.trc
Errors in file /home/oracle/app/oracle/diag/rdbms/test_TEST_db/test/trace/test_asmb_47260.trc:
ORA-15064: communication failure with ASM instance

ORA-00600: internal error code, arguments: [600], [ORA_NPI_ERROR], [ORA-00600: internal error code, arguments: [kfnsInstanceReg00], [test:test_TEST_DB], [30000], [], [], [], [], [], [], [], [], []
], [], [], [], [], [], [], [], [], []
Mon Oct 27 00:53:12 2014
Mon Oct 27 00:53:12 2014
Sweep Incident[491160]: completed
Mon Oct 27 00:53:12 2014
Trace dumping is performing id=[cdmp_20141027005312]
Errors in file /home/oracle/app/oracle/diag/rdbms/test_TEST_db/prod/trace/test_ckpt_47252.trc:
ORA-15083: failed to communicate with ASMB background process
CKPT (ospid: 47252): terminating the instance due to error 15083
Mon Oct 27 00:53:20 2014
Instance terminated by CKPT, pid = 47252

The CKPT trace showed


15083: Timeout waiting for ASMB to be ready (state=1)
ASMB is ALIVE
----- Abridged Call Stack Trace -----

SYSTEM STATE (level=10, with short stacks)
————
System global information:
processes: base 0xe90ea0088, size 900, cleanup 0xea0ec6598
allocation: free sessions 0xeb8f53d68, free calls (nil)
control alloc errors: 0 (process), 0 (session), 0 (call)
PMON latch cleanup depth: 0
seconds since PMON's last scan for dead processes: 1
system statistics:
22 logons cumulative
19 logons current
5 opened cursors cumulative
1 opened cursors current
0 user commits
0 user rollbacks
51 user calls
19 recursive calls
0 recursive cpu usage
0 session logical reads
0 session stored procedure space
0 CPU used when call started
0 CPU used by this session
1 DB time

Looking at the ASM instance alert log

System State dumped to trace file /home/oracle/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_33024.trc
Errors in file /home/oracle/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_33024.trc:

more /home/oracle/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_33024.trc


*** ACTION NAME:() 2014-10-27 02:18:39.197

kfnmsg_wait: KSR failure reaping message (0xffffff83)

*** 2014-10-27 02:18:39.197
Communication failure sending message to UFGs (0xffffff83)
kfnmsg_wait: opcode=35 msg=0x6b3f0928
2014-10-27 02:18:39.197762 :kfnmsg_Dump(): kfnmsg_SendtoUFGs: (kfnmsg) op=35 (KFNMS_GROUP_RLS) (gnum) [1.0]
CommunicationFailure 0xffffff83 after 300s

The server was hosting 2 standby DB instances and 1 ASM instance. We thought to bring down the other standby DB instance, then the ASM instance, and finally restart everything. The shutdown immediate for the 2nd standby instance hung for a long time, so we aborted it. After ASM was brought down, we could still see the PIDs below:

$ ps -ef | grep oracle
oracle 12032 16050 0 02:44 pts/0 00:00:00 grep arc
oracle 23135 1 0 Oct18 ? 00:02:15 ora_arc5_test
oracle 25390 1 0 Oct18 ? 00:01:19 ora_arc1_test2
oracle 25392 1 0 Oct18 ? 00:00:50 ora_arc2_test2

oracle 45660 1 1 Oct23 ? 01:09:22 oracleprod (LOCAL=NO)

Even after all the instances on the server were down, we could still see a few archiver PIDs and a PID with LOCAL=NO. kill -9 didn’t kill these either.

I started the ASM instance, and then when I started the standby instance TEST, ASM crashed:

NOTE: check client alert log.
NOTE: Trace records dumped in trace file /home/oracle/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_57380.trc
Errors in file /home/oracle/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_57380.trc (incident=280159):
ORA-00600: internal error code, arguments: [kfupsRelease01], [], [], [], [], [], [], [], [], [], [], []
Incident details in: /home/oracle/app/oracle/diag/asm/+asm/+ASM/incident/incdir_280159/+ASM_ora_57380_i280159.trc

At this point we escalated to the client for a server reboot. After the server reboot was done, all the instances came up properly. I started the MRP and the standby came in sync.
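For completeness, restarting the MRP with real-time apply is the usual statement (options depend on the configuration; with the broker, DGMGRL would normally handle this):

alter database recover managed standby database using current logfile disconnect from session;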

You can also reference the Metalink documents below:

Database startup Reports ORA-00600:[kfnsInstanceReg00] (Doc ID 1552422.1)
Bug 6934636 – Hang possible for foreground processes in ASM instance (Doc ID 6934636.8)
ORA-600 [kfnsInstanceReg00] and long waits on “ASM FILE METADATA OPERATION” (Doc ID 1480320.1)