12c Upgrade – It can be slow!!!!

This post is for those of you who are planning to upgrade to 12c. I worked on upgrading a single-instance database from 11.2.0.3.13 to 12.1.0.2.3, so I thought I would share my experience.

From 12c onwards, running catupgrd.sql directly has been deprecated and Oracle has introduced the catctl.pl script for the upgrade. With catctl.pl, Oracle tries to perform as much work as possible in parallel, thereby minimizing the upgrade downtime.
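For reference, catctl.pl is run from $ORACLE_HOME/rdbms/admin of the new 12c home, against the database started in upgrade mode. A typical invocation looks like the sketch below; the degree of parallelism and the log directory are examples, so adjust them for your system.

$ cd $ORACLE_HOME/rdbms/admin                 # the new 12c Oracle Home
$ $ORACLE_HOME/perl/bin/perl catctl.pl -n 8 -l /home/oracle/upgrade_logs catupgrd.sql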

Based on my experience, below are the points to consider for reducing the overall downtime.

1. Size of WRI$_OPTSTAT_HISTGRM_HISTORY and WRI$_OPTSTAT_HISTHEAD_HISTORY
2. Size of WRH$_ACTIVE_SESSION_HISTORY
3. HISTGRM$ add column

Let's take each point in more detail:

1. Size of WRI$_OPTSTAT_HISTGRM_HISTORY and WRI$_OPTSTAT_HISTHEAD_HISTORY

The WRI$_OPTSTAT_HISTGRM_HISTORY table was one of the main culprits during my first upgrade test. The table was ~35GB, with 2 indexes of ~70GB each. As part of the upgrade (catctl.pl Phase 1) the table is altered and the indexes are dropped and recreated without any PARALLEL or NOLOGGING clause, which can take a lot of time.

To reduce the downtime, make sure the WRI$_OPTSTAT_HISTGRM_HISTORY table is purged. In my case, the table was not being purged automatically and it held 6 months of data even though the retention was set to 28 days.
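A quick way to check whether you are exposed is to look at the size of these segments and at how far back the stats history actually goes. A minimal sketch, run as a DBA user:

-- How big are the optimizer stats history segments in SYSAUX?
select segment_name, segment_type, round(bytes/1024/1024/1024,2) gb
from   dba_segments
where  segment_name like 'WRI$_OPTSTAT_HIST%'
order  by bytes desc;

-- Configured retention (days) vs. the oldest history actually available
select dbms_stats.get_stats_history_retention    from dual;
select dbms_stats.get_stats_history_availability from dual;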

You can follow the steps mentioned in https://jhdba.wordpress.com/2011/08/23/939/ to purge the table. I tried the same: stopping the automatic MMON purging, dropping the indexes and running dbms_stats.purge_stats. Without the indexes the purge was quick, and the table size dropped from 35GB to 6.5GB. I then ran the CREATE INDEX statements in parallel, but the session waited on buffer busy waits for quite a long time, with final_blocking_session pointing to the DBWR sid. I killed my session after an hour and, since it was a test database, started the database in upgrade mode and created both indexes, which completed within 7 minutes.
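A condensed sketch of that approach on a test system is below. The index names are the usual ones in 11.2, but capture the exact DDL with dbms_metadata before dropping anything, and only do this when you can afford to run without the indexes for a while.

-- Save the index DDL so they can be recreated exactly as before
select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST','SYS') from dual;
select dbms_metadata.get_ddl('INDEX','I_WRI$_OPTSTAT_H_ST','SYS') from dual;

-- Drop the indexes, purge outside the retention window, then recreate
drop index sys.i_wri$_optstat_h_obj#_icol#_st;
drop index sys.i_wri$_optstat_h_st;

exec dbms_stats.purge_stats(sysdate - 28)

-- Recreate the indexes from the saved DDL (optionally with PARALLEL for the
-- build, altering them back to NOPARALLEL afterwards)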

On production, as no downtime was available, I ran dbms_stats.purge_stats(sysdate - 28) without dropping the indexes and it took ~42 hours. Behind the scenes, Oracle runs "alter table WRI$_OPTSTAT_HISTGRM_HISTORY drop partition partition_name update global indexes" to purge the data. During the catctl.pl upgrade run, the index creation then took ~14 minutes.

2. Size of WRH$_ACTIVE_SESSION_HISTORY

WRH$_ACTIVE_SESSION_HISTORY is the table containing ASH data and has one index on it. In most databases, if automatic purging is not working, it is one of the biggest segments in the SYSAUX tablespace. For purging options you can check:

https://aprakash.wordpress.com/2014/09/21/sysaux-growing-rapidly-what-can-be-done/
https://aprakash.wordpress.com/2014/09/24/purging-sysaux/

During the upgrade, this index is dropped and recreated using the PARALLEL clause.
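If ASH data is the problem, the simplest purge is usually to drop the oldest AWR snapshots, which also removes the corresponding WRH$_ACTIVE_SESSION_HISTORY rows (the two posts above cover the cases where orphaned rows remain). A minimal sketch; the snapshot ids are placeholders:

-- Find the snapshot range currently kept
select min(snap_id), max(snap_id) from dba_hist_snapshot;

-- Drop the oldest snapshots (pick a range you no longer need)
exec dbms_workload_repository.drop_snapshot_range(low_snap_id => 100, high_snap_id => 5000)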

3. HISTGRM$ add column

During the upgrade, the SQL below can take a lot of time. In my first test, it took 40 minutes.

alter table histgrm$ add ep_repeat_count number default 0 not null; 
03:10:36 SQL> Rem Add columns to accommodate endpoint repeat count information
03:10:36 SQL> alter table histgrm$ add ep_repeat_count number default 0 not null;

Table altered.

Elapsed: 00:38:47.83
03:49:24 SQL> alter table finalhist$ add eprepcnt number default 0 not null; 

As per Oracle Support

In 12.1, optimizer group added a column to histgrm$ which is a clustered table. For upgrade from 11.2 to 12.1, this column ep_repeat_count is added as a default value column with a not null constraint. Since histgrm$ table on a database could have millions of rows, this addition of a default value column could take a long time since we need to update all existing rows with a default value. This in turn slows down the upgrade significantly

This upgrade slowness while executing "alter table histgrm$ add ep_repeat_count number default 0 not null;" is related to bug 19333670, and you need to apply patch 19333670 to the 12.1.0.2 Oracle Home before upgrading.
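Before starting the upgrade you can quickly confirm the patch is present in the target home; the path below is just an example.

$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2     # example 12c home
$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 19333670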

In case you are using NFS, make sure to enable dNFS in the 12c Oracle Home. dNFS is an optimized NFS (Network File System) client that provides faster and more scalable access to NFS storage located on a NAS device.
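Enabling dNFS is a relink of the 12c home, done while nothing is running out of it. A minimal sketch:

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk dnfs_on

After the instance is restarted, querying v$dnfs_servers (select svrname, dirname from v$dnfs_servers;) should list the NFS servers being accessed through dNFS.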

Also, make sure to follow https://blogs.oracle.com/UPGRADE/ if you are planning to upgrade to 12c :)

Some articles related to upgrades –

https://blogs.oracle.com/UPGRADE/entry/sleeping_beauties_upgrade_to_11
https://blogs.oracle.com/UPGRADE/entry/invalid_table_data_before_upgrade
https://blogs.oracle.com/UPGRADE/entry/ouch_this_really_hurts_bug
Things to Consider Before Upgrading to 12.1.0.2 to Avoid Poor Performance or Wrong Results (Doc ID 2034610.1)

Hope it helps :)

ORA-00600 [kqlidchg0] | ORA-00001: unique constraint (SYS.I_PLSCOPE_SIG_IDENTIFIER$) violated

Recently, after upgrading a dev database to 12c, I ran utlrp to recompile the invalid objects and it failed with the below error

SQL> @utlrp

SQL> SELECT dbms_registry_sys.time_stamp('utlrp_bgn') as timestamp from dual;

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2015-06-30 03:50:27

SQL> DECLARE
  2     threads pls_integer := &&1;
  3  BEGIN
  4     utl_recomp.recomp_parallel(threads);
  5  END;
  6  /
DECLARE
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [kqlidchg0], [], [], [], [], [], [],
[], [], [], [], []
ORA-00604: error occurred at recursive SQL level 2
ORA-00001: unique constraint (SYS.I_PLSCOPE_SIG_IDENTIFIER$) violated
ORA-06512: at "SYS.DBMS_UTILITY", line 1294
ORA-06512: at line 1

SQL> SELECT dbms_registry_sys.time_stamp('utlrp_end') as timestamp from dual;

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2015-06-30 03:52:10

I enabled a 10046 trace and identified the SQL for which the error occurred.
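The post does not show the exact trace setup; a typical way to do it for the session about to run the recompilation is something like this (level 12 captures both binds and waits):

SQL> alter session set tracefile_identifier = 'utlrp_ora600';
SQL> alter session set events '10046 trace name context forever, level 12';
-- reproduce the failure, then
SQL> alter session set events '10046 trace name context off';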


insert into plscope_identifier$ (signature,symrep,obj#,type#) values (:1,:2,:3,:4)
END OF STMT
PARSE #140048740346032:c=0,e=270,p=0,cr=0,cu=0,mis=1,r=0,dep=2,og=4,plh=0,tim=20130922661094
BINDS #140048740346032:
 Bind#0
  oacdty=01 mxl=32(32) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=01 csi=178 siz=32 off=0
  kxsbbbfp=7f5fa3484968  bln=32  avl=32  flg=09
  value="097661E4DCCCB9DD39AA0F5F154C4DF3"
 Bind#1
  oacdty=01 mxl=32(31) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=01 csi=178 siz=32 off=0
  kxsbbbfp=7f5fa3484998  bln=32  avl=09  flg=09
  value="P_SALE_ID"
 Bind#2
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=1000001 frm=00 csi=00 siz=48 off=0
  kxsbbbfp=7f5fa3443fd0  bln=22  avl=04  flg=05
  value=166015
 Bind#3
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=1000001 frm=00 csi=00 siz=0 off=24
  kxsbbbfp=7f5fa3443fe8  bln=22  avl=02  flg=01
  value=7

Checked PLSCOPE_IDENTIFIER$ table details

SQL> @table_info.sql
Enter value for table_name: PLSCOPE_IDENTIFIER$
old   9: WHERE  table_name  like UPPER('%&table_name%')
new   9: WHERE  table_name  like UPPER('%PLSCOPE_IDENTIFIER$%')

TABLE_NAME                     OWNER      TABLESPACE_NAME   NUM_ROWS LAST_ANALYZED       AVG_ROW_LEN     BLOCKS EMPTY_BLOCKS
------------------------------ ---------- --------------- ---------- ------------------- ----------- ---------- ------------
PLSCOPE_IDENTIFIER$            SYS        SYSAUX                5074 06/30/2015 22:02:37          52         65            0

SQL> desc plscope_identifier$
 Name                Null?    Type
 ------------------ -------- -------------------
 SIGNATURE                      VARCHAR2(32)
 SYMREP                         VARCHAR2(128)
 OBJ#                           NUMBER
 TYPE#                          NUMBER

From the trace I have all the bind values, so let's check the details

SQL> select owner,object_name,object_type,object_id,data_object_id,status from dba_objects where object_id=166015 or data_object_id=166015;

OWNER      OBJECT_NAME               OBJECT_TYPE         OBJECT_ID DATA_OBJECT_ID STATUS
---------- ------------------------- ------------------ ---------- -------------- -------
ADAM       GET_SALE_STATUS           FUNCTION               166015              0 INVALID

SQL> select * from plscope_identifier$ where SIGNATURE='097661E4DCCCB9DD39AA0F5F154C4DF3';

SIGNATURE                                SYMREP                     OBJ#      TYPE#
---------------------------------------- -------------------- ---------- ----------
097661E4DCCCB9DD39AA0F5F154C4DF3         P_SALE_ID                166015          7
SQL> desc adam.GET_SALE_STATUS
ERROR:
ORA-00600: internal error code, arguments: [kqlidchg0], [], [], [], [], [], [], [], [], [], [], []
ORA-00604: error occurred at recursive SQL level 1
ORA-00001: unique constraint (SYS.I_PLSCOPE_SIG_IDENTIFIER$) violated

SQL> alter session set current_schema=ADAM;

Session altered.

SQL> alter function GET_SALE_STATUS compile;

Function altered.


SQL> select owner,object_name,object_type,object_id,data_object_id,status from dba_objects where object_id=166015 or data_object_id=166015;

OWNER      OBJECT_NAME               OBJECT_TYPE         OBJECT_ID DATA_OBJECT_ID STATUS
---------- ------------------------- ------------------ ---------- -------------- -------
ADAM       GET_SALE_STATUS           FUNCTION               166015                VALID

SQL> desc GET_SALE_STATUS
FUNCTION GET_SALE_STATUS RETURNS VARCHAR2(30)
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 P_SALE_ID                      NUMBER(38)              IN

Re-ran the utlrp, and it ran successfully.

SQL> @?/rdbms/admin/utlrp	

TIMESTAMP
---------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2015-07-02 05:57:41

DOC>#

PL/SQL procedure successfully completed.


TIMESTAMP
--------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2015-07-02 05:58:03

SQL to identify Invalid Dependent objects

col object_name for a45
col owner for a25
SELECT object_id, object_name, object_type, owner, status
FROM   dba_objects
WHERE  object_name IN (SELECT symrep
                       FROM   plscope_identifier$
                       WHERE  obj# IN (SELECT obj# FROM plscope_identifier$
                                       MINUS
                                       SELECT object_id FROM dba_objects));

Move Table Partition between Tables

I was recently asked if we can move a partition of one table into another. A developer had truncated a partition, and the team wanted to move that partition's data from one table to the other. Both tables were structurally the same and had the same data.

Here I will show how we can do that.

1. Create a monthly partitioned table –

SQL> create table sales_part
    (product char(25),channel_id number,cust_id number,
    amount_sold number, time_id date)
    partition by range (time_id)
    (
    partition sale_jan values less than (to_date('01-02-2013','dd-mm-yyyy')),
    partition sale_feb values less than (to_date('01-03-2013','dd-mm-yyyy')),
    partition sale_mar values less than (to_date('01-04-2013','dd-mm-yyyy')),
    partition sale_apr values less than (to_date('01-05-2013','dd-mm-yyyy')),
    partition sale_may values less than (to_date('01-06-2013','dd-mm-yyyy')),
    partition sale_jun values less than (to_date('01-07-2013','dd-mm-yyyy')),
    partition sale_jul values less than (to_date('01-08-2013','dd-mm-yyyy')),
    partition sale_aug values less than (to_date('01-09-2013','dd-mm-yyyy')),
    partition sale_sep values less than (to_date('01-10-2013','dd-mm-yyyy')),
    partition sale_oct values less than (to_date('01-11-2013','dd-mm-yyyy')),
    partition sale_nov values less than (to_date('01-12-2013','dd-mm-yyyy')),
    partition sale_dec values less than (to_date('01-01-2014','dd-mm-yyyy'))
   ) TABLESPACE DEMO
 19  ;

Table created.

2. Insert some data into the table

SQL> insert into sales_part
    select
    'Oracle Enterprise Edition' as product,
     mod(rownum,5) as channel_id,
     mod(rownum,1000) as cust_id ,
     5000 as amount_sold,
     to_date
     ('01.' || lpad(to_char(mod(rownum,6)+4),2,'0') || '.2013' ,'dd.mm.yyyy')
     as time_id
     from dual connect by level <= 990000;

990000 rows created.

SQL> commit;

Commit complete.

3. Create indexes

SQL> create index idx_sale_local on sales_part (cust_id,time_id) local tablespace DEMO;

Index created.

SQL>
SQL> create index idx_sale_global on sales_part (channel_id)  tablespace DEMO;

Index created.

4. Gather stats

SQL> exec dbms_stats.gather_table_stats('ANAND','SALES_PART', cascade => true);

PL/SQL procedure successfully completed.

Elapsed: 00:00:08.56
SQL> @partition_table
Enter value for table_name: SALES_PART
old  12: WHERE  a.table_name  = UPPER('&table_name')
new  12: WHERE  a.table_name  = UPPER('SALES_PART')
Enter value for owner: ANAND
old  13: AND    a.table_owner = Upper('&owner')
new  13: AND    a.table_owner = Upper('ANAND')

TABLE_NAME      PARTITION_NAME            HIGH_VALUE                LAST_ANAL TABLESPACE_NAME                INITIAL_EXTENT NEXT_EXTENT PCT_INCREASE   NUM_ROWS AVG_ROW_LEN
--------------- ------------------------- ------------------------- --------- ------------------------------ -------------- ----------- ------------ ---------- -----------
.....................
.....................
SALES_PART      SALE_APR                  TO_DATE(' 2013-05-01 00:0 14-AUG-15 DEMO                                  8388608     1048576                  165000          44
                                          0:00', 'SYYYY-MM-DD HH24:
                                          MI:SS', 'NLS_CALENDAR=GRE
                                          GORIAN')

SALES_PART      SALE_MAY                  TO_DATE(' 2013-06-01 00:0 14-AUG-15 DEMO                                  8388608     1048576                  165000          44
                                          0:00', 'SYYYY-MM-DD HH24:
                                          MI:SS', 'NLS_CALENDAR=GRE
                                          GORIAN')

SALES_PART      SALE_JUN                  TO_DATE(' 2013-07-01 00:0 14-AUG-15 DEMO                                  8388608     1048576                  165000          44
                                          0:00', 'SYYYY-MM-DD HH24:
                                          MI:SS', 'NLS_CALENDAR=GRE
                                          GORIAN')

SALES_PART      SALE_JUL                  TO_DATE(' 2013-08-01 00:0 14-AUG-15 DEMO                                  8388608     1048576                  165000          44
                                          0:00', 'SYYYY-MM-DD HH24:
                                          MI:SS', 'NLS_CALENDAR=GRE
                                          GORIAN')

SALES_PART      SALE_AUG                  TO_DATE(' 2013-09-01 00:0 14-AUG-15 DEMO                                  8388608     1048576                  165000          44
                                          0:00', 'SYYYY-MM-DD HH24:
                                          MI:SS', 'NLS_CALENDAR=GRE
                                          GORIAN')

SALES_PART      SALE_SEP                  TO_DATE(' 2013-10-01 00:0 14-AUG-15 DEMO                                  8388608     1048576                  165000          44
                                          0:00', 'SYYYY-MM-DD HH24:
                                          MI:SS', 'NLS_CALENDAR=GRE
                                          GORIAN')
......................
.....................


12 rows selected.

5. Let's create another table, exactly the same

SQL> create table sales_part_new
    (product char(25),channel_id number,cust_id number,
    amount_sold number, time_id date)
    partition by range (time_id)
    (
    partition sale_jan values less than (to_date('01-02-2013','dd-mm-yyyy')),
    partition sale_feb values less than (to_date('01-03-2013','dd-mm-yyyy')),
    partition sale_mar values less than (to_date('01-04-2013','dd-mm-yyyy')),
    partition sale_apr values less than (to_date('01-05-2013','dd-mm-yyyy')),
    partition sale_may values less than (to_date('01-06-2013','dd-mm-yyyy')),
    partition sale_jun values less than (to_date('01-07-2013','dd-mm-yyyy')),
    partition sale_jul values less than (to_date('01-08-2013','dd-mm-yyyy')),
    partition sale_aug values less than (to_date('01-09-2013','dd-mm-yyyy')),
    partition sale_sep values less than (to_date('01-10-2013','dd-mm-yyyy')),
    partition sale_oct values less than (to_date('01-11-2013','dd-mm-yyyy')),
    partition sale_nov values less than (to_date('01-12-2013','dd-mm-yyyy')),
    partition sale_dec values less than (to_date('01-01-2014','dd-mm-yyyy'))
   ) TABLESPACE DEMO
 19  ;

Table created.

6. Inserted the data using the same insert statement, gathered the stats and created the indexes, as previously done in steps 2, 3 and 4.

7. Truncated one of the partitions, SALE_JUN, from SALES_PART_NEW

SQL> alter table SALES_PART_NEW truncate partition SALE_JUN;

Table truncated.

Now that the data is truncated from partition SALE_JUN of table SALES_PART_NEW, can we move the same partition of SALES_PART into SALES_PART_NEW?

We can use EXCHANGE PARTITION for this.

1. Create an interim table – same structure, without any partitions

SQL> create table sales_part_interim
     (product char(25),
     channel_id number,
     cust_id number,
     amount_sold number,
     time_id date)
     TABLESPACE DEMO
    ;

Table created.

SQL> select count(1) from sales_part_interim;

  COUNT(1)
----------
         0

2. Perform Exchange Partition between Interim Table and SALE_JUN partition of SALES_PART.

SQL> alter table sales_part exchange partition SALE_JUN with table sales_part_interim;

Table altered.

Elapsed: 00:00:00.12
SQL> select count(1) from sales_part_interim;

  COUNT(1)
----------
    165000

SQL> @partition_table
Enter value for table_name: SALES_PART
old   8: WHERE  a.table_name  = UPPER('&table_name')
new   8: WHERE  a.table_name  = UPPER('SALES_PART')
Enter value for owner: ANAND
old   9: AND    a.table_owner = Upper('&owner')
new   9: AND    a.table_owner = Upper('ANAND')

TABLE_NAME      PARTITION_NAME            HIGH_VALUE                                              LAST_ANAL TABLESPACE_NAME                  NUM_ROWS
--------------- ------------------------- ------------------------------------------------------- --------- ------------------------------ ----------
SALES_PART      SALE_MAY                  TO_DATE(' 2013-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS 14-AUG-15 DEMO                               165000
                                          ', 'NLS_CALENDAR=GREGORIAN')

SALES_PART      SALE_JUN                  TO_DATE(' 2013-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS           DEMO
                                          ', 'NLS_CALENDAR=GREGORIAN')

SALES_PART      SALE_JUL                  TO_DATE(' 2013-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS 14-AUG-15 DEMO                               165000
                                          ', 'NLS_CALENDAR=GREGORIAN')
.................

3. Rebuild the global indexes on SALES_PART, as they would be UNUSABLE.
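For this test that means the IDX_SALE_GLOBAL index created earlier; something along these lines:

SQL> select index_name, status from dba_indexes where table_name = 'SALES_PART';

SQL> alter index idx_sale_global rebuild;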

4. Now perform the second exchange, where the interim table is exchanged with partition SALE_JUN of the SALES_PART_NEW table

SQL> alter table sales_part_new exchange partition SALE_JUN with table  sales_part_interim;

Table altered.

Elapsed: 00:00:00.27
SQL> select count(1) from sales_part_interim;

  COUNT(1)
----------
         0

Elapsed: 00:00:00.01
SQL>
SQL> select count(*) from sales_part_new partition (sale_jun);

  COUNT(*)
----------
    165000

Elapsed: 00:00:00.03
SQL>

5. Check the index status and rebuild the required ones, as sketched below.
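A simple way to do that check on both tables (index names as created earlier); any partition reported UNUSABLE can be rebuilt individually:

SQL> select index_name, status
     from   dba_indexes
     where  table_name in ('SALES_PART','SALES_PART_NEW');

SQL> select index_name, partition_name, status
     from   dba_ind_partitions
     where  index_name in (select index_name from dba_indexes
                           where  table_name in ('SALES_PART','SALES_PART_NEW'));

SQL> -- for example:
SQL> -- alter index idx_sale_global rebuild;
SQL> -- alter index idx_sale_local rebuild partition sale_jun;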

Exchange Partition – Exchanging Table Partitions

Recently I came across a requirement to move a table to a new tablespace. The table had 12 range partitions, each with ~8K list subpartitions, and was storing historical data. After a few discussions, it was decided to use exchange partition instead of dbms_redefinition.

As I did not have much practical experience with exchange partition, I thought I would test it, and I am posting my test here on the blog.

CREATE TABLE composite_rng_list (
cust_id NUMBER(10),
cust_name VARCHAR2(25),
cust_state VARCHAR2(2),
time_id DATE)
PARTITION BY RANGE(time_id)
SUBPARTITION BY LIST (cust_state)
SUBPARTITION TEMPLATE(
SUBPARTITION west VALUES ('OR', 'WA') TABLESPACE demo,
SUBPARTITION east VALUES ('NY', 'CT') TABLESPACE demo,
SUBPARTITION cent VALUES ('OK', 'TX') TABLESPACE demo) (
PARTITION per1 VALUES LESS THAN (TO_DATE('01/01/2014','DD/MM/YYYY')),
PARTITION per2 VALUES LESS THAN (TO_DATE('01/01/2015','DD/MM/YYYY')),
PARTITION per3 VALUES LESS THAN (TO_DATE('01/01/2016','DD/MM/YYYY')),
PARTITION future VALUES LESS THAN(MAXVALUE));

Table created.

SQL> insert into composite_rng_list values (1,'A','NY',to_date('01/02/2015','DD/MM/YYYY'));

1 row created.

Elapsed: 00:00:00.06
SQL> insert into composite_rng_list values (2,'B','CT',to_date('09/09/2014','DD/MM/YYYY'));

1 row created.

Elapsed: 00:00:00.06
SQL> insert into composite_rng_list values (3,'C','TX',to_date('10/10/2015','DD/MM/YYYY'));

1 row created.

Elapsed: 00:00:00.01
SQL> insert into composite_rng_list values (4,'D','WA',to_date('11/11/2013','DD/MM/YYYY'));

1 row created.

Elapsed: 00:00:00.01
SQL> insert into composite_rng_list values (5,'E','OK',to_date('21/02/2014','DD/MM/YYYY'));

1 row created.

Elapsed: 00:00:00.01
SQL> insert into composite_rng_list values (6,'F','OR',to_date('07/07/2013','DD/MM/YYYY'));

1 row created.

Elapsed: 00:00:00.00
SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('ANAND','COMPOSITE_RNG_LIST');

PL/SQL procedure successfully completed.


SQL>  select table_owner,table_name,partition_name,subpartition_count,num_rows,tablespace_name from dba_tab_partitions where table_owner='ANAND' and table_name='COMPOSITE_RNG_LIST'
;

TABLE_OWN TABLE_NAME             PARTITION_NAME  SUBPARTITION_COUNT   NUM_ROWS TABLESPACE_NAME
--------- ---------------------- --------------- ------------------ ---------- ------------------------------
ANAND     COMPOSITE_RNG_LIST     FUTURE                           3          0 DEMO
ANAND     COMPOSITE_RNG_LIST     PER1                             3          2 DEMO
ANAND     COMPOSITE_RNG_LIST     PER2                             3          2 DEMO
ANAND     COMPOSITE_RNG_LIST     PER3                             3          2 DEMO

SQL> select table_owner,table_name,partition_name,subpartition_count from dba_tab_partitions where table_owner='ANAND' and table_name='COMPOSITE_RNG_LIST';

TABLE_OWN TABLE_NAME             PARTITION_NAME  SUBPARTITION_COUNT
--------- ---------------------- --------------- ------------------
ANAND     COMPOSITE_RNG_LIST     FUTURE                           3
ANAND     COMPOSITE_RNG_LIST     PER1                             3
ANAND     COMPOSITE_RNG_LIST     PER2                             3
ANAND     COMPOSITE_RNG_LIST     PER3                             3

SQL> select table_owner,table_name,partition_name,subpartition_name,high_value,num_rows from dba_tab_subpartitions  where table_owner='ANAND' and table_name='COMPOSITE_RNG_LIST';

TABLE_OWN TABLE_NAME             PARTITION_NAME  SUBPARTITION_NA HIGH_VALUE      NUM_ROWS
--------- ---------------------- --------------- --------------- --------------- ----------
ANAND     COMPOSITE_RNG_LIST     FUTURE          FUTURE_CENT     'OK', 'TX'          0
ANAND     COMPOSITE_RNG_LIST     FUTURE          FUTURE_EAST     'NY', 'CT'          0
ANAND     COMPOSITE_RNG_LIST     FUTURE          FUTURE_WEST     'OR', 'WA'          0
ANAND     COMPOSITE_RNG_LIST     PER3            PER3_CENT       'OK', 'TX'          1
ANAND     COMPOSITE_RNG_LIST     PER3            PER3_EAST       'NY', 'CT'          1
ANAND     COMPOSITE_RNG_LIST     PER3            PER3_WEST       'OR', 'WA'          0
ANAND     COMPOSITE_RNG_LIST     PER2            PER2_CENT       'OK', 'TX'          1
ANAND     COMPOSITE_RNG_LIST     PER2            PER2_EAST       'NY', 'CT'          1
ANAND     COMPOSITE_RNG_LIST     PER2            PER2_WEST       'OR', 'WA'          0
ANAND     COMPOSITE_RNG_LIST     PER1            PER1_CENT       'OK', 'TX'          0
ANAND     COMPOSITE_RNG_LIST     PER1            PER1_EAST       'NY', 'CT'          0
ANAND     COMPOSITE_RNG_LIST     PER1            PER1_WEST       'OR', 'WA'          2

Created a local and a global index on the table

SQL> create index i_crl_state on composite_rng_list(cust_state);

Index created.

SQL> create index i_crl_time on composite_rng_list(time_id) local;

Index created.

Index details

SQL> @index
Enter value for owner: ANAND
Enter value for table_name: composite_rng_list

TABLE_OWNER     TABLE_NAME            INDEX_OWNER     INDEX_NAME            TABLESPACE_NAME   NUM_ROWS      CLUST STATUS   INDEX_TYPE
--------------- --------------------- --------------- --------------------- --------------- ---------- ---------- -------- ------------
ANAND           COMPOSITE_RNG_LIST    ANAND           I_CRL_STATE           DEMO                     6          6 VALID    NORMAL
ANAND           COMPOSITE_RNG_LIST    ANAND           I_CRL_TIME                                     6          5 N/A      NORMAL

SQL> @partition_index
Enter value for index_name: I_CRL_TIME
Enter value for owner: ANAND

INDEX_NAME            PARTITION_NA STATUS   TABLESPACE_NAME INITIAL_EXTENT NEXT_EXTENT PCT_INCREASE   NUM_ROWS
--------------------- ------------ -------- --------------- -------------- ----------- ------------ ----------
I_CRL_TIME            PER1         N/A                                                                    2
I_CRL_TIME            PER2         N/A                                                                    2
I_CRL_TIME            PER3         N/A                                                                    2
I_CRL_TIME            FUTURE       N/A                                                                    0

Now let's create the table with which we will be exchanging the partition. Below I have created the EXP_TEMP_PER1 table, which will be exchanged with partition PER1 of the COMPOSITE_RNG_LIST table. Note that the table is created as a list-partitioned table (based on the subpartitioning of the original table). If you create the table exactly like the source table (range-list partitioned) and try the exchange, you will be hit with

ORA-14292: Partitioning type of table must match subpartitioning type of composite partition


SQL> create table ANAND.EXP_TEMP_PER1 (
   CUST_ID NUMBER(10)
   , CUST_NAME VARCHAR2(25)
   , CUST_STATE VARCHAR2(2)
   , TIME_ID DATE
 ) tablespace TEST
 partition by LIST (CUST_STATE)
 (
     partition PER1_WEST values ('OR', 'WA')
   , partition PER1_EAST values ('NY', 'CT')
   , partition PER1_CENT values ('OK', 'TX')
 )
 ;

Table created.

Insert the data into the newly created table.

SQL> insert into EXP_TEMP_PER1 select * from composite_rng_list partition (per1);

2 rows created.

Elapsed: 00:00:00.26
SQL> commit;

Commit complete.

SQL> select * from composite_rng_list partition (per1);

   CUST_ID CUST_NAME                 CU TIME_ID
---------- ------------------------- -- ---------
         4 D                         WA 11-NOV-13
         6 F                         OR 07-JUL-13
SQL> select * from EXP_TEMP_PER1 ;

   CUST_ID CUST_NAME                 CU TIME_ID
---------- ------------------------- -- ---------
         4 D                         WA 11-NOV-13
         6 F                         OR 07-JUL-13

Note that the table created has only partitions, no subpartitions

SQL> select table_owner,table_name,partition_name,subpartition_count,num_rows,tablespace_name from dba_tab_partitions where table_owner='ANAND' and table_name='EXP_TEMP_PER1';

TABLE_OWN TABLE_NAME             PARTITION_NAME  SUBPARTITION_COUNT   NUM_ROWS TABLESPACE_NAME
--------- ---------------------- --------------- ------------------ ---------- ---------------
ANAND     EXP_TEMP_PER1          PER1_WEST                        0          2 TEST
ANAND     EXP_TEMP_PER1          PER1_EAST                        0          0 TEST
ANAND     EXP_TEMP_PER1          PER1_CENT                        0          0 TEST

Elapsed: 00:00:00.10
SQL> select table_owner,table_name,partition_name,subpartition_name,high_value,num_rows from dba_tab_subpartitions where table_owner='ANAND' and table_name='EXP_TEMP_PER1';

no rows selected

Let's create local indexes on the new table, similar to those on the source table

SQL> create index i_exptper1_time on EXP_TEMP_PER1 (time_id) local;

Index created.

SQL> @partition_index
Enter value for index_name: i_exptper1_time
Enter value for owner: ANAND

INDEX_NAME            PARTITION_NA STATUS   TABLESPACE_NAME INITIAL_EXTENT NEXT_EXTENT PCT_INCREASE   NUM_ROWS
--------------------- ------------ -------- --------------- -------------- ----------- ------------ ----------
I_EXPTPER1_TIME       PER1_WEST    USABLE   TEST                   8388608     1048576                       2
I_EXPTPER1_TIME       PER1_EAST    USABLE   TEST                                                             0
I_EXPTPER1_TIME       PER1_CENT    USABLE   TEST                                                             0

Let's try the exchange partition

SQL> alter table COMPOSITE_RNG_LIST exchange partition per1 with table EXP_TEMP_PER1 including indexes with validation update global indexes;

Table altered.

SQL> select table_owner,table_name,partition_name,subpartition_name,high_value,num_rows,tablespace_name from dba_tab_subpartitions  where table_owner='ANAND' and table_name='COMPOSITE_RNG_LIST' and partition_name='PER1';

TABLE_OWN TABLE_NAME             PARTITION_NA SUBPARTITION HIGH_VALUE   NUM_ROWS TABLESPACE_NAME
--------- ---------------------- ------------ ------------ ---------- ---------- ---------------
ANAND     COMPOSITE_RNG_LIST     PER1         PER1_WEST    'OR', 'WA'          2 TEST
ANAND     COMPOSITE_RNG_LIST     PER1         PER1_EAST    'NY', 'CT'          0 TEST
ANAND     COMPOSITE_RNG_LIST     PER1         PER1_CENT    'OK', 'TX'          0 TEST

SQL> select index_name,partition_name,subpartition_name,tablespace_name,status from dba_ind_subpartitions where index_owner='ANAND' and index_name='I_CRL_TIME';

INDEX_NAME            PARTITION_NA SUBPARTITION TABLESPACE_NAME STATUS
--------------------- ------------ ------------ --------------- --------
I_CRL_TIME            FUTURE       FUTURE_CENT  DEMO            USABLE
I_CRL_TIME            FUTURE       FUTURE_EAST  DEMO            USABLE
I_CRL_TIME            FUTURE       FUTURE_WEST  DEMO            USABLE
I_CRL_TIME            PER3         PER3_CENT    DEMO            UNUSABLE
I_CRL_TIME            PER3         PER3_EAST    DEMO            UNUSABLE
I_CRL_TIME            PER3         PER3_WEST    DEMO            UNUSABLE
I_CRL_TIME            PER2         PER2_CENT    DEMO            UNUSABLE
I_CRL_TIME            PER2         PER2_EAST    DEMO            UNUSABLE
I_CRL_TIME            PER2         PER2_WEST    DEMO            UNUSABLE
I_CRL_TIME            PER1         PER1_CENT    TEST            USABLE
I_CRL_TIME            PER1         PER1_EAST    TEST            USABLE
I_CRL_TIME            PER1         PER1_WEST    TEST            USABLE

SQL> select table_owner,table_name,partition_name,subpartition_count,num_rows,tablespace_name from dba_tab_partitions where table_owner='ANAND' and table_name='COMPOSITE_RNG_LIST';


TABLE_OWN TABLE_NAME             PARTITION_NA SUBPARTITION_COUNT   NUM_ROWS TABLESPACE_NAME
--------- ---------------------- ------------ ------------------ ---------- ---------------
ANAND     COMPOSITE_RNG_LIST     PER1                          3          2 DEMO
ANAND     COMPOSITE_RNG_LIST     PER2                          3          2 DEMO
ANAND     COMPOSITE_RNG_LIST     PER3                          3          2 DEMO
ANAND     COMPOSITE_RNG_LIST     FUTURE                        3          0 DEMO

As we can see above, the status of the local index subpartitions for PER1 is USABLE. For partitions PER3 and PER2 the status is UNUSABLE because I did not use the INCLUDING INDEXES clause in the exchange partition for them.

From Oracle Documentation

When you exchange partitions, logging attributes are preserved. You can optionally specify if local indexes are also to be exchanged (INCLUDING INDEXES clause), and if rows are to be validated for proper mapping (WITH VALIDATION clause).

When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
To avoid the overhead of this validation activity, issue the following statement for each constraint before doing the exchange partition operation:

ALTER TABLE table_name
DISABLE CONSTRAINT constraint_name KEEP INDEX
Then, enable the constraints after the exchange.

If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.

Unless you specify UPDATE INDEXES, the database marks UNUSABLE the global indexes or all global index partitions on the table whose partition is being exchanged. Global indexes or global index partitions on the table being exchanged remain invalidated. (You cannot use UPDATE INDEXES for index-organized tables. Use UPDATE GLOBAL INDEXES instead.)

DBA_TAB_SUBPARTITIONS shows the TEST tablespace, but the DBA_TAB_PARTITIONS view still shows the DEMO tablespace for the partition. What needs to be done so that DBA_TAB_PARTITIONS shows the TEST tablespace for the partition? Please do respond :)
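One thing that may be worth trying (I have not tested it here): for a composite-partitioned table the tablespace shown in DBA_TAB_PARTITIONS is only a default attribute, since the actual segments live at the subpartition level, and that default can be changed with MODIFY DEFAULT ATTRIBUTES:

SQL> alter table composite_rng_list modify default attributes for partition per1 tablespace test;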

Reference

Exchanging Partitions

http://psoug.org/reference/partitions.html

Groups – OSDBA OSOPER OSASM

This one is quick and short. I have been asked a few times how to check which OS groups were specified for OSDBA, OSOPER and OSASM, especially during upgrades (to know what they were set to during the initial installation).

# The DBA_GROUP is the OS group which is to be granted OSDBA privileges.
# The OPER_GROUP is the OS group which is to be granted OSOPER privileges.
# The OSASM_GROUP is the OS group which is to be granted OSASM privileges.

The below is from GRID_HOME :-

[grid@testdb1 dbs]$ grep "define SS_" $ORACLE_HOME/rdbms/lib/config.c
#define SS_DBA_GRP "asmdba"
#define SS_OPER_GRP ""
#define SS_ASM_GRP "asmadmin"

The installation logfiles can also be checked.

Silent Upgrade Oracle GoldenGate 12.1.2.0.0 to 12.1.2.1.2

Today I worked on a silent upgrade of Oracle GoldenGate from 12.1.2.0.0 to 12.1.2.1.2, so I thought I would blog it, as it might help some of you. These are the steps I performed.

Current GoldenGate Home –> /oracle/app/product/ogg12.1.2

1. Gather the details of GoldenGate Processes before stopping them

GGSCI> INFO EXTRACT EXTL, SHOWCH
GGSCI> INFO EXTRACT DPUMP, SHOWCH
GGSCI> SEND EXTRACT EXTL, SHOWTRANS
GGSCI> STOP EXTRACT DPUMP
GGSCI> STOP EXTRACT EXTL
GGSCI> SEND REPLICAT REPL STATUS
GGSCI> STOP REPLICAT REPL
GGSCI> STOP JAGENT
GGSCI> STOP MANAGER

2. Back up the existing binaries and the associated files needed for your environment.
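For example, with all GoldenGate processes stopped, a simple tar of the GoldenGate home (which includes dirchk, dirprm and dirdat) is enough; the backup location below is just an example.

$ tar -czvf /backup/ogg12.1.2_pre_upgrade.tar.gz /oracle/app/product/ogg12.1.2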

3. Edit the response file and run the runInstaller in silent mode

In the response file

#-------------------------------------------------------------------------------
# Specify the installation option.
# Specify ORA12c for installing Oracle GoldenGate for Oracle Database 12c and
#         ORA11g for installing Oracle GoldenGate for Oracle Database 11g
#-------------------------------------------------------------------------------
INSTALL_OPTION=ORA11g

#-------------------------------------------------------------------------------
# Specify a location to install Oracle GoldenGate
#-------------------------------------------------------------------------------
SOFTWARE_LOCATION=/oracle2/app/product/ogg12.1.2

#-------------------------------------------------------------------------------
# Specify true to start the manager after installation.
#-------------------------------------------------------------------------------
START_MANAGER=false

#-------------------------------------------------------------------------------
# Specify a free port within the valid range for the manager process.
# Required only if START_MANAGER is true.
#-------------------------------------------------------------------------------
MANAGER_PORT=

#-------------------------------------------------------------------------------
# Specify the location of the Oracle Database.
# Required only if START_MANAGER is true.
#-------------------------------------------------------------------------------
DATABASE_LOCATION=

#-------------------------------------------------------------------------------
# Specify the location which holds the install inventory files.
# This is an optional parameter if installing on
# Windows based Operating System.
#-------------------------------------------------------------------------------
INVENTORY_LOCATION=/oracle/oraInventory

#-------------------------------------------------------------------------------
# Unix group to be set for the inventory directory.
# This parameter is not applicable if installing on
# Windows based Operating System.
#-------------------------------------------------------------------------------
UNIX_GROUP_NAME=oinstall

Run the runInstaller in Silent mode

[oracle@oracle gg]$ export ORACLE_HOME=/oracle/app/product/ogg12.1.2
[oracle@oracle gg]$ export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
[oracle@oracle gg]$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
[oracle@oracle Disk1]$ ./runInstaller -silent -showProgress -waitforcompletion -responseFile /opt/oracle/ogg_upg.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 181836 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 11999 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-04-21_12-23-03AM. Please wait ...You can find the log of this install session at:
 /oracle/oraInventory/logs/installActions2015-04-21_12-23-03AM.log

Prepare in progress.
..................................................   10% Done.

Prepare successful.

Copy files in progress.
..................................................   44% Done.
..................................................   50% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   72% Done.
....................
Copy files successful.

Link binaries in progress.
..........
Link binaries successful.
..................................................   82% Done.

Setup files in progress.
..................................................   100% Done.

Setup files successful.
The installation of Oracle GoldenGate Core was successful.
Please check '/oracle/oraInventory/logs/silentInstall2015-04-21_12-23-03AM.log' for more details.
Successfully Setup Software.
[oracle@oracle1 Disk1]$

4. Confirm GoldenGate has been upgraded

[oracle@oracle Disk1]$ cd $ORACLE_HOME
[oracle@oracle ogg12.1.2]$
[oracle@oracle ogg12.1.2]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Aug  7 2014 09:14:25
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

5. Apply Patch 20265694: Oracle GoldenGate V12.1.2.1.2 for Oracle 11G

[oracle@oracle1 20265694]$ opatch apply
Invoking OPatch 11.2.0.1.7

Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation.  All rights reserved.


Oracle Home       : /oracle/app/product/ogg12.1.2
Central Inventory : /oracle/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.7
OUI version       : 11.2.0.3.0
Log file location : /oracle/app/product/ogg12.1.2/cfgtoollogs/opatch/opatch2015-04-21_00-24-15AM.log

Applying interim patch '20265694' to OH '/oracle/app/product/ogg12.1.2'
Verifying environment and performing prerequisite checks...

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/oracle/app/product/ogg12.1.2')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...

Patching component oracle.oggcore.ora11g, 12.1.2.1.0...
Patch 20265694 successfully applied
Log file location: /oracle/app/product/ogg12.1.2/cfgtoollogs/opatch/opatch2015-04-21_00-24-15AM.log

OPatch succeeded.
[oracle@oracle1 20265694]$
[oracle@oracle1 20265694]$ opatch lsinv
Invoking OPatch 11.2.0.1.7

Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation.  All rights reserved.
........................
........................
Patch  20265694     : applied on Tue Apr 21 00:24:27 EDT 2015
Unique Patch ID:  18593256
   Created on 3 Feb 2015, 15:03:21 hrs PST8PDT
   Bugs fixed:
     19818362, 19602692, 19241234, 17423191, 19781984, 19535319, 19724915
     19721652, 19516537, 19441114, 19132627, 19889991, 19681035

6. Confirm the GoldenGate version is now 12.1.2.1.2

[oracle@oracle ogg12.1.2]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.2 20133048 OGGCORE_12.1.2.1.0OGGBP_PLATFORMS_141228.0533_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Dec 28 2014 13:19:44
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

7. Start the GoldenGate Processes


GGSCI (oracle.test.com) 1> dblogin userid ggate, Password "xxxxxxxx"
Successfully logged into database.

GGSCI> START MANAGER
GGSCI> START JAGENT
GGSCI> START EXTRACT EXTL
GGSCI> START EXTRACT DPUMP
GGSCI> START REPLICAT REPL
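Once everything is started, INFO ALL and the LAG commands confirm that the processes are back to RUNNING and catching up:

GGSCI> INFO ALL
GGSCI> LAG EXTRACT EXTL
GGSCI> LAG REPLICAT REPL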

ORA-00353: log corruption near block change time – 11.2.0.3

Recently I came across the "ORA-00353: log corruption near block change time" error.

Errors in file /oracle/app/diag/rdbms/test/test/trace/test_ms00_54284.trc  (incident=1490189):
ORA-00353: log corruption near block  change  time
ORA-00353: log corruption near block 4631374 change 263470473806052 time 04/16/2015 06:53:16
ORA-00312: online log 43 thread 3: '+REDO02/test/onlinelog/group_43.258.875922665'
ORA-00312: online log 43 thread 3: '+REDO01/test/onlinelog/group_43.258.875922633'
Incident details in: /oracle/app/diag/rdbms/test/test/incident/incdir_1490189/test_ms00_54284_i1490189.trc
Thu Apr 16 08:14:42 2015
Sweep [inc][1490188]: completed
Thu Apr 16 08:14:55 2015
LOGMINER: WARNING: error 353 encountered, failed to read corrupt logfile +REDO01/test/onlinelog/group_43.258.875922633

As per ORA-354 ORA-353 and ORA-312 on Redo Log Group members (Doc ID 332672.1)

Possible corruption in Online Redo Log File Members in a Redo Log Group.

If Oracle detects a corruption when archiving a log member, it will try to read the same redo block from the second member of the group. If the block is found corrupt in all the members of the redo log group, archiving will not proceed.

I started validating the online redo log file members and luckily none of the members were corrupt. In case you find corrupt members, please refer to the above-mentioned note (Doc ID 332672.1) for the solution.

As a precaution, I validated all the online redo log files to check for corruption. If a redo log file is corrupt, the dump should produce an error.

SQL> alter system dump logfile '+REDO01/test/onlinelog/group_43.258.875922633' validate;

System altered.

Elapsed: 00:01:21.89

SQL> alter system dump logfile '+REDO02/test/onlinelog/group_43.258.875922665' validate;

System altered.

Elapsed: 00:01:10.61

Interestingly, I noticed the trace file "test_ms00_54284.trc" was 49GB in size and continuously growing. The file was being written continuously by the GoldenGate process.

The issue seemed similar to one mentioned in

Bug 13840711 – ORA-353 in Standby / Streams Data Capture or ORA-272 in PRIMARY: Redo log corruption by ASYNC redo shipping (Doc ID 13840711.8)

Corrupt redo blocks are read from in memory log buffer (not from disk) by ASYNC redo shipping or hotmining clients.

ORA-353 may be reported by the process reading the redo log block from in memory log buffer (e.g. Logminer Reader – MSnn background process, Streams data capture).

Looking at the GoldenGate status showed:

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
JAGENT      RUNNING
EXTRACT     RUNNING     DPUMP       00:00:00      00:00:03
EXTRACT     RUNNING     EXTL        02:05:36      00:00:07
REPLICAT    RUNNING     REPL        00:00:00      00:00:06

Meanwhile we received another error; this time the extract process, which had failed reading from the online redo log file, failed reading from the archived log file too. I checked the standby database, and it was in sync with the primary.

Errors in file /oracle/app/diag/rdbms/test/test/trace/test_ms00_54284.trc (incident=1490190):
ORA-00353: log corruption near block 4631374 change 605163911323 time 01/06/1988 07:26:12
ORA-00334: archived log: '+FLASH01/test/archivelog/2015_04_16/thread_3_seq_9370.7790.877158359'

Incident details in: /oracle/app/diag/rdbms/test/test/incident/incdir_1490190/test_ms00_54284_i1490190.trc
Thu Apr 16 09:19:13 2015
Dumping diagnostic data in directory=[cdmp_20150416091913], requested by (instance=1, osid=54284 (MS00)), summary=[incident=1490190].
Thu Apr 16 09:19:27 2015
Errors in file /oracle/app/diag/rdbms/test/test/trace/test_ms00_54284.trc (incident=1490191):
ORA-00356: inconsistent lengths in change description
ORA-00353: log corruption near block 4631374 change 605163911323 time 01/06/1988 07:26:12
ORA-00334: archived log: '+FLASH01/test/archivelog/2015_04_16/thread_3_seq_9370.7790.877158359'
Incident details in: /oracle/app/diag/rdbms/test/test/incident/incdir_1490191/test_ms00_54284_i1490191.trc

At this point, the client stopped all the GoldenGate processes, deleted the trace file and started the processes again. I then started monitoring the extract process.

 send extract EXTL status

Sending STATUS request to EXTRACT EXTL ...


EXTRACT EXTL (PID 102390)
  Current status: In recovery[1]: Processing data

  Current read position:
  Redo thread #: 3
  Sequence #: 9351
  RBA: 4023516192
  Timestamp: 2015-04-16 03:37:35.000000
  SCN: 140.3838686301
  Current write position:
  Sequence #: 1637
  RBA: 1551
  Timestamp: 2015-04-16 09:31:56.423132
  Extract Trail: /oracle2/app/product/ogg12.1.2/dirdat/capture/sd

I monitored the extract, and it was successfully able to read and extract from sequence 9370, on which it had earlier failed.

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
JAGENT      RUNNING
EXTRACT     RUNNING     DPUMP       00:00:01      00:00:07
EXTRACT     RUNNING     EXTL        00:00:01      00:00:00
REPLICAT    RUNNING     REPL        00:00:00      00:00:06

send extract EXTL status

Sending STATUS request to EXTRACT EXTL ...


EXTRACT EXTL (PID 102390)
  Current status: Recovery complete: Processing data

  Current read position:
  Redo thread #: 3
  Sequence #: 9395
  RBA: 1721029512
  Timestamp: 2015-04-16 12:08:00.000000
  SCN: 140.3917853576
  Current write position:
  Sequence #: 1641
  RBA: 997030325
  Timestamp: 2015-04-16 12:07:57.056673
  Extract Trail: /oracle2/app/product/ogg12.1.2/dirdat/capture/sd

lag extract EXTL

Sending GETLAG request to EXTRACT EXTL ...
Last record lag 0 seconds.

lag extract DPUMP

Sending GETLAG request to EXTRACT DPUMP ...
Last record lag 2 seconds.
At EOF, no more records to process.

The current DB version is 11.2.0.3.13. As mentioned in the doc, the fix for bug 13840711 is first included in the 11.2.0.3.14 (Apr 2015) Database Patch Set Update (DB PSU); we plan to apply it in the coming weeks.