Interesting blogs after OOW

It has been a long time since I posted anything. With the OOW week over and the 12c database version on its way, here are a few blogs I would recommend reading:

http://portrix-systems.de/blog/brost/favourite-oracle-12c-database-features-of-openworld-bloggers/

http://hansforbrich.blogspot.sg/2012/10/toms-12-top-new-features.html

http://dbastreet.com/blog/?p=908

http://momendba.blogspot.in/2012/10/oow-2012-oracle-12c-new-features.html

http://momendba.blogspot.in/2012/10/oracle-rac-12c-new-featues.html

http://uhesse.com/2012/10/01/exadata-x3-key-points/

Happy reading :)

Loading SQL Plans into SPM using AWR

SPM (SQL Plan Management) has been a useful solution for performance issues caused by execution plan changes. As mentioned in the Oracle Optimizer team’s blog –

SQL Plan Management (SPM) allows database users to maintain stable yet optimal performance for a set of SQL statements. SPM incorporates the positive attributes of plan adaptability and plan stability, while simultaneously avoiding their shortcomings. It has two main objectives:

prevent performance regressions in the face of database system changes
offer performance improvements by gracefully adapting to database system changes

SQL plan baselines can be loaded in multiple ways:

1. Using a SQL Tuning Set (STS)
2. From Cursor Cache
3. Export and Import using Staging table
4. Automatically
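As a quick illustration of option 2, plans for a single statement can be loaded straight from the cursor cache – a minimal sketch, where the sql_id is just a placeholder:

```sql
-- Sketch only: load all cached plans for one statement into a baseline.
-- The sql_id below is a placeholder; substitute your own.
set serveroutput on
declare
  l_plans pls_integer;
begin
  l_plans := dbms_spm.load_plans_from_cursor_cache(
               sql_id => 'b8rc6j0krxwdc');
  dbms_output.put_line(l_plans || ' plan(s) loaded');
end;
/
```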

Here we will see how to manually load plans from AWR into a SQL plan baseline. To load from AWR we first need to create a SQL tuning set:

SQL> exec dbms_sqltune.create_sqlset(sqlset_name => 'b8rc6j0krxwdc_sqlset_test',description => 'sqlset descriptions');

Identify the snap_ids the SQL belongs to using the DBA_HIST_SQLSTAT view. Once identified, run:

declare
  baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
  open baseline_ref_cur for
    select VALUE(p) from table(
      DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(&begin_snap_id, &end_snap_id,
        'sql_id='||CHR(39)||'&sql_id'||CHR(39),
        NULL,NULL,NULL,NULL,NULL,NULL,'ALL')) p;
  DBMS_SQLTUNE.LOAD_SQLSET('b8rc6j0krxwdc_sqlset_test', baseline_ref_cur);
end;
/

It will prompt for the begin_snap_id, end_snap_id and the sql_id you want to load the SQL set with. Along with the sql_id, we can give a specific plan_hash_value in DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY:

declare
  baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
  open baseline_ref_cur for
    select VALUE(p) from table(
      DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(&begin_snap_id, &end_snap_id,
        'sql_id='||CHR(39)||'&sql_id'||CHR(39)||' and plan_hash_value=1421641795',
        NULL,NULL,NULL,NULL,NULL,NULL,'ALL')) p;
  DBMS_SQLTUNE.LOAD_SQLSET('b8rc6j0krxwdc_sqlset_test', baseline_ref_cur);
end;
/

select_workload_repository includes the SQL plan only when the ‘attribute_list’ parameter is TYPICAL (the default) or ALL; passing BASIC would not capture the plan, so here we pass ALL explicitly.

attribute_list

List of SQL statement attributes to return in the result. The possible values are:

TYPICAL – BASIC + SQL plan (without row source statistics) and without object reference list (default)

BASIC – all attributes (such as execution statistics and binds) are returned except the plans. The execution context is always part of the result.

ALL – return all attributes

A comma-separated list of attribute names can also be passed, which allows returning only a subset of SQL attributes: EXECUTION_STATISTICS, SQL_BINDS, SQL_PLAN_STATISTICS (similar to SQL_PLAN plus row source statistics).
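Going by the description above, such a subset could be passed instead of ALL – a sketch reusing the placeholder tuning set from earlier (the attribute subset here is an assumption based on the list above):

```sql
-- Sketch only: capture execution statistics plus the plan with
-- row source statistics (SQL_PLAN_STATISTICS per the list above).
declare
  baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
  open baseline_ref_cur for
    select VALUE(p) from table(
      DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(&begin_snap_id, &end_snap_id,
        NULL,NULL,NULL,NULL,NULL,NULL,NULL,
        'EXECUTION_STATISTICS,SQL_PLAN_STATISTICS')) p;
  DBMS_SQLTUNE.LOAD_SQLSET('b8rc6j0krxwdc_sqlset_test', baseline_ref_cur);
end;
/
```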

To load all the SQLs captured in the AWR snapshots, pass NULL as the basic_filter:

declare
  baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
  open baseline_ref_cur for
    select VALUE(p) from table(
      DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(&begin_snap_id, &end_snap_id,
        NULL,NULL,NULL,NULL,NULL,NULL,NULL,'ALL')) p;
  DBMS_SQLTUNE.LOAD_SQLSET('all_sqlset_test', baseline_ref_cur);
end;
/

View the SQL set using DBA_SQLSET:

05:58:44 SYS >  SELECT NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET where name='b8rc6j0krxwdc_sqlset_test';

NAME                                     OWNER      CREATED                     STATEMENT_COUNT
---------------------------------------- ---------- --------------------------- ---------------
b8rc6j0krxwdc_sqlset_test                SYS        04-JUL-12-05:58:44                        1

To verify the execution plan of a SQL_ID in the STS:

select * from table(dbms_xplan.display_sqlset('b8rc6j0krxwdc_sqlset_test','&sql_id'));

The baseline has not yet been loaded

SYS > select count(*) from dba_sql_plan_baselines;

  COUNT(*)
----------
         0

To load SQL execution plans from the SQL set into a SQL plan baseline –

set serveroutput on
declare
  my_int pls_integer;
begin
  my_int := dbms_spm.load_plans_from_sqlset (
    sqlset_name  => 'b8rc6j0krxwdc_sqlset_test',
    basic_filter => 'sql_id="b8rc6j0krxwdc"',
    sqlset_owner => 'SYS',
    fixed        => 'NO',
    enabled      => 'YES');
  DBMS_OUTPUT.PUT_LINE(my_int);
end;
/

SYS >  select count(*) from dba_sql_plan_baselines;

  COUNT(*)
----------
         2

To load a sql_id with a specific plan_hash_value, use basic_filter => 'sql_id="b8rc6j0krxwdc" and plan_hash_value=1306981985'.

To verify the plan baseline

06:41:23 SYS@ornct1 > SELECT SQL_HANDLE, PLAN_NAME, ORIGIN, ENABLED, ACCEPTED, FIXED, MODULE FROM   DBA_SQL_PLAN_BASELINES;

SQL_HANDLE                     PLAN_NAME                      ORIGIN         ENA ACC FIX MODULE
------------------------------ ------------------------------ -------------- --- --- --- ----------------------------------------------------------------
SQL_bc74153ac0be1578           SQL_PLAN_bsx0p7b0bw5bs02dad9b2 MANUAL-LOAD    YES YES NO  emagent_SQL_rac_database
SQL_bc74153ac0be1578           SQL_PLAN_bsx0p7b0bw5bsfcc37a57 MANUAL-LOAD    YES YES NO  emagent_SQL_rac_database

If you want a particular plan_hash_value to be fixed, use basic_filter => 'sql_id="b8rc6j0krxwdc" and plan_hash_value=1306981985' together with fixed => 'YES' in dbms_spm.load_plans_from_sqlset.
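Putting those two variations together – a sketch, with the same example values used earlier in this post:

```sql
-- Sketch only: load just plan_hash_value 1306981985 for this sql_id
-- and mark it FIXED so the optimizer prefers it over other
-- (non-fixed) plans in the baseline.
set serveroutput on
declare
  my_int pls_integer;
begin
  my_int := dbms_spm.load_plans_from_sqlset (
    sqlset_name  => 'b8rc6j0krxwdc_sqlset_test',
    sqlset_owner => 'SYS',
    basic_filter => 'sql_id="b8rc6j0krxwdc" and plan_hash_value=1306981985',
    fixed        => 'YES',
    enabled      => 'YES');
  DBMS_OUTPUT.PUT_LINE(my_int || ' plan(s) loaded');
end;
/
```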

References and interesting reads -

http://avdeo.com/2012/06/20/fixing-sql-plans-the-hard-way-part-1/
http://avdeo.com/2012/07/04/fixing-sql-plans-the-hard-way-part-2/
http://oracle-randolf.blogspot.in/2009/03/plan-stability-in-10g-using-existing.html
https://blogs.oracle.com/optimizer/entry/sql_plan_management_part_1_of_4_creating_sql_plan_baselines
HOW TO LOAD SQL PLANS INTO SPM FROM AWR (Doc ID 789888.1)

ORA-06553: PLS-306: wrong number or types of arguments in call to ‘OGC_Y’ while exporting table

Recently I faced the “ORA-06553: PLS-306” error while taking an export of a table with expdp using the “query” parameter.

$ expdp directory=TEST_DPUMP_DIR  dumpfile=test_url.dmp logfile=test_url.log tables=anand.test_delete_objects query=\"WHERE processed_flag = 'Y'\"

Export: Release 11.2.0.3.0 - Production on Tue May 29 00:24:25 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: system
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** directory=TEST_DPUMP_DIR dumpfile=test_url.dmp logfile=test_url.log tables=anand.test_delete_objects query="WHERE processed_flag = Y"
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 112 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-31693: Table data object "ANAND"."TEST_DELETE_OBJECTS" failed to load/unload and is being skipped due to error:
ORA-06553: PLS-306: wrong number or types of arguments in call to 'OGC_Y'
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /exp/kmportal_dpump_dir/test_url.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 00:25:50

The export completes successfully if we use a parfile:

$ expdp parfile=test_url.lst

Export: Release 11.2.0.3.0 - Production on Tue May 29 00:53:49 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: system
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** parfile=test_url.lst
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 112 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ANAND"."TEST_DELETE_OBJECTS"           10.09 KB      38 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /exp/kmportal_dpump_dir/test_url.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 00:54:39


$ more test_url.lst
directory=TEST_DPUMP_DIR
dumpfile=test_url.dmp
logfile=test_url.log
tables=anand.test_delete_objects
query="WHERE processed_flag = 'Y'"

So, I was a little confused until I came across Jonathan Lewis’s comment on the blog below:

http://jonathanlewis.wordpress.com/2010/01/26/aliases/

Unfortunately, if you’ve installed Spatial, there are two functions (called ogc_x and ogc_y) which have been given the public synonyms X and Y respectively; these move Oracle into the function-call validity checks – which is where the PLS error comes from.

SYS > select owner,SYNONYM_NAME,TABLE_OWNER,TABLE_NAME from dba_synonyms where SYNONYM_NAME='Y';

OWNER      SYNONYM_NAME                   TABLE_OWNER                    TABLE_NAME
---------- ------------------------------ ------------------------------ -------------------------
PUBLIC     Y                              MDSYS                          OGC_Y

1 row selected.

SYS > select owner,SYNONYM_NAME,TABLE_OWNER,TABLE_NAME from dba_synonyms where SYNONYM_NAME='X';

OWNER      SYNONYM_NAME                   TABLE_OWNER                    TABLE_NAME
---------- ------------------------------ ------------------------------ -------------------------
PUBLIC     X                              MDSYS                          OGC_X

1 row selected.

So, to run it successfully from the command line, the inner single quotes must be escaped as well:

$ expdp directory=TEST_DPUMP_DIR dumpfile=test_url_test.dmp logfile=test_url_test.log tables=anand.test_delete_objects query=\"WHERE processed_flag = \'Y\'\"

Export: Release 11.2.0.3.0 - Production on Tue May 29 01:11:44 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: system
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** directory=TEST_DPUMP_DIR dumpfile=test_url_test.dmp logfile=test_url_test.log tables=anand.test_delete_objects query="WHERE processed_flag = 'Y'"
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 112 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ANAND"."TEST_DELETE_OBJECTS"           10.09 KB      38 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /exp/kmportal_dpump_dir/test_url_test.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 01:12:24

Mining DDL from LogMiner

One may sometimes need to find the DDL that ran on the database during a certain period of time. If auditing is enabled, you can get the details using the various *_audit_* views. If it is not, and the database is in archivelog mode, you can try LogMiner. Here is an example –

From SYS session –

14:57:56 SQL> select supplemental_log_data_min from v$database;

SUPPLEME
--------
YES

14:58:11 SQL>
14:58:11 SQL>
14:58:11 SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            D:\oracle\archive\catdb
Oldest online log sequence     24
Next log sequence to archive   26
Current log sequence           26
14:58:36 SQL>
14:58:36 SQL>
14:58:36 SQL> alter system switch logfile;

System altered.

14:58:41 SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            D:\oracle\archive\catdb
Oldest online log sequence     25
Next log sequence to archive   27
Current log sequence           27
14:58:43 SQL>

Logged in as the ANAND user and ran some DDLs:

14:58:59 SQL> show user
USER is "ANAND"
14:59:01 SQL>
14:59:01 SQL>
14:59:01 SQL> create table test_log (a number,b number);

Table created.

14:59:15 SQL>
14:59:15 SQL> insert into test_log values (1,1);

1 row created.

14:59:23 SQL> insert into test_log values (1,2);

1 row created.

14:59:25 SQL> insert into test_log values (2,2);

1 row created.

14:59:27 SQL> insert into test_log values (2,1);

1 row created.

14:59:29 SQL>
14:59:30 SQL>
14:59:30 SQL> commit;

Commit complete.

14:59:31 SQL>
14:59:31 SQL> select * from test_log;

         A          B
---------- ----------
         1          1
         1          2
         2          2
         2          1

14:59:36 SQL>
14:59:37 SQL>
14:59:37 SQL>
15:01:35 SQL>
15:01:35 SQL> alter table test_log add (c number);

Table altered.

15:01:49 SQL>
15:01:49 SQL>
15:01:50 SQL> insert into test_log values (1,1,1);

1 row created.

15:01:55 SQL> insert into test_log values (1,1,2);

1 row created.

15:01:58 SQL> insert into test_log values (1,2,2);

1 row created.

15:02:00 SQL> insert into test_log values (2,2,2);

1 row created.

15:02:03 SQL> insert into test_log values (2,1,1);

1 row created.

15:02:08 SQL> commit;

Commit complete.

15:02:10 SQL>
15:02:10 SQL>
15:02:11 SQL>
15:02:11 SQL> select * from test_log;

         A          B          C
---------- ---------- ----------
         1          1
         1          2
         2          2
         2          1
         1          1          1
         1          1          2
         1          2          2
         2          2          2
         2          1          1

9 rows selected.

15:02:16 SQL>
15:12:02 SQL>
15:12:03 SQL>
16:08:39 SQL>
16:08:39 SQL>
16:08:40 SQL>
16:08:40 SQL> truncate table test_log;

Table truncated.

16:08:50 SQL>
16:08:51 SQL>
16:08:51 SQL>
16:08:51 SQL>
16:21:53 SQL>
16:21:53 SQL>
16:21:53 SQL> drop table test_log;

Table dropped.

As the SYS user:

15:13:27 SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            D:\oracle\archive\catdb
Oldest online log sequence     25
Next log sequence to archive   27
Current log sequence           27
16:22:01 SQL>
16:22:01 SQL>
16:22:02 SQL> alter system switch logfile;

System altered.

16:22:06 SQL> alter system switch logfile;

System altered.

16:22:27 SQL>

So archive log sequence 27 has the details:

16:23:28 SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';

16:31:35 SQL> EXECUTE dbms_logmnr.add_logfile(LogFileName=>'D:\oracle\archive\catdb\ARC0000000027_0785160155.0001',OPTIONS => DBMS_LOGMNR.NEW);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.29
16:31:48 SQL> EXECUTE dbms_logmnr.add_logfile(LogFileName=>'D:\oracle\archive\catdb\ARC0000000028_0785160155.0001',OPTIONS => DBMS_LOGMNR.ADDFILE);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01

16:33:21 SQL> select filename,low_time,high_time,DICTIONARY_BEGIN,DICTIONARY_END from v$LOGMNR_LOGS;

FILENAME                                                               LOW_TIME             HIGH_TIME            DIC DIC
---------------------------------------------------------------------- -------------------- -------------------- --- ---
D:\oracle\archive\catdb\ARC0000000027_0785160155.0001                  12-JUN-2012 14:58:41 12-JUN-2012 16:22:06 NO  NO
D:\oracle\archive\catdb\ARC0000000028_0785160155.0001                  12-JUN-2012 16:22:06 12-JUN-2012 16:22:27 NO  NO

16:35:01 SQL> EXECUTE dbms_logmnr.start_logmnr(OPTIONS =>DBMS_LOGMNR.DDL_DICT_TRACKING +DBMS_LOGMNR.PRINT_PRETTY_SQL);

PL/SQL procedure successfully completed.

16:41:44 SQL> select username,OPERATION,TIMESTAMP,TABLE_NAME,SESSION#,SERIAL#,SQL_REDO from  V$LOGMNR_CONTENTS where SEG_OWNER='ANAND';

USERNAME   OPERATION                        TIMESTAMP            TABLE_NAME                         SESSION#    SERIAL# SQL_REDO
---------- -------------------------------- -------------------- -------------------------------- ---------- ---------- ------------------------------------------------------------
ANAND      DDL                              12-JUN-2012 14:59:14 TEST_LOG                             191            95 create table test_log (a number,b number);
ANAND      DDL                              12-JUN-2012 15:01:49 TEST_LOG                             191            95 alter table test_log add (c number);
ANAND      DDL                              12-JUN-2012 16:08:50 TEST_LOG                             191            95 truncate table test_log;
ANAND      DDL                              12-JUN-2012 16:21:58 TEST_LOG                             191            95 ALTER TABLE "ANAND"."TEST_LOG" RENAME TO "BIN$R2yT2OHzRhGspu
                                                                                                                5YAsnXPw==$0" ;

ANAND      DDL                              12-JUN-2012 16:21:58 TEST_LOG                             191            95 drop table test_log AS "BIN$R2yT2OHzRhGspu5YAsnXPw==$0" ;

If supplemental logging is not enabled, some columns of the view will show the value UNKNOWN, which can make it difficult to identify the user and session details.
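Once the mining is done, the LogMiner session started above can be closed to release its resources – a minimal sketch:

```sql
-- End the LogMiner session; V$LOGMNR_CONTENTS is only queryable
-- between dbms_logmnr.start_logmnr and dbms_logmnr.end_logmnr.
EXECUTE dbms_logmnr.end_logmnr;
```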

Index Range Scan and db file scattered read as session wait event

I was asked to check a session which had been running longer than expected on an 11.2.0.3 single-instance database. The session was running a simple select statement:

05:03:09 SYS@xx4040 > @sqlid 9kpjtkdz1mzus
old   9:        sql_id = ('&1')
new   9:        sql_id = ('9kpjtkdz1mzus')

SQL_ID        HASH_VALUE SQL_TEXT
------------- ---------- ------------------------------------------------------------------------------------------------------------------------------------------------------
9kpgmndz1mzus 2115632984 SELECT SC.ID FROM user.s_seq_cd SC WHERE SC.TYPE_CD <> 'CMOD' AND SC.URGENCY_CD IS NOT NULL AND SC.STAT_ID = 'Closed'

1 row selected.

Elapsed: 00:00:00.00
old  26:        sql_id = ('&1')
new  26:        sql_id = ('9kpjtkdz1mzus')

 CH# PARENT_HANDLE    OBJECT_HANDLE     PLAN_HASH OPT_COST     PARSES   H_PARSES EXECUTIONS    FETCHES ROWS_PROCESSED     CPU_MS     ELA_MS    AVG_ELA       LIOS       PIOS      SORTS USERS_EXECUTING
---- ---------------- ---------------- ---------- -------- ---------- ---------- ---------- ---------- -------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------------
   0 00000001C76435E8 00000001C7640D80 3122863050    18774          1          1          1         56         229376   1148.076 1659299.43 1659.29943     228000     209064          0        1

1 row selected.
SQL_ID  9kpjtkdz1mzus, child number 0
-------------------------------------
SELECT SC.ID FROM USER.S_SEQ_CD SC WHERE SC.TYPE_CD <>
'CMOD' AND SC.URGENCY_CD IS NOT NULL AND SC.STAT_ID = 'Closed'

Plan hash value: 3122863050

-----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                 |       |       | 18774 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID| S_SEQ_CD        |   630K|    54M| 18774   (1)| 00:03:46 |
|*  2 |   INDEX RANGE SCAN          | S_SEQ_CD_M28_X  |    12M|       |   552   (1)| 00:00:07 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("SC"."URGENCY_CD" IS NOT NULL AND "SC"."TYPE_CD"<>'CMOD'))
   2 - access("SC"."STAT_ID"='Closed')

On checking the session wait event, it showed “db file scattered read”:

05:12:01 SYS@xx4040 > @asw

USERNAME               SID    SERIAL# SPID                     MACHINE              EVENT                                    SQL_ID         WAIT_TIME SECONDS_IN_WAIT STATE
--------------- ---------- ---------- ------------------------ -------------------- ---------------------------------------- ------------- ---------- --------------- -------------------
USER                  575        331 9292                     xxxxxxxx              db file scattered read                   9kpjtkdz1mzus          0               0 WAITING

As per the plan I was expecting “db file sequential read”, so seeing “db file scattered read” was a little surprising. I traced the session with event 10046 at level 12, which showed:

.......................
WAIT #47616080937272: nam='db file parallel read' ela= 39297 files=1 blocks=7 requests=7 obj#=55745 tim=1338199998917111
WAIT #47616080937272: nam='db file scattered read' ela= 3226 file#=60 block#=71137 blocks=8 obj#=55745 tim=1338199998920723
WAIT #47616080937272: nam='db file scattered read' ela= 15988 file#=60 block#=71145 blocks=7 obj#=55745 tim=1338199998936973

*** 2012-05-28 05:13:18.945
WAIT #47616080937272: nam='db file scattered read' ela= 7994 file#=60 block#=71152 blocks=9 obj#=55745 tim=1338199998945414
WAIT #47616080937272: nam='db file scattered read' ela= 7164 file#=60 block#=71161 blocks=7 obj#=55745 tim=1338199998952955
WAIT #47616080937272: nam='db file scattered read' ela= 9505 file#=60 block#=71169 blocks=7 obj#=55745 tim=1338199998963139
WAIT #47616080937272: nam='db file sequential read' ela= 5597 file#=60 block#=71168 blocks=1 obj#=55745 tim=1338199998968993
WAIT #47616080937272: nam='db file scattered read' ela= 9685 file#=60 block#=71176 blocks=7 obj#=55745 tim=1338199998978860
WAIT #47616080937272: nam='db file scattered read' ela= 8081 file#=60 block#=71183 blocks=8 obj#=55745 tim=1338199998987329
WAIT #47616080937272: nam='db file sequential read' ela= 5535 file#=30 block#=156495 blocks=1 obj#=55745 tim=1338199998993023
WAIT #47616080937272: nam='db file scattered read' ela= 17117 file#=60 block#=71191 blocks=7 obj#=55745 tim=1338199999010375
WAIT #47616080937272: nam='db file sequential read' ela= 14412 file#=114 block#=475510 blocks=1 obj#=531936 tim=1338199999025129
........................
........................
05:27:25 SYS@xx4040 > select object_name from dba_objects where object_id=531936;

OBJECT_NAME
----------------------------------------
S_SEQ_CD_M28_X

05:20:35 SYS@xx4040 > select object_name from dba_objects where object_id=55745;

OBJECT_NAME
----------------------------------------
S_SEQ_CD

Then I came across this post on the oracle-l list:

http://www.freelists.org/post/oracle-l/RE-Calculating-LIOs,11

If a regular index full or range scan causes “multiblock” reads, it is due to index prefetching, which is up to the optimizer to decide. Index prefetching means that when doing a range or full scan, Oracle reads the index by traversing the index tree, but it reads the tree several blocks at a time (for example, it gets a number of logically consecutive leaf block addresses from a branch block and reads a batch of leaves in).

So this is a *noncontiguous* multiblock read, meaning that Oracle just issues multiple single block read requests in one batch. This should result in a “db file parallel read” wait. So, asynch IO should definitely be used when relying on index prefetch, otherwise your performance might degrade, especially when some of the prefetched blocks don’t get used at all.

CKPT process maintains statistics history in X$KCBKPFS (kernel cache buffer k? pre-fetch statistics) about wasted prefetched blocks.(Luckily this x$ table’s columns are written in english not just abbreviations). So if sessions start hitting a lot of never pinned prefetch blocks, the x$kcbkpfs statistics start to indicate inefficient prefetching, thus causing CBO not to favour it and prefer regular single block reads.

Index full scans are the main candidates that may benefit from prefetch, since a full scan needs to read all leaf blocks anyway, thus not risking prefetch wastage. However, in order to prefetch batches of leaf blocks you have to read in the branch blocks “above” them, causing some extra IO (for a regular single block index full scan you only have to read in branch blocks from the root to the first or last leaf block in the tree). Note that I’m only concluding my last claim; I’ve not done testing to see whether the extra branch block reads actually happen – but I don’t see any other way to find out the DBAs of the next leaf block batch.

You can control prefetching with parameters:

_index_prefetch_factor – defaults to 100, but when changed smaller, the optimizer will favor index prefetching.
_db_file_noncontig_mblock_read_count – maximum number of blocks to be prefetched in one batch. If set to 1, prefetching should be disabled.

There is an event, 10299, which traces prefetch operations; if you set it for the CKPT process you’ll see some aggregate statistics as well. There are also a lot of %prefetch% statistics in v$sesstat and v$sysstat.
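As an illustration only – these are hidden (underscore) parameters, so any change should be tested first and ideally discussed with Oracle Support – prefetching could in principle be switched off at session level:

```sql
-- Sketch only: hidden parameters must be double-quoted. Per the
-- description above, a batch size of 1 should disable noncontiguous
-- multiblock (prefetch) reads for this session.
ALTER SESSION SET "_db_file_noncontig_mblock_read_count" = 1;
```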

On checking the session stats for prefetch -

05:45:59 SYS@xx4040 > select name,value from v$sesstat s, v$statname n where s.statistic# = n.statistic# and s.sid = 575 and n.name like '%prefetch%' order by name;

NAME                                                    VALUE
-------------------------------------------------- ----------
index crx upgrade (prefetch)                                0
physical reads cache prefetch                          454875
physical reads prefetch warmup                              0
prefetch clients - 16k                                      0
prefetch clients - 2k                                       0
prefetch clients - 32k                                      0
prefetch clients - 4k                                       0
prefetch clients - 8k                                       0
prefetch clients - default                                  1
prefetch clients - keep                                     0
prefetch clients - recycle                                  0
prefetch warmup blocks aged out before use              26826
prefetch warmup blocks flushed out before use               0
prefetched blocks aged out before use                     468

The value for “physical reads cache prefetch” increased to 533260 and then held steady; after that, the session wait started showing db file sequential read.

06:44:18 SYS@xx4040 > select * from X$KCBKPFS where WASTED_BLOCKS > 0;

no rows selected

Elapsed: 00:00:00.00
06:44:27 SYS@xx4040 > select * from X$KCBKPFS where PREFETCH_BLOCKS> 0;

ADDR                   INDX    INST_ID BUFFER_POOL_ID  TIMESTAMP PREFETCH_OPS PREFETCH_BLOCKS WASTED_BLOCKS CLIENTS_PREFETCH PREFETCH_LIMIT
---------------- ---------- ---------- -------------- ---------- ------------ --------------- ------------- ---------------- --------------
00002B70873F6598        100          1              3   14781146          663            4619             0                7           9376
00002B70873F6598        101          1              3   14784346          780            5428             0                7           9376
00002B70873F6598        102          1              3   14788166         1161            8074             0                7           9376
........................
........................
00002B70873F6598        148          1              3   14773386         1159            8077             0                7           9376
00002B70873F6598        149          1              3   14778066         1093            7626             0                7           9376

49 rows selected.

Some important parameters set regarding it -

SYS@xx4040 > @hidden_param
Enter value for parameter: _db_file_noncontig_mblock_read_count
Enter value for 1: _db_file_noncontig_mblock_read_count
old  11: AND    a.ksppinm = DECODE(LOWER('&parameter'), 'all', a.ksppinm, LOWER('&1'))
new  11: AND    a.ksppinm = DECODE(LOWER('_db_file_noncontig_mblock_read_count'), 'all', a.ksppinm, LOWER('_db_file_noncontig_mblock_read_count'))

PARAMETER                             DESCRIPTION                    SESSION_VA INSTANCE_V
------------------------------------- ------------------------------ ---------- ----------
_db_file_noncontig_mblock_read_count  number of noncontiguous db     11         11
                                      blocks to be prefetched


1 row selected.

SYS@xx4040 > /
Enter value for parameter: _db_cache_pre_warm
Enter value for 1: _db_cache_pre_warm
old  11: AND    a.ksppinm = DECODE(LOWER('&parameter'), 'all', a.ksppinm, LOWER('&1'))
new  11: AND    a.ksppinm = DECODE(LOWER('_db_cache_pre_warm'), 'all', a.ksppinm, LOWER('_db_cache_pre_warm'))

PARAMETER                             DESCRIPTION                    SESSION_VA INSTANCE_V
------------------------------------- ------------------------------ ---------- ----------
_db_cache_pre_warm                    Buffer Cache Pre-Warm Enabled  TRUE       TRUE
                                      : hidden parameter


1 row selected.

SYS@xx4040 > /
Enter value for parameter: _index_prefetch_factor
Enter value for 1: _index_prefetch_factor
old  11: AND    a.ksppinm = DECODE(LOWER('&parameter'), 'all', a.ksppinm, LOWER('&1'))
new  11: AND    a.ksppinm = DECODE(LOWER('_index_prefetch_factor'), 'all', a.ksppinm, LOWER('_index_prefetch_factor'))

PARAMETER                             DESCRIPTION                    SESSION_VA INSTANCE_V
------------------------------------- ------------------------------ ---------- ----------
_index_prefetch_factor                index prefetching factor       100        100

1 row selected.

SYS@xx4040 > show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer 6G

SYS@xx4040 > show parameter multiblock

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_multiblock_read_count        integer     8

05:47:43 SYS@xx4040 > select sum(bytes/1024/1024/1024)GB from dba_segments where segment_name='S_SEQ_CD';

        GB
----------
13.7021484

05:49:57 SYS@xx4040 > select sum(bytes/1024/1024)MB from dba_segments where segment_name='S_SEQ_CD_M28_X';

        MB
----------
   467.125

The database had been bounced just before the SQL started, as the box got rebooted. So is the prefetching caused by the empty buffer cache? I am unable to explain this behavior, so it would be great if you could share your thoughts in the comments, along with anything else that could be checked.

UPDATE — You can also read the blog post below from Tanel Poder, which mentions multiblock reads for an index unique scan.

http://blog.tanelpoder.com/2012/05/02/advanced-oracle-troubleshooting-guide-part-10-index-unique-scan-doing-multiblock-reads/

Index Sample Size

Recently I was asked a question: what would be the sample size percentage for indexes when table stats are gathered with an estimate_percent of 5 and cascade set to true? My answer, within a second, was 5. I believed that the sample size percentage for an index is the same as the estimate_percent given while gathering the stats. So it was time for a test -
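The scenario in the question boils down to a call like this (a sketch; the owner and table names are placeholders matching the test below):

```sql
-- Table stats at 5% with cascade=>TRUE, which gathers
-- stats on the table's indexes in the same call.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'ANAND',
    tabname          => 'STAT_TEST',
    estimate_percent => 5,
    cascade          => TRUE);
END;
/
```

So would the indexes also end up with a 5% sample? Let's see.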

SQL> select * from v$version;
 
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for 32-bit Windows: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
 
SQL> drop table STAT_TEST purge;
 
Table dropped.
 
SQL> create table stat_test (
  2  c1 number not null,
  3  c2 number not null,
  4  c3 varchar2(300) not null);
 
Table created.
 
SQL> create index stat_c2_idx on stat_test(c2);
 
Index created.
 
SQL>
SQL> create unique index stat_c1_indx on stat_test(c1);
 
Index created.
 
SQL> create index stat_c3_idx on stat_test(c3);
 
Index created.
 
SQL> insert into stat_test
  2  select
  3  rownum as c1,
  4  1 as c2,
  5  rpad('A',300,'A') as c3
  6  from
  7  dual
  8  connect by
  9  level <= 1000000;
 
1000000 rows created.
 
SQL> commit;
 
Commit complete.
 
Checked the stats on the indexes -
Enter value for tblown: ANAND
Enter value for tblnm: STAT_TEST
 
SE SORT1                          SORT2                          S INDEX_NAME                     LAST_ANALYZED   NUM_ROWS SAMPLE_PCT DISTINCT_KEYS CLUSTERING_FACTOR FLAGS
-- ------------------------------ ------------------------------ - ------------------------------ ------------- ---------- ---------- ------------- ----------------- ----------
I  ANAND                          STAT_C2_IDX                      STAT_C2_IDX                                                                                N
I  ANAND                          STAT_C1_INDX                     STAT_C1_INDX                                                                               U
I  ANAND                          STAT_C3_IDX                      STAT_C3_IDX                                                                                N

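The index-stats script prompted for above isn't reproduced in the post; it would be along these lines (a sketch over dba_indexes mirroring the dba_tables query shown later — the aliases and the flags decode are assumptions):

```sql
-- Sketch: index stats with the actual sample percentage used,
-- derived as sample_size*100/num_rows.
select 'I ' seg_type,
       i.owner sort1,
       i.index_name sort2,
       null sort3,
       i.index_name,
       to_char(i.last_analyzed,'yyyymmdd hh24mi') last_analyzed,
       i.num_rows,
       decode(i.num_rows,
              null,to_number(null),
              0,100,
              round((i.sample_size*100)/i.num_rows,0)) sample_pct,
       i.distinct_keys,
       i.clustering_factor,
       decode(i.uniqueness,'UNIQUE','U','N') flags
  from dba_indexes i
 where i.owner = upper('&tblown')
   and i.table_name like upper('&tblnm')
/
```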
Gathered stats on the indexes -

SQL> exec dbms_stats.gather_index_stats(ownname =>'ANAND',indname =>'STAT_C1_INDX',estimate_percent=>5,degree => DBMS_STATS.AUTO_DEGREE);
 
PL/SQL procedure successfully completed.
 
SQL> exec dbms_stats.gather_index_stats(ownname =>'ANAND',indname =>'STAT_C2_IDX',estimate_percent=>5,degree => DBMS_STATS.AUTO_DEGREE);
 
PL/SQL procedure successfully completed.
 
SQL> exec dbms_stats.gather_index_stats(ownname =>'ANAND',indname =>'STAT_C3_IDX',estimate_percent=>5,degree => DBMS_STATS.AUTO_DEGREE);
 
PL/SQL procedure successfully completed.

As I have given estimate_percent as 5, I would expect the sample percentage to be 5%. So let's check -

SQL> / 
Enter value for tblown: ANAND
Enter value for tblnm: STAT_TEST
 
SE SORT1                          SORT2                          S INDEX_NAME                     LAST_ANALYZED   NUM_ROWS SAMPLE_PCT DISTINCT_KEYS CLUSTERING_FACTOR FLAGS
-- ------------------------------ ------------------------------ - ------------------------------ ------------- ---------- ---------- ------------- ----------------- ----------
I  ANAND                          STAT_C2_IDX                      STAT_C2_IDX                    20120518 0035    1011697         40             1             46109 N
I  ANAND                          STAT_C1_INDX                     STAT_C1_INDX                   20120518 0031    1016203         61       1016203             47680 U
I  ANAND                          STAT_C3_IDX                      STAT_C3_IDX                    20120518 0035     998800          5             1             68460 N

Deleted the stats on the indexes and gathered them using dbms_stats.gather_table_stats with an estimate_percent of 5% to check what happens in that case -

SQL> exec dbms_stats.gather_table_stats(ownname =>'ANAND',tabname =>'STAT_TEST',estimate_percent=>5,degree => DBMS_STATS.AUTO_DEGREE);
 
PL/SQL procedure successfully completed.
 
SQL> / 
Enter value for tblown: ANAND
Enter value for tblnm: STAT_TEST
 
SE SORT1                          SORT2                          S INDEX_NAME                     LAST_ANALYZED   NUM_ROWS SAMPLE_PCT DISTINCT_KEYS CLUSTERING_FACTOR FLAGS
-- ------------------------------ ------------------------------ - ------------------------------ ------------- ---------- ---------- ------------- ----------------- ----------
I  ANAND                          STAT_C2_IDX                      STAT_C2_IDX                    20120518 0037    1008329         40             1             45928 N
I  ANAND                          STAT_C1_INDX                     STAT_C1_INDX                   20120518 0037    1007530         61       1007530             47296 U
I  ANAND                          STAT_C3_IDX                      STAT_C3_IDX                    20120518 0037    1008480          5             1             68880 N
 
SQL> select 'T ' seg_type,
  2         t.owner sort1,
  3         t.table_name sort2,
  4         null sort3,
  5         t.table_name,
  6         to_char(t.last_analyzed,'yyyymmdd hh24mi') last_analyzed,
  7         t.num_rows,
  8         -- t.sample_size,
  9         decode(t.num_rows,
 10                null,to_number(null),
 11                0,100,
 12                round((t.sample_size*100)/t.num_rows,0)) sample_pct
 13    from dba_tables t
 14   where t.owner = upper('&tblown')
 15     and t.table_name like upper('&tblnm')
 16  / 
Enter value for tblown: ANAND
Enter value for tblnm: STAT_TEST
 
SE SORT1                          SORT2                          S TABLE_NAME                LAST_ANALYZED   NUM_ROWS SAMPLE_PCT 
-- ------------------------------ ------------------------------ - ------------------------- ------------- ---------- ---------- -
T  ANAND                          STAT_TEST                        STAT_TEST                 20120518 0037    1001020          5

So the table shows sample_pct as 5%, but for the indexes it is different.

I posted the same question on the OTN forum and Sir Jonathan Lewis replied –

According to a comment I wrote a couple of years ago ( https://forums.oracle.com/forums/thread.jspa?threadID=1116700 ) Oracle used to have a note on MOS stating that because the data in an index was sorted, and therefore easily able to produce a distorted picture on a small sample, the index sample was adjusted if necessary to aim for a minimum number of leaf blocks which is about 900.

The two indexes where you’ve got large samples have very short keys, so the 5% was probably too few blocks. The index where you got a 5% sample has a long key, so maybe the 5% was comfortably over 900.

After which I did a few checks and observed that if an index has up to ~1100 leaf blocks, stats are gathered on it with a 100% sample, and as the leaf blocks increase Oracle internally determines the estimate percentage for sampling.
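The leaf-block floor can be eyeballed with a query along these lines (a sketch; the ~900 figure comes from the MOS note Jonathan references, and the case expression is my own illustration):

```sql
-- If 5% of an index's leaf blocks falls under the ~900-block floor,
-- expect the actual sample percentage to be scaled up above 5%.
select index_name,
       leaf_blocks,
       round(leaf_blocks * 0.05) blocks_at_5pct,
       case when leaf_blocks * 0.05 < 900
            then 'sample likely scaled up'
            else '5% should hold'
       end expectation
  from dba_indexes
 where owner = 'ANAND'
   and table_name = 'STAT_TEST';
```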

Change in Database link – 11gR2

A small change to database links from 11gR2 -

SQL> conn anand/anand123
Connected.
SQL>
SQL> @db_link

OWNER                          DB_LINK                        USERNAME                       HOST
------------------------------ ------------------------------ ------------------------------ ------------------------------
ANAND                          SCOTT_DBLINK                   SCOTT                          MATSTDBY

SQL> select sysdate from dual@SCOTT_DBLINK;

SYSDATE
---------
11-MAY-12

I log in as SYS and change the SCOTT schema’s password -

SQL> conn / as sysdba
Connected.
SQL>
SQL>
SQL>
SQL> alter user scott identified by abc;

User altered.

Now select using the dblink fails -

SQL> select sysdate from dual@SCOTT_DBLINK;
select sysdate from dual@SCOTT_DBLINK
                         *
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from SCOTT_DBLINK

In releases prior to 11gR2, one had to drop and recreate the dblink with the new password. From 11gR2 we have the “alter database link” command -
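For comparison, the pre-11gR2 workaround was a drop and recreate (a sketch; the USING clause assumes the MATSTDBY host shown in the dblink listing above):

```sql
-- Pre-11gR2: no ALTER DATABASE LINK, so recreate with the new password.
drop database link scott_dblink;

create database link scott_dblink
  connect to scott identified by abc
  using 'MATSTDBY';
```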

SQL> alter database link SCOTT_DBLINK connect to scott identified by abc;
alter database link SCOTT_DBLINK connect to scott identified by abc
                    *
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> conn / as sysdba
Connected.

SQL> grant alter database link to anand;

Grant succeeded.
SQL> conn anand/anand123
Connected.
SQL>
SQL>
SQL>
SQL> alter database link SCOTT_DBLINK connect to scott identified by abc;

Database link altered.

SQL> select * from dual@SCOTT_DBLINK;

D
-
X