Archive for the 'Oracle' Category

UPGRADE CHECKPOINTTABLE – Goldengate

We have a GoldenGate setup wherein three different GoldenGate clients connect and replicate to one target database. The versions currently in use are 11.1.1.1, 11.2.1.0.0 and 11.2.1.0.3.

Version 11.2.1.0.3 was added recently, and below are the steps performed.
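For reference, the obey file here just batches the GGSCI commands that appear in the transcript below; a minimal sketch of what ./dirprm/add_rep.oby would contain under these assumptions (not copied from the actual file):

DBLOGIN USERID ggate@test PASSWORD 'xxxxxxxx'
ADD REPLICAT rep, EXTTRAIL /app/trail/rep/rp, CHECKPOINTTABLE ggate.OGG_CHECKPOINT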

GGSCI (myhost) 2> obey ./dirprm/add_rep.oby

GGSCI (myhost) 3>

GGSCI (myhost) 3> DBLOGIN USERID ggate@test PASSWORD 'xxxxxxxx'

ERROR: Unable to connect to database using user ggate@test. Please check privileges.
ORA-12170: TNS:Connect timeout occurred.

GGSCI (myhost) 4>

GGSCI (myhost) 4> ADD REPLICAT rep, extTrail /app/trail/rep/rp, checkpointTable ggate.OGG_CHECKPOINT

REPLICAT added.

DBLOGIN failed, but the replicat still got added. Now what happens if we try to delete it?

GGSCI (myhost) 5> delete replicat rep
ERROR: Could not delete DB checkpoint for REPLICAT rep (Database login required to delete database checkpoint).

GGSCI (host) 6> info all

Program     Status      Group   Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     ext     00:00:00      00:00:05
REPLICAT    STOPPED     rep     00:00:00      00:00:38

So, what can be done to delete it? It's simple.

In GG_HOME/dirchk

TEST:/u01/app/oracle/product/ggate/dirchk->ls -lrt
total 20
-rw-rw-r-- 1 ggate dba 4096 Nov 17 01:51 EXT.cpb
-rw-rw-r-- 1 ggate dba 2048 Nov 17 03:39 REP.cpr
-rw-rw-r-- 1 ggate dba   52 Nov 17 03:59 EXT.cps
-rw-rw-r-- 1 ggate dba 8192 Nov 17 03:59 EXT.cpe

Remove "REP.cpr" file.

TEST:/u01/app/oracle/product/ggate/dirchk->rm REP.cpr


TEST:/u01/app/oracle/product/ggate->ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 20:20:21

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.



GGSCI (host) 1> info all

Program     Status      Group  Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT     00:00:01      00:00:07

Once the port issue was resolved, the replicat "rep" was added again successfully.

GGSCI (myhost) 2>  DBLOGIN USERID ggate@test PASSWORD "xxxxxxxx"
Successfully logged into database.

GGSCI (myhost) 3> ADD REPLICAT REP, extTrail /app/trail/rep/rp, checkpointTable ggate.OGG_CHECKPOINT
REPLICAT added.


GGSCI (myhost) 4> info all

Program     Status      Group   Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT     00:00:01      00:00:02
REPLICAT    STOPPED     REP     00:00:00      00:00:20


GGSCI (myhost) 2> start REP

Sending START request to MANAGER ...
REPLICAT REP starting


GGSCI (myhost) 3> info all

Program     Status      Group  Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT     00:00:00      00:00:06
REPLICAT    STOPPED     REP     00:00:00      00:04:15

Why is the status ‘STOPPED’? ggserr.log shows

2012-11-17 04:04:46  ERROR   OGG-00446  Oracle GoldenGate Delivery for Oracle, rep.prm:  Supplemental Checkpoint table does not exist.  Create a supplemental checkpoint table with the UPGRADE CHECKPOINTTABLE command in GGSCI if you have upgraded from release 11.2.1.0.0 or earlier.
2012-11-17 04:04:46  ERROR   OGG-01668  Oracle GoldenGate Delivery for Oracle, rep.prm:  PROCESS ABENDING.

The target database is on version 11.2.0.2.0. Checking the checkpoint table:

SYS@test > desc ggate.ogg_checkpoint
 Name                                                                       Null?    Type
 -------------------------------------------------------------------------- -------- --------------------------------------------------
 GROUP_NAME                                                                 NOT NULL VARCHAR2(8)
 GROUP_KEY                                                                  NOT NULL NUMBER(19)
 SEQNO                                                                               NUMBER(10)
 RBA                                                                        NOT NULL NUMBER(19)
 AUDIT_TS                                                                            VARCHAR2(29)
 CREATE_TS                                                                  NOT NULL DATE
 LAST_UPDATE_TS                                                             NOT NULL DATE
 CURRENT_DIR                                                                NOT NULL VARCHAR2(255)

04:00:55 SYS@test1 > /

GROUP_NA  GROUP_KEY      SEQNO        RBA AUDIT_TS                      CREATE_TS                   LAST_UPDATE_TS
-------- ---------- ---------- ---------- ----------------------------- --------------------------- ---------------------------
CURRENT_DIR
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
RABC     2253238980          0  230112172 2012-11-17 00:18:32.000000    16-NOV-12-13:56:33          17-NOV-12-04:08:45
/u01/app/oracle/product/ggate

RDEF     451369050        174   13320072 2012-11-17 04:08:48.000000    08-OCT-12-22:02:16          17-NOV-12-04:08:51
/app/ggate

RGBAA    3979228817         85   30933007 2012-11-17 03:03:33.000000    26-OCT-12-10:06:50          17-NOV-12-04:04:09
/app/ggate

RDONE    3150503361        365  276978037 2012-11-17 04:05:33.000000    14-OCT-12-10:34:54          17-NOV-12-04:05:40
/app/ggate

Though we have added and started REP, we don't see any row for it in the checkpoint table. Let's try running the UPGRADE CHECKPOINTTABLE command.

GGSCI (myhost) 2> DBLOGIN USERID ggate@test PASSWORD "xxxxxxx"
Successfully logged into database.

GGSCI (myhost) 3> UPGRADE CHECKPOINTTABLE ggate.OGG_CHECKPOINT

Successfully upgraded checkpoint table ggate.OGG_CHECKPOINT.

GGSCI (myhost) 4> start rep

Sending START request to MANAGER ...
REPLICAT REP starting


GGSCI (myhost) 5> info all

Program     Status      Group  Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT     00:00:02      00:00:26
REPLICAT    RUNNING     REP     02:35:52      00:00:16

From the database:

SYS@test1 > desc ggate.ogg_checkpoint
 Name                                                                       Null?    Type
 -------------------------------------------------------------------------- -------- --------------------------------------------------
 GROUP_NAME                                                                 NOT NULL VARCHAR2(8)
 GROUP_KEY                                                                  NOT NULL NUMBER(19)
 SEQNO                                                                               NUMBER(10)
 RBA                                                                        NOT NULL NUMBER(19)
 AUDIT_TS                                                                            VARCHAR2(29)
 CREATE_TS                                                                  NOT NULL DATE
 LAST_UPDATE_TS                                                             NOT NULL DATE
 CURRENT_DIR                                                                NOT NULL VARCHAR2(255)
 LOG_CSN                                                                             VARCHAR2(129)
 LOG_XID                                                                             VARCHAR2(129)
 LOG_CMPLT_CSN                                                                       VARCHAR2(129)
 LOG_CMPLT_XIDS                                                                      VARCHAR2(2000)
 VERSION                                                                             NUMBER(3)

04:12:24 SYS@test1 > select * from ggate.ogg_checkpoint;

GROUP_NA  GROUP_KEY      SEQNO        RBA AUDIT_TS                      CREATE_TS                   LAST_UPDATE_TS
-------- ---------- ---------- ---------- ----------------------------- --------------------------- ---------------------------
CURRENT_DIR
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
LOG_CSN
---------------------------------------------------------------------------------------------------------------------------------
LOG_XID
---------------------------------------------------------------------------------------------------------------------------------
LOG_CMPLT_CSN
---------------------------------------------------------------------------------------------------------------------------------
LOG_CMPLT_XIDS
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
   VERSION
----------
RABC     2253238980          3  119752892 2012-11-20 02:12:30.000000    16-NOV-12-13:56:33          20-NOV-12-04:12:35
/u01/app/oracle/product/ggate






RDEF   451369050        183  229511885 2012-11-20 04:12:22.000000    08-OCT-12-22:02:16          20-NOV-12-04:12:24
/app/ggate






RGBAA   3979228817         97   80060759 2012-11-20 10:07:02.000000    26-OCT-12-10:06:50          20-NOV-12-04:10:09
/app/ggate






REP   2249640216         12  211547890 2012-11-20 04:12:32.000000    17-NOV-12-04:00:48          20-NOV-12-04:12:34
/u01/app/oracle/product/ggate
12957879573832
780.26.260408
12957879573832
780.26.260408
         1

RDONE    3150503361        404  290793846 2012-11-20 04:12:19.000000    14-OCT-12-10:34:54          20-NOV-12-04:12:23
/app/ggate
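
As a quick sanity check after the upgrade, the new columns can be inspected for just the new group; a minimal sketch using the table and group names from this setup:

SELECT group_name, seqno, rba, log_cmplt_csn, version
FROM   ggate.ogg_checkpoint
WHERE  group_name = 'REP';

A populated VERSION and LOG_CMPLT_CSN for REP, as seen in the output above, confirms the replicat is now checkpointing through the upgraded table.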


ORA-26723: user “XXXXX” requires the role “DV_GOLDENGATE_REDO_ACCESS”

While starting the extract in a UAT database environment which had recently been moved to Exadata, we got the error below. As we had the redo and archive logfiles on ASM, we used "TRANLOGOPTIONS DBLOGREADER" in the extract parameter file.

2012-10-24 22:35:48  ERROR   OGG-00446  Oracle GoldenGate Capture for Oracle, emos_cc.prm:  Opening ASM file +RECO_UMO1/archivelog/2012_10_24/thread_1_seq_224.955.797517005 in DBLOGREADER mode: (26723) ORA-26723: user "GGATE" requires the role "DV_GOLDENGATE_REDO_ACCESS"

The first thing we did was check whether the role exists.


22:41:53 SYS@xxxxx1 > select role from dba_roles where role like 'DV_%';

no rows selected

Ahh, no roles starting with DV_ exist in the database. Then why is GoldenGate asking for this role? A search on tahiti.oracle.com pointed to a document which mentioned:

Grant the DV_GOLDENGATE_REDO_ACCESS role to any user who is responsible for using the Oracle GoldenGate TRANLOGOPTIONS DBLOGREADER method to access redo logs in an Oracle Database Vault environment. This enables the management of Oracle GoldenGate processes to be tightly controlled by Database Vault, but does not change or restrict the way an administrator would normally configure Oracle GoldenGate.
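Had Database Vault genuinely been in use in this environment, the documented fix would be the grant itself, typically issued by the Database Vault owner account; a hedged sketch using the GGATE user from the error above:

GRANT DV_GOLDENGATE_REDO_ACCESS TO ggate;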

So, now we have a clue: it is something to do with Database Vault. The UAT environment had recently been moved to an Exadata box, prior to which it was on a normal server where the extract was running fine.

22:42:05 SYS@xxxxx1 > SELECT * FROM V$OPTION WHERE PARAMETER = 'Oracle Database Vault';

PARAMETER                                                        VALUE
---------------------------------------------------------------- ----------------------------------------------------------------
Oracle Database Vault                                            TRUE

The above shows the Database Vault option enabled, but as the database was restored from a backup of the database on the normal server, we did not have the DVSYS and DVF schemas.

Oracle Database Vault has the following schemas:

DVSYS Schema: Owns the Oracle Database Vault schema and related objects

DVF Schema: Owns the Oracle Database Vault functions that are created to retrieve factor identities
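A quick way to confirm whether the Vault schemas are present is a lookup against the standard DBA_USERS view; a minimal sketch:

SELECT username, account_status
FROM   dba_users
WHERE  username IN ('DVSYS', 'DVF');

Here it returned no rows, consistent with the database having been restored from the non-Vault server.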

As Vault wasn't required, we used the chopt utility, available from 11.2, for enabling/disabling database features.


After shutting down the database, we ran chopt on all the nodes:

abcde0025: (abncu1) /u01/abncu/admin> chopt disable dv

Writing to /u01/app/oracle/product/11.2.0.3/dbhome_1/install/disable_dv.log...
/usr/bin/make -f /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ins_rdbms.mk dv_off ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
/usr/bin/make -f /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1

abcde0025: (abncu1) /u01/abncu/admin>

We started the database, checked that the option was now disabled (FALSE), and the GoldenGate extract started working.

SYS@xxxxx1 > SELECT * FROM V$OPTION WHERE PARAMETER = 'Oracle Database Vault';

PARAMETER                                                        VALUE
---------------------------------------------------------------- ----------------------------------------------------------------
Oracle Database Vault                                            FALSE

References
http://docs.oracle.com/cd/E11882_01/server.112/e23090/db_objects.htm#DVADM71151

http://docs.oracle.com/cd/E11882_01/install.112/e17214/postinst.htm#CHDBDCGE

Interesting blogs after OOW

It has been a long time since I posted something. After the OOW week, and with the 12c database version on its way, there are a few blogs which I would recommend reading:

http://portrix-systems.de/blog/brost/favourite-oracle-12c-database-features-of-openworld-bloggers/

http://hansforbrich.blogspot.sg/2012/10/toms-12-top-new-features.html

http://dbastreet.com/blog/?p=908

http://momendba.blogspot.in/2012/10/oow-2012-oracle-12c-new-features.html

http://momendba.blogspot.in/2012/10/oracle-rac-12c-new-featues.html

http://uhesse.com/2012/10/01/exadata-x3-key-points/

Happy reading :)

Loading SQL Plans into SPM using AWR

SPM (SQL Plan Management) has been a useful solution for performance issues caused by a change in execution plan. As mentioned on the Oracle Optimizer team's blog:

SQL Plan Management (SPM) allows database users to maintain stable yet optimal performance for a set of SQL statements. SPM incorporates the positive attributes of plan adaptability and plan stability, while simultaneously avoiding their shortcomings. It has two main objectives:

prevent performance regressions in the face of database system changes
offer performance improvements by gracefully adapting to database system changes

SQL plan baselines can be loaded in multiple ways:

1. Using a SQL Tuning Set (STS)
2. From the cursor cache
3. Export and import using a staging table
4. Automatically

Here we will see how to manually load plans from AWR into a SQL plan baseline. To load from AWR, we first need to create a SQL tuning set:

SQL> exec dbms_sqltune.create_sqlset(sqlset_name => 'b8rc6j0krxwdc_sqlset_test',description => 'sqlset descriptions');

Identify the snap_ids in which the SQL appears using the DBA_HIST_SQLSTAT view.
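For example, a minimal query sketch (substitute the sql_id of interest; the columns are standard DBA_HIST_SQLSTAT columns):

SELECT snap_id, sql_id, plan_hash_value, executions_delta
FROM   dba_hist_sqlstat
WHERE  sql_id = '&sql_id'
ORDER  BY snap_id;

Once the snapshot range is identified, load the statement into the SQL tuning set: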

declare
baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
open baseline_ref_cur for
select VALUE(p) from table(
DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(&begin_snap_id, &end_snap_id,'sql_id='||CHR(39)||'&sql_id'||CHR(39)||'',NULL,NULL,NULL,NULL,NULL,NULL,'ALL')) p;
DBMS_SQLTUNE.LOAD_SQLSET('b8rc6j0krxwdc_sqlset_test', baseline_ref_cur);
end;
/

It will prompt for the begin_snap_id, end_snap_id and the sql_id you want to load into the SQL set. Along with sql_id, we can also give a specific plan_hash_value in DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY:

declare
baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
open baseline_ref_cur for
select VALUE(p) from table(
DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(&begin_snap_id, &end_snap_id,'sql_id='||CHR(39)||'&sql_id'||CHR(39)||' and plan_hash_value=1421641795',NULL,NULL,NULL,NULL,NULL,NULL,'ALL')) p;
DBMS_SQLTUNE.LOAD_SQLSET('b8rc6j0krxwdc_sqlset_test', baseline_ref_cur);
end;
/

SELECT_WORKLOAD_REPOSITORY does not always capture the SQL plan, so we pass TYPICAL or ALL as the attribute_list parameter; the BASIC setting does not include the plan.

attribute_list

List of SQL statement attributes to return in the result. The possible values are:

TYPICAL – BASIC + SQL plan (without row source statistics) and without object reference list (default)

BASIC – all attributes (such as execution statistics and binds) are returned except the plans. The execution context is always part of the result.

ALL – return all attributes

A comma-separated list of attribute names can also be given, which allows returning only a subset of SQL attributes: EXECUTION_STATISTICS, SQL_BINDS, SQL_PLAN_STATISTICS (equivalent to SQL_PLAN plus row source statistics).

To load all the SQLs captured in the AWR snapshots, use NULL as the basic_filter:

declare
baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
open baseline_ref_cur for
select VALUE(p) from table(
DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(&begin_snap_id, &end_snap_id,NULL,NULL,NULL,NULL,NULL,NULL,NULL,'ALL')) p;
DBMS_SQLTUNE.LOAD_SQLSET('all_sqlset_test', baseline_ref_cur);
end;
/

View the SQL set using DBA_SQLSET:

05:58:44 SYS >  SELECT NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET where name='b8rc6j0krxwdc_sqlset_test';

NAME                                     OWNER      CREATED                     STATEMENT_COUNT
---------------------------------------- ---------- --------------------------- ---------------
b8rc6j0krxwdc_sqlset_test                SYS        04-JUL-12-05:58:44                        1
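To list the individual statements captured in the STS, a sketch against the DBA_SQLSET_STATEMENTS dictionary view (column names assumed from the standard view definition):

SELECT sqlset_name, sql_id, plan_hash_value
FROM   dba_sqlset_statements
WHERE  sqlset_name = 'b8rc6j0krxwdc_sqlset_test';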

To verify the execution plan of a SQL_ID in the STS:

select * from table(dbms_xplan.display_sqlset('b8rc6j0krxwdc_sqlset_test','&sql_id'));

The baseline has not yet been loaded:

SYS > select count(*) from dba_sql_plan_baselines;

  COUNT(*)
----------
         0

To load the SQL execution plans from the SQL tuning set into a SQL plan baseline:

set serveroutput on
declare
my_int pls_integer;
begin
my_int := dbms_spm.load_plans_from_sqlset (
sqlset_name => 'b8rc6j0krxwdc_sqlset_test',
basic_filter => 'sql_id="b8rc6j0krxwdc"',
sqlset_owner => 'SYS',
fixed => 'NO',
enabled => 'YES');
DBMS_OUTPUT.PUT_line(my_int);
end;
/

SYS >  select count(*) from dba_sql_plan_baselines;

  COUNT(*)
----------
         2

To load a sql_id with a specific plan_hash_value, use basic_filter => 'sql_id="b8rc6j0krxwdc" and plan_hash_value=1306981985' in the same call.

To verify the plan baseline

06:41:23 SYS@ornct1 > SELECT SQL_HANDLE, PLAN_NAME, ORIGIN, ENABLED, ACCEPTED, FIXED, MODULE FROM   DBA_SQL_PLAN_BASELINES;

SQL_HANDLE                     PLAN_NAME                      ORIGIN         ENA ACC FIX MODULE
------------------------------ ------------------------------ -------------- --- --- --- ----------------------------------------------------------------
SQL_bc74153ac0be1578           SQL_PLAN_bsx0p7b0bw5bs02dad9b2 MANUAL-LOAD    YES YES NO  emagent_SQL_rac_database
SQL_bc74153ac0be1578           SQL_PLAN_bsx0p7b0bw5bsfcc37a57 MANUAL-LOAD    YES YES NO  emagent_SQL_rac_database

If you want a particular plan_hash_value to be fixed, use basic_filter => 'sql_id="b8rc6j0krxwdc" and plan_hash_value=1306981985' together with fixed => 'YES' in dbms_spm.load_plans_from_sqlset.
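Putting those two options together, a hedged sketch of the full call (reusing the hypothetical sql_id and plan_hash_value above) that loads only that plan and marks it FIXED:

set serveroutput on
declare
my_int pls_integer;
begin
my_int := dbms_spm.load_plans_from_sqlset (
sqlset_name  => 'b8rc6j0krxwdc_sqlset_test',
sqlset_owner => 'SYS',
basic_filter => 'sql_id="b8rc6j0krxwdc" and plan_hash_value=1306981985',
fixed        => 'YES',
enabled      => 'YES');
DBMS_OUTPUT.PUT_LINE(my_int);
end;
/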

References and interesting reads -

http://avdeo.com/2012/06/20/fixing-sql-plans-the-hard-way-part-1/
http://avdeo.com/2012/07/04/fixing-sql-plans-the-hard-way-part-2/
http://oracle-randolf.blogspot.in/2009/03/plan-stability-in-10g-using-existing.html
https://blogs.oracle.com/optimizer/entry/sql_plan_management_part_1_of_4_creating_sql_plan_baselines
HOW TO LOAD SQL PLANS INTO SPM FROM AWR (Doc ID 789888.1)

ORA-06553: PLS-306: wrong number or types of arguments in call to ‘OGC_Y’ while exporting table

Recently I faced an "ORA-06553: PLS-306" error while taking an export of a table with expdp using the "query" parameter.

$ expdp directory=TEST_DPUMP_DIR  dumpfile=test_url.dmp logfile=test_url.log tables=anand.test_delete_objects query=\"WHERE processed_flag = 'Y'\"

Export: Release 11.2.0.3.0 - Production on Tue May 29 00:24:25 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: system
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** directory=TEST_DPUMP_DIR dumpfile=test_url.dmp logfile=test_url.log tables=anand.test_delete_objects query="WHERE processed_flag = Y"
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 112 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-31693: Table data object "ANAND"."TEST_DELETE_OBJECTS" failed to load/unload and is being skipped due to error:
ORA-06553: PLS-306: wrong number or types of arguments in call to 'OGC_Y'
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /exp/kmportal_dpump_dir/test_url.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 00:25:50

The export completes successfully if we use parfile

$ expdp parfile=test_url.lst

Export: Release 11.2.0.3.0 - Production on Tue May 29 00:53:49 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: system
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** parfile=test_url.lst
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 112 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ANAND"."TEST_DELETE_OBJECTS"           10.09 KB      38 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /exp/kmportal_dpump_dir/test_url.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 00:54:39


$ more test_url.lst
directory=TEST_DPUMP_DIR
dumpfile=test_url.dmp
logfile=test_url.log
tables=anand.test_delete_objects
query="WHERE processed_flag = 'Y'"

Notice that in the failed run the log shows query="WHERE processed_flag = Y": the shell stripped the single quotes around Y, so Oracle treated Y as an identifier rather than a string literal. I was still a little confused until I came across Jonathan Lewis's comments on the blog below:

http://jonathanlewis.wordpress.com/2010/01/26/aliases/

Unfortunately, if you've installed Spatial, there are two functions (called ogc_x and ogc_y) which have been given the public synonyms X and Y respectively; these push Oracle into the function-call validity checks, which is where the PLS error comes from.

SYS > select owner,SYNONYM_NAME,TABLE_OWNER,TABLE_NAME from dba_synonyms where SYNONYM_NAME='Y';

OWNER      SYNONYM_NAME                   TABLE_OWNER                    TABLE_NAME
---------- ------------------------------ ------------------------------ -------------------------
PUBLIC     Y                              MDSYS                          OGC_Y

1 row selected.

SYS > select owner,SYNONYM_NAME,TABLE_OWNER,TABLE_NAME from dba_synonyms where SYNONYM_NAME='X';

OWNER      SYNONYM_NAME                   TABLE_OWNER                    TABLE_NAME
---------- ------------------------------ ------------------------------ -------------------------
PUBLIC     X                              MDSYS                          OGC_X

1 row selected.

So, to run it successfully from the command line, the quotes around 'Y' must also be escaped:

$ expdp directory=TEST_DPUMP_DIR dumpfile=test_url_test.dmp logfile=test_url_test.log tables=anand.test_delete_objects query=\"WHERE processed_flag = \'Y\'\"

Export: Release 11.2.0.3.0 - Production on Tue May 29 01:11:44 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: system
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** directory=TEST_DPUMP_DIR dumpfile=test_url_test.dmp logfile=test_url_test.log tables=anand.test_delete_objects query="WHERE processed_flag = 'Y'"
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 112 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ANAND"."TEST_DELETE_OBJECTS"           10.09 KB      38 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /exp/kmportal_dpump_dir/test_url_test.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 01:12:24