12cR1 RAC Installation on OEL7

To learn how to build an Oracle Clusterware database at home, I believe RAC ATTACK is the best place to start. It is a free curriculum and platform for hands-on learning labs related to Oracle RAC. While reviewing the material, I decided to perform a 12cR1 RAC installation on OEL 7.2.

Attached is the document :- 12c_RAC_on_OEL7

The attached document is inspired by:

RAC ATTACK :- https://en.wikibooks.org/wiki/RAC_Attack_-_Oracle_Cluster_Database_at_Home/RAC_Attack_12c

Tim Hall’s article :- https://oracle-base.com/articles/12c/oracle-db-12cr1-rac-installation-on-oracle-linux-7-using-virtualbox 

Deploying Oracle RAC Database 12c on RHEL 7 – Best Practices :- https://www.redhat.com/en/resources/deploying-oracle-rac-database-12c-rhel-7-best-practices

A big thank you to the RAC Attack members!!!

I hope the document helps some of you. Please feel free to comment.

It's all about learning 🙂


Clusterware version consistency failed

Recently we rolled back (qtree snapshots were restored) a 2-node 11.2.0.3 RAC to 10.2.0.5 after testing a successful upgrade. It was again time for a mock upgrade to 11.2.0.3 before performing it on production. We started runInstaller for the CRS upgrade, and after the “Prerequisite Checks” it reported “clusterware version consistency failed” for both nodes.

The to-be 11gR2 Grid_HOME –> /u01/app/grid/11.2.0
The existing 10gR2 CRS_HOME –> /u01/app/oracle/product/crs

xy4000: (node1) crsctl query crs softwareversion
CRS software version on node [xy4000] is [10.2.0.5.0]
xy4000: (node1)crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]

xy4001: (node2) /u01/app/grid> crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]
xy4001: (node2) /u01/app/grid> crsctl query crs softwareversion
CRS software version on node [xy4001] is [10.2.0.5.0]

So the active and software versions look correct on both nodes. We then started runInstaller in debug mode.
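A sketch of the invocation (OUI accepts a -debug switch that writes verbose tracing to the install log; the exact switches can differ between versions):

./runInstaller -debug

The trace showed: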

[pool-1-thread-1] [ 2012-03-13 02:18:32.093 CDT ] [UnixSystem.getCRSHome:2762]  remote copy file result=1| :successful
[pool-1-thread-1] [ 2012-03-13 02:18:32.094 CDT ] [UnixSystem.getCRSHome:2786]  configFile=/tmp/olr.loc13316231114317895692168542383724.tmp
[pool-1-thread-1] [ 2012-03-13 02:18:32.097 CDT ] [Utils.getPropertyValue:241]  keyName=olrconfig_loc props.val=/u01/app/grid/11.2.0/cdata/xy4001.olr propValue=/u01/app/grid/11.2.0/cdata/xy4001.olr
[pool-1-thread-1] [ 2012-03-13 02:18:32.100 CDT ] [Utils.getPropertyValue:241]  keyName=crs_home props.val=/u01/app/grid/11.2.0 propValue=/u01/app/grid/11.2.0
[pool-1-thread-1] [ 2012-03-13 02:18:32.103 CDT ] [Utils.getPropertyValue:301]  propName=crs_home propValue=/u01/app/grid/11.2.0
[pool-1-thread-1] [ 2012-03-13 02:18:32.106 CDT ] [UnixSystem.getCRSHome:2794]  crs_home=/u01/app/grid/11.2.0

This gave us the clue that the installer was reading OLR (Oracle Local Registry, new in 11gR2) information. We checked /etc/oracle, and olr.loc existed there:

xy4000: (node1) /etc/oracle> ls -lrt
total 2244
drwxr-xr-x  3 root dba    4096 Apr 19  2009 scls_scr
-rw-r--r--  1 root dba     131 Apr 19  2009 ocr.loc
-rw-r--r--  1 root dba      82 Feb 29 01:06 olr.loc

We renamed olr.loc on both nodes and restarted runInstaller; everything went fine after that 🙂
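The rename itself was nothing more than moving the stale file aside on each node (a sketch; the backup name is arbitrary):

mv /etc/oracle/olr.loc /etc/oracle/olr.loc.bak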

DB startup/shutdown error after downgrade from 11gR2 to 10gR2

I was asked to work on a 2-node RAC database which had been downgraded from 11.2.0.3 to 10.2.0.5. While working on it, I stopped the database using the “shutdown immediate” command from SQL*Plus instead of srvctl. When starting the database using srvctl –

wdlab1: (matrix1) /u01/app/oracle/product/rdbms/10205> srvctl  start database -d matrix_lab -o open
PRKR-1001 : cluster database matrix_lab does not exist
PRKO-2005 : Application error: Failure in getting Cluster Database Configuration for: matrix_lab
wdlab1: (matrix1) /u01/app/oracle/product/rdbms/10205/bin> which srvctl
/u01/app/oracle/product/rdbms/10205/bin/srvctl

Hmmm… let's try starting it using srvctl from the 11gR2 grid home

wdlab1: (matrix1) /u01/app/grid/11.2.0/bin> ./srvctl start database -d matrix_lab -o open
PRCR-1079 : Failed to start resource ora.matrix_lab.db
CRS-5017: The resource action "ora.matrix_lab.db start" encountered the following error:
ORA-02095: specified initialization parameter cannot be modified
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/11.2.0/log/wdlab1/agent/crsd/oraagent_oracle/oraagent_oracle.log".

CRS-2674: Start of 'ora.matrix_lab.db' on 'wdlab1' failed
CRS-5017: The resource action "ora.matrix_lab.db start" encountered the following error:
ORA-02095: specified initialization parameter cannot be modified
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/11.2.0/log/wdlab2/agent/crsd/oraagent_oracle/oraagent_oracle.log".

CRS-2632: There are no more servers to try to place resource 'ora.matrix_lab.db' on that would satisfy its placement policy
CRS-2674: Start of 'ora.matrix_lab.db' on 'wdlab2' failed

Hmmm… let's see what the issue is

wdlab1: (matrix1) /u01/app/grid/11.2.0/bin> ./crsctl stat res -t
..........
ora.matrix_lab.db
      1        ONLINE  OFFLINE                               Instance Shutdown
      2        ONLINE  OFFLINE                               Instance Shutdown
..........

wdlab1: (matrix1) /u01/app/grid/11.2.0/bin> ./crsctl stat res ora.matrix_lab.db -p
....................
GEN_USR_ORA_INST_NAME@SERVERNAME(wdlab1)=matrix1
GEN_USR_ORA_INST_NAME@SERVERNAME(wdlab2)=matrix2
HOSTING_MEMBERS=
INSTANCE_FAILOVER=0
....................
USR_ORA_INST_NAME=
USR_ORA_INST_NAME@SERVERNAME(wdlab1)=matrix1
USR_ORA_INST_NAME@SERVERNAME(wdlab2)=matrix2
USR_ORA_OPEN_MODE=open
....................

A pre-11gR2 database registered with 11gR2 Grid Infrastructure should not have entries like “GEN_USR_ORA_INST_NAME@SERVERNAME”, and it should show “ora.dbname.instancename.inst” resources in the crsctl stat res -t output. That was clue enough: although the database had been downgraded, it apparently had not been removed and added back using srvctl from the 10gR2 Oracle Home. So we removed the registration from the 11gR2 grid home:

wdlab1: (matrix1) /u01/app/grid/11.2.0/bin> ./srvctl remove instance -d matrix_lab -i matrix1 -f
wdlab1: (matrix1) /u01/app/grid/11.2.0/bin> ./srvctl remove instance -d matrix_lab -i matrix2 -f
wdlab1: (matrix1) /u01/app/grid/11.2.0/bin> ./srvctl remove database -d matrix_lab -f
wdlab1: (matrix1) /u01/app/grid/11.2.0/bin>
wdlab1: (matrix1) /u01/app/grid/11.2.0/bin> ./crsctl stat res -t

crsctl stat res -t didn't show any entry for it anymore. Next, add the database and instance information back using srvctl from the 10gR2 home:

wdlab1: (matrix1) /u01/app/oracle/product/rdbms/10205/bin> which srvctl
/u01/app/oracle/product/rdbms/10205/bin/srvctl
wdlab1: (matrix1) /u01/app/oracle/product/rdbms/10205/bin> srvctl add database -d matrix_lab -o /u01/app/oracle/product/rdbms/10205
wdlab1: (matrix1) /u01/app/oracle/product/rdbms/10205/bin> srvctl add instance -d matrix_lab -i matrix1 -n wdlab1
wdlab1: (matrix1) /u01/app/oracle/product/rdbms/10205/bin> srvctl add instance -d matrix_lab -i matrix2 -n wdlab2
wdlab1: (matrix1) /home/oracle> srvctl modify database -d matrix_lab -p '/u01/admin/matrix/spfile/spfilematrix.ora' -s open

crsctl stat res -t output showed –

wdlab1: (matrix1) /u01/app/grid/11.2.0/bin> ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
......................
......................
ora.matrix_lab.db
      1        OFFLINE OFFLINE
ora.matrix_lab.matrix1.inst
      1        OFFLINE OFFLINE
ora.matrix_lab.matrix2.inst
      1        OFFLINE OFFLINE
.....................
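With the database and instances re-registered from the 10gR2 home, it can be started with the same srvctl command that failed earlier (a sketch, using the names from above):

wdlab1: (matrix1) /u01/app/oracle/product/rdbms/10205/bin> srvctl start database -d matrix_lab -o open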

hasGetClusterStatus Status verification failed due to cluvfy execution failure for node(s) – 10gR2 RAC

Received a mail from a colleague reporting the below error from the EM Grid Control “Metric Collection Errors” page –

Target		crs_node1
Type		Cluster
Metric		Clusterware Status
Collection Timestamp	Dec 14, 2011 5:35:02 AM
Error Type		Collection Problem
Message		WARN|has::Common::hasGetClusterStatus Status verification failed due to cluvfy execution failure for node(s) node3:EFAIL,NODE_STATUS::node2:EFAIL,NODE_STATUS::node1:EFAIL,NODE_STATUS::node4:EFAIL,OVERALL_STATUS: 

The first thing I did was run cluvfy:


node1: (matrix1) /app/oracle/crs/bin> ./cluvfy stage -post crsinst -n node1,node2,node3,node4 -verbose

Performing post-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "node1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  node3                                yes
  node1                                yes
  node4                                yes
  node2                                yes
Result: Node reachability check passed from node "node1".


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  node3                                passed
  node2                                passed
  node1                                passed
  node4                                passed
Result: User equivalence check passed for user "oracle".

ERROR:
The location "/tmp/CVU_10.2.0.5.0.1_dba/" is owned by another user on nodes:
        node3,node1,node2
Verification will proceed with nodes:
        node4

ERROR:
CRS is not installed on any of the nodes.
Verification cannot proceed.


Post-check for cluster services setup was unsuccessful on all the nodes.

The cluvfy verification failed. Checked the permissions on /tmp/CVU_10.2.0.5.0.1_dba:

drwxr-----  3 em  dba      4096 May 27  2010 CVU_10.2.0.5.0.1_dba

Changed the permission to 770

drwxrwx---  3 em  dba      4096 May 27  2010 CVU_10.2.0.5.0.1_dba
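The change itself was just a chmod on each node, run as the owning user or root (a sketch):

chmod 770 /tmp/CVU_10.2.0.5.0.1_dba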

After changing the permissions of /tmp/CVU_10.2.0.5.0.1_dba/ on all the nodes, ran cluvfy again:

node1: (matrix1) /app/oracle/crs/bin> ./cluvfy stage -post crsinst -n node1,node2,node3,node4 -verbose

Performing post-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "node1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  node3                                yes
  node1                                yes
  node4                                yes
  node2                                yes
Result: Node reachability check passed from node "node1".


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  node3                                passed
  node2                                passed
  node1                                passed
  node4                                passed
Result: User equivalence check passed for user "oracle".

WARNING:
CRS is not installed on nodes:
        node4
Verification will proceed with nodes:
        node3,node2,node1
.........................

Post-check for cluster services setup was unsuccessful.
Checks did not pass for the following node(s):
        node4

olsnodes showed all 4 nodes, but cluvfy reported “CRS is not installed on nodes: node4”.

To find out more, I dug into the cluvfy log in $CRS_HOME/cv/log:

[main] [5:36:32:622] [OUIData.readInventoryData:393]  ==== CRS home added: Oracle home properties:
Name     : OraCrs10g_home
Type     : CRS-HOME
Location : /app/oracle/crs
Node list: [node1, node2, node3]
...............................
[main] [5:43:44:502] [Stage.verify:359]  m_currentTaskSet.size=1; Wed Dec 14 05:43:44 CST 2011
[main] [5:43:44:502] [TaskNodeAppCreation.performTask:157]  Performing NodeApp Creation Verification Task... ; Wed Dec 14 05:43:44 CST 2011
[main] [5:43:44:502] [sVerificationUtil.getInventoryFileLocation:133]  Inventory Config File's name is:'/etc/oraInst.loc'; Wed Dec 14 05:43:44 CST 2011
[main] [5:43:44:503] [sVerificationUtil.getInventoryFileLocation:168]  inventory_loc=/app/oracle/crs/oraInventory; Wed Dec 14 05:43:44 CST 2011
[main] [5:43:44:503] [sVerificationUtil.getInventoryFileLocation:170]  Inventory File Location is-->/app/oracle/crs/oraInventory/ContentsXML/inventory.xml; Wed Dec 14 05:43:44 CST 2011
[main] [5:43:44:504] [VerificationUtil.isCRSInstalled:1208]  CRS found installed  on node: node3; Wed Dec 14 05:43:44 CST 2011
[main] [5:43:44:504] [VerificationUtil.isCRSInstalled:1208]  CRS found installed  on node: node2; Wed Dec 14 05:43:44 CST 2011
[main] [5:43:44:504] [VerificationUtil.isCRSInstalled:1208]  CRS found installed  on node: node1; Wed Dec 14 05:43:44 CST 2011
[main] [5:43:44:504] [TaskNodeAppCreation.performTask:178]  ==== Nodes with CRS installed is: 3; Wed Dec 14 05:43:44 CST 2011

..................

Checked the inventory.xml in /app/oracle/crs/oraInventory/ContentsXML/, which showed:

{HOME NAME="OraCrs10g_home" LOC="/app/oracle/crs" TYPE="O" IDX="1" CRS="true"}
   {NODE_LIST}
      {NODE NAME="node1"/}
      {NODE NAME="node2"/}
      {NODE NAME="node3"/}
   {/NODE_LIST}
{/HOME}

Added NODE NAME “node4” to the inventory.xml using

node1: (matrix1) /app/oracle/crs/oui/bin> ./runInstaller -updateNodeList -silent "CLUSTER_NODES={node1,node2,node3,node4}" ORACLE_HOME=$CRS_HOME CRS=true

Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /app/oracle/crs/oraInventory
'UpdateNodeList' was successful.

After it completed, inventory.xml showed:

{HOME_LIST}
{HOME NAME="OraCrs10g_home" LOC="/app/oracle/crs" TYPE="O" IDX="1" CRS="true"}
   {NODE_LIST}
      {NODE NAME="node1"/}
      {NODE NAME="node2"/}
      {NODE NAME="node3"/}
      {NODE NAME="node4"/}
   {/NODE_LIST}
{/HOME}

With node4 now in the inventory, ran cluvfy again:
node1: (matrix1) /app/oracle/crs/bin> ./cluvfy stage -post crsinst -n node1,node2,node3,node4 -verbose

Performing post-checks for cluster services setup

..................
..................
Result: Check passed.


Post-check for cluster services setup was successful.

OHASD doesn’t start – 11gR2

A few weeks back we had an issue where the 2nd node of a 4-node RAC was evicted, and the alert log showed the below errors before the instance was evicted –

Errors in file /u04/oraout/matrix/diag/rdbms/matrix_adc/matrix2/trace/matrix2_ora_8418.trc  (incident=16804):
ORA-00603: ORACLE server session terminated by fatal error
ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:if_not_found failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: skgxpvaddr9
ORA-27303: additional information: requested interface 169.254.*.* not found. Check output from ifconfig command
Sat Oct 22 23:54:41 2011

ORA-29740: evicted by instance number 2, group incarnation 24
LMON (ospid: 29328): terminating the instance due to error 29740
Sun Oct 23 00:00:01 2011
Instance terminated by LMON, pid = 29328

We tried starting the instance with srvctl and manually using the startup command, but both failed. During the startup the interesting thing I noticed was:

Private Interface 'bond2' configured from GPnP for use as a private interconnect.
  [name='bond2', type=1, ip=144.xx.xx.xxx, mac=xx-xx-xx-xx-xx-xx, net=144.20.xxx.xxx/xx, mask=255.255.x.x, use=cluster_interconnect/6]

But in normal cases it should have looked like:

Private Interface 'bond2:1' configured from GPnP for use as a private interconnect.
  [name='bond2:1', type=1, ip=169.254.*.*, mac=xx-xx-xx-xx-xx-xx, net=169.254.x.x/xx, mask=255.255.x.x, use=haip:cluster_interconnect/62]

Now the question comes up: what is “HAIP”? HAIP stands for High Availability IP.

Grid Infrastructure automatically picks free link-local addresses from the reserved 169.254.*.* subnet for HAIP. According to RFC 3927, the link-local subnet 169.254.*.* should not be used for any other purpose. With HAIP, by default, interconnect traffic is load-balanced across all active interconnect interfaces, and the corresponding HAIP address fails over transparently to another adapter if one fails or becomes non-communicative.

The number of HAIP addresses is decided by how many private network adapters are active when Grid Infrastructure comes up on the first node in the cluster. If there is only one active private network, Grid will create one HAIP. Grid Infrastructure can activate a maximum of four private network adapters at a time, even if more are defined.

A few commands to check –

$oifcfg iflist -p -n

$crsctl stat res -t -init  --> ora.cluster_interconnect.haip must be ONLINE

$ oifcfg getif

select inst_id,name,ip_address from gv$cluster_interconnects;

We got the network team involved, but according to them everything was fine on the network side, so we finally decided to reboot the server. After the reboot the OHAS daemon wasn't coming up automatically, though both autostart flags were enabled:

$ cat crsstart
enable

TEST:oracle> (matrix2:11.2.0.2_matrix) /etc/oracle/scls_scr/test/root
$ cat ohasdstr
enable

No logs in $GRID_HOME/log/test/ were being updated, so it was a little difficult to diagnose. As ohasd.bin is responsible for starting all other clusterware processes, directly or indirectly, it needs to start up properly for the rest of the stack to come up, which wasn't happening.

One of the reasons for ohasd not coming up is an rc S<nn> startup script stuck at the OS level:

 root      2744     1  0 02:20 ?        00:00:00 /bin/bash /etc/rc.d/rc 3
 root      4888  2744  0 02:30 ?        00:00:00 /bin/sh /etc/rc3.d/S98gcstartup start

This S98gcstartup script was stuck. Checking the script showed it was related to OMS startup. We renamed the file and rebooted the server, after which OHASD and all the other resources came up successfully.
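The rename itself was a simple mv as root; the new name just needs to fall outside the S<nn> pattern so rc skips it (a sketch):

mv /etc/rc3.d/S98gcstartup /etc/rc3.d/old_S98gcstartup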

$ ls -lrt /etc/rc3.d/old_S98gcstartup
lrwxrwxrwx 1 root root 27 Jun  1 07:09 /etc/rc3.d/old_S98gcstartup -> /etc//rc.d/init.d/gcstartup

There are a few other possible reasons too, like an inaccessible/corrupted OLR, CRS autostart being disabled, etc.

But I was still unable to find out why we got “additional information: requested interface 169.254.*.* not found” all of a sudden when things had been running fine.

crsctl start crs doesn’t start crs – 10gR2

This morning a box got rebooted and CRS (version 10.2.0.5) didn't come up automatically. A manual crsctl start crs command didn't update any log file in crsd, cssd, or even alertxx4040.log.

xx4040: (test1) /etc/oracle/scls_scr/test1/root> cat crsstart
enable
xx4040: (test1) /etc/oracle/scls_scr/test1/root>

So, as crsstart is enabled, it should have come up automatically. Ran cluvfy:

xx4040: (test1) /home/oracle>cluvfy stage -pre crsinst -n test1,test2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "test1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  test1                                yes
  test2                                yes
Result: Node reachability check passed from node "test1".


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  test2                                passed
  test1                                passed
Result: User equivalence check passed for user "oracle".

ERROR:
Path "/tmp/" is not a writable directory on nodes:
        test1,test2
Please choose a different work area using CV_DESTLOC.
Verification cannot proceed.

Pre-check for cluster services setup was unsuccessful on all the nodes.

/var/log/messages showed –

Oct 25 00:10:12 xx4040 logger: Cluster Ready Services waiting on dependencies. Diagnostics in /tmp/crsctl.5232.
Oct 25 00:10:12 xx4040 logger: Cluster Ready Services waiting on dependencies. Diagnostics in /tmp/crsctl.4628.
Oct 25 00:10:12 xx4040 logger: Cluster Ready Services waiting on dependencies. Diagnostics in /tmp/crsctl.5230.

Checked the ownership and permission on /tmp

drwxr-xr-x    14 root   root   4096 Oct 25 00:08 tmp

Changed the permissions on /tmp to 777 and started CRS, and all started well :).

Update from the comments by Frits Hoogland –> The required permission for /tmp is 1777, where the leading 1 is the sticky bit.
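So the proper fix, run as root on each node, boils down to (a sketch):

chmod 1777 /tmp
ls -ld /tmp    # should now show drwxrwxrwt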

While searching the internet, I found another article which could be helpful –

http://blogs.oracle.com/gverma/entry/crsctl_start_crs_does_not_work

Adding OCR and Voting Disk on NFS mount – 11gR2

From 11gR2, as the OCR and voting disk are normally stored in ASM, the grid installer doesn't show the option to specify multiple OCR and voting disk locations. We have the OCR and voting disk stored on an NFS-mounted file system, and during the installation only one location was specified.

xx4040: (test1) /anand> crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.2.0]

xx4040: (test1) /anand> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3232
         Available space (kbytes) :     258888
         ID                       : 2122021496
         Device/File Name         : /u01/oraadmin/test/CRS_DISK/ocr
                                    Device/File integrity check succeeded

                                    Device/File not configured
                                    ......
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user

xx4040: (test1) /anand> crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   2a7efe06a2e04f61bfd833fqsd354ec02 (/u01/oraadmin/test/CRS_DISK/vdsk) []
Located 1 voting disk(s).

These are the files created during the installation. As we need multiple OCR and voting disk copies, it was time to add them. The additional OCR and voting disk have to be added under the /u02/oraadmin/test/CRS_DISK location.

To add the OCR and voting disk, the commands need to be executed as the root user. Before adding or changing anything, make sure you have a recent backup (check with ocrconfig -showbackup).

[root@xx4040 test]# cd /u01/app/grid/11.2.0.2/bin/
[root@xx4040 bin]# ./ocrconfig -add /u02/oraadmin/test/CRS_DISK/ocr
PROT-30: The Oracle Cluster Registry location to be added is not accessible
PROC-8: Cannot perform cluster registry operation because one of the parameters is invalid. Operating System error [No such file or directory] [2]
[root@xx4040 bin]#

For the command to run successfully, the OCR device/file must already exist:

[root@xx4040 test]# cd CRS_DISK/
[root@xx4040 CRS_DISK]# ls -lrt
total 0
[root@xx4040 CRS_DISK]# touch ocr
[root@xx4040 CRS_DISK]# ls -lrt
total 1
-rw-r--r-- 1 root root 0 Oct 12 00:52 ocr
[root@xx4040 CRS_DISK]# chown root:dba ocr
[root@xx4040 CRS_DISK]# ls -lrt
total 1
-rw-r--r-- 1 root dba 0 Oct 12 00:52 ocr
[root@xx4040 CRS_DISK]# chmod 640 ocr
[root@xx4040 CRS_DISK]# ls -lrt
total 1
-rw-r----- 1 root dba 0 Oct 12 00:52 ocr

Now that the OCR device/file is created, execute ocrconfig -add again to add the second OCR:

[root@xx4040 CRS_DISK]# cd /u01/app/grid/11.2.0.2/bin/
[root@xx4040 bin]# ./ocrconfig -add /u02/oraadmin/test/CRS_DISK/ocr
[root@xx4040 bin]# cd /u02/oraadmin/test/CRS_DISK
[root@xx4040 CRS_DISK]# ls -lrt
total 7689
-rw-r----- 1 root dba 272756736 Oct 12 00:55 ocr
[root@xx4040 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3232
         Available space (kbytes) :     258888
         ID                       : 2122021496
         Device/File Name         : /u01/oraadmin/test/CRS_DISK/ocr
                                    Device/File integrity check succeeded
         Device/File Name         : /u02/oraadmin/test/CRS_DISK/ocr
                                    Device/File integrity check succeeded
                                    .........
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

[root@xx4040 bin]# cat /etc/oracle/ocr.loc
#Device/file  getting replaced by device /u02/oraadmin/test/CRS_DISK/ocr
ocrconfig_loc=/u01/oraadmin/test/CRS_DISK/ocr
ocrmirrorconfig_loc=/u02/oraadmin/test/CRS_DISK/ocr

The OCR is added; now we have to add the voting disk. From 11.1 onwards we can run crsctl add css votedisk online (with the clusterware up). In 10.2 and earlier we had to shut down the clusterware (crsctl stop crs) on all the nodes before adding or making any changes to the voting disks.

In a similar fashion to the OCR, I created/touched vdsk in the /u02/oraadmin/test/CRS_DISK/ location and gave it the proper permissions and ownership, as sketched below.
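A sketch of those steps, assuming the oracle:dba ownership shown in the listing that follows:

[root@xx4040 CRS_DISK]# touch vdsk
[root@xx4040 CRS_DISK]# chown oracle:dba vdsk
[root@xx4040 CRS_DISK]# chmod 640 vdsk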

-rw-r----- 1 oracle dba         0 Oct 12 01:12 vdsk
[root@xx4040 bin]# ./crsctl add css votedisk /u02/oraadmin/test/CRS_DISK/vdsk
Now formatting voting disk: /u02/oraadmin/test/CRS_DISK/vdsk.
Failed to initialize voting file /u02/oraadmin/test/CRS_DISK/vdsk.
Change to configuration failed, but was successfully rolled back.
CRS-4000: Command Add failed, or completed with errors.

Apparently, unlike the OCR, the voting disk file should not pre-exist; crsctl formats and creates it itself. Removed vdsk from /u02/oraadmin/test/CRS_DISK and executed the command again:

[root@xx4040 bin]# rm /u02/oraadmin/test/CRS_DISK/vdsk
[root@xx4040 bin]#
[root@xx4040 bin]#
[root@xx4040 bin]# ./crsctl add css votedisk /u02/oraadmin/test/CRS_DISK/vdsk
Now formatting voting disk: /u02/oraadmin/test/CRS_DISK/vdsk.
CRS-4603: Successful addition of voting disk /u02/oraadmin/test/CRS_DISK/vdsk.
[root@xx4040 bin]#

xx4040: (test1) /anand> crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   2a7efe06a2e04f61bfd833f52054ec02 (/u01/oraadmin/test/CRS_DISK/vdsk) []
 2. ONLINE   aec514d603aa4fbbbf83c2768f8b6afc (/u02/oraadmin/test/CRS_DISK/vdsk) []

To verify OCR and voting disk integrity after adding them, use cluvfy comp ocr -verbose and cluvfy comp vdisk -verbose respectively, as shown below.
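A sketch of the verification, run from the grid home bin directory (-n all checks all nodes):

xx4040: (test1) /u01/app/grid/11.2.0.2/bin> ./cluvfy comp ocr -n all -verbose
xx4040: (test1) /u01/app/grid/11.2.0.2/bin> ./cluvfy comp vdisk -n all -verbose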