
Tuesday, August 19, 2014

Exadata: Restoring OCR and Vote Disks


Oracle allows you to restore your Oracle Cluster Registry using the ocrconfig -restore command. This command accepts a backup OCR file as its argument, which you can choose by running ocrconfig -showbackup and selecting the appropriate backup copy that resides on your compute node file system.

When you execute the ocrconfig -restore command, Oracle copies the backup OCR file to the location specified in /etc/oracle/ocr.loc. On Exadata, this refers to an ASM disk group location, typically DBFS_DG.
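Before any maintenance that could touch the OCR, you can also take an on-demand copy in addition to the automatic backups. A minimal example, assuming the same grid home path used throughout this post:

[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/ocrconfig -manualbackup
[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/ocrconfig -showbackup manual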


Without your Oracle Cluster Registry available, your Oracle RAC cluster will not be able to cleanly start any of its resources, including listeners, networks, ASM instances, and databases. Oracle protects against loss of your OCR by mirroring its contents using ASM redundancy in an ASM disk group built on Exadata grid disks.
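If you want to confirm the redundancy actually in use for the disk group holding your OCR, you can query the ASM instance as SYSASM; a simple illustrative check, using the disk group name from this environment:

SQL> select name, type, state from v$asm_diskgroup where name = 'DBFS_DG';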


Please note that restoring the OCR and voting disks on Exadata is no different from restoring them in a non-Exadata Oracle RAC environment. The Exadata standard is to place the OCR in the DBFS_DG ASM disk group, which must exist and be mounted before you can restore the OCR.
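A quick way to confirm that the DBFS_DG disk group is up on your compute nodes is to check its CRS resource; shown only as an example, assuming the default ora.DBFS_DG.dg resource name:

[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/crsctl stat res ora.DBFS_DG.dg -t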

Recovering Your OCR
===================
1- Log in to your compute node as root and determine the location of your Oracle Cluster Registry:

[root@pk3-iub-rp-od02 ~]# cat /etc/oracle/ocr.loc
ocrconfig_loc=+DBFS_DG
local_only=FALSE

2- List your OCR backups. (Oracle automatically maintains five OCR backups on the local file system by default: the three most recent 4-hour copies plus a daily and a weekly copy, as shown below.)

[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/ocrconfig -showbackup

pk3-iub-rp-od01     2014/08/19 07:39:05     /u01/app/11.2.0.4/grid/cdata/pk3-iub-cluster/backup00.ocr
pk3-iub-rp-od01     2014/08/19 03:39:05     /u01/app/11.2.0.4/grid/cdata/pk3-iub-cluster/backup01.ocr
pk3-iub-rp-od01     2014/08/18 23:39:05     /u01/app/11.2.0.4/grid/cdata/pk3-iub-cluster/backup02.ocr
pk3-iub-rp-od01     2014/08/18 03:39:03     /u01/app/11.2.0.4/grid/cdata/pk3-iub-cluster/day.ocr
pk3-iub-rp-od01     2014/08/11 03:38:52     /u01/app/11.2.0.4/grid/cdata/pk3-iub-cluster/week.ocr
pk3-iub-rp-od01     2013/12/16 16:49:40     /u01/app/11.2.0.4/grid/cdata/pk3-iub-cluster/backup_20131216_164940.ocr
pk3-iub-rp-od01     2013/12/16 16:49:39     /u01/app/11.2.0.4/grid/cdata/pk3-iub-cluster/backup_20131216_164939.ocr

Note:
Since your cluster expects the OCR to reside in the DBFS_DG ASM disk group, this disk group must be created and mounted before you restore your OCR file. If you have lost or dropped the DBFS_DG ASM disk group, you must create it first while connected as SYSASM to an ASM instance. This implies that you have a collection of DBFS_DG grid disks created on your storage cells (a quick way to check this is shown after the SQL below). If you do need to recreate the DBFS_DG ASM disk group, log in to a compute node as the Grid Infrastructure owner, connect to your ASM instance as SYSASM, and create your disk group:

SQL> create diskgroup DBFS_DG
normal redundancy
disk 'o/*/DBFS_DG*'
attribute 'compatible.rdbms' = '11.2.0.3.0',
'compatible.asm' = '11.2.0.3.0',
'cell.smart_scan_capable' = 'TRUE',
'au_size' = '4M';
Diskgroup created.
SQL>
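If you are unsure whether the DBFS_DG grid disks still exist, you can check from one of the storage cells with CellCLI. The cell hostname below is only a placeholder; the grid disk name pattern follows this environment's prefix:

[root@cell ~]# cellcli -e "list griddisk where name like 'DBFS_DG.*' attributes name, status"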

If you have physically lost the ASM disk group on which your OCR resided, CRS will fail to start. In that case, disable automatic restart of Oracle High Availability Services by running crsctl disable crs as root, reboot your nodes, and then run crsctl start crs -excl on one of the nodes. This starts just enough of the CRS stack to perform the tasks above.
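Put together, that recovery sequence looks roughly like this (a sketch only, run as root; adapt it to your situation):

[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/crsctl disable crs
[root@pk3-iub-rp-od02 ~]# shutdown -r 0
(after the reboot, on one node only)
[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/crsctl start crs -excl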

3- Restore your OCR as shown below:
[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/ocrconfig -restore /u01/app/11.2.0.4/grid/cdata/pk3-iub-cluster/backup00.ocr

4- Enable your Oracle High Availability Services for automatic restart and reboot your compute nodes.

[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/crsctl enable crs
[root@pk3-iub-rp-od02 ~]# shutdown -r 0

5- After the restart, validate your OCR:
[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
Version                  :          3
Total space (kbytes)     :     262120
Used space (kbytes)      :       3308
Available space (kbytes) :     258812
ID                       : 1348892576
Device/File Name         :   +DBFS_DG
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded

[root@pk3-iub-rp-od02 ~]#

Restoring Voting Disks
======================
1- To restore voting disks, CRS must be running on the Exadata compute nodes and your OCR must be healthy. A quick sanity check is shown first, followed by the restore command.
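A quick way to confirm both on a compute node (shown as an example only):

[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/crsctl check crs
[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/ocrcheck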

[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/crsctl replace votedisk +DBFS_DG
Successful addition of voting disk 65e514063aa24f73bf38ccbbb1ef06da
Successful addition of voting disk 9ff55be6aa614f99bfd97527fec7e7ac
Successful addition of voting disk 102134e2dfda4f26bfe3be5b2805d9bc
CRS-4266: Voting file(s) successfully replaced

After a successful voting disk restore, restart CRS on each node in your cluster.
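For example, one node at a time (a sketch; wait for the cluster stack to come back on each node before moving to the next):

[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/crsctl stop crs
[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/crsctl start crs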

2- Verify your voting disks:
[root@pk3-iub-rp-od02 ~]# /u01/app/11.2.0.4/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   65e514063aa24f73bf38ccbbb1ef06da (o/20.168.166.13/DBFS_DG_CD_02_pk3_iub_cel_es01) [DBFS_DG]
 2. ONLINE   9ff55be6aa614f99bfd97527fec7e7ac (o/20.168.166.14/DBFS_DG_CD_02_pk3_iub_cel_es02) [DBFS_DG]
 3. ONLINE   102134e2dfda4f26bfe3be5b2805d9bc (o/20.168.166.15/DBFS_DG_CD_02_pk3_iub_cel_es03) [DBFS_DG]
Located 3 voting disk(s).
[root@pk3-iub-rp-od02 ~]#


