RAC 11gR2: Simulating OCR and Voting Disk Corruption and Recovery

1) Checking the OCR and voting disks before the corruption

Check the automatic OCR backups

[root@rac1 ~]# ocrconfig -showbackup

rac2 2013/10/13 09:45:30 /u01/grid/product/11.2.0/cdata/rac-cluster/backup00.ocr

rac2 2013/10/13 05:45:29 /u01/grid/product/11.2.0/cdata/rac-cluster/backup01.ocr

rac2 2013/10/13 01:45:28 /u01/grid/product/11.2.0/cdata/rac-cluster/backup02.ocr

rac2 2013/10/12 01:45:26 /u01/grid/product/11.2.0/cdata/rac-cluster/day.ocr

rac2 2013/09/28 02:55:56 /u01/grid/product/11.2.0/cdata/rac-cluster/week.ocr

PROT-25: Manual backups for the Oracle Cluster Registry are not available

Note the PROT-25 message: no manual OCR backups exist, so after the corruption we will restore directly from an automatic backup.

Check the OCR device

[root@rac1 ~]# ocrcheck

Status of Oracle Cluster Registry is as follows :

Version : 3

Total space (kbytes) : 262120

Used space (kbytes) : 2720

Available space (kbytes) : 259400

ID : 2026562699

Device/File Name : +OCRDATA

Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

Check the voting disks

[root@rac1 ~]# crsctl query css votedisk

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 5c190e6ab4c04facbfdd4ca0e836a798 (ORCL:OCR1) [OCRDATA]

2. ONLINE abcc18afe6214fbcbfa02fad1c41b21b (ORCL:OCR2) [OCRDATA]

3. ONLINE 31e0a9df91514f73bf50a4e0a344af3d (ORCL:OCR3) [OCRDATA]

Located 3 voting disk(s).

Both the OCR and the voting disks are managed through the ASM disk group OCRDATA.

Check the disks behind the OCRDATA disk group. In my setup OCRDATA consists of disks OCR1 through OCR3, so query each one directly:

[root@rac1 ~]# /etc/init.d/oracleasm querydisk -d OCR1

Disk "OCR1" is a valid ASM disk on device /dev/sda1[8,1]

[root@rac1 ~]# /etc/init.d/oracleasm querydisk -d OCR2

Disk "OCR2" is a valid ASM disk on device /dev/sdh1[8,113]

[root@rac1 ~]# /etc/init.d/oracleasm querydisk -d OCR3

Disk "OCR3" is a valid ASM disk on device /dev/sdb1[8,17]
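The bracketed pair after each device (for example [8,1] for /dev/sda1) is the Linux major,minor device number. A quick, generic way to read that pair for any device node (dev_numbers is a helper of my own, not part of oracleasm) is stat, which reports the two numbers in hexadecimal:

```shell
#!/bin/sh
# dev_numbers: print a device node's major,minor in decimal -- the same
# pair oracleasm querydisk shows in brackets (e.g. [8,1] for /dev/sda1).
# stat's %t / %T format specifiers print major / minor in hexadecimal.
dev_numbers() {
    printf '%d,%d\n' "0x$(stat -c '%t' "$1")" "0x$(stat -c '%T' "$1")"
}

dev_numbers /dev/null   # the null character device is 1,3 on Linux
```

Recording these numbers now pays off later: device names like /dev/sda can shift across reboots, but the recorded major,minor pair lets you re-identify the disk.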

Since the ASM parameter file also lives in OCRDATA, back up the spfile first

SQL> show parameter spfile;

NAME TYPE VALUE

------------------------------------ ----------- ------------------------------

spfile string +OCRDATA/rac-cluster/asmparame

terfile/registry.253.825083547

SQL> create pfile='/tmp/asmbak.ora' from spfile; -- back up to /tmp/asmbak.ora

File created.

2) Simulating the corruption

Use dd to overwrite the headers of two of the disks (OCR1 on /dev/sda1 and OCR2 on /dev/sdh1):

[root@rac1 ~]# dd if=/dev/zero of=/dev/sda1 bs=1M count=10

10+0 records in

10+0 records out

10485760 bytes (10 MB) copied, 0.005454 seconds, 1.9 GB/s

[root@rac1 ~]# dd if=/dev/zero of=/dev/sdh1 bs=1M count=10

10+0 records in

10+0 records out

10485760 bytes (10 MB) copied, 0.00603 seconds, 1.7 GB/s
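These dd runs zero only the first 10 MB of each partition, which is enough to destroy the ASM disk header and voting-file metadata while leaving the rest of the partition intact. The effect can be reproduced harmlessly on a scratch file (a sketch of mine; the /tmp paths are made up):

```shell
#!/bin/sh
# Build a 20 MB scratch "disk", wipe its first 10 MB the same way as the
# commands above, and confirm that only the header region changed.
dd if=/dev/urandom of=/tmp/fakedisk.img bs=1M count=20 2>/dev/null
cp /tmp/fakedisk.img /tmp/fakedisk.orig

# conv=notrunc matters for regular files: without it dd truncates the file
# to 10 MB instead of overwriting in place. (A raw device like /dev/sda1
# cannot be truncated, which is why the original commands omit it.)
dd if=/dev/zero of=/tmp/fakedisk.img bs=1M count=10 conv=notrunc 2>/dev/null

cmp -s -n 10485760 /tmp/fakedisk.img /tmp/fakedisk.orig || echo "first 10 MB wiped"
cmp -s -i 10485760 /tmp/fakedisk.img /tmp/fakedisk.orig && echo "tail beyond 10 MB intact"
```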

Immediately after the corruption, the cluster still looks healthy on every node:

[root@rac1 ~]# crs_stat -t

Name Type Target State Host

------------------------------------------------------------

ora.DATA.dg ora….up.type ONLINE ONLINE rac1

ora.FRA.dg ora….up.type ONLINE ONLINE rac1

ora….ER.lsnr ora….er.type ONLINE ONLINE rac1

ora….N1.lsnr ora….er.type ONLINE ONLINE rac1

ora.OCRDATA.dg ora….up.type ONLINE ONLINE rac1

ora.asm ora.asm.type ONLINE ONLINE rac1

ora.eons ora.eons.type ONLINE ONLINE rac1

ora.gsd ora.gsd.type OFFLINE OFFLINE

ora….network ora….rk.type ONLINE ONLINE rac1

ora.oc4j ora.oc4j.type OFFLINE OFFLINE

ora.ons ora.ons.type ONLINE ONLINE rac1

ora….SM1.asm application ONLINE ONLINE rac1

ora….C1.lsnr application ONLINE ONLINE rac1

ora.rac1.gsd application OFFLINE OFFLINE

ora.rac1.ons application ONLINE ONLINE rac1

ora.rac1.vip ora….t1.type ONLINE ONLINE rac1

ora….SM2.asm application ONLINE ONLINE rac2

ora….C2.lsnr application ONLINE ONLINE rac2

ora.rac2.gsd application OFFLINE OFFLINE

ora.rac2.ons application ONLINE ONLINE rac2

ora.rac2.vip ora….t1.type ONLINE ONLINE rac2

ora.ractest.db ora….se.type ONLINE ONLINE rac1

ora….ry.acfs ora….fs.type ONLINE ONLINE rac1

ora.scan1.vip ora….ip.type ONLINE ONLINE rac1

[root@rac1 ~]# ocrcheck

Status of Oracle Cluster Registry is as follows :

Version : 3

Total space (kbytes) : 262120

Used space (kbytes) : 2720

Available space (kbytes) : 259400

ID : 2026562699

Device/File Name : +OCRDATA

Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@rac1 ~]# crsctl query css votedisk

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 5c190e6ab4c04facbfdd4ca0e836a798 (ORCL:OCR1) [OCRDATA]

2. ONLINE abcc18afe6214fbcbfa02fad1c41b21b (ORCL:OCR2) [OCRDATA]

3. ONLINE 31e0a9df91514f73bf50a4e0a344af3d (ORCL:OCR3) [OCRDATA]

Located 3 voting disk(s).

Stop the CRS stack on rac1

[root@rac1 ~]# crsctl stop crs

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'

CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'

CRS-2673: Attempting to stop 'ora.OCRDATA.dg' on 'rac1'

CRS-2673: Attempting to stop 'ora.ractest.db' on 'rac1'

CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac1'

CRS-2677: Stop of 'ora.registry.acfs' on 'rac1' succeeded

CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'

CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'

CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'

CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac2'

CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded

CRS-2676: Start of 'ora.scan1.vip' on 'rac2' succeeded

CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac2'

CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded

CRS-2677: Stop of 'ora.OCRDATA.dg' on 'rac1' succeeded

CRS-2677: Stop of 'ora.ractest.db' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'

CRS-2673: Attempting to stop 'ora.FRA.dg' on 'rac1'

CRS-2677: Stop of 'ora.FRA.dg' on 'rac1' succeeded

CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.ons' on 'rac1'

CRS-2673: Attempting to stop 'ora.eons' on 'rac1'

CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'

CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded

CRS-2677: Stop of 'ora.eons' on 'rac1' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed

CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'

CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded

CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'

CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'

CRS-2673: Attempting to stop 'ora.diskmon' on 'rac1'

CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'

CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.diskmon' on 'rac1' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

3) Diagnosing the failure

Starting CRS again now fails. Tail the alert log (the OS messages log contained nothing of note, so it is not shown here):

[root@rac1 ~]# tail -f /u01/grid/product/11.2.0/log/rac1/alertrac1.log

……..

[ohasd(28327)]CRS-2765:Resource 'ora.cssdmonitor' has failed on server 'rac1'.

2013-10-13 11:22:46.094

[cssd(28791)]CRS-1713:CSSD daemon is started in clustered mode

2013-10-13 11:22:46.178

[cssd(28791)]CRS-1637:Unable to locate configured voting file with ID 5c190e6a-b4c04fac-bfdd4ca0-e836a798; details at (:CSSNM00020:) in /u01/grid/product/11.2.0/log/rac1/cssd/ocssd.log

2013-10-13 11:22:46.179

[cssd(28791)]CRS-1637:Unable to locate configured voting file with ID abcc18af-e6214fbc-bfa02fad-1c41b21b; details at (:CSSNM00020:) in /u01/grid/product/11.2.0/log/rac1/cssd/ocssd.log

2013-10-13 11:22:46.179

[cssd(28791)]CRS-1705:Found 1 configured voting files but 2 voting files are required, terminating to ensure data integrity; details at (:CSSNM00021:) in /u01/grid/product/11.2.0/log/rac1/cssd/ocssd.log

2013-10-13 11:22:46.179

[cssd(28791)]CRS-1603:CSSD on node rac1 shutdown by user.

2013-10-13 11:22:52.768

[ohasd(28327)]CRS-2765:Resource 'ora.diskmon' has failed on server 'rac1'.

The stack will not start. Following the pointer in the messages above, check ocssd.log:

2013-10-13 11:43:47.201: [ SKGFD][1084574016]Lib :ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so: closing handle 0x1dc67d90 for disk :ORCL:OCR3:

2013-10-13 11:43:47.201: [ CSSD][1084574016]clssnmvDiskVerify: file is not a voting file, cannot recognize on-disk signature for a voting

2013-10-13 11:43:47.201: [ CSSD][1084574016]clssnmvDiskVerify: file is not a voting file, cannot recognize on-disk signature for a voting

2013-10-13 11:43:47.201: [ CSSD][1084574016]clssnmvDiskVerify: file is not a voting file, cannot recognize on-disk signature for a voting

2013-10-13 11:43:47.201: [ CSSD][1084574016]clssnmvDiskVerify: file is not a voting file, cannot recognize on-disk signature for a voting

2013-10-13 11:43:47.201: [ CSSD][1084574016]clssnmvDiskVerify: file is not a voting file, cannot recognize on-disk signature for a voting

2013-10-13 11:43:47.201: [ CSSD][1084574016]clssnmvDiskVerify: file is not a voting file, cannot recognize on-disk signature for a voting

2013-10-13 11:43:47.201: [ CSSD][1084574016]clssnmvDiskVerify: file is not a voting file, cannot recognize on-disk signature for a voting

The error is explicit: the on-disk voting-file signatures cannot be recognized. Reading further in the log:

2013-10-13 11:43:47.201: [ SKGFD][1084574016]Handle 0x1dc61820 from lib :ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so: for disk :ORCL:OCR3:

2013-10-13 11:43:47.201: [ CLSF][1084574016]Opened hdl:0x1dc17dc0 for dev:ORCL:OCR3:

2013-10-13 11:43:47.213: [ CSSD][1084574016]clssnmvDiskVerify: Successful discovery for disk ORCL:OCR3, UID 31e0a9df-91514f73-bf50a4e0-a344af3d, Pending CIN 0:1378101162:0, Committed CIN 0:1378101162:0

2013-10-13 11:43:47.213: [ CLSF][1084574016]Closing handle:0x1dc17dc0

2013-10-13 11:43:47.213: [ SKGFD][1084574016]Lib :ASM:/opt/oracle/extapi/64/asm/orcl/1/libasm.so: closing handle 0x1dc61820 for disk :ORCL:OCR3:

2013-10-13 11:43:47.213: [ CSSD][1084574016]clssnmvDiskVerify: Successful discovery of 1 disks

Discovery succeeded only for disk OCR3:

2013-10-13 11:43:47.213: [ CSSD][1084574016]clssnmvVerifyCommittedConfigVFs: Insufficient voting files found, found 1 of 3 configured, needed 2 voting files

2013-10-13 11:43:47.213: [ CSSD][1084574016](:CSSNM00020:)clssnmvVerifyCommittedConfigVFs: voting file 0, id 5c190e6a-b4c04fac-bfdd4ca0-e836a798 not found

2013-10-13 11:43:47.213: [ CSSD][1084574016](:CSSNM00020:)clssnmvVerifyCommittedConfigVFs: voting file 1, id abcc18af-e6214fbc-bfa02fad-1c41b21b not found

The log is clear: only 1 of the 3 voting disks is usable, and voting file 0 and voting file 1 are not found. Since OCR3 is the survivor, the two damaged disks must be OCR1 and OCR2, which the oracleasm listing confirms:

[root@rac1 ~]# /etc/init.d/oracleasm listdisks

DATA1

DATA2

DATA3

FRA1

FRA2

OCR1

OCR2

OCR3
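The "found 1 of 3 configured, needed 2" message reflects the CSS quorum rule: a strict majority of the configured voting files, floor(n/2) + 1, must be readable, otherwise CSSD terminates to protect data integrity. The arithmetic, as a sketch of my own:

```shell
#!/bin/sh
# Minimum readable voting files CSS requires out of n configured:
# a strict majority, floor(n/2) + 1.
required_votes() {
    echo $(( $1 / 2 + 1 ))
}

required_votes 3   # 2 -- with only OCR3 alive (1 of 3), CSSD must stop
required_votes 5   # 3 -- a 5-disk layout tolerates two failures
```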

4) Performing the recovery

First reboot the operating system

[root@rac1 ~]# reboot

After the reboot, ASM disks OCR1 and OCR2 are no longer visible:

[root@rac1 ~]# /etc/init.d/oracleasm listdisks

DATA1

DATA2

DATA3

FRA1

FRA2

OCR3

Since only the first 10 MB of each disk was overwritten, I format the two disks and re-partition them. Be absolutely certain which two disks you are touching; a mistake here is disastrous.
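Before any destructive command, it is worth double-checking the target against the major,minor numbers recorded from the querydisk output earlier ([8,1] and [8,113]). The helper below (format_guard, my own safety net, not an Oracle tool) refuses to proceed on a mismatch; device names can shift across reboots, as I found out the hard way later:

```shell
#!/bin/sh
# format_guard: refuse to touch a device whose major,minor pair does not
# match the one recorded earlier from oracleasm querydisk.
# stat's %t / %T print the major / minor numbers in hexadecimal.
format_guard() {   # usage: format_guard <device> <expected-maj,min>
    actual=$(printf '%d,%d' "0x$(stat -c '%t' "$1")" "0x$(stat -c '%T' "$1")")
    if [ "$actual" != "$2" ]; then
        echo "REFUSING: $1 is $actual, expected $2" >&2
        return 1
    fi
    echo "$1 verified ($actual)"
}

# intended use: format_guard /dev/sda1 8,1 && mkfs -t ext3 /dev/sda1
format_guard /dev/null 1,3   # demo on a device that exists everywhere
```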

[root@rac1 ~]# mkfs -t ext3 /dev/sda1

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

1024128 inodes, 2047996 blocks

102399 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=2097152000

63 block groups

32768 blocks per group, 32768 fragments per group

16256 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@rac1 ~]# mkfs -t ext3 /dev/sdh1

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

768544 inodes, 1535065 blocks

76753 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=1572864000

47 block groups

32768 blocks per group, 32768 fragments per group

16352 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@rac1 ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 8000.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 8388 MB, 8388608000 bytes

64 heads, 32 sectors/track, 8000 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System

/dev/sda1 1 8000 8191984 83 Linux

Command (m for help): d -- delete the partition

Selected partition 1

Command (m for help): n -- create a new partition

Command action

e extended

p primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-8000, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-8000, default 8000):

Using default value 8000

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

[root@rac1 ~]# fdisk /dev/sdh

Command (m for help): p

Disk /dev/sdh: 6291 MB, 6291456000 bytes

194 heads, 62 sectors/track, 1021 cylinders

Units = cylinders of 12028 * 512 = 6158336 bytes

Device Boot Start End Blocks Id System

/dev/sdh1 1 1021 6140263 83 Linux

Command (m for help): d

Selected partition 1

Command (m for help): n

Command action

e extended

p primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1021, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1021, default 1021):

Using default value 1021

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

[root@rac1 ~]# partprobe

Recreate the ASM disk labels

[root@rac1 ~]# /etc/init.d/oracleasm createdisk OCR1 /dev/sda1

Marking disk "OCR1" as an ASM disk: [ OK ]

[root@rac1 ~]# /etc/init.d/oracleasm createdisk OCR2 /dev/sdh1

Marking disk "OCR2" as an ASM disk: [ OK ]

Force-stop the CRS stack

[root@rac1 ~]# crsctl stop crs -f

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'

CRS-4548: Unable to connect to CRSD

CRS-2675: Stop of 'ora.crsd' on 'rac1' failed

CRS-2679: Attempting to clean 'ora.crsd' on 'rac1'

CRS-4548: Unable to connect to CRSD

CRS-2678: 'ora.crsd' on 'rac1' has experienced an unrecoverable failure

CRS-0267: Human intervention required to resume its availability.

CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has failed

CRS-4687: Shutdown command has completed with error(s).

CRS refuses to stop. I lost a lot of time here; even disabling CRS autostart and rebooting the machine did not help. The cause turned out to be BUG 9011779, and the workaround is:

kill these processes: ohasd, gipcd, mdnsd, gpnpd, evmd, and crsd
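The workaround amounts to killing the surviving clusterware daemons by name. A cautious sketch of mine that only prints the kill commands (a dry run) rather than executing them; pkill -f matches against the full command line, since on 11.2 the executables are typically named ohasd.bin, gipcd.bin, and so on:

```shell
#!/bin/sh
# Dry run for the BUG 9011779 workaround: emit one kill command per daemon
# instead of executing anything. Review the output, then run it as root.
kill_cmds() {
    for d in ohasd gipcd mdnsd gpnpd evmd crsd; do
        echo "pkill -9 -f $d"
    done
}

kill_cmds
```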

Then start Oracle Clusterware in exclusive mode with -excl (this step differs from 10g):

[root@rac1 ~]# crsctl start crs -excl

CRS-4123: Oracle High Availability Services has been started.

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'

CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'

CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'

CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'

CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'

CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded

CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'

CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rac1'

CRS-2679: Attempting to clean 'ora.diskmon' on 'rac1'

CRS-2681: Clean of 'ora.diskmon' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'

CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'

CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac1'

CRS-2676: Start of 'ora.drivers.acfs' on 'rac1' succeeded

CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'rac1'

CRS-2676: Start of 'ora.asm' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'rac1'

CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded

Restore the OCR. As listed earlier, the automatic backups live on rac2 under /u01/grid/product/11.2.0/cdata/rac-cluster. First recreate the disk group:

SQL> create diskgroup OCRDATA normal redundancy disk 'ORCL:OCR1','ORCL:OCR2','ORCL:OCR3' attribute 'COMPATIBLE.ASM'='11.2';

Diskgroup created.

[root@rac1 rac1]# ocrconfig -restore /tmp/backup00.ocr

Recreate the ASM parameter file

SQL> create spfile='+OCRDATA' from pfile='/tmp/asmbak.ora';

File created.

ocrcheck shows the OCR has been restored:

[root@rac1 rac1]# ocrcheck

Status of Oracle Cluster Registry is as follows :

Version : 3

Total space (kbytes) : 262120

Used space (kbytes) : 2720

Available space (kbytes) : 259400

ID : 2026562699

Device/File Name : +OCRDATA

Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

The voting-disk list is still empty:

[root@rac1 rac1]# crsctl query css votedisk

Located 0 voting disk(s).

Restore the voting disks

[root@rac1 rac1]# crsctl replace votedisk +OCRDATA

Successful addition of voting disk afdbd9a7bce04fd9bfca848161db4ee5.

Successful addition of voting disk 939a58464f5a4f6cbfade37e83b7c729.

Successful addition of voting disk 3fc60f96eeca4faebf4b44700fc74897.

Successfully replaced voting disk group with +OCRDATA.

CRS-4266: Voting file(s) successfully replaced

[root@rac1 rac1]# crsctl query css votedisk

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE afdbd9a7bce04fd9bfca848161db4ee5 (ORCL:OCR1) [OCRDATA]

2. ONLINE 939a58464f5a4f6cbfade37e83b7c729 (ORCL:OCR2) [OCRDATA]

3. ONLINE 3fc60f96eeca4faebf4b44700fc74897 (ORCL:OCR3) [OCRDATA]

Located 3 voting disk(s).

Restart the CRS stack

[root@rac1 rac1]# crsctl stop crs -f

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'

CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded

CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'

CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.diskmon' on 'rac1'

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'

CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.diskmon' on 'rac1' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

[root@rac1 rac1]# crsctl start crs

CRS-4123: Oracle High Availability Services has been started.

[root@rac1 rac1]# crs_stat -t

Name Type Target State Host

------------------------------------------------------------

ora.DATA.dg ora….up.type ONLINE ONLINE rac1

ora.FRA.dg ora….up.type ONLINE ONLINE rac1

ora….ER.lsnr ora….er.type ONLINE ONLINE rac1

ora….N1.lsnr ora….er.type ONLINE ONLINE rac1

ora.OCRDATA.dg ora….up.type ONLINE ONLINE rac1

ora.asm ora.asm.type ONLINE ONLINE rac1

ora.eons ora.eons.type ONLINE ONLINE rac1

ora.gsd ora.gsd.type OFFLINE OFFLINE

ora….network ora….rk.type ONLINE ONLINE rac1

ora.oc4j ora.oc4j.type OFFLINE OFFLINE

ora.ons ora.ons.type ONLINE ONLINE rac1

ora….SM1.asm application ONLINE ONLINE rac1

ora….C1.lsnr application ONLINE ONLINE rac1

ora.rac1.gsd application OFFLINE OFFLINE

ora.rac1.ons application ONLINE ONLINE rac1

ora.rac1.vip ora….t1.type ONLINE ONLINE rac1

ora.rac2.vip ora….t1.type ONLINE ONLINE rac1

ora.ractest.db ora….se.type ONLINE OFFLINE

ora….ry.acfs ora….fs.type ONLINE ONLINE rac1

ora.scan1.vip ora….ip.type ONLINE ONLINE rac1

Node 1 has recovered; on node 2, simply restarting the CRS stack is enough.

This completes the OCR and voting-disk recovery. I did make one mistake during the experiment: my environment is virtualized on Linux with shared disks served over iscsi-target, and I had not set up udev rules, so after the reboot the device names came up in a different order and I accidentally formatted a data disk. Fortunately the disk group was redundant, and I recovered it afterwards.

This article is from the blog "一步一步"; please retain the source: http://5073392.blog.51cto.com/5063392/1308401