Category: Oracle

2014-03-04 18:01:41

Environment:
OS: Linux AS 6
DB: 11.2.0.1
Today, after upgrading Grid Infrastructure from 11.2.0.1 to 11.2.0.4, CRS refused to start. I searched online extensively without finding a fix, so in the end I planned to roll back from 11.2.0.4 to 11.2.0.1.
I had assumed the rollback would be easy, but then realized I had no OCR backup: the database had only just been installed, so the system had not yet taken an automatic OCR backup. Worse, I had already wiped the disks of the CRS disk group (which holds the OCR and the voting disks) with dd. With no other option, I had to rebuild CRS. The steps are as follows:
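(For reference, not part of the original steps: before attempting a rollback it is worth checking whether Clusterware holds any automatic OCR backups that could be restored instead. ocrconfig is standard 11.2 tooling; the grid home path follows this post.)
[root@node1 ~]# /u01/app/grid/11.2.0/bin/ocrconfig -showbackup
[root@node1 ~]# /u01/app/grid/11.2.0/bin/ocrconfig -manualbackup
-showbackup lists any automatic and manual OCR backups; -manualbackup takes one on demand. In my case there was nothing to restore.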
 
1. Stop CRS
[root@node1 ~]# /u01/app/grid/11.2.0/bin/crsctl stop crs -f
[root@node2 ~]# /u01/app/grid/11.2.0/bin/crsctl stop crs -f
If it will not stop, disable autostart and then reboot the machine:
/u01/app/grid/11.2.0.4/bin/crsctl disable has
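(Sanity check, not in the original post: after the reboot, crsctl check crs should confirm the stack is down; CRS-4639 is the expected message. Once the rebuild succeeds, remember to re-enable autostart with crsctl enable has, mirroring the disable above.)
[root@node1 ~]# /u01/app/grid/11.2.0/bin/crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services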

2. Deconfigure CRS
Node 1:
[root@node1 admin]# /u01/app/grid/11.2.0/crs/install/rootcrs.pl -deconfig -force

On the last node:
[root@node2 ]# /u01/app/grid/11.2.0/crs/install/rootcrs.pl -deconfig -force -lastnode
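(Quick check, not from the original post: before re-running root.sh, verify that the deconfig left no Clusterware daemons running on either node.)
[root@node1 ~]# ps -ef | grep -E 'ohasd|ocssd|crsd|evmd' | grep -v grep
An empty result on both nodes means the deconfig was clean.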

3. Run root.sh
Run it on node 1 first:
[root@node1 admin]# /u01/app/grid/11.2.0/root.sh
After node 1 finishes, run it on node 2:
[root@node2 admin]# /u01/app/grid/11.2.0/root.sh
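(Optional verification, not in the original post: once root.sh has completed on both nodes, the stack should be reported online clusterwide. Output illustrative.)
[grid@node1 ~]$ /u01/app/grid/11.2.0/bin/crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************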

4. Configure ONS
[grid@node1 install]$ /u01/app/grid/11.2.0/crs/install/onsconfig add_config node1:6251 node2:6251
The ONS configuration is created successfully
Stopping ONS resource 'ora.node1.ons'
Attempting to stop `ora.ons` on member `node1`
Stop of `ora.ons` on member `node1` succeeded.
The resource ora.node1.ons stopped successfully for restart
Attempting to start `ora.ons` on member `node1`
Start of `ora.ons` on member `node1` succeeded.
The resource ora.node1.ons restarted successfully
Stopping ONS resource 'ora.node2.ons'
Attempting to stop `ora.ons` on member `node2`
Stop of `ora.ons` on member `node2` succeeded.
The resource ora.node2.ons stopped successfully for restart
Attempting to start `ora.ons` on member `node2`
Start of `ora.ons` on member `node2` succeeded.
The resource ora.node2.ons restarted successfully
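(Not in the original post: onsctl from the grid home can confirm that ONS is actually running on each node; output illustrative.)
[grid@node1 ~]$ /u01/app/grid/11.2.0/bin/onsctl ping
ons is running ...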

5. Configure the network interfaces
[grid@node1 install]$ oifcfg iflist
eth0  192.168.56.0
eth1  172.16.10.0

[grid@node1 install]$ oifcfg setif -global eth0/192.168.56.0:public
[grid@node1 install]$ oifcfg setif -global eth1/172.16.10.0:cluster_interconnect
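(Verification, not in the original post: oifcfg getif shows the registrations just made.)
[grid@node1 install]$ oifcfg getif
eth0  192.168.56.0  global  public
eth1  172.16.10.0  global  cluster_interconnect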

6. Configure the listener with netca
On node 1 and node 2 respectively, move the old listener file out of the way to a temporary directory:
[grid@node1 11.2.0]$ mv /u01/app/grid/11.2.0/network/admin/listener.ora /tmp/listener.ora.original_node1
[grid@node2 11.2.0]$ mv /u01/app/grid/11.2.0/network/admin/listener.ora /tmp/listener.ora.original_node2
Run netca on one of the nodes to add the listener; once it completes, the listener resource has been registered in the OCR.

After it is added, you can see that the listener resource is already online:
[grid@node1 11.2.0]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.CRS.dg     ora....up.type ONLINE    ONLINE    node1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    node1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    node1
ora.asm        ora.asm.type   ONLINE    ONLINE    node1
ora.eons       ora.eons.type  ONLINE    ONLINE    node1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    OFFLINE   OFFLINE
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  ora....t1.type ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    OFFLINE   OFFLINE
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    node1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node1
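(Side note, not from the original post: crs_stat is deprecated in 11.2; the same information is available in a more readable, grouped form via crsctl.)
[grid@node1 ~]$ crsctl stat res -t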

7. Add the resources back into the OCR (run as the grid user).
Add the ASM instances (note the case of the instance names); do this on one node only.
[grid@node1 11.2.0]$srvctl add asm -i +ASM1 -n node1 -o /u01/product/oracle/11.2.0/db_1
[grid@node1 11.2.0]$srvctl add asm -i +ASM2 -n node2 -o /u01/product/oracle/11.2.0/db_1
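(Verification, not from the original post: srvctl can confirm how ASM was registered and whether it is running.)
[grid@node1 ~]$ srvctl config asm
[grid@node1 ~]$ srvctl status asm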

Add the database (run as the oracle user):
[oracle@node1 ~]$ srvctl add database -d racdb -o /u01/product/oracle/11.2.0/db_1
[oracle@node1 ~]$

Add the instances (run as the oracle user):
[oracle@node1 ~]$ srvctl add instance -d racdb -i racdb1 -n node1
[oracle@node1 ~]$ srvctl add instance -d racdb -i racdb2 -n node2

Add the database's pre-existing service (run as the oracle user). Here -r names the preferred instance, -a the available (failover) instance, and -P BASIC the TAF failover policy:
[oracle@node1 ~]$ srvctl add service -d racdb -s kettle -r racdb1 -a racdb2 -P BASIC
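(Verification, not from the original post: srvctl config service shows whether the preferred/available placement and the TAF policy were recorded as intended.)
[oracle@node1 ~]$ srvctl config service -d racdb -s kettle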

8. Start the database (as the grid user)
[grid@node1 11.2.0]$ srvctl start asm -n node1
[grid@node1 11.2.0]$ srvctl start asm -n node2
[grid@node1 11.2.0]$ srvctl start database -d racdb
[grid@node1 11.2.0]$ srvctl start service -d racdb

Starting the database threw errors:
[grid@node1 11.2.0]$ srvctl start database -d racdb
PRCR-1079 : Failed to start resource ora.racdb.db
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0


ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0


ORA-01078: failure in processing system parameters
ORA-01078: failure in processing system parameters
CRS-2674: Start of 'ora.racdb.db' on 'node2' failed
CRS-2674: Start of 'ora.racdb.db' on 'node1' failed
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0


ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Linux-x86_64 Error: 2: No such file or directory
Process ID: 0
Session ID: 0 Serial number: 0
CRS-2632: There are no more servers to try to place resource 'ora.racdb.db' on that would satisfy its placement policy

The ORA-01078 / ORA-01034 errors suggest the instances could not read their server parameter file, most likely because the disk group holding it was not mounted. Connect to the ASM instance on each node and check the disk group states:
SQL> select name,state from v$asm_diskgroup;


NAME                           STATE
------------------------------ -----------
CRS                            MOUNTED
DATA                           DISMOUNTED
REC                            DISMOUNTED


SQL> alter diskgroup DATA mount;


Diskgroup altered.


SQL> alter diskgroup REC mount;


Diskgroup altered.


After manually mounting the disk groups that were in DISMOUNTED state, continue starting the database.
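(Equivalent alternative, not from the original post: the disk groups can also be mounted from the command line with asmcmd. In 11.2, mounting a disk group also registers its ora.*.dg resource automatically, which is why DATA and REC show up in the resource listing below.)
[grid@node1 ~]$ asmcmd mount DATA
[grid@node1 ~]$ asmcmd mount REC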


After the commands above completed, check the resource status again:
[grid@node1 11.2.0]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.CRS.dg     ora....up.type ONLINE    ONLINE    node1
ora.DATA.dg    ora....up.type ONLINE    ONLINE    node1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    node1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    node1
ora.REC.dg     ora....up.type ONLINE    ONLINE    node1
ora.asm        ora.asm.type   ONLINE    ONLINE    node1
ora.eons       ora.eons.type  ONLINE    ONLINE    node1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    node1
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora.node1.gsd  application    OFFLINE   OFFLINE
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  ora....t1.type ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE    node2
ora....E2.lsnr application    ONLINE    ONLINE    node2
ora.node2.gsd  application    OFFLINE   OFFLINE
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  ora....t1.type ONLINE    ONLINE    node2
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    node1
ora.racdb.db   ora....se.type ONLINE    ONLINE    node1
ora....tle.svc ora....ce.type ONLINE    ONLINE    node2
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node1

At this point the CRS rebuild is complete. One question I still have: I had already wiped the CRS disk group, so why did the rebuild not require re-creating that disk group? (My guess, not verified: root.sh re-created it from the ASM settings recorded at install time in $GRID_HOME/crs/install/crsconfig_params.)

-- The End --