Category: Oracle

2013-04-27 16:53:53

2. Adding Clusterware to the New Node

2.1 Checking the Environment

  First, verify the installation environment, again using the runcluvfy.sh script. The script can be run from any node in the existing configuration; here it is run on node 1:

    [oracle@jssdbn1 ~]$/data/software/clusterware/cluvfy/runcluvfy.sh stage -pre crsinst -n jssdbn3 -verbose

    Performing pre-checks for cluster services setup

    Checking node reachability...

    Check: Node reachability from node "jssdbn1"

    Destination Node Reachable?

    ------------------------------------ ------------------------

    jssdbn3 yes

    Result: Node reachability check passed from node "jssdbn1".

    Checking user equivalence...

    Check: User equivalence for user "oracle"

    Node Name Comment

    ------------------------------------ ------------------------

    jssdbn3 passed

    Result: User equivalence check passed for user "oracle".

    Checking administrative privileges...

    Check: Existence of user "oracle"

    Node Name User Exists Comment

    ------------ ------------------------ ------------------------

    jssdbn3 yes passed

    Result: User existence check passed for "oracle".

    Check: Existence of group "oinstall"

    Node Name Status Group ID

    ------------ ------------------------ ------------------------

    jssdbn3 exists 500

    Result: Group existence check passed for "oinstall".

    Check: Membership of user "oracle" in group "oinstall" [as Primary]

    Node Name User Exists Group Exists User in Group Primary Comment

    ---------------- ------------ ------------ ------------ ------------ ------------

    jssdbn3 yes yes yes yes passed

    Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.

    Administrative privileges check passed.

    Checking node connectivity...

    Interface information for node "jssdbn3"

    Interface Name IP Address Subnet

    ------------------------------ ------------------------------ ----------------

    eth0 192.168.10.13 192.168.10.0

    eth1 10.10.10.103 10.10.10.0

    Check: Node connectivity of subnet "192.168.10.0"

    Result: Node connectivity check passed for subnet "192.168.10.0" with node(s) jssdbn3.

    Check: Node connectivity of subnet "10.10.10.0"

    Result: Node connectivity check passed for subnet "10.10.10.0" with node(s) jssdbn3.

    Suitable interfaces for the private interconnect on subnet "192.168.10.0":

    jssdbn3 eth0:192.168.10.13

    Suitable interfaces for the private interconnect on subnet "10.10.10.0":

    jssdbn3 eth1:10.10.10.103

    ERROR:

    Could not find a suitable set of interfaces for VIPs.

    Result: Node connectivity check failed.

    Checking system requirements for 'crs'...

    No checks registered for this product.

    Pre-check for cluster services setup was unsuccessful on all the nodes.

  If the output contains the message "Could not find a suitable set of interfaces for VIPs.", the error can safely be ignored: it is a known bug, documented in detail on Metalink as Doc ID 338924.1.

  If there are no other errors, the installation can proceed normally. Next we move on to the installation itself.

2.2 Installing Clusterware on the New Node

  The clusterware installation for the new node is launched from the existing RAC environment. On any node of the current RAC, go to $ORA_CRS_HOME and run the oui/bin/addNode.sh script to bring up the graphical installer:
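The launch step can be sketched as a short shell snippet. The CRS home path is the one used throughout this article; adjust it for your environment:

```shell
# ORA_CRS_HOME as used in this article (assumed path; adjust as needed)
ORA_CRS_HOME=/data/ora10g/product/10.2.0/crs_1
ADDNODE="$ORA_CRS_HOME/oui/bin/addNode.sh"
echo "Would run: $ADDNODE"
# In a real session, as the oracle user with a working X display:
#   cd "$ORA_CRS_HOME/oui/bin" && ./addNode.sh
```

The script must be run as the oracle software owner, and since it opens a GUI, DISPLAY must point at a reachable X server.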

  The graphical installer appears; click Next.

  The list of existing nodes is shown. In the input fields below it, enter the new node's information, including its public name, private name, and so on; these entries must match the hosts file exactly. After entering them correctly, click Next:

  A summary screen is displayed; if everything looks right, click the Install button:

  File copying begins, along with some necessary configuration:

  Once the files have been copied, you are prompted to run the specified scripts:

  Be sure to follow the on-screen instructions and run the scripts on the correct nodes, in order. Specifically:

  • Run orainstRoot.sh on node 3;
  • Run rootaddnode.sh on node 1;
  • Run root.sh on node 3.

  All of these scripts must be run as root.
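The ordering above can be summarized in a small sketch. The script names come from the installer prompts; their full paths depend on your oraInventory and CRS home locations, so only the node-to-script mapping is shown here:

```shell
# Root-script execution order when adding node jssdbn3
# (script locations vary by environment; run each exactly as the
# installer dialog shows them, as root, one at a time, in order)
ORDER="1. on jssdbn3 (as root): orainstRoot.sh
2. on jssdbn1 (as root): rootaddnode.sh
3. on jssdbn3 (as root): root.sh"
echo "$ORDER"
```

Running them out of order, or on the wrong node, is a common cause of a failed node addition, so wait for each script to finish before starting the next.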

  Pay special attention to the last script: when root.sh runs, it invokes vipca, and part of the vipca script corresponds to bug 3937317. It is therefore recommended to edit the vipca file before running root.sh:

    [root@jssdbn3 ~]# vi /data/ora10g/product/10.2.0/crs_1/bin/vipca

  找到如下内容:

    #Remove this workaround when the bug 3937317 is fixed

    arch=`uname -m`

    if [ "$arch" = "i686" -o "$arch" = "ia64" ]

    then

    LD_ASSUME_KERNEL=2.4.19

    export LD_ASSUME_KERNEL

    fi

    #End workaround

  Add a new line after the fi:

    unset LD_ASSUME_KERNEL

  Save and exit, then run root.sh on the jssdbn3 node.
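One hedged way to script the edit, demonstrated here on a throwaway copy rather than the real $ORA_CRS_HOME/bin/vipca (GNU sed assumed for the in-place append):

```shell
# Build a demo file mimicking the workaround block in vipca
cat > /tmp/vipca.demo <<'EOF'
#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
#End workaround
EOF

# Append "unset LD_ASSUME_KERNEL" immediately after the closing "fi"
sed -i '/^fi$/a unset LD_ASSUME_KERNEL' /tmp/vipca.demo

# Show the result around the insertion point
grep -A1 '^fi$' /tmp/vipca.demo
```

On the real node you would point sed at /data/ora10g/product/10.2.0/crs_1/bin/vipca after taking a backup; editing by hand in vi, as the article shows, is equally fine.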

  After root.sh completes, it automatically invokes vipca by default, which configures the network interface services for the virtual IPs. If root.sh failed to invoke vipca automatically because of the bug described above, simply run the vipca command manually once root.sh has finished; this opens the configuration window. The configuration is straightforward; for the most part you can just click Next all the way through.

  If the scripts all run smoothly, return to the CRS installer screen and click the OK button.

  As the screen shows, End of Installation; click Exit to leave the installer.

  Next, the new node's ONS (Oracle Notification Services) configuration must be written into the OCR (Oracle Cluster Registry). On node 1, run:

    [oracle@jssdbn1 ~]$ /data/ora10g/product/10.2.0/crs_1/bin/racgons add_config jssdbn3:6200

  Tip: the port number for jssdbn3 can be found in the /data/ora10g/product/10.2.0/crs_1/opmn/conf/ons.config file on that node; the port specified here is the remoteport value.
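Extracting the remoteport value can be done with a one-liner; a sketch on a sample ons.config (the localport and other values below are assumed defaults, not taken from the actual node):

```shell
# Sample ons.config resembling a 10gR2 default (values assumed)
cat > /tmp/ons.config.demo <<'EOF'
localport=6100
remoteport=6200
loglevel=3
useocr=on
EOF

# Pull out the remote port that the racgons add_config call needs
PORT=$(awk -F= '$1=="remoteport" {print $2}' /tmp/ons.config.demo)
echo "$PORT"
```

On the real node you would run the awk line against /data/ora10g/product/10.2.0/crs_1/opmn/conf/ons.config and pass the result as jssdbn3:$PORT.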

This completes the clusterware configuration for the new node. To verify the installation, run the cluvfy command on the new node, for example:

    [oracle@jssdbn3 ~]$ /data/ora10g/product/10.2.0/crs_1/bin/cluvfy stage -post crsinst -n jssdbn3 -verbose

    Performing post-checks for cluster services setup

    Checking node reachability...

    Check: Node reachability from node "jssdbn3"

    Destination Node Reachable?

    ------------------------------------ ------------------------

    jssdbn3 yes

    Result: Node reachability check passed from node "jssdbn3".

    Checking user equivalence...

    Check: User equivalence for user "oracle"

    Node Name Comment

    ------------------------------------ ------------------------

    jssdbn3 passed

    Result: User equivalence check passed for user "oracle".

    Checking Cluster manager integrity...

    Checking CSS daemon...

    Node Name Status

    ------------------------------------ ------------------------

    jssdbn3 running

    Result: Daemon status check passed for "CSS daemon".

    Cluster manager integrity check passed.

    Checking cluster integrity...

    Node Name

    ------------------------------------

    jssdbn1

    jssdbn2

    jssdbn3

    Cluster integrity check passed

    .......................

    ......................

    Post-check for cluster services setup was successful.

