Category: Oracle

2015-02-13 13:43:05

RedHat Linux 5.4 RAC: changing the hostnames

0. Configuration
A. The clusterware (Grid Infrastructure) is installed as the grid user, with ORACLE_HOME=/u01/app/11.2.0/grid
B. The RAC database is installed as the oracle user, with ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
C. Environment: VirtualBox 4.2.12 + OEL 6.4 i386 + Oracle 11gR2 (11.2.0.3) + 2-node RAC
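For reference, a minimal sketch of how the two environments might be set in each owner's ~/.bash_profile; the post only states the ORACLE_HOME values, so the ORACLE_BASE paths below are assumptions:

# grid user (~/.bash_profile) -- assumed layout
export ORACLE_BASE=/u01/app/grid           # assumption, not given in the post
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH

# oracle user (~/.bash_profile) -- assumed layout
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH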

1. The two nodes are ora and orb. /etc/hosts is configured as follows on both nodes:
192.168.11.101  ora
192.168.11.102  ora-vip
192.168.88.100  ora-priv

192.168.11.104  orb
192.168.11.105  orb-vip
192.168.88.101  orb-priv

192.168.11.103  ora-scan

2. CRS status
Name                                    Type                         Target     State      Host      
----------------------------------- -------------------------- ---------- ---------  -------   
ora.CRS1.dg                    ora.diskgroup.type         ONLINE     ONLINE     ora       
ora.DATA.dg                    ora.diskgroup.type         ONLINE     ONLINE     ora       
ora.LISTENER.lsnr              ora.listener.type          ONLINE     ONLINE    ora       
ora.LISTENER_SCAN1.lsnr        ora.scan_listener.type     ONLINE     ONLINE    ora       
ora.asm                        ora.asm.type               ONLINE     ONLINE    ora       
ora.ck.db                      ora.database.type          OFFLINE    OFFLINE              
ora.cvu                        ora.cvu.type               ONLINE     ONLINE     ora       
ora.gsd                        ora.gsd.type               OFFLINE    OFFLINE              
ora.net1.network               ora.network.type           ONLINE     ONLINE     ora       
ora.oc4j                       ora.oc4j.type              ONLINE     ONLINE    ora      
ora.ons                        ora.ons.type               ONLINE     ONLINE     ora       
ora.ora.ASM1.asm               application                ONLINE     ONLINE     ora       
ora.ora.LISTENER_ORA.lsnr      application                ONLINE     ONLINE    ora      
ora.ora.gsd                    application                OFFLINE    OFFLINE              
ora.ora.ons                    application                ONLINE     ONLINE    ora       
ora.ora.vip                    ora.cluster_vip_net1.type  ONLINE     ONLINE    ora       
ora.orb.ASM2.asm               application                ONLINE     ONLINE    orb       
ora.orb.LISTENER_ORB.lsnr      application                ONLINE     ONLINE    orb       
ora.orb.gsd                    application                OFFLINE    OFFLINE              
ora.orb.ons                    application                ONLINE     ONLINE     orb      
ora.orb.vip                    ora.cluster_vip_net1.type  ONLINE     ONLINE    orb      
ora.orcl.db                    ora.database.type          ONLINE     ONLINE    ora       
ora.orcl.orcl.xj.com.svc       ora.service.type           ONLINE     ONLINE    ora       
ora.scan1.vip                  ora.scan_vip.type          ONLINE     ONLINE     ora

3. The goal is to rename the two nodes to rac1 and rac2. The approach: remove node 2 from the cluster, change node 2's hostname, add node 2 back into the CRS; then remove node 1, change node 1's hostname, and add node 1 back into the CRS.

4. Remove node 2


A. Check that node 2 is Active and Unpinned; if it is pinned, unpin it with crsctl unpin css (an example follows the output below):
olsnodes -s -t
ora     Active  Unpinned
orb     Active  Unpinned
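If a node shows up as Pinned, it can be unpinned before the removal, for example (run as root from the GRID_HOME, using orb here):

/u01/app/11.2.0/grid/bin/crsctl unpin css -n orb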

B. As root on node 2, run the following from the GRID_HOME:
/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Network exists: 1/192.168.11.0/255.255.255.0/eth0, type static
VIP exists: /ora-vip/192.168.11.102/192.168.11.0/255.255.255.0/eth0, hosting node ora
VIP exists: /orb-vip/192.168.11.105/192.168.11.0/255.255.255.0/eth0, hosting node orb
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'orb'
CRS-2673: Attempting to stop 'ora.crsd' on 'orb'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'orb'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'orb'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'orb'
CRS-2677: Stop of 'ora.orcl.db' on 'orb' succeeded
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'orb'
CRS-2677: Stop of 'ora.DATA1.dg' on 'orb' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'orb' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'orb'
CRS-2677: Stop of 'ora.asm' on 'orb' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'orb' has completed
CRS-2677: Stop of 'ora.crsd' on 'orb' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'orb'
CRS-2673: Attempting to stop 'ora.ctssd' on 'orb'
CRS-2673: Attempting to stop 'ora.evmd' on 'orb'
CRS-2673: Attempting to stop 'ora.asm' on 'orb'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'orb'
CRS-2677: Stop of 'ora.crf' on 'orb' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'orb' succeeded
CRS-2677: Stop of 'ora.evmd' on 'orb' succeeded
CRS-2677: Stop of 'ora.asm' on 'orb' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'orb'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'orb' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'orb' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'orb'
CRS-2677: Stop of 'ora.cssd' on 'orb' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'orb'
CRS-2677: Stop of 'ora.gipcd' on 'orb' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'orb'
CRS-2677: Stop of 'ora.gpnpd' on 'orb' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'orb' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

C. As root on node 1, run:
/u01/app/11.2.0/grid/bin/crsctl delete node -n orb
CRS-4661: Node orb successfully deleted.

D. As the grid user on node 2, run:
/u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={orb}" CRS=TRUE -silent -local

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2010 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oracle/oraInventory
'UpdateNodeList' was successful.

E. As the grid user on node 2, run:
/u01/app/11.2.0/grid/deinstall/deinstall -local
This step is interactive; press Enter to accept the defaults throughout. At the end it generates a script that must be run as root in another terminal.

F. As the grid user on node 1, run:
/u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={ora}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 1433 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oracle/oraInventory
'UpdateNodeList' was successful.

G. On node 1, verify that node 2 was removed successfully:
cluvfy stage -post nodedel -n orb -verbose

H. After node 2 has been removed, change node 2's hostname to rac2 (a sketch of the OS-level change follows the hosts entries) and update /etc/hosts on both nodes:
192.168.11.101  ora
192.168.11.102  ora-vip
192.168.88.100  ora-priv

192.168.11.104  rac2
192.168.11.105  rac2-vip
192.168.88.101  rac2-priv

192.168.11.103  ora-scan
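The post does not show the hostname change itself. On OEL/RHEL 6 it would typically be done roughly like this, as root on node 2 (a sketch; adjust to your environment):

# change the hostname for the running system
hostname rac2
# make the change persistent across reboots (RHEL/OEL 6 style)
sed -i 's/^HOSTNAME=.*/HOSTNAME=rac2/' /etc/sysconfig/network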

I. Add node 2 back into the CRS
1. As the grid user on node 1, check whether node 2 meets the node-addition prerequisites:
cluvfy stage -pre nodeadd -n rac2 -fixup -fixupdir /tmp -verbose

Performing pre-checks for node addition

Checking node reachability...

Check: Node reachability from node "ora"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  rac2                                  yes                     
Result: Node reachability check passed from node "ora"


Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Status                  
  ------------------------------------  ------------------------
  rac2                                  failed                  
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed


Pre-check for node addition was unsuccessful on all the nodes.

Because the hostname changed, SSH user equivalence for the grid user between the two nodes has to be re-established:
/u01/app/11.2.0/grid/deinstall/sshUserSetup.sh -user grid -hosts ora rac2 -noPromptPassphrase
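Once sshUserSetup.sh has finished, the equivalence can be verified quickly as the grid user on node 1; neither command should prompt for a password (a simple sanity check, not from the original post):

ssh rac2 hostname
ssh ora hostname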

2. As the grid user on node 1, run:
$ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"
A few minor prerequisite checks fail. In a GUI installation these can be ignored interactively, but here addNode.sh aborts on them, so the script needs a small modification:
#!/bin/sh
OHOME=/u01/app/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
EXIT_CODE=0
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
        $ADDNODE
        EXIT_CODE=$?;
else
        CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre ORACLE_HOME=$OHOME $*"
        $CHECK_NODEADD
        EXIT_CODE=$?;
EXIT_CODE=0   ## line added here so that the minor check failures are ignored
        if [ $EXIT_CODE -eq 0 ]
        then
                $ADDNODE
                EXIT_CODE=$?;
        fi
fi
exit $EXIT_CODE ;
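As an alternative to editing the script, the environment variable that addNode.sh itself tests (see the if condition above) can be set to skip the pre-add checks entirely:

# skip check_nodeadd.pl instead of patching addNode.sh
export IGNORE_PREADDNODE_CHECKS=Y
$ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"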

Rerun:

$ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"


Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "ora"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.11.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.11.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.11.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.11.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
"/u01/app/11.2.0/grid" is shared
Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.11.0"


Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "192.168.88.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.11.0".
Subnet mask consistency check passed for subnet "192.168.88.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.11.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.11.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.88.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.88.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Total memory check failed
Check failed on nodes:
        rac2,ora
Available memory check passed
Swap space check passed
Free disk space check failed for "rac2:/u01/app/11.2.0/grid,rac2:/tmp"
Check failed on nodes:
        rac2
Free disk space check failed for "ora:/u01/app/11.2.0/grid,ora:/tmp"
Check failed on nodes:
        ora
Check for multiple users with UID value 1100 passed
User existence check passed for "grid"
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "libaio"
Package existence check passed for "glibc"
Package existence check passed for "compat-libstdc++-33"
Package existence check passed for "elfutils-libelf"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel"
Package existence check passed for "glibc-headers"
Package existence check passed for "libaio-devel"
Package existence check passed for "libgcc"
Package existence check passed for "libstdc++"
Package existence check passed for "libstdc++-devel"
Package existence check passed for "sysstat"
Package existence check failed for "pdksh"
Check failed on nodes:
        rac2,ora
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed


User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: ora

File "/etc/resolv.conf" is not consistent across nodes

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Pre-check for node addition was unsuccessful on all the nodes.
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 1395 MB    Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.


Performing tests to see whether nodes rac2 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      rac2
         /: Required 3.80GB : Available 4.99GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.3.0
      Sun JDK 1.5.0.30.03
      Installer SDK Component 11.2.0.3.0
      Oracle One-Off Patch Installer 11.2.0.1.7
      Oracle Universal Installer 11.2.0.3.0
      Oracle USM Deconfiguration 11.2.0.3.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.4
      Oracle DBCA Deconfiguration 11.2.0.3.0
      Oracle RAC Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Server) 11.2.0.3.0
      Installation Plugin Files 11.2.0.3.0
      Universal Storage Manager Files 11.2.0.3.0
      Oracle Text Required Support Files 11.2.0.3.0
      Automatic Storage Management Assistant 11.2.0.3.0
      Oracle Database 11g Multimedia Files 11.2.0.3.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
      Oracle Core Required Support Files 11.2.0.3.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.3.0
      Oracle Quality of Service Management (Client) 11.2.0.3.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.3.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.3.0
      Oracle JDBC/OCI Instant Client 11.2.0.3.0
      Oracle Multimedia Client Option 11.2.0.3.0
      LDAP Required Support Files 11.2.0.3.0
      Character Set Migration Utility 11.2.0.3.0
      Perl Interpreter 5.10.0.0.1
      PL/SQL Embedded Gateway 11.2.0.3.0
      OLAP SQL Scripts 11.2.0.3.0
      Database SQL Scripts 11.2.0.3.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.3.0
      SQL*Plus Files for Instant Client 11.2.0.3.0
      Oracle Net Required Support Files 11.2.0.3.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.3.0
      RDBMS Required Support Files Runtime 11.2.0.3.0
      XML Parser for Java 11.2.0.3.0
      Oracle Security Developer Tools 11.2.0.3.0
      Oracle Wallet Manager 11.2.0.3.0
      Enterprise Manager plugin Common Files 11.2.0.3.0
      Platform Required Support Files 11.2.0.3.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.3.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.3
      Deinstallation Tool 11.2.0.3.0
      Oracle Java Client 11.2.0.3.0
      Cluster Verification Utility Files 11.2.0.3.0
      Oracle Notification Service (eONS) 11.2.0.3.0
      Oracle LDAP administration 11.2.0.3.0
      Cluster Verification Utility Common Files 11.2.0.3.0
      Oracle Clusterware RDBMS Files 11.2.0.3.0
      Oracle Locale Builder 11.2.0.3.0
      Oracle Globalization Support 11.2.0.3.0
      Buildtools Common Files 11.2.0.3.0
      Oracle RAC Required Support Files-HAS 11.2.0.3.0
      SQL*Plus Required Support Files 11.2.0.3.0
      XDK Required Support Files 11.2.0.3.0
      Agent Required Support Files 10.2.0.4.3
      Parser Generator Required Support Files 11.2.0.3.0
      Precompiler Required Support Files 11.2.0.3.0
      Installation Common Files 11.2.0.3.0
      Required Support Files 11.2.0.3.0
      Oracle JDBC/THIN Interfaces 11.2.0.3.0
      Oracle Multimedia Locator 11.2.0.3.0
      Oracle Multimedia 11.2.0.3.0
      HAS Common Files 11.2.0.3.0
      Assistant Common Files 11.2.0.3.0
      PL/SQL 11.2.0.3.0
      HAS Files for DB 11.2.0.3.0
      Oracle Recovery Manager 11.2.0.3.0
      Oracle Database Utilities 11.2.0.3.0
      Oracle Notification Service 11.2.0.3.0
      SQL*Plus 11.2.0.3.0
      Oracle Netca Client 11.2.0.3.0
      Oracle Net 11.2.0.3.0
      Oracle JVM 11.2.0.3.0
      Oracle Internet Directory Client 11.2.0.3.0
      Oracle Net Listener 11.2.0.3.0
      Cluster Ready Services Files 11.2.0.3.0
      Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------


Instantiating scripts for add node (Saturday, May 18, 2013 12:55:02 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Saturday, May 18, 2013 12:55:05 PM CST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Saturday, May 18, 2013 12:58:24 PM CST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/11.2.0/grid/root.sh #On nodes rac2
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
   
The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

As prompted, run the following as root on node 2: /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node ora, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

3. Check the CRS status
crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.CRS.dg     ora....up.type ONLINE    ONLINE    ora         
ora.DATA1.dg   ora....up.type ONLINE    ONLINE    ora         
ora....ER.lsnr ora....er.type ONLINE    ONLINE    ora         
ora....N1.lsnr ora....er.type ONLINE    ONLINE    ora         
ora.asm        ora.asm.type   ONLINE    ONLINE    ora         
ora.cvu        ora.cvu.type   ONLINE    ONLINE    ora         
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    ora         
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    ora         
ora.ons        ora.ons.type   ONLINE    ONLINE    ora         
ora....SM1.asm application    ONLINE    ONLINE    ora         
ora....RA.lsnr application    ONLINE    ONLINE    ora         
ora.ora.gsd    application    OFFLINE   OFFLINE               
ora.ora.ons    application    ONLINE    ONLINE    ora         
ora.ora.vip    ora....t1.type ONLINE    ONLINE    ora         
ora.orcl.db    ora....se.type ONLINE    ONLINE    ora         
ora....SM3.asm application    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    OFFLINE   OFFLINE               
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2        
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    ora


## remove the instance definition that still references the old node name
srvctl remove instance -d orcl -i orcl2 -f -y
## add the instance back on the new node name
srvctl add instance -d orcl -i orcl2 -n rac2 -f
## start the instance on the new node
srvctl start instance -d orcl -i orcl2
## check which node each database instance is running on
srvctl status database -d orcl
Instance orcl1 is running on node ora
Instance orcl2 is running on node rac2
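To double-check that orcl2 is now registered against the new node name, the OCR-side configuration can also be queried (output not reproduced here):

## list the configured instances and their nodes
srvctl config database -d orcl
## status of the relocated instance only
srvctl status instance -d orcl -i orcl2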

J. Remove node 1
1. A few extra steps are exercised this time while removing node 1.
# stop and then remove the database instance on node 1
srvctl stop instance -d orcl -i orcl1
srvctl remove instance -d orcl -i orcl1 -f -y

2. Check that node 1 is Active and Unpinned; if it is pinned, use crsctl unpin css:
olsnodes -s -t
ora     Active  Unpinned
rac2     Active  Unpinned

3. As root on node 1, run the following from the GRID_HOME:
/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force

4. As root on node 2, run:
/u01/app/11.2.0/grid/bin/crsctl delete node -n ora
CRS-4661: Node ora successfully deleted.

5. As the grid user on node 1, run:
/u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={ora}" CRS=TRUE -silent -local

6. As the grid user on node 1, run:
/u01/app/11.2.0/grid/deinstall/deinstall -local
This step is interactive; press Enter to accept the defaults throughout. At the end it generates a script that must be run as root in another terminal on node 1.

7. As the grid user on node 2, run:
/u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={rac2}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 1433 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oracle/oraInventory
'UpdateNodeList' was successful.

8. On node 2, verify that node 1 was removed successfully:
cluvfy stage -post nodedel -n ora -verbose

K. After node 1 has been removed, change node 1's hostname to rac1 (in the same way as in step H) and update /etc/hosts on both nodes:
192.168.11.101  rac1
192.168.11.102  rac1-vip
192.168.88.100  rac1-priv

192.168.11.104  rac2
192.168.11.105  rac2-vip
192.168.88.101  rac2-priv

192.168.11.103  ora-scan

L. Prepare to add node 1 back into the CRS
1. On node 2, re-establish grid user equivalence between the two nodes:
/u01/app/11.2.0/grid/deinstall/sshUserSetup.sh -user grid -hosts rac1 rac2 -noPromptPassphrase
2. As the grid user on node 2, check whether node 1 meets the node-addition prerequisites:
cluvfy stage -pre nodeadd -n rac1 -fixup -fixupdir /tmp -verbose
Modify $ORACLE_HOME/oui/bin/addNode.sh as in step I.

M. Add node 1 back into the CRS
$ORACLE_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={rac1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac1-vip}"
As prompted, run the following as root on node 1: /u01/app/11.2.0/grid/root.sh

N. Relocate the database instance to the new node name:
## remove the instance definition that still references the old node name
srvctl remove instance -d orcl -i orcl1 -f -y
## add the instance back on the new node name
srvctl add instance -d orcl -i orcl1 -n rac1 -f
## start the instance on the new node
srvctl start instance -d orcl -i orcl1
## check which node each database instance is running on
srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2

O. Afterwards, it turns out the ASM instance names on the two nodes have been changed to +ASM3 and +ASM4.
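This can be confirmed from the OS process list or from the clusterware, for example:

# ASM background processes show the new instance names, e.g. asm_pmon_+ASM3
ps -ef | grep asm_pmon | grep -v grep
# clusterware view of ASM
srvctl status asm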