1.4 Shut down the other applications
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./srvctl stop nodeapps -n rac1
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./srvctl stop nodeapps -n rac2
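Before moving on, you can confirm the node applications really are down (a verification step of my own, not part of the original session):
./srvctl status nodeapps -n rac1
./srvctl status nodeapps -n rac2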
1.5 Stop the CRS background processes. This stops the running CRS daemons at the operating-system level and must be done on every node.
rac1:/u01/app/oracle/product/10.2.0/crs/bin# /etc/init.d/init.crs stop
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
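It is worth verifying at the OS level that the daemons have actually exited on each node; a minimal check (my addition):
ps -ef | grep -E 'crsd|evmd|ocssd' | grep -v grep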
2. Modify the operating system's IP settings
On Debian the network configuration files are /etc/network/interfaces and /etc/hosts; other Linux distributions and UNIX systems may keep their network configuration elsewhere. Taking node rac1 as an example, before the change /etc/network/interfaces contains:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0 eth1
iface eth0 inet static
address 192.168.0.181
netmask 255.255.255.0
network 192.168.0.0
broadcast 192.168.0.255
gateway 192.168.0.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 202.106.0.20
iface eth1 inet static
address 10.10.10.181
netmask 255.255.255.0
network 10.10.10.0
broadcast 10.10.10.255
After the change it contains:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0 eth1
iface eth0 inet static
address 192.168.1.181
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 202.106.0.20
iface eth1 inet static
address 10.1.0.181
netmask 255.255.255.0
network 10.1.0.0
broadcast 10.1.0.255
The /etc/hosts file contains:
127.0.0.1 localhost.localdomain localhost
192.168.0.181 rac1
192.168.0.182 rac2
192.168.0.191 rac1-vip
192.168.0.192 rac2-vip
10.10.10.181 rac1-priv
10.10.10.182 rac2-priv
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
After the change:
127.0.0.1 localhost.localdomain localhost
192.168.1.181 rac1
192.168.1.182 rac2
192.168.1.191 rac1-vip
192.168.1.192 rac2-vip
10.1.0.181 rac1-priv
10.1.0.182 rac2-priv
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
The hosts file should be kept identical on all nodes in the cluster.
After /etc/network/interfaces and /etc/hosts have been modified, run
/etc/init.d/networking restart
or reboot the operating system for the settings to take effect.
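Once the interfaces are up with the new addresses, make sure the nodes can reach each other on both the public and the private network before going further, e.g. (hostnames as defined in the hosts file above; these checks are my addition):
ping -c 3 rac2
ping -c 3 rac2-priv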
3. Start CRS and adjust the IP-related settings in Oracle
3.1 Start CRS, then stop the applications that start along with it
rac1:/u01/app/oracle/product/10.2.0/db_1/network/admin# /etc/init.d/init.crs start
Startup will be queued to init within 90 seconds.
Because all the Oracle resources are configured to start automatically, CRS tries to bring up every service when it starts. The IP-related changes in Oracle, however, require CRS to be running while the database, ASM, and the node applications are stopped, so shut those down again as described in sections 1.2, 1.3, and 1.4 (recapped below).
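For reference, a recap of those stop commands (run from the CRS home; database name orcl, as used later in this article):
./srvctl stop database -d orcl
./srvctl stop asm -n rac1
./srvctl stop asm -n rac2
./srvctl stop nodeapps -n rac1
./srvctl stop nodeapps -n rac2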
3.2 Use oifcfg to change the network interface settings. oifcfg can be used to set and display how Oracle uses the network interfaces.
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg getif -global
eth0 192.168.0.0 global public
eth1 10.10.10.0 global cluster_interconnect
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg setif -global eth0/192.168.1.0:public
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg iflist
eth0 192.168.1.0
eth0 192.168.0.0
eth1 10.1.0.0
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg delif -global eth0/192.168.0.0
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg iflist
eth0 192.168.1.0
eth1 10.1.0.0
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg getif -global
eth0 192.168.1.0 global public
eth1 10.10.10.0 global cluster_interconnect
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./lifcfg setif -global eth1:/10.1.0.0:cluster_interconnect
-bash: ./lifcfg: No such file or directory
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg setif -global eth1:/10.1.0.0:cluster_interconnect
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg getif -global
eth0 192.168.1.0 global public
eth1 10.10.10.0 global cluster_interconnect
eth1: 10.1.0.0 global cluster_interconnect
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg setif -global eth1/10.1.0.0:cluster_interconnect
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg getif -global
eth0 192.168.1.0 global public
eth1 10.10.10.0 global cluster_interconnect
eth1 10.1.0.0 global cluster_interconnect
eth1: 10.1.0.0 global cluster_interconnect
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg delif -global eth1:
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg delif -global eth1/10.10.10.0
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./oifcfg getif -global
eth0 192.168.1.0 global public
eth1 10.1.0.0 global cluster_interconnect
oifcfg iflist shows the interfaces and subnets the operating system currently has up, while oifcfg getif -global shows what is recorded in the configuration. Note the slip above: setif with eth1:/10.1.0.0 registered a spurious interface literally named eth1:, which then had to be removed with delif eth1:, along with the old eth1/10.10.10.0 entry.
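For reference, the general syntax distilled from the session above (angle brackets are placeholders, not literal syntax):
./oifcfg setif -global <interface>/<subnet>:<public|cluster_interconnect>
./oifcfg delif -global <interface>[/<subnet>]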
3.3 Modify the VIP addresses
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./srvctl modify nodeapps -n rac1 -A 192.168.1.191/255.255.255.0/eth0
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./srvctl modify nodeapps -n rac2 -A 192.168.1.192/255.255.255.0/eth0
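The new VIP settings can be checked before anything is restarted; the -a option of srvctl config nodeapps prints the VIP configuration (my addition, not part of the original session):
./srvctl config nodeapps -n rac1 -a
./srvctl config nodeapps -n rac2 -a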
3.4 Update listener.ora and tnsnames.ora. Check whether these files reference any of the old IP addresses and change them to the new ones. On rac1, listener.ora contained 192.168.0.181 and I changed it to 192.168.1.181; the listener.ora on rac2 was changed accordingly. A quick way to find any stale entries is shown below.
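For example (paths as in this installation; the grep itself is my addition):
cd /u01/app/oracle/product/10.2.0/db_1/network/admin
grep -n '192\.168\.0\.' listener.ora tnsnames.ora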
3.5 Start the node applications, ASM, and the database
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./srvctl start nodeapps -n rac1
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./srvctl start nodeapps -n rac2
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./srvctl start asm -n rac2
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./srvctl start asm -n rac1
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./srvctl start database -d orcl
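srvctl can then confirm the state of each component (my addition, not part of the original session):
./srvctl status nodeapps -n rac1
./srvctl status nodeapps -n rac2
./srvctl status asm -n rac1
./srvctl status asm -n rac2
./srvctl status database -d orcl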
3.6 Let's take a look at the results:
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:0C:29:0D:FE:0F
inet addr:192.168.1.182 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:142242 errors:0 dropped:0 overruns:0 frame:0
TX packets:140057 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:83167889 (79.3 MiB) TX bytes:87987399 (83.9 MiB)
Interrupt:19 Base address:0x1480
eth0:1 Link encap:Ethernet HWaddr 00:0C:29:0D:FE:0F
inet addr:192.168.1.192 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:19 Base address:0x1480
eth1 Link encap:Ethernet HWaddr 00:0C:29:0D:FE:19
inet addr:10.1.0.182 Bcast:10.1.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:29781 errors:0 dropped:0 overruns:0 frame:0
TX packets:26710 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:19667330 (18.7 MiB) TX bytes:11573375 (11.0 MiB)
Interrupt:16 Base address:0x1800
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:21796 errors:0 dropped:0 overruns:0 frame:0
TX packets:21796 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:6238339 (5.9 MiB) TX bytes:6238339 (5.9 MiB)
rac2:/u01/app/oracle/product/10.2.0/crs/bin# ./crs_stat
NAME=ora.orcl.db
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.orcl.orcl1.inst
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.orcl.orcl2.inst
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac2
NAME=ora.rac1.ASM1.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.rac1.LISTENER_RAC1.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.rac1.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.rac1.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.rac1.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac1
NAME=ora.rac2.ASM2.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac2
NAME=ora.rac2.LISTENER_RAC2.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac2
NAME=ora.rac2.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac2
NAME=ora.rac2.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac2
NAME=ora.rac2.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac2
rac2:/u01/app/oracle/product/10.2.0/crs/bin# su - oracle
oracle@rac2:~$ lsnrctl stat
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 23-AUG-2006 23:23:47
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER_RAC2
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 23-AUG-2006 22:24:44
Uptime 0 days 0 hr. 59 min. 3 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File
/u01/app/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File
/u01/app/oracle/product/10.2.0/db_1/network/log/listener_rac2.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.192)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.182)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM2", status BLOCKED, has 1 handler(s) for this service...
Service "+ASM_XPT" has 1 instance(s).
Instance "+ASM2", status BLOCKED, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "orcl" has 2 instance(s).
Instance "orcl1", status READY, has 1 handler(s) for this service...
Instance "orcl2", status READY, has 2 handler(s) for this service...
Service "orclXDB" has 2 instance(s).
Instance "orcl1", status READY, has 1 handler(s) for this service...
Instance "orcl2", status READY, has 1 handler(s) for this service...
Service "orcl_XPT" has 2 instance(s).
Instance "orcl1", status READY, has 1 handler(s) for this service...
Instance "orcl2", status READY, has 2 handler(s) for this service...
The command completed successfully
Most of the operations above were performed as root. In fact, the operations done with srvctl can be completed as root, and also as the oracle user; modifying the VIP, however, must be done as root. Finally, crs_stat -ls shows each resource's owner, group, and the associated permissions.
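For example, run from the CRS home (my addition; the listing shows one line per resource with its owner, primary group, and permission mask):
./crs_stat -ls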