Environment: Oracle 10.2.0.1 RAC on RHEL 5.3 (32-bit)
1. On rac1, run /u01/app/oracle/product/10.2.0/crs_1/root.sh:
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1priv rac1
node 2: rac2 rac2priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
CSS is inactive on these nodes.
rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
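Before running root.sh on the second node, the state of the local stack can be confirmed; a quick check (paths assume the CRS home used above) is:

[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crsctl check crs    # health of the CSS/CRS/EVM daemons
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/olsnodes -n         # node names and numbers known to CSS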
2. On rac2, run /u01/app/oracle/product/10.2.0/crs_1/root.sh:
[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1priv rac1
node 2: rac2 rac2priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/oracle/product/10.2.0/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
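This libpthread.so.0 failure is the well-known LD_ASSUME_KERNEL problem on RHEL 5: vipca and srvctl export LD_ASSUME_KERNEL=2.4.19, which points the loader at the old LinuxThreads libraries that RHEL 5 no longer ships. It can be reproduced in any shell (illustration only; unset it afterwards):

[root@rac2 bin]# export LD_ASSUME_KERNEL=2.4.19
[root@rac2 bin]# ls    # any dynamically linked command now fails with the same libpthread.so.0 error
[root@rac2 bin]# unset LD_ASSUME_KERNEL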
Running vipca manually as root fails as well:

[root@rac2 bin]# ./vipca
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
Fix the first error by editing vipca (in the CRS bin directory) and commenting out the lines that set LD_ASSUME_KERNEL:

#if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
#then
#LD_ASSUME_KERNEL=2.4.19
#export LD_ASSUME_KERNEL
#fi
#unset LD_ASSUME_KERNEL
Apply the same change to srvctl (the line numbers below show where the block sits in the file):

165 #Remove this workaround when the bug 3937317 is fixed
166 #LD_ASSUME_KERNEL=2.4.19
167 #export LD_ASSUME_KERNEL
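To confirm that no active setting remains in either script, a simple check is:

[root@rac2 bin]# grep -n LD_ASSUME_KERNEL vipca srvctl    # every remaining match should be commented out with '#'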
The listNetInterfaces error remains because the public and private interfaces are not yet registered in the OCR (both networks here use private address ranges), so they must be specified explicitly:
[root@rac2 bin]# ./oifcfg setif -global eth0/10.0.127.128:public
[root@rac2 bin]# ./oifcfg setif -global eth1/192.168.0.0:cluster_interconnect
[root@rac2 bin]# ./oifcfg getif    # verify the settings
eth0 10.0.127.128 global public
eth1 192.168.0.0 global cluster_interconnect
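If an interface was registered with the wrong subnet or role, it can be removed and re-added; for example (a hypothetical correction, adjust names to your environment):

[root@rac2 bin]# ./oifcfg delif -global eth0
[root@rac2 bin]# ./oifcfg setif -global eth0/10.0.127.128:public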
3. On rac2, run vipca again as root:

[root@rac2 bin]# ./vipca

In the OUI window, enter the VIP alias, VIP address, and netmask for rac1 and rac2.
4. Confirm in the OUI window that the configuration completes successfully.
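As a final sanity check from either node (run from the same CRS bin directory), the nodeapps and cluster resources should all be ONLINE:

[root@rac1 bin]# ./crs_stat -t                     # VIP/GSD/ONS resources on both nodes
[root@rac1 bin]# ./srvctl status nodeapps -n rac1
[root@rac1 bin]# ./srvctl status nodeapps -n rac2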