
Category: Oracle

2011-12-04 19:20:22


I. Resource Planning:

1. VM naming:

CentOS_5.7_Oracle10g_RAC_node1_192.168.0.31
CentOS_5.7_Oracle10g_RAC_node2_192.168.0.32


2. RAC IP plan:
    

IP \ Node           Node 1          Node 2
Public IP (eth0)    192.168.0.31    192.168.0.32
Private IP (eth1)   100.0.0.10      100.0.0.20
VIP                 192.168.0.231   192.168.0.232


3. Disk space plan:

OS disks: 12G * 2 nodes = 24G (Oracle software + CRS: roughly 2G per node)
Shared storage: 16G => raw devices


4. Operating system:

CentOS 5.5 32bit


5. Oracle software:

Oracle Clusterware 10.2.0.1.0
Oracle Database 10.2.0.1.0



II. OS Installation and Configuration

1. Install CentOS 5.5 32-bit.

2. After installation, turn off some unneeded system services (bluetooth, cups, iptables, ip6tables, sendmail):
chkconfig bluetooth off
chkconfig cups off
chkconfig iptables off
chkconfig ip6tables off
chkconfig sendmail off

service bluetooth stop
service cups stop
service iptables stop
service ip6tables stop
service sendmail stop



A fresh OS install occupies roughly 2.7G.

3. Upload the Oracle software to node 1

Configure yum and install vsftpd; then either upload the files from the host, or log in to the host's FTP server and download them. A minimal sketch of the FTP route follows.
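
A sketch assuming a working yum repository; the upload directory is illustrative:

yum install -y vsftpd
service vsftpd start
# then push the two zip files and the rlwrap rpm from the host, e.g. into /var/ftp/pub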

[root@rac1 ora_sw]# ll
total 876872
-rw-r--r-- 1 root root 228239016 Jul 20 17:35 10201_clusterware_linux32.zip
-rw-r--r-- 1 root root 668734007 Jul 20 17:34 10201_database_linux32.zip
-rw-r--r-- 1 root root     47533 Jul 20 17:37 rlwrap-0.30-1.el5.i386.rpm

4. Configure the network (node1 & node2)

Set the IP addresses as laid out in the resource plan above.
Add the following entries to /etc/hosts on each node:
192.168.0.31  rac1
100.0.0.10    rac1-priv
192.168.0.231 rac1-vip

192.168.0.32  rac2
100.0.0.20    rac2-priv
192.168.0.232 rac2-vip


Add the gateway and hostname information to /etc/sysconfig/network; a sketch for node 1 follows.
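
A sketch of /etc/sysconfig/network for node 1 (the gateway address 192.168.0.1 is an assumption; use your network's actual gateway):

NETWORKING=yes
HOSTNAME=rac1
GATEWAY=192.168.0.1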

5. Unpack the clusterware

[root@rac1 ora_sw]# unzip 10201_clusterware_linux32.zip

6. Add the oracle user and related groups (node1 & node2)

[root@rac2 ~]# groupadd oinstall
[root@rac2 ~]# groupadd dba
[root@rac2 ~]# useradd oracle -g oinstall -G dba
[root@rac2 ~]# echo 'oracle' | passwd oracle --stdin
Changing password for user oracle.
passwd: all authentication tokens updated successfully.
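
The oracle UID and GID must be identical on both nodes (cluvfy later checks that gid 500 matches); verify on each node:

id oracle    # compare the uid/gid on rac1 and rac2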

7. Set up user equivalence (node1 & node2):

On node 1:

[oracle@rac1 ~]$ mkdir ~/.ssh
[oracle@rac1 ~]$ chmod 700 ~/.ssh
[oracle@rac1 ~]$
[oracle@rac1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
c6:79:7f:d4:21:f7:e2:2e:54:47:85:93:d9:3f:e2:3c oracle@rac1.at.com
[oracle@rac1 ~]$
[oracle@rac1 ~]$
[oracle@rac1 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
ea:a3:fc:28:49:39:9f:ef:c1:74:42:24:37:03:85:90 oracle@rac1.at.com


On node 2:
[oracle@rac2 ~]$ mkdir ~/.ssh
[oracle@rac2 ~]$ chmod 700 ~/.ssh
[oracle@rac2 ~]$
[oracle@rac2 ~]$
[oracle@rac2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
b2:b4:60:f8:c7:a3:dc:e5:27:95:b7:78:9d:5b:29:fe oracle@rac2.at.com
[oracle@rac2 ~]$
[oracle@rac2 ~]$
[oracle@rac2 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
6b:ee:83:59:ae:14:94:4a:e3:52:02:0d:02:be:9f:9b oracle@rac2.at.com

Back on node 1:
[oracle@rac1 ~]$ cat ~/.ssh/id_rsa.pub > ~/.ssh/authorized_keys
[oracle@rac1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@rac1 ~]$
[oracle@rac1 ~]$
[oracle@rac1 ~]$ ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'rac2 (192.168.0.32)' can't be established.
RSA key fingerprint is 37:da:a5:00:46:14:3c:8a:5c:64:4f:55:f5:a8:33:20.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.0.32' (RSA) to the list of known hosts.
oracle@rac2's password:
[oracle@rac1 ~]$
[oracle@rac1 ~]$
[oracle@rac1 ~]$ ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@rac2's password:
[oracle@rac1 ~]$
[oracle@rac1 ~]$
[oracle@rac1 ~]$ scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
oracle@rac2's password:
authorized_keys                                        100% 2016     2.0KB/s 


Test equivalence from both nodes:
[oracle@rac1 ~]$ ssh rac1 date
The authenticity of host 'rac1 (192.168.0.31)' can't be established.
RSA key fingerprint is 1e:72:d9:f1:3f:08:d5:e2:d5:92:1a:ea:3d:2e:f7:f0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,192.168.0.31' (RSA) to the list of known hosts.
Thu Jul 21 15:25:03 CST 2011
[oracle@rac1 ~]$
[oracle@rac1 ~]$
[oracle@rac1 ~]$
[oracle@rac1 ~]$
[oracle@rac1 ~]$ ssh rac2 date
Thu Jul 21 15:25:09 CST 2011
[oracle@rac1 ~]$ ssh rac1-priv date
The authenticity of host 'rac1-priv (100.0.0.10)' can't be established.
RSA key fingerprint is 1e:72:d9:f1:3f:08:d5:e2:d5:92:1a:ea:3d:2e:f7:f0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1-priv,100.0.0.10' (RSA) to the list of known hosts.
Thu Jul 21 15:25:18 CST 2011
[oracle@rac1 ~]$
[oracle@rac1 ~]$
[oracle@rac1 ~]$ ssh rac2-priv date
The authenticity of host 'rac2-priv (100.0.0.20)' can't be established.
RSA key fingerprint is 37:da:a5:00:46:14:3c:8a:5c:64:4f:55:f5:a8:33:20.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2-priv,100.0.0.20' (RSA) to the list of known hosts.
Thu Jul 21 15:25:34 CST 2011


[oracle@rac2 ~]$ ssh rac1 date
The authenticity of host 'rac1 (192.168.0.31)' can't be established.
RSA key fingerprint is 1e:72:d9:f1:3f:08:d5:e2:d5:92:1a:ea:3d:2e:f7:f0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1,192.168.0.31' (RSA) to the list of known hosts.
Thu Jul 21 15:26:03 CST 2011
[oracle@rac2 ~]$ ssh rac2 date
The authenticity of host 'rac2 (192.168.0.32)' can't be established.
RSA key fingerprint is 37:da:a5:00:46:14:3c:8a:5c:64:4f:55:f5:a8:33:20.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.0.32' (RSA) to the list of known hosts.
Thu Jul 21 15:26:08 CST 2011
[oracle@rac2 ~]$ ssh rac1-priv date
The authenticity of host 'rac1-priv (100.0.0.10)' can't be established.
RSA key fingerprint is 1e:72:d9:f1:3f:08:d5:e2:d5:92:1a:ea:3d:2e:f7:f0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac1-priv,100.0.0.10' (RSA) to the list of known hosts.
Thu Jul 21 15:26:17 CST 2011
[oracle@rac2 ~]$ ssh rac2-priv date
The authenticity of host 'rac2-priv (100.0.0.20)' can't be established.
RSA key fingerprint is 37:da:a5:00:46:14:3c:8a:5c:64:4f:55:f5:a8:33:20.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2-priv,100.0.0.20' (RSA) to the list of known hosts.
Thu Jul 21 15:26:24 CST 2011


Check that node connectivity is intact:
[oracle@rac1 cluvfy]$ ./runcluvfy.sh comp nodecon -n rac1,rac2 -verbose

Verifying node connectivity

Checking node connectivity...


Interface information for node "rac2"
  Interface Name                  IP Address                      Subnet         
  ------------------------------  ------------------------------  ----------------
  eth0                            192.168.0.32                    192.168.0.0    
  eth1                            100.0.0.20                      100.0.0.0      


Interface information for node "rac1"
  Interface Name                  IP Address                      Subnet         
  ------------------------------  ------------------------------  ----------------
  eth0                            192.168.0.31                    192.168.0.0    
  eth1                            100.0.0.10                      100.0.0.0      


Check: Node connectivity of subnet "192.168.0.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  rac2:eth0                       rac1:eth0                       yes            
Result: Node connectivity check passed for subnet "192.168.0.0" with node(s) rac2,rac1.

Check: Node connectivity of subnet "100.0.0.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  rac2:eth1                       rac1:eth1                       yes            
Result: Node connectivity check passed for subnet "100.0.0.0" with node(s) rac2,rac1.

Suitable interfaces for VIP on subnet "100.0.0.0":
rac2 eth1:100.0.0.20
rac1 eth1:100.0.0.10

Suitable interfaces for the private interconnect on subnet "192.168.0.0":
rac2 eth0:192.168.0.32
rac1 eth0:192.168.0.31

Result: Node connectivity check passed.
Verification of node connectivity was successful.


8. Set up the oracle user's profile (node1 & node2):

su - oracle
vi ~/.bashrc
Add the following:
umask 022
export TMP=/tmp
export TMPDIR=$TMP

export ORACLE_BASE=/opt/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db1
export ORA_CRS_HOME=$ORACLE_BASE/product/crs

export ORACLE_OWNER=oracle
export ORACLE_SID=myrac
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
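
Reload the profile and spot-check it before continuing:

source ~/.bashrc
echo $ORACLE_HOME $ORA_CRS_HOME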


9. Check for missing required packages (node1 & node2):

[root@rac1 ~]# rpm -q binutils compat-db control-center gcc gcc-c++ glibc glibc-common libstdc++ libstdc++-devel make openmotif setarch
binutils-2.17.50.0.6-14.el5
package compat-db is not installed
control-center-2.16.0-16.el5
gcc-4.1.2-48.el5
gcc-c++-4.1.2-48.el5
glibc-2.5-49
glibc-common-2.5-49
libstdc++-4.1.2-48.el5
libstdc++-devel-4.1.2-48.el5
make-3.81-3.el5
openmotif-2.3.1-2.el5_4.1
setarch-2.0-1.1


Install anything missing with yum; an example follows.
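
Here only compat-db is reported missing, so for example:

yum install -y compat-db    # repeat for any other package reported missing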



10. Set system parameters (node1 & node2):

[oracle@rac1 ~]$ cat /etc/issue
CentOS release 5.5 (Final)
Kernel \r on an \m

[root@rac2 ~]# tail -n 10 /etc/security/limits.conf
# Oracle configure shell parameters
oracle          soft    nofile          65536
oracle          hard    nofile          65536
oracle          soft    nproc           16384
oracle          hard    nproc           16384

# Added for increasing the per-process max locked
oracle          soft    memlock         3145728
oracle          hard    memlock         3145728

[root@rac1 ~]# tail -n 17 /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 268435456


# Added for Oracle
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144


Make the parameters take effect (node1 & node2):
[root@rac1 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 4294967295
kernel.shmall = 268435456
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144


Enable the PAM limits module (node1 & node2):
[root@rac1 ~]# tail -n 2 /etc/pam.d/login
# Added for oracle user
session    required     /lib/security/pam_limits.so
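
To confirm the limits from /etc/security/limits.conf now apply, a quick check through a fresh login shell (a sketch):

su - oracle -c 'ulimit -n -u'    # expect nofile 65536 and nproc 16384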


Set up the hangcheck-timer module (node1 & node2):
[root@rac1 ~]# insmod /lib/modules/2.6.18-194.el5/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=30 hangcheck_margin=180
[root@rac1 ~]#
[root@rac1 ~]#
[root@rac1 ~]# lsmod | grep hang
hangcheck_timer         8025  0

Add it to /etc/rc.local so it loads at boot (node1 & node2):
[root@rac1 ~]# tail -n 2 /etc/rc.local
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180


11. Install the ASMLib packages and related kernel packages (node1 & node2) ==> the storage can also be set up without asmlib

Install the related kernel packages:
yum install kernel-debug kernel-PAE kernel-xen -y

Install the ASMLib support packages:
[root@rac1 ~]# rpm -ivh *.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 14%]
   2:oracleasm-2.6.18-194.el########################################### [ 29%]
   3:oracleasm-2.6.18-194.el########################################### [ 43%]
   4:oracleasm-2.6.18-194.el########################################### [ 57%]
   5:oracleasm-2.6.18-194.el########################################### [ 71%]
   6:oracleasm-2.6.18-194.el########################################### [ 86%]
   7:oracleasmlib           ########################################### [100%]


12. Partition the shared disk and create the ASM disks

[root@rac1 ~]# fdisk -l

Disk /dev/sda: 12.8 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        1428    11365987+  83  Linux
/dev/sda3            1429        1566     1108485   82  Linux swap / Solaris

Disk /dev/sdb: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          13      104391   83  Linux
/dev/sdb2              14          50      297202+  83  Linux
/dev/sdb3              51          53       24097+  83  Linux
/dev/sdb4              54        1958    15301912+   5  Extended
/dev/sdb5              54          56       24066   83  Linux
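
The shared-disk partitions above were created from node 1. A sketch of how to make node 2 re-read the partition table without a reboot (assumes partprobe, from the parted package, is installed; rebooting node 2 also works):

partprobe /dev/sdb
fdisk -l /dev/sdb    # run on node 2; sdb1-sdb5 should now be visible there too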


Configure ASMLib and create the ASM disks:
[root@rac1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]
[root@rac1 ~]#
[root@rac1 ~]#
[root@rac1 ~]#
[root@rac1 oracle]# chown oracle.oinstall /dev/sdb[12]
[root@rac1 oracle]# ll !$
ll /dev/sdb[12]
brw-r----- 1 oracle oinstall 8, 17 Aug 25 15:36 /dev/sdb1
brw-r----- 1 oracle oinstall 8, 18 Aug 25 15:36 /dev/sdb2
[root@rac1 ~]#
[root@rac1 ~]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb3
Marking disk "VOL1" as an ASM disk: [  OK  ]

[root@rac1 ~]# /etc/init.d/oracleasm createdisk VOL2 /dev/sdb5
Marking disk "VOL2" as an ASM disk: [  OK  ]
[root@rac1 ~]#

On node2, run oracleasm configure, then scandisks:
[root@rac2 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y    
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]
[root@rac2 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
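
If node2 does not see the volumes right away, rescan explicitly before listing:

[root@rac2 ~]# /etc/init.d/oracleasm scandisks
[root@rac2 ~]# /etc/init.d/oracleasm listdisks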


13. Bind the voting disk and OCR partitions to raw devices (node1 & node2):

/dev/sdb1 votingdisk
/dev/sdb2 ocr

[root@rac1 ~]# cat /etc/sysconfig/rawdevices
# raw device bindings
# format:  <rawdev> <major> <minor>
#          <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
#          /dev/raw/raw2 8 5


/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
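
Activate the bindings now rather than waiting for a reboot (a sketch):

service rawdevices restart
raw -qa    # raw1/raw2 should map to sdb1/sdb2 (8,17 and 8,18)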

[root@rac1 ~]# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local
chown oracle.oinstall /dev/raw/raw[12];
          
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180


14. Create the installation directories (node1 & node2):

[root@rac1 oracle]# mkdir /opt/oracle/
[root@rac1 oracle]# chown oracle.oinstall /opt/oracle/ -R
[oracle@rac1 clusterware]$ mkdir $ORACLE_HOME -p
[oracle@rac1 clusterware]$ mkdir $ORA_CRS_HOME -p

[root@rac2 oracle]# mkdir /opt/oracle/
[root@rac2 oracle]# chown oracle.oinstall /opt/oracle/ -R
[oracle@rac2 ~]$  mkdir $ORACLE_HOME -p
[oracle@rac2 ~]$  mkdir $ORA_CRS_HOME -p

15. Run the cluvfy pre-install check on both nodes:

[oracle@rac1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "rac1"
  Destination Node                      Reachable?             
  ------------------------------------  ------------------------
  rac2                                  yes                    
  rac1                                  yes                    
Result: Node reachability check passed from node "rac1".


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment                
  ------------------------------------  ------------------------
  rac2                                  passed                 
  rac1                                  passed                 
Result: User equivalence check passed for user "oracle".

Checking administrative privileges...

Check: Existence of user "oracle"
  Node Name     User Exists               Comment                
  ------------  ------------------------  ------------------------
  rac2          yes                       passed                 
  rac1          yes                       passed                 
Result: User existence check passed for "oracle".

Check: Existence of group "oinstall"
  Node Name     Status                    Group ID               
  ------------  ------------------------  ------------------------
  rac2          exists                    500                    
  rac1          exists                    500                    
Result: Group existence check passed for "oinstall".

Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Comment    
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac2              yes           yes           yes           yes           passed     
  rac1              yes           yes           yes           yes           passed     
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.

Administrative privileges check passed.

Checking node connectivity...


Interface information for node "rac2"
  Interface Name                  IP Address                      Subnet         
  ------------------------------  ------------------------------  ----------------
  eth0                            192.168.0.32                    192.168.0.0    
  eth1                            100.0.0.20                      100.0.0.0      


Interface information for node "rac1"
  Interface Name                  IP Address                      Subnet         
  ------------------------------  ------------------------------  ----------------
  eth0                            192.168.0.31                    192.168.0.0    
  eth1                            100.0.0.10                      100.0.0.0      


Check: Node connectivity of subnet "192.168.0.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  rac2:eth0                       rac1:eth0                       yes            
Result: Node connectivity check passed for subnet "192.168.0.0" with node(s) rac2,rac1.

Check: Node connectivity of subnet "100.0.0.0"
  Source                          Destination                     Connected?     
  ------------------------------  ------------------------------  ----------------
  rac2:eth1                       rac1:eth1                       yes            
Result: Node connectivity check passed for subnet "100.0.0.0" with node(s) rac2,rac1.

Suitable interfaces for VIP on subnet "100.0.0.0":
rac2 eth1:100.0.0.20
rac1 eth1:100.0.0.10

Suitable interfaces for the private interconnect on subnet "192.168.0.0":
rac2 eth0:192.168.0.32
rac1 eth0:192.168.0.31

Result: Node connectivity check passed.


Checking system requirements for 'crs'...

Check: Total memory
  Node Name     Available                 Required                  Comment  
  ------------  ------------------------  ------------------------  ----------
  rac2          1010.85MB (1035108KB)     512MB (524288KB)          passed   
  rac1          1010.85MB (1035108KB)     512MB (524288KB)          passed   
Result: Total memory check passed.

Check: Free disk space in "/tmp" dir
  Node Name     Available                 Required                  Comment  
  ------------  ------------------------  ------------------------  ----------
  rac2          7GB (7342116KB)           400MB (409600KB)          passed   
  rac1          5.54GB (5811160KB)        400MB (409600KB)          passed   
Result: Free disk space check passed.

Check: Swap space
  Node Name     Available                 Required                  Comment  
  ------------  ------------------------  ------------------------  ----------
  rac2          1.06GB (1108476KB)        1GB (1048576KB)           passed   
  rac1          1.06GB (1108476KB)        1GB (1048576KB)           passed   
Result: Swap space check passed.

Check: System architecture
  Node Name     Available                 Required                  Comment  
  ------------  ------------------------  ------------------------  ----------
  rac2          i686                      i686                      passed   
  rac1          i686                      i686                      passed   
Result: System architecture check passed.

Check: Kernel version
  Node Name     Available                 Required                  Comment  
  ------------  ------------------------  ------------------------  ----------
  rac2          2.6.18-194.el5            2.4.21-15EL               passed   
  rac1          2.6.18-194.el5            2.4.21-15EL               passed   
Result: Kernel version check passed.

Check: Package existence for "make-3.79"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  rac2                            make-3.81-3.el5                 passed         
  rac1                            make-3.81-3.el5                 passed         
Result: Package existence check passed for "make-3.79".

Check: Package existence for "binutils-2.14"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  rac2                            binutils-2.17.50.0.6-14.el5     passed         
  rac1                            binutils-2.17.50.0.6-14.el5     passed         
Result: Package existence check passed for "binutils-2.14".

Check: Package existence for "gcc-3.2"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  rac2                            gcc-4.1.2-48.el5                passed         
  rac1                            gcc-4.1.2-48.el5                passed         
Result: Package existence check passed for "gcc-3.2".

Check: Package existence for "glibc-2.3.2-95.27"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  rac2                            glibc-2.5-49                    passed         
  rac1                            glibc-2.5-49                    passed         
Result: Package existence check passed for "glibc-2.3.2-95.27".

Check: Package existence for "compat-db-4.0.14-5"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  rac2                            compat-db-4.2.52-5.1            passed         
  rac1                            compat-db-4.2.52-5.1            passed         
Result: Package existence check passed for "compat-db-4.0.14-5".

Check: Package existence for "compat-gcc-7.3-2.96.128"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  rac2                            missing                         failed         
  rac1                            missing                         failed         
Result: Package existence check failed for "compat-gcc-7.3-2.96.128".

Check: Package existence for "compat-gcc-c++-7.3-2.96.128"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  rac2                            missing                         failed         
  rac1                            missing                         failed         
Result: Package existence check failed for "compat-gcc-c++-7.3-2.96.128".

Check: Package existence for "compat-libstdc++-7.3-2.96.128"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  rac2                            missing                         failed         
  rac1                            missing                         failed         
Result: Package existence check failed for "compat-libstdc++-7.3-2.96.128".

Check: Package existence for "compat-libstdc++-devel-7.3-2.96.128"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  rac2                            missing                         failed         
  rac1                            missing                         failed         
Result: Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".

Check: Package existence for "openmotif-2.2.3"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  rac2                            openmotif-2.3.1-2.el5_4.1       passed         
  rac1                            openmotif-2.3.1-2.el5_4.1       passed         
Result: Package existence check passed for "openmotif-2.2.3".

Check: Package existence for "setarch-1.3-1"
  Node Name                       Status                          Comment        
  ------------------------------  ------------------------------  ----------------
  rac2                            setarch-2.0-1.1                 passed         
  rac1                            setarch-2.0-1.1                 passed         
Result: Package existence check passed for "setarch-1.3-1".

Check: Group existence for "dba"
  Node Name     Status                    Comment                
  ------------  ------------------------  ------------------------
  rac2          exists                    passed                 
  rac1          exists                    passed                 
Result: Group existence check passed for "dba".

Check: Group existence for "oinstall"
  Node Name     Status                    Comment                
  ------------  ------------------------  ------------------------
  rac2          exists                    passed                 
  rac1          exists                    passed                 
Result: Group existence check passed for "oinstall".

Check: User existence for "nobody"
  Node Name     Status                    Comment                
  ------------  ------------------------  ------------------------
  rac2          exists                    passed                 
  rac1          exists                    passed                 
Result: User existence check passed for "nobody".

System requirement failed for 'crs'

Pre-check for cluster services setup was unsuccessful on all the nodes.

The only failures are the ancient compat-* package versions that 10.2.0.1's cluvfy expects from RHEL3; on CentOS 5 these checks can be ignored, so we proceed with the installation.


III. Installing CRS

The installer selection screens are omitted...
Run the root scripts in this order:
first on rac1:  /opt/oracle/oraInventory/orainstRoot.sh
then on rac2:   /opt/oracle/oraInventory/orainstRoot.sh
then on rac1:   /opt/oracle/product/crs/root.sh
then on rac2:   /opt/oracle/product/crs/root.sh

At this point, root.sh may fail on rac2:


[root@rac1 ~]# /opt/oracle/oraInventory/orainstRoot.sh
Changing permissions of /opt/oracle/oraInventory to 770.
Changing groupname of /opt/oracle/oraInventory to oinstall.
The execution of the script is complete

[root@rac2 ~]# /opt/oracle/oraInventory/orainstRoot.sh
Changing permissions of /opt/oracle/oraInventory to 770.
Changing groupname of /opt/oracle/oraInventory to oinstall.
The execution of the script is complete


[root@rac1 ~]# /opt/oracle/product/crs/root.sh
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw1
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

[root@rac2 ~]# /opt/oracle/product/crs/root.sh
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/opt/oracle/product/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory


The fix is to unset LD_ASSUME_KERNEL in both scripts:

[root@rac2 ~]# vi /opt/oracle/product/crs/bin/vipca

Find these lines:

       #Remove this workaround when the bug 3937317 is fixed
       arch=`uname -m`
       if [ "$arch" = "i686" -o "$arch" = "ia64" ]
       then
            LD_ASSUME_KERNEL=2.4.19
            export LD_ASSUME_KERNEL
       fi

and add "unset LD_ASSUME_KERNEL" on a new line right after the "fi".

[root@rac2 ~]# vi /opt/oracle/product/crs/bin/srvctl

Find these lines:

#Remove this workaround when the bug 3937317 is fixed
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL

and add "unset LD_ASSUME_KERNEL" right after the "export".


Then rerun the root.sh script:
[root@rac2 ~]# /opt/oracle/product/crs/root.sh
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)

The "Oracle Cluster Verification Utility" step then fails as follows:

Checking existence of VIP node application (required)
Check failed.
Check failed on nodes:
     rac2,rac1

To fix this, run the vipca program as root.
Select "eth0" and enter the VIP addresses; the matching hostnames are filled in automatically. Click Next, then Finish. Once the VIPs are created, click "Retry" back on the installer's verification screen.
After it succeeds, exit the installer. CRS is now installed!

Check with the crs_stat command:
[oracle@rac1 cluvfy]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host       
----------------------------------------------------------------------
ora.rac1.gsd   application    0/5    0/0    ONLINE    ONLINE    rac1       
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1       
ora.rac1.vip   application    0/0    0/0    ONLINE    ONLINE    rac1       
ora.rac2.gsd   application    0/5    0/0    ONLINE    ONLINE    rac2       
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2       
ora.rac2.vip   application    0/0    0/0    ONLINE    ONLINE    rac2    

Everything is ONLINE.

** The VIP verification failed once more: after the VIPs were created, the installer reported rac2 unreachable. It turned out rac2 had shut down; after powering it back on and waiting a moment, "Retry" succeeded.


IV. Installing the Database Software

1. Unpack the package

[oracle@rac1 ora_sw]$ unzip 10201_database_linux32.zip

2. Install the software

Install only the database software (create no database yet).
Run the root scripts at the end:

[root@rac1 ~]# /opt/oracle/product/10.2.0/db1/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/oracle/product/10.2.0/db1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

[root@rac2 ~]# /opt/oracle/product/10.2.0/db1/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/oracle/product/10.2.0/db1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

V. Creating the Database

Password settings:
DB: sys/manager 
ASM: sys/asmadmin
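
As with the CRS installer, the DBCA dialog steps are omitted; a sketch of the launch, run as oracle on node 1 with a working X display (first configure ASM on VOL1/VOL2, then create the cluster database; SID myrac per the profile above):

[oracle@rac1 ~]$ dbca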