
Category: Oracle

2011-04-04 19:26:31

A few days ago I finally got RAC set up, after quite a struggle. Two of the resources still had minor problems at the end, but nothing that affects normal use.
I followed the guide written by 三思 (sansi); many thanks to him.

During the installation I took notes on the spots I found tricky or important, so the organization may be a bit rough...
1. IP plan:

                 A              B
 ---------------------------------------
priv ip:     98.0.0.1       98.0.0.2    ==> eth0
pub  ip:     20.0.0.1       20.0.0.2    ==> eth1
vip:         20.0.0.11      20.0.0.12
ORACLE_SID:  r1             r2
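
For reference, a minimal sketch of the matching interface files on node A (stock RHEL 5 ifcfg format; the NETMASK values are my assumption, the addresses come from the plan above):

# /etc/sysconfig/network-scripts/ifcfg-eth0  (node A)
DEVICE=eth0
BOOTPROTO=static
IPADDR=98.0.0.1
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1  (node A)
DEVICE=eth1
BOOTPROTO=static
IPADDR=20.0.0.1
NETMASK=255.255.255.0
ONBOOT=yes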


2. vim /etc/hosts
Add these entries:
98.0.0.1    n1
98.0.0.2    n2
20.0.0.1    n1-pri
20.0.0.2    n2-pri
20.0.0.11   n1-vip
20.0.0.12   n2-vip

[root@rac1 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost
98.0.0.1    n1
98.0.0.2    n2
20.0.0.1    n1-pri
20.0.0.2    n2-pri
20.0.0.11   n1-vip
20.0.0.12   n2-vip

Check a few settings:
[root@rac1 ~]# groupadd oinstall
groupadd: group oinstall already exists
[root@rac1 ~]# groupadd dba
groupadd: group dba already exists
[root@rac1 ~]# id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba)
[root@rac1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@rac1 ~]#


[root@rac1 ~]# cp /etc/skel/.bash* /export/home/oracle/
[oracle@n1 ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin

export PATH
unset USERNAME

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORACLE_SID=r1
export ORACLE_TERM=xterm
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs_1
export PATH=/usr/sbin:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
ulimit -u 16384 -n 65536
umask 022
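
On n2 the profile is identical except for the instance SID (per the plan above):

export ORACLE_SID=r2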

[oracle@rac1 ~]$ cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Controls the default maximum size of a message queue, in bytes
kernel.msgmnb = 65536
# Controls the maximum size of a single message, in bytes
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
#kernel.shmmax = 4294967295

# Controls the total amount of shared memory allowed, in pages
#kernel.shmall = 268435456
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144
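
Apply the kernel parameters without rebooting (the log doesn't show this step, but it is the standard one):

[root@rac1 ~]# sysctl -p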


[root@rac1 rac5.3]# rpm -ivh compat-binutils215-2.15.92.0.2-24.i386.rpm
Preparing...                ########################################### [100%]
   1:compat-binutils215     ########################################### [100%]
[root@rac1 rac5.3]# rpm -ivh compat-libcwait-2.1-1.i386.rpm compat-libstdc++-egcs-1.1.2-1.i386.rpm compat-oracle-el5-1.0-5.i386.rpm openmotif21-2.1.30-11.EL5.i386.rpm  openmotif21-debuginfo-2.1.30-11.EL5.i386.rpm oracleasm* xorg-x11-libs-compat-6.8.2-1.EL.33.0.1.i386.rpm
warning: oracleasm-2.6.18-128.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
        package oracleasm-support-2.1.2-1.el5.i386 is already installed
        package oracleasm-2.6.18-128.el5xen-2.0.5-1.el5.i686 is already installed
        package oracleasm-2.6.18-128.el5-2.0.5-1.el5.i686 is already installed
        package oracleasm-2.6.18-128.el5debug-2.0.5-1.el5.i686 is already installed
        package oracleasm-2.6.18-128.el5PAE-2.0.5-1.el5.i686 is already installed
        package oracleasmlib-2.0.3-1.el5.i386 is already installed


[root@rac1 ~]# fdisk -l
Disk /dev/sda: 12.8 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         765     6144831   83  Linux
/dev/sda2             766         956     1534207+  82  Linux swap / Solaris
/dev/sda3             957        1566     4899825   83  Linux

Disk /dev/sdb: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         512      524272   83  Linux

Disk /dev/sdc: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         391     3140676   83  Linux

Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         391     3140676   83  Linux

Disk /dev/sde: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1         261     2096451   83  Linux
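
For reference, this is how the shared disks end up being used in the steps below (raw bindings and ASM disks):

/dev/sdb1  ->  /dev/raw/raw1   (voting disk)
/dev/sdc1  ->  /dev/raw/raw2   (OCR)
/dev/sdd1  ->  ASM disk VOL1
/dev/sde1  ->  ASM disk VOL2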


[root@n1 ~]# cat /etc/udev/rules.d/60-raw.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
ACTION=="add", KERNEL=="/dev/sdb1",RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", ENV{MAJOR}=="8",ENV{MINOR}=="17",RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", KERNEL=="/dev/sdc1",RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", ENV{MAJOR}=="8",ENV{MINOR}=="33",RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", KERNEL=="/dev/sdd1",RUN+="/bin/raw /dev/raw/raw3"
ACTION=="add", ENV{MAJOR}=="8",ENV{MINOR}=="49",RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", KERNEL=="/dev/sde1",RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", ENV{MAJOR}=="8",ENV{MINOR}=="65",RUN+="/bin/raw /dev/raw/raw4 %M %m"
KERNEL=="raw[1-4]", OWNER="oracle", GROUP="dba", MODE="660"

[root@n1 ~]# ping n2
PING n2 (98.0.0.2) 56(84) bytes of data.
64 bytes from n2 (98.0.0.2): icmp_seq=1 ttl=64 time=0.000 ms
64 bytes from n2 (98.0.0.2): icmp_seq=2 ttl=64 time=0.505 ms

--- n2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.000/0.252/0.505/0.253 ms
[root@n1 ~]# ping n2-pri
PING n2-pri (20.0.0.2) 56(84) bytes of data.
64 bytes from n2-pri (20.0.0.2): icmp_seq=1 ttl=64 time=0.356 ms
64 bytes from n2-pri (20.0.0.2): icmp_seq=2 ttl=64 time=0.000 ms

--- n2-pri ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.000/0.178/0.356/0.178 ms


[root@n1 ~]# su - oracle

[oracle@n1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/oracle/.ssh/id_rsa):
Created directory '/export/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/oracle/.ssh/id_rsa.
Your public key has been saved in /export/home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
2f:2b:5e:5d:6b:b7:2d:2b:1d:b3:55:50:71:26:4e:99 oracle@n1

[oracle@n1 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/export/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/oracle/.ssh/id_dsa.
Your public key has been saved in /export/home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
b0:e2:75:7f:54:54:e5:c0:fb:b3:fd:a8:0d:7e:5b:67 oracle@n1

[oracle@n1 ~]$ ssh n1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'n1 (98.0.0.1)' can't be established.
RSA key fingerprint is f4:ca:58:3b:74:74:de:4d:cb:14:28:42:96:5c:d7:d2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'n1,98.0.0.1' (RSA) to the list of known hosts.
oracle@n1's password:

[oracle@n1 ~]$ ssh n2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'n2 (98.0.0.2)' can't be established.
RSA key fingerprint is f4:ca:58:3b:74:74:de:4d:cb:14:28:42:96:5c:d7:d2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'n2,98.0.0.2' (RSA) to the list of known hosts.
oracle@n2's password:

[oracle@n1 ~]$ scp ~/.ssh/authorized_keys n2:~/.ssh/
oracle@n2's password:
authorized_keys                               100%  782     0.8KB/s   00:00   
[oracle@n1 ~]$


[oracle@n1 ~]$ ssh n1 date
Thu Mar 31 15:06:33 CST 2011
[oracle@n1 ~]$ ssh n2 date
Thu Mar 31 15:06:39 CST 2011
[oracle@n1 ~]$ ssh n1-pri date
Thu Mar 31 15:06:51 CST 2011
[oracle@n1 ~]$ ssh n2-pri date
Thu Mar 31 15:06:57 CST 2011


Clear the first-time host-key prompts so that passwordless ssh is fully non-interactive:
[oracle@n1 ~]$ ssh n2

[oracle@n2 ~]$ ssh n2-pri
The authenticity of host 'n2-pri (20.0.0.2)' can't be established.
RSA key fingerprint is f4:ca:58:3b:74:74:de:4d:cb:14:28:42:96:5c:d7:d2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'n2-pri,20.0.0.2' (RSA) to the list of known hosts.
Last login: Thu Mar 31 19:40:06 2011 from n1

[oracle@n2 ~]$ ssh n1
The authenticity of host 'n1 (98.0.0.1)' can't be established.
RSA key fingerprint is f4:ca:58:3b:74:74:de:4d:cb:14:28:42:96:5c:d7:d2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'n1,98.0.0.1' (RSA) to the list of known hosts.

[oracle@n1 ~]$ ssh n2
Last login: Thu Mar 31 19:40:13 2011 from n2-pri
[oracle@n2 ~]$ ssh n1-pri
The authenticity of host 'n1-pri (20.0.0.1)' can't be established.
RSA key fingerprint is f4:ca:58:3b:74:74:de:4d:cb:14:28:42:96:5c:d7:d2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'n1-pri,20.0.0.1' (RSA) to the list of known hosts.
Last login: Thu Mar 31 19:38:19 2011 from n2
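
This priming can also be scripted; a convenience sketch (run it from each node, and to itself too, so every known_hosts entry gets populated; the log above does it by hand):

for h in n1 n2 n1-pri n2-pri; do ssh $h date; done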


Configure ASM:
[root@n1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

[root@n2 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]


[root@n1 ~]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdd1
Marking disk "VOL1" as an ASM disk:                        [  OK  ]
[root@n1 ~]# /etc/init.d/oracleasm createdisk VOL2 /dev/sde1
Marking disk "VOL2" as an ASM disk:                        [  OK  ]
[root@n1 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2

[root@n2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@n2 ~]# /etc/init.d/oracleasm listdisks
VOL1
VOL2
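
Individual labels can also be verified with the standard ASMLib query command:

[root@n2 ~]# /etc/init.d/oracleasm querydisk VOL1
[root@n2 ~]# /etc/init.d/oracleasm querydisk VOL2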

[oracle@n1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n n1,n2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "n1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  n2                                    yes
  n1                                    yes
Result: Node reachability check passed from node "n1".


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  n2                                    passed
  n1                                    passed
Result: User equivalence check passed for user "oracle".

Checking administrative privileges...

Check: Existence of user "oracle"
  Node Name     User Exists               Comment
  ------------  ------------------------  ------------------------
  n2            yes                       passed
  n1            yes                       passed
Result: User existence check passed for "oracle".

Check: Existence of group "oinstall"
  Node Name     Status                    Group ID
  ------------  ------------------------  ------------------------
  n2            exists                    500
  n1            exists                    500
Result: Group existence check passed for "oinstall".

Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary      Comment
  ----------------  ------------  ------------  -------------  -----------  ------------
  n2                yes           yes           yes            yes          passed
  n1                yes           yes           yes            yes          passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.

Administrative privileges check passed.

Checking node connectivity...


Interface information for node "n2"
  Interface Name                  IP Address                      Subnet
  ------------------------------  ------------------------------  ----------------
  eth0                            98.0.0.2                        98.0.0.0
  eth1                            20.0.0.2                        20.0.0.0


Interface information for node "n1"
  Interface Name                  IP Address                      Subnet
  ------------------------------  ------------------------------  ----------------
  eth0                            98.0.0.1                        98.0.0.0
  eth1                            20.0.0.1                        20.0.0.0


Check: Node connectivity of subnet "98.0.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  n2:eth0                         n1:eth0                         yes
Result: Node connectivity check passed for subnet "98.0.0.0" with node(s) n2,n1.

Check: Node connectivity of subnet "20.0.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  n2:eth1                         n1:eth1                         yes
Result: Node connectivity check passed for subnet "20.0.0.0" with node(s) n2,n1.

Suitable interfaces for VIP on subnet "98.0.0.0":
n2 eth0:98.0.0.2
n1 eth0:98.0.0.1

Suitable interfaces for VIP on subnet "20.0.0.0":
n2 eth1:20.0.0.2
n1 eth1:20.0.0.1

WARNING:
Could not find a suitable set of interfaces for the private interconnect.

Result: Node connectivity check passed.


Checking system requirements for 'crs'...

Check: Total memory
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  n2            503.36MB (515444KB)       512MB (524288KB)          failed
  n1            503.36MB (515444KB)       512MB (524288KB)          failed
Result: Total memory check failed.

Check: Free disk space in "/tmp" dir
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  n2            2.3GB (2410128KB)         400MB (409600KB)          passed
  n1            1.81GB (1895168KB)        400MB (409600KB)          passed
Result: Free disk space check passed.

Check: Swap space
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  n2            1.46GB (1534196KB)        1GB (1048576KB)           passed
  n1            1.46GB (1534196KB)        1GB (1048576KB)           passed
Result: Swap space check passed.

Check: System architecture
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  n2            i686                      i686                      passed
  n1            i686                      i686                      passed
Result: System architecture check passed.

Check: Kernel version
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  n2            2.6.18-128.el5            2.4.21-15EL               passed
  n1            2.6.18-128.el5            2.4.21-15EL               passed
Result: Kernel version check passed.

Check: Package existence for "make-3.79"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  n2                              make-3.81-3.el5                 passed
  n1                              make-3.81-3.el5                 passed
Result: Package existence check passed for "make-3.79".

Check: Package existence for "binutils-2.14"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  n2                              binutils-2.17.50.0.6-9.el5      passed
  n1                              binutils-2.17.50.0.6-9.el5      passed
Result: Package existence check passed for "binutils-2.14".

Check: Package existence for "gcc-3.2"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  n2                              gcc-4.1.2-44.el5                passed
  n1                              gcc-4.1.2-44.el5                passed
Result: Package existence check passed for "gcc-3.2".

Check: Package existence for "glibc-2.3.2-95.27"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  n2                              glibc-2.5-34                    passed
  n1                              glibc-2.5-34                    passed
Result: Package existence check passed for "glibc-2.3.2-95.27".

Check: Package existence for "compat-db-4.0.14-5"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  n2                              missing                         failed
  n1                              missing                         failed
Result: Package existence check failed for "compat-db-4.0.14-5".

Check: Package existence for "compat-gcc-7.3-2.96.128"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  n2                              missing                         failed
  n1                              missing                         failed
Result: Package existence check failed for "compat-gcc-7.3-2.96.128".

Check: Package existence for "compat-gcc-c++-7.3-2.96.128"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  n2                              missing                         failed
  n1                              missing                         failed
Result: Package existence check failed for "compat-gcc-c++-7.3-2.96.128".

Check: Package existence for "compat-libstdc++-7.3-2.96.128"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  n2                              missing                         failed
  n1                              missing                         failed
Result: Package existence check failed for "compat-libstdc++-7.3-2.96.128".

Check: Package existence for "compat-libstdc++-devel-7.3-2.96.128"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  n2                              missing                         failed
  n1                              missing                         failed
Result: Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".

Check: Package existence for "openmotif-2.2.3"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  n2                              openmotif-2.3.1-2.el5           passed
  n1                              openmotif-2.3.1-2.el5           passed
Result: Package existence check passed for "openmotif-2.2.3".

Check: Package existence for "setarch-1.3-1"
  Node Name                       Status                          Comment
  ------------------------------  ------------------------------  ----------------
  n2                              setarch-2.0-1.1                 passed
  n1                              setarch-2.0-1.1                 passed
Result: Package existence check passed for "setarch-1.3-1".

Check: Group existence for "dba"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  n2            exists                    passed
  n1            exists                    passed
Result: Group existence check passed for "dba".

Check: Group existence for "oinstall"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  n2            exists                    passed
  n1            exists                    passed
Result: Group existence check passed for "oinstall".

Check: User existence for "nobody"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  n2            exists                    passed
  n1            exists                    passed
Result: User existence check passed for "nobody".

System requirement failed for 'crs'

Pre-check for cluster services setup was unsuccessful on all the nodes.

As long as those packages (or newer equivalents) are actually installed, the failures above can be ignored.
[oracle@n1 cluvfy]$ rpm -qa | grep openmotif
openmotif-2.3.1-2.el5
openmotif22-2.2.3-18
openmotif-devel-2.3.1-2.el5
[oracle@n1 cluvfy]$ rpm -qa | grep
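
A quick way to check all of the compat packages that cluvfy flagged (the loop is my own convenience sketch; the package names are from the failed checks above):

for p in compat-db compat-gcc compat-gcc-c++ compat-libstdc++ compat-libstdc++-devel; do
    rpm -qa | grep "^$p" || echo "$p: not installed"
done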


Install CRS:
[oracle@n1 clusterware]$ export LC_ALL=en_US
[oracle@n1 clusterware]$ ./runInstaller -ignoreSysPreReqs
[oracle@n1 clusterware]$ export CRS_HOME=/u01/app/oracle/product/10.2.0/crs_1

In the installer, add node n2, and set the correct type (public or private) for each interface.
=> External Redundancy => Specify OCR Location => /dev/raw/raw2

To check the nodeapps configuration afterwards:
$ORACLE_HOME/bin/srvctl config nodeapps -n n2 -a

OCR:
/dev/raw/raw2

VOTING DISK:
/dev/raw/raw1


==> Run the root scripts:
[root@n2 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@n2 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname n1 for node 1.
assigning default hostname n2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: n1 n1-pri n1
node 2: n2 n2-pri n2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        n1
        n2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/oracle/product/10.2.0/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory


NOTE: this error came up while running root.sh on n2.

Fix:
[root@n2 ~]# vim /u01/app/oracle/product/10.2.0/crs_1/bin/vipca

Around line 120, right after the block that exports LD_ASSUME_KERNEL, add:
       unset LD_ASSUME_KERNEL

[root@n2 ~]# vim /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl

Likewise add the unset after the existing export, so the lines read:
166 LD_ASSUME_KERNEL=2.4.19
167 export LD_ASSUME_KERNEL
168 unset LD_ASSUME_KERNEL
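
For context, the block being patched in the 10.2.0.1 vipca script looks roughly like this (quoted from memory, so treat the exact text as approximate; the added unset is the point):

       arch=`uname -m`
       if [ "$arch" = "i686" -o "$arch" = "ia64" ]
       then
            LD_ASSUME_KERNEL=2.4.19
            export LD_ASSUME_KERNEL
       fi
       unset LD_ASSUME_KERNEL    # added line: newer glibc no longer ships the 2.4.19 ABI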


[root@n2 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)


[root@n1 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
assigning default hostname n1 for node 1.
assigning default hostname n2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: n1 n1-pri n1
node 2: n2 n2-pri n2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw1
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        n1
CSS is inactive on these nodes.
        n2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

If you hit this error:
----------------------------------
Checking existence of VIP node application (required)
Check failed.
Check failed on nodes:
    n2,n1
Fix it like this:
Run vipca manually as root:
[root@n1 ~]# LANG=C
[root@n1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/vipca &


Configure the VIPs:
(see page 68 onward of 三思's guide)

Then go back to the step that failed and hit retry!!!
Please read the documentation before attempting to do anything.
VIPCA is supposed to be run as root and not as Oracle user.
If you are manually invoking VIPCA you may need to set your DISPLAY properly from the root session (it is a gui tool).
If the crs installation is invoking the tool (through root.sh run as root), it would be run in background.
Use cluvfy to check pre/post setup.
If nothing works, start over again. Clean the CRS and reinstall.

----------------------------------
   

[root@n1 ~]# /u01/app/oracle/product/10.2.0/crs_1/bin/crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora.n1.gsd     application    ONLINE    ONLINE    n1         
ora.n1.ons     application    ONLINE    ONLINE    n1         
ora.n1.vip     application    ONLINE    ONLINE    n1         
ora.n2.gsd     application    ONLINE    ONLINE    n2         
ora.n2.ons     application    ONLINE    ONLINE    n2         
ora.n2.vip     application    ONLINE    ONLINE    n2         
[1]+  Done                    /u01/app/oracle/product/10.2.0/crs_1/bin/vipca



Install the database software:
[oracle@n1 ~]$ cd database/
[oracle@n1 database]$ ls
doc  install  response  runInstaller  stage  welcome.html
[oracle@n1 database]$ ./runInstaller &

Select the other node as well, then install the software only (no database yet).

Partway through the install I hit an error about time synchronization between the nodes; I just ignored it!


Create the database and the ASM instance:
dbca &
=> Oracle Real Application Clusters database
=> Create a database
=> Select all the nodes
=> Custom database
=> Type "r", since our SIDs are r1 and r2
Two things are specified here: the global name and a SID prefix. Note that it really is a prefix: Oracle assigns each node's SID automatically. For example, with the SID prefix racdb, node1's SID becomes racdb1 and node2's becomes racdb2.
=> Untick "Configure the database with Enterprise Manager"
=> Type the password (oracle)
=> ASM
=> Set the sys password for the ASM instance (ora) and tick "Create initialization parameter file (IFILE)"
At this point an error appears: failed to retrieve network listener resources required for Real Application Clusters high availability extensions configuration on the following nodes: [n1,n2]
Do you want listeners on port 1521 with prefix LISTENER to be created on nodes [n1,n2] automatically? If you would like to configure the listener with different properties, run NetCA before continuing.

press "Yes"


=> Create New => Type rac_disk => Choose External => Choose all the devices
=> Next
=> OMF
=> Not going to use archiving here; press Next
=> Untick all
=> Choose the correct character set, and untick "remote_listener" under the All Initialization Parameters button
=> Redo Log Groups

The database creation scripts are generated under:

/u01/app/oracle/admin/r/scripts

[oracle@n1 bin]$ pwd
/u01/app/oracle/product/10.2.0/crs_1/bin
[oracle@n1 bin]$ ./crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    n1         
ora....N1.lsnr application    ONLINE    ONLINE    n1         
ora.n1.gsd     application    ONLINE    ONLINE    n1         
ora.n1.ons     application    ONLINE    ONLINE    n1         
ora.n1.vip     application    ONLINE    ONLINE    n1         
ora....SM2.asm application    ONLINE    ONLINE    n2         
ora....N2.lsnr application    ONLINE    ONLINE    n2         
ora.n2.gsd     application    ONLINE    UNKNOWN   n2         
ora.n2.ons     application    ONLINE    UNKNOWN   n2         
ora.n2.vip     application    ONLINE    ONLINE    n2         
ora.r.db       application    ONLINE    ONLINE    n2         
ora.r.r1.inst  application    ONLINE    ONLINE    n1         
ora.r.r2.inst  application    ONLINE    ONLINE    n2    

[oracle@n1 bin]$ ./crs_start ora.r.r1.inst
Attempting to start `ora.r.r1.inst` on member `n1`
Start of `ora.r.r1.inst` on member `n1` succeeded.
[oracle@n1 bin]$ ./crs_start ora.n2.gsd  
CRS-1028: Dependency analysis failed because of:
'Resource in UNKNOWN state: ora.n2.gsd'

CRS-0223: Resource 'ora.n2.gsd' has placement error.

[oracle@n1 bin]$ ./crs_start ora.n2.ons
CRS-1028: Dependency analysis failed because of:
'Resource in UNKNOWN state: ora.n2.ons'

CRS-0223: Resource 'ora.n2.ons' has placement error.


[oracle@n1 bin]$ ./crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    n1         
ora....N1.lsnr application    ONLINE    ONLINE    n1         
ora.n1.gsd     application    ONLINE    ONLINE    n1         
ora.n1.ons     application    ONLINE    ONLINE    n1         
ora.n1.vip     application    ONLINE    ONLINE    n1         
ora....SM2.asm application    ONLINE    ONLINE    n2         
ora....N2.lsnr application    ONLINE    ONLINE    n2         
ora.n2.gsd     application    ONLINE    UNKNOWN   n2         
ora.n2.ons     application    ONLINE    UNKNOWN   n2         
ora.n2.vip     application    ONLINE    ONLINE    n2         
ora.r.db       application    ONLINE    ONLINE    n2         
ora.r.r1.inst  application    ONLINE    ONLINE    n1         
ora.r.r2.inst  application    ONLINE    ONLINE    n2   

crs_stop -all


When connecting with sqlplus, note that you cannot log in the single-instance way ("sqlplus / as sysdba") and start the instance from there;
connect as "sqlplus sys/oracle@r1 as sysdba" instead, or you will get this error:
ORA-00304: requested INSTANCE_NUMBER is busy



[oracle@n2 ~]$ echo $ORACLE_SID
r2
[oracle@n2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Apr 5 12:52:11 2011

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORA-00304: requested INSTANCE_NUMBER is busy

[oracle@n2 ~]$ sqlplus sys/oracle@r2 as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Apr 5 12:59:24 2011

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> select status from v$instance;

STATUS
------------
OPEN


After I finally rebooted the systems, the instances would not come up automatically and crs_stat -t was very slow. Today I looked into it and found the time synchronization service was not running; after enabling ntpd and rebooting, all the missing processes sprang right up...
Also, be sure a default gateway is configured on every node, or all sorts of other problems are likely to appear...
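
A minimal sketch of the gateway setting on RHEL 5 (the address here is purely illustrative; use your real gateway):

# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=n1
GATEWAY=20.0.0.254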

While building RAC I could not escape the time-service problem either. I assumed that once the clocks were set they would not drift much, but within just a few minutes the two nodes were already several minutes apart again, which was endlessly frustrating. A cron job syncing every 2 minutes did not help either; in the end ntpd solved it. Thanks to ntpd !!!!
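
Enabling ntpd persistently on RHEL 5 is just (stock init scripts):

[root@n1 ~]# service ntpd start
[root@n1 ~]# chkconfig ntpd on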
Some small problems still remain to be solved.

The build took a grueling 10+ hours. On top of all the installation errors, VMware chipped in too: the installer hung more than once, and while the database was being created a VM shut itself down (shut down, mind you, not rebooted!!). That accident did teach me one thing: after node1 finishes installing, the installed files are copied over to node2 via scp, because when the machine went down the installer complained that port 22 was unreachable.

Ill-fated RAC!!!!

Here is a shot of the finished database, as a small keepsake :)







&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
Update, April 5:
The "small problems" I mentioned earlier: some resources kept flapping into UNKNOWN for no obvious reason...


[oracle@n1 ~]$ crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....SM1.asm application    ONLINE    UNKNOWN   n1         
ora....N1.lsnr application    ONLINE    ONLINE    n1         
ora.n1.gsd     application    ONLINE    UNKNOWN   n1         
ora.n1.ons     application    ONLINE    UNKNOWN   n1         
ora.n1.vip     application    ONLINE    ONLINE    n1         
ora....SM2.asm application    ONLINE    ONLINE    n2         
ora....N2.lsnr application    ONLINE    ONLINE    n2         
ora.n2.gsd     application    ONLINE    ONLINE    n2         
ora.n2.ons     application    ONLINE    ONLINE    n2         
ora.n2.vip     application    ONLINE    ONLINE    n2         
ora.r.db       application    ONLINE    UNKNOWN   n2         
ora.r.r1.inst  application    OFFLINE   OFFLINE              
ora.r.r2.inst  application    ONLINE    UNKNOWN   n2         

Trying to bring them back up just threw more errors...
[oracle@n1 ~]$ crs_start ora.n1.gsd
CRS-1028: Dependency analysis failed because of:
'Resource in UNKNOWN state: ora.n1.gsd'

CRS-0223: Resource 'ora.n1.gsd' has placement error.


Unbelievable. This was going to be the death of me...

Later I tried stopping these resources and starting them again, and to my surprise that fixed them...
[oracle@n1 ~]$ crs_stop ora.n1.ASM1.asm
Attempting to stop `ora.n1.ASM1.asm` on member `n1`
Stop of `ora.n1.ASM1.asm` on member `n1` succeeded.
[oracle@n1 ~]$ crs_start ora.n1.ASM1.asm
Attempting to start `ora.n1.ASM1.asm` on member `n1`
Start of `ora.n1.ASM1.asm` on member `n1` succeeded.
[oracle@n1 ~]$ crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    n1 
ora....N1.lsnr application    ONLINE    ONLINE    n1         
ora.n1.gsd     application    ONLINE    ONLINE    n1         
ora.n1.ons     application    ONLINE    ONLINE    n1         
ora.n1.vip     application    ONLINE    ONLINE    n1         
ora....SM2.asm application    ONLINE    ONLINE    n2         
ora....N2.lsnr application    ONLINE    ONLINE    n2         
ora.n2.gsd     application    ONLINE    ONLINE    n2         
ora.n2.ons     application    ONLINE    ONLINE    n2         
ora.n2.vip     application    ONLINE    ONLINE    n2         
ora.r.db       application    OFFLINE   OFFLINE              
ora.r.r1.inst  application    OFFLINE   OFFLINE              
ora.r.r2.inst  application    ONLINE    UNKNOWN   n2  



The last two resources are the per-node instances, and there is also a db resource. I figured starting the db would probably bring the instances up first, so I tried starting the db directly, and that worked too... apparently there is some logic behind the flakiness after all...
[oracle@n1 ~]$ crs_start ora.r.db
Attempting to start `ora.r.db` on member `n1`

Start of `ora.r.db` on member `n1` succeeded.
[oracle@n1 ~]$
[oracle@n1 ~]$ crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    n1         
ora....N1.lsnr application    ONLINE    ONLINE    n1         
ora.n1.gsd     application    ONLINE    ONLINE    n1         
ora.n1.ons     application    ONLINE    ONLINE    n1         
ora.n1.vip     application    ONLINE    ONLINE    n1         
ora....SM2.asm application    ONLINE    ONLINE    n2         
ora....N2.lsnr application    ONLINE    ONLINE    n2         
ora.n2.gsd     application    ONLINE    ONLINE    n2         
ora.n2.ons     application    ONLINE    ONLINE    n2         
ora.n2.vip     application    ONLINE    ONLINE    n2         
ora.r.db       application    ONLINE    ONLINE    n1         
ora.r.r1.inst  application    ONLINE    ONLINE    n1         
ora.r.r2.inst  application    ONLINE    ONLINE    n2       


Beautiful... everything is finally ONLINE...
