太懒
Category: Oracle
2013-05-22 12:21:02
Oracle 11g R2 (11.2.0.3) + Oracle Linux
Preparation
Machine     | OS                        | RAM | Role
------------|---------------------------|-----|--------------
10.101.0.17 | CentOS release 6.3 x86_64 | 8G  | iSCSI target
10.101.5.70 | Oracle Linux 6.4 i386     | 1G  | node1
10.101.5.71 | Oracle Linux 6.4 i386     | 1G  | node2
The hosts entries:
# add_to_hosts.txt
# iscsi-target
10.101.0.17 my2950.momo.org my2950
10.101.5.70 node1.momo.org node1
10.101.5.72 node1-vip.momo.org node1-vip
192.168.199.70 node1-priv
10.101.5.71 node2.momo.org node2
10.101.5.73 node2-vip.momo.org node2-vip
192.168.199.71 node2-priv
#SCAN
10.101.5.77 rac-scan1.momo.org rac-scan1
10.101.5.76 rac-scan2.momo.org rac-scan2
#end
First, configure the iSCSI service on the iscsi-target machine.
Start by carving out the volumes:
ocr1, ocr2, ocr3: 2G each (kept around for OCR-recovery testing later; production rarely needs this, since data protection there is usually provided by the storage layer)
asm1, asm2: 30G each
redolog: 5G
archivelog: 10G
An old RAC was still using this target, so stop the iSCSI service first,
then remove the old logical volumes.
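(The stop itself isn't shown in the transcript; on this box it would be along these lines:)
/etc/init.d/tgtd stop    # stop the target daemon before touching its backing volumes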
[root@my2950 puppet]# vgdisplay datavg
--- Volume group ---
VG Name datavg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 23
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 11
Open LV 6
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.82 TiB
PE Size 64.00 MiB
Total PE 29808
Alloc PE / Size 12096 / 756.00 GiB
Free PE / Size 17712 / 1.08 TiB
VG UUID UYsB63-S48C-PYhK-airN-siRh-yxAM-dWQiDN
[root@my2950 puppet]#
[root@my2950 puppet]# lvremove /dev/datavg/archivelog
Do you really want to remove active logical volume archivelog? [y/n]: y
Logical volume "archivelog" successfully removed
[root@my2950 puppet]#
[root@my2950 puppet]# lvremove /dev/datavg/redolog
Do you really want to remove active logical volume redolog? [y/n]: y
Logical volume "redolog" successfully removed
[root@my2950 puppet]# lvremove /dev/datavg/asm2
Do you really want to remove active logical volume asm2? [y/n]: y
Logical volume "asm2" successfully removed
[root@my2950 puppet]# lvremove /dev/datavg/asm1
Do you really want to remove active logical volume asm1? [y/n]: y
Logical volume "asm1" successfully removed
[root@my2950 puppet]# lvremove /dev/datavg/asr1
One or more specified logical volume(s) not found.
[root@my2950 puppet]# lvremove /dev/datavg/ocr1
Do you really want to remove active logical volume ocr1? [y/n]: y
Logical volume "ocr1" successfully removed
[root@my2950 puppet]# lvremove /dev/datavg/ocr2
Do you really want to remove active logical volume ocr2? [y/n]: y
Logical volume "ocr2" successfully removed
[root@my2950 puppet]#
Then create the new logical volumes:
[root@my2950 puppet]# lvcreate -L 2G -n ocr1 datavg
Logical volume "ocr1" created
[root@my2950 puppet]# lvcreate -L 2G -n ocr2 datavg
Logical volume "ocr2" created
[root@my2950 puppet]# lvcreate -L 2G -n ocr3 datavg
Logical volume "ocr3" created
[root@my2950 puppet]# lvcreate -L 30G -n asm1 datavg
Logical volume "asm1" created
[root@my2950 puppet]# lvcreate -L 30G -n asm2 datavg
Logical volume "asm2" created
[root@my2950 puppet]# lvcreate -L 5G -n redolog datavg
Logical volume "redolog" created
[root@my2950 puppet]# lvcreate -L 10G -n archivelog datavg
Logical volume "archivelog" created
Next, edit the iSCSI configuration file; remember to back it up first.
[root@my2950 puppet]# cd /etc/tgt/
[root@my2950 tgt]#
[root@my2950 tgt]#
[root@my2950 tgt]# ls
targets.conf targets.conf.orig
[root@my2950 tgt]#
[root@my2950 tgt]#
[root@my2950 tgt]# cp targets.conf targets.conf.20130517
Taking ocr3 as an example:
<target iqn.2013-05.03.org.momo:ocr3>
    backing-store /dev/datavg/ocr3
    write-cache off
    lun 2
    HeaderDigest None
    DataDigest None
    InitialR2T Yes
    ImmediateData No
    MaxRecvDataSegmentLength 8192
    MaxXmitDataSegmentLength 8192
    MaxBurstLength 262144
    FirstBurstLength 65536
    DefaultTime2Wait 2
    DefaultTime2Retain 20
    MaxOutstandingR2T 8
    DataPDUInOrder Yes
    DataSequenceInOrder Yes
    ErrorRecoveryLevel 0
    #MaxConnections 1
    #initiator-address 10.101.7.123
</target>
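The other six targets follow the same pattern, each with its own IQN and backing store. A sketch that emits all seven stanzas in one pass (the LUN numbers here are simply sequential and purely illustrative; the tgt-admin output below shows slightly different ones):
n=1
for vol in ocr1 ocr2 ocr3 asm1 asm2 redolog archivelog; do
    printf '<target iqn.2013-05.%02d.org.momo:%s>\n' "$n" "$vol"
    echo "    backing-store /dev/datavg/$vol"
    echo "    write-cache off"
    echo "    lun $n"
    echo '</target>'
    n=$((n+1))
done >> /etc/tgt/targets.conf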
After editing, restart the tgt service.
If a firewall is running, open the iSCSI port first (3260 by default).
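A sketch, assuming the stock iptables/chkconfig tooling on CentOS 6:
iptables -I INPUT -p tcp --dport 3260 -j ACCEPT    # open the iSCSI port
service iptables save                              # persist the rule
chkconfig tgtd on                                  # and start tgtd at boot from now on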
[root@my2950 tgt]# /etc/init.d/tgtd restart
[root@my2950 tgt]# tgt-admin --show | more
Target 1: iqn.2013-05.01.org.momo:ocr1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 2147 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/datavg/ocr1
Backing store flags:
Account information:
ACL information:
ALL
Target 2: iqn.2013-05.02.org.momo:ocr2
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00020000
SCSI SN: beaf20
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 2
Type: disk
SCSI ID: IET 00020002
SCSI SN: beaf22
Size: 2147 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/datavg/ocr2
Backing store flags:
Account information:
ACL information:
ALL
Target 3: iqn.2013-05.03.org.momo:ocr3
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00030000
SCSI SN: beaf30
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 2
Type: disk
SCSI ID: IET 00030002
SCSI SN: beaf32
Size: 2147 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/datavg/ocr3
Backing store flags:
Account information:
ACL information:
ALL
Target 4: iqn.2013-05.04.org.momo:asm1
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00040000
SCSI SN: beaf40
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 3
Type: disk
SCSI ID: IET 00040003
SCSI SN: beaf43
Size: 32212 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/datavg/asm1
Backing store flags:
Account information:
ACL information:
ALL
Target 5: iqn.2013-05.05.org.momo:asm2
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00050000
SCSI SN: beaf50
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 4
Type: disk
SCSI ID: IET 00050004
SCSI SN: beaf54
Size: 32212 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/datavg/asm2
Backing store flags:
Account information:
ACL information:
ALL
Target 6: iqn.2013-05.06.org.momo:redolog
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00060000
SCSI SN: beaf60
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 5
Type: disk
SCSI ID: IET 00060005
SCSI SN: beaf65
Size: 5369 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/datavg/redolog
Backing store flags:
Account information:
ACL information:
ALL
Target 7: iqn.2013-05.07.org.momo:archivelog
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00070000
SCSI SN: beaf70
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 6
Type: disk
SCSI ID: IET 00070006
SCSI SN: beaf76
Size: 10737 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/datavg/archivelog
Backing store flags:
Account information:
ACL information:
ALL
[root@my2950 tgt]#
That completes the iSCSI server-side configuration; now over to the clients.
[root@node1 ~]# ping my2950.momo.org
PING my2950.momo.org (10.101.0.17) 56(84) bytes of data.
64 bytes from my2950.momo.org (10.101.0.17): icmp_seq=1 ttl=64 time=0.894 ms
^C
--- my2950.momo.org ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 606ms
rtt min/avg/max/mdev = 0.894/0.894/0.894/0.000 ms
[root@node1 ~]#
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p my2950.momo.org
Starting iscsid: [ OK ]
10.101.0.17:3260,1 iqn.2013-05.01.org.momo:ocr1
10.101.0.17:3260,1 iqn.2013-05.02.org.momo:ocr2
10.101.0.17:3260,1 iqn.2013-05.03.org.momo:ocr3
10.101.0.17:3260,1 iqn.2013-05.04.org.momo:asm1
10.101.0.17:3260,1 iqn.2013-05.05.org.momo:asm2
10.101.0.17:3260,1 iqn.2013-05.06.org.momo:redolog
10.101.0.17:3260,1 iqn.2013-05.07.org.momo:archivelog
[root@node1 ~]#
iscsiadm -m node -T iqn.2013-05.01.org.momo:ocr1 --login
iscsiadm -m node -T iqn.2013-05.02.org.momo:ocr2 --login
iscsiadm -m node -T iqn.2013-05.03.org.momo:ocr3 --login
iscsiadm -m node -T iqn.2013-05.04.org.momo:asm1 --login
iscsiadm -m node -T iqn.2013-05.05.org.momo:asm2 --login
iscsiadm -m node -T iqn.2013-05.06.org.momo:redolog --login
iscsiadm -m node -T iqn.2013-05.07.org.momo:archivelog --login
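These logins don't survive a reboot by themselves. The node records can also be flagged for automatic login (not part of the original transcript):
n=1
for t in ocr1 ocr2 ocr3 asm1 asm2 redolog archivelog; do
    iscsiadm -m node -T "iqn.2013-05.0${n}.org.momo:${t}" \
        --op update -n node.startup -v automatic
    n=$((n+1))
done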
fdisk -l now shows the extra disks:
Disk /dev/sdf: 2147 MB, 536870912 bytes
17 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 1037 * 512 = 530944 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdg: 2147 MB, 536870912 bytes
17 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 1037 * 512 = 530944 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdh: 2147 MB, 536870912 bytes
17 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 1037 * 512 = 530944 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdi: 32.2 GB, 32212254720 bytes
64 heads, 32 sectors/track, 30720 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdj: 32.2 GB, 32212254720 bytes
64 heads, 32 sectors/track, 30720 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdk: 5368 MB, 5368709120 bytes
166 heads, 62 sectors/track, 1018 cylinders
Units = cylinders of 10292 * 512 = 5269504 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdl: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@node1 ~]#
[root@node1 ~]# ll /dev/sd*
brw-rw---- 1 root disk 8, 0 May 17 16:30 /dev/sda
brw-rw---- 1 root disk 8, 1 May 17 16:30 /dev/sda1
brw-rw---- 1 root disk 8, 2 May 17 16:30 /dev/sda2
brw-rw---- 1 root disk 8, 3 May 17 16:30 /dev/sda3
brw-rw---- 1 root disk 8, 16 May 17 16:30 /dev/sdb
brw-rw---- 1 root disk 8, 17 May 17 16:30 /dev/sdb1
brw-rw---- 1 root disk 8, 32 May 17 16:30 /dev/sdc
brw-rw---- 1 root disk 8, 33 May 17 16:30 /dev/sdc1
brw-rw---- 1 root disk 8, 48 May 17 16:30 /dev/sdd
brw-rw---- 1 root disk 8, 64 May 17 16:45 /dev/sde
brw-rw---- 1 root disk 8, 80 May 17 17:53 /dev/sdf
brw-rw---- 1 root disk 8, 96 May 17 17:53 /dev/sdg
brw-rw---- 1 root disk 8, 112 May 17 17:53 /dev/sdh
brw-rw---- 1 root disk 8, 128 May 17 17:53 /dev/sdi
brw-rw---- 1 root disk 8, 144 May 17 17:53 /dev/sdj
brw-rw---- 1 root disk 8, 160 May 17 17:53 /dev/sdk
brw-rw---- 1 root disk 8, 176 May 17 17:53 /dev/sdl
[root@node1 ~]#
Using udev
This is a RHEL 6-series system:
echo "options=--whitelisted --replace-whitespace" >> /etc/scsi_id.config
The new devices are sdf through sdl; generate a rule for each:
for i in f g h i j k l ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"oinstall\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1IET_00010001", NAME="asm-diskf", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1IET_00020002", NAME="asm-diskg", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1IET_00030002", NAME="asm-diskh", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1IET_00040003", NAME="asm-diski", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1IET_00050004", NAME="asm-diskj", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1IET_00060005", NAME="asm-diskk", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1IET_00070006", NAME="asm-diskl", OWNER="oracle", GROUP="oinstall", MODE="0660"
The loop appends these rules to /etc/udev/rules.d/99-oracle-asmdevices.rules.
Then run /sbin/start_udev:
[root@node1 ~]# ll /dev/asm*
brw-rw---- 1 oracle oinstall 8, 80 May 17 18:15 /dev/asm-diskf
brw-rw---- 1 oracle oinstall 8, 96 May 17 18:15 /dev/asm-diskg
brw-rw---- 1 oracle oinstall 8, 112 May 17 18:15 /dev/asm-diskh
brw-rw---- 1 oracle oinstall 8, 128 May 17 18:15 /dev/asm-diski
brw-rw---- 1 oracle oinstall 8, 144 May 17 18:15 /dev/asm-diskj
brw-rw---- 1 oracle oinstall 8, 160 May 17 18:15 /dev/asm-diskk
brw-rw---- 1 oracle oinstall 8, 176 May 17 18:15 /dev/asm-diskl
[root@node1 ~]#
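If one of the devices fails to appear, the rule can be dry-run against a single disk before re-running start_udev; a sketch for sdf:
udevadm test /block/sdf 2>&1 | grep asm-disk    # show what udev would do for sdf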
Run the same steps once more on node2.
The commands involved, collected:
ping my2950.momo.org
iscsiadm -m discovery -t sendtargets -p my2950.momo.org
iscsiadm -m node -T iqn.2013-05.01.org.momo:ocr1 --login
iscsiadm -m node -T iqn.2013-05.02.org.momo:ocr2 --login
iscsiadm -m node -T iqn.2013-05.03.org.momo:ocr3 --login
iscsiadm -m node -T iqn.2013-05.04.org.momo:asm1 --login
iscsiadm -m node -T iqn.2013-05.05.org.momo:asm2 --login
iscsiadm -m node -T iqn.2013-05.06.org.momo:redolog --login
iscsiadm -m node -T iqn.2013-05.07.org.momo:archivelog --login
for i in f g h i j k l ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"dba\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
/etc/udev/rules.d/99-oracle-asmdevices.rules
/sbin/start_udev
Configure SSH user equivalence
[root@node1 ~]#
[root@node1 ~]# su - oracle
[oracle@node1 ~]$
[oracle@node1 ~]$ id oracle
uid=1000(oracle) gid=1000(oinstall) groups=1000(oinstall),1100(dba),1200(oper)
[oracle@node1 ~]$
[oracle@node1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
50:86:8f:74:cd:05:37:4e:b5:eb:f8:3f:fd:fb:ac:8e oracle@node1
The key's randomart image is:
+--[ RSA 2048]----+
| .oo.o=.. |
| oo. o+ . . |
| ..+ . . |
| ... . |
| S . |
| o |
| . . .|
| o o.|
| E.++X|
+-----------------+
[oracle@node1 ~]$
[oracle@node1 ~]$ pwd
/home/oracle
[oracle@node1 ~]$ cd .ssh/
[oracle@node1 .ssh]$ ls
id_rsa  id_rsa.pub  known_hosts
[oracle@node1 .ssh]$
[oracle@node1 .ssh]$ more id_rsa.pub >> authorized_keys
[oracle@node1 .ssh]$
[oracle@node1 ~]$
[oracle@node1 ~]$ scp -r ~/.ssh oracle@node2:~/
The authenticity of host 'node2 (10.101.5.71)' can't be established.
RSA key fingerprint is a2:a3:bd:7d:da:3b:9a:f8:3c:24:c0:ce:45:1c:df:d3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,10.101.5.71' (RSA) to the list of known hosts.
oracle@node2's password:
id_rsa 0% 0 0.0KB/s --:-- ETA
id_rsa 100% 1675 1.6KB/s 00:00
id_rsa.pub 0% 0 0.0KB/s --:-- ETA
id_rsa.pub 100% 394 0.4KB/s 00:00
known_hosts 0% 0 0.0KB/s --:-- ETA
known_hosts 100% 399 0.4KB/s 00:00
[oracle@node1 ~]$
[oracle@node1 ~]$
Switch over to node2:
[root@node2 ~]# su - oracle
[oracle@node2 ~]$
[oracle@node2 ~]$ cd .ssh/
[oracle@node2 .ssh]$ ls -l
total 16
-rw-r--r-- 1 oracle oinstall 394 May 21 09:53 authorized_keys
-rw------- 1 oracle oinstall 1675 May 21 09:53 id_rsa
-rw-r--r-- 1 oracle oinstall 394 May 21 09:53 id_rsa.pub
-rw-r--r-- 1 oracle oinstall 399 May 21 09:53 known_hosts
[oracle@node2 .ssh]$
[oracle@node2 .ssh]$ ssh oracle@node1
The authenticity of host 'node1 (10.101.5.70)' can't be established.
RSA key fingerprint is 13:50:e2:01:6d:39:cf:89:ba:64:c2:1a:7e:e0:9a:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,10.101.5.70' (RSA) to the list of known hosts.
Last login: Tue May 21 09:53:56 2013 from node2.momo.org
[oracle@node1 ~]$
SSH user equivalence is set up.
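A quick check from each node confirms that no password or host-key prompt remains (the installer requires this):
# run as oracle on node1, then repeat on node2
for h in node1 node2; do ssh "$h" hostname; done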
Next step: run runcluvfy.sh.
[oracle@node1 database]$ cd /nfs/oracle11G/i386/grid/
[oracle@node1 grid]$ pwd
/nfs/oracle11G/i386/grid
[oracle@node1 grid]$ ls
doc/ readme.html* rpm/ runInstaller* stage/
install/ response/ runcluvfy.sh* sshsetup/ welcome.html*
[oracle@node1 grid]$ ./runcluvfy.sh --help
ERROR:
Invalid command line syntax.
USAGE:
runcluvfy.sh [-help|-version]
runcluvfy.sh stage {-list|-help}
runcluvfy.sh stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
runcluvfy.sh comp {-list|-help}
runcluvfy.sh comp <component-name> <component-specific options> [-verbose]
[oracle@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "node1"
Destination Node Reachable?
------------------------------------ ------------------------
node1 yes
node2 yes
Result: Node reachability check passed from node "node1"
Checking user equivalence...
Check: User equivalence for user "oracle"
Node Name Status
------------------------------------ ------------------------
node2 passed
node1 passed
Result: User equivalence check passed for user "oracle"
ERROR:
Reference data is not available for verifying prerequisites on this operating system distribution
Verification cannot proceed
Pre-check for cluster services setup was unsuccessful on all the nodes.
[oracle@node1 grid]$
The OS is Oracle Linux 6.4 i386, probably too new for cluvfy's reference data. Never mind; install anyway and see what errors come up.
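(A workaround often mentioned for this "reference data" error is telling cluvfy/OUI to assume a distribution it knows, via the CV_ASSUME_DISTID environment variable; the correct value is version-dependent and it is not used in this walkthrough:)
export CV_ASSUME_DISTID=OEL4    # illustrative value only; check what your release expects
./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose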
First, install cvuqdisk:
[root@node1 ~]# cd /nfs/oracle11G/i386/grid/rpm/
[root@node1 rpm]# ll
total 12
-rwxr-xr-x 1 root root 8233 Sep 22 2011 cvuqdisk-1.0.9-1.rpm
[root@node1 rpm]# rpm -Uhv cvuqdisk-1.0.9-1.rpm
Preparing... ########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk ########################################### [100%]
[root@node1 rpm]#
The same on node2:
[oracle@node2 .ssh]$ su -
Password:
[root@node2 ~]# cd /nfs/oracle11G/i386/grid/rpm/
[root@node2 rpm]# ll
total 12
-rwxr-xr-x 1 root root 8233 Sep 22 2011 cvuqdisk-1.0.9-1.rpm
[root@node2 rpm]# rpm -Uhv cvuqdisk-1.0.9-1.rpm
Preparing... ########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk ########################################### [100%]
[root@node2 rpm]#