Category: Oracle
2011-08-16 11:18:26
1. Create users and groups
Create the oinstall, asmadmin, asmdba, asmoper, dba, and oper groups:
#groupadd -g 500 oinstall
#groupadd -g 501 asmadmin
#groupadd -g 502 asmdba
#groupadd -g 503 asmoper
#groupadd -g 504 dba
#groupadd -g 505 oper
Create the grid and oracle users:
#useradd -u 500 -d /home/grid -g 500 -G asmadmin,asmdba,asmoper grid
#useradd -u 501 -d /home/oracle -g 500 -G dba,oper,asmdba oracle
The descriptions and mappings are listed in the table below.
Description | OS Group Name | OS Users Assigned to this Group | Oracle Privilege | Oracle Group Name
Oracle Inventory and Software Owner | oinstall | grid, oracle | |
Oracle Automatic Storage Management Group | asmadmin | grid | SYSASM | OSASM
ASM Database Administrator Group | asmdba | grid, oracle | SYSDBA for ASM | OSDBA for ASM
ASM Operator Group | asmoper | grid | SYSOPER for ASM | OSOPER for ASM
Database Administrator | dba | oracle | SYSDBA | OSDBA
Database Operator | oper | oracle | SYSOPER | OSOPER
Note 1: The user and group IDs created on the two hosts must be identical; ideally each user's primary group should match as well.
Note 2: The user-to-group mapping must strictly follow the table above, otherwise errors will occur later when installing grid and creating the database.
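Note 1 can be checked mechanically. Below is a minimal sketch, assuming the `id` output of each node is collected first; the `ids_match` helper name and the ssh-based collection shown in the comment are ours, not from the original text:

```shell
# Compare the `id` output for one user as reported by the two nodes.
# The strings must match exactly (uid, gid, and group list).
ids_match() {
    # $1, $2: output of `id <user>` from each node, e.g.
    # "uid=500(grid) gid=500(oinstall) groups=..."
    [ -n "$1" ] && [ "$1" = "$2" ]
}

# Typical use (hostnames are the ones from this setup):
#   ids_match "$(id grid)" "$(ssh tydic64 id grid)" || echo "grid ids differ"
```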
2. Create the file system or directories
In this deployment the two root disks were mirrored with RAID 1 and then carved up with LVM. The steps to create a new file system are as follows (the root disks belong to the vg00 volume group):
#lvcreate -L 20480 -n lvu09 vg00  #create a 20G logical volume lvu09
#mkfs.ext3 /dev/vg00/lvu09
#mkdir /u09
#vi /etc/fstab  #add an entry so the file system is mounted automatically at boot
#mount /dev/vg00/lvu09 /u09
#cd /u09
#mkdir -p /u09/app/grid
#mkdir -p /u09/app/11.2.0/grid
#mkdir -p /u09/app/oracle
#chown -R grid:oinstall /u09
#chown oracle:oinstall /u09/app/oracle
If you are not using LVM, simply create the directories under / and grant the permissions.
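The fstab entry referred to above might look like the following; the device and mount point are the ones used in this setup, while the mount options are typical defaults rather than values from the original text:

```
/dev/vg00/lvu09    /u09    ext3    defaults    1 2
```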
3. Adjust system parameters
3.1 Edit /etc/sysctl.conf, adding or modifying the following parameters:
kernel.shmmni=4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.file-max=6815744
fs.aio-max-nr=1048576
3.2 Add the following to /etc/security/limits.conf:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
3.3 Add the following to /etc/pam.d/login:
session required /lib/security/pam_limits.so
session required pam_limits.so
3.4 Add the following to /etc/profile:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
3.5 Edit the /etc/hosts file:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
#public
192.168.161.63 tydic63
192.168.161.64 tydic64
#private
10.10.10.1 tydic63-priv
10.10.10.2 tydic64-priv
#virtual
192.168.161.73 tydic63-vip
192.168.161.74 tydic64-vip
#scan
192.168.161.65 tydic-cluster-scan
3.6 Configure the network interfaces
On both machines, put eth0 on the 192.168.161.0/24 network and eth1 on the 10.10.10.0/24 network:
#ifconfig eth0 192.168.161.63 netmask 255.255.255.0
#ifconfig eth1 10.10.10.1 netmask 255.255.255.0
3.7 Configure the gateway
Edit /etc/sysconfig/network-scripts/ifcfg-eth0
and set the default gateway to 192.168.161.1.
To apply it immediately:
# route add default gw 192.168.161.1
3.8 Configure DNS
Edit /etc/sysconfig/network-scripts/ifcfg-eth0
and set the DNS server to 202.96.134.133.
Edit /etc/resolv.conf accordingly.
3.9 Set the hostname
#hostname tydic63
After completing the changes above, run
#/sbin/sysctl -p
to apply them, or reboot the host (personally I recommend a reboot, to confirm the host comes back up cleanly after the changes).
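After `sysctl -p`, the kernel parameters from section 3.1 can be spot-checked with a sketch like the following; `check_param` is a name we introduce:

```shell
# Spot-check a kernel parameter against its expected value. If the
# running value is not supplied as $3, it is read with `sysctl -n`.
check_param() {
    name=$1; expected=$2
    actual=${3:-$(sysctl -n "$name" 2>/dev/null)}
    [ "$actual" = "$expected" ]
}

# Typical use after `sysctl -p`:
#   check_param fs.file-max 6815744 || echo "fs.file-max not applied"
```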
3.10 Configure environment variables
Set the grid user's environment variables (ORACLE_SID=+ASM1 on tydic63, ORACLE_SID=+ASM2 on tydic64):
export ORACLE_HOSTNAME=tydic63
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/u09/app/grid; export ORACLE_BASE
ORACLE_HOME=/u09/app/11.2.0/grid; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u09/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
Set the oracle user's environment variables (ORACLE_SID=oratest1 on tydic63, ORACLE_SID=oratest2 on tydic64):
export ORACLE_HOSTNAME=tydic63
ORACLE_SID=oratest1; export ORACLE_SID
ORACLE_UNQNAME=oratest; export ORACLE_UNQNAME
ORACLE_BASE=/u09/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u09/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
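After sourcing the profiles above, a quick sanity check that the key variables are actually set can look like the following sketch; `require_env` is a name we introduce here:

```shell
# Fail (and name the culprit) if any of the listed environment
# variables is unset or empty.
require_env() {
    for v in "$@"; do
        eval "val=\${$v}"
        [ -n "$val" ] || { echo "missing: $v"; return 1; }
    done
}

# Typical use after logging in as grid or oracle:
#   require_env ORACLE_SID ORACLE_BASE ORACLE_HOME || exit 1
```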
4. System packages required for installing the database on CentOS 5.3 64-bit:
binutils
compat-libstdc++
elfutils-libelf
elfutils-libelf-devel
glibc
glibc-common
glibc-devel
gcc
gcc-c++
libaio
libaio-devel (i386 as well)
libgcc
libstdc++
libstdc++-devel
make
sysstat
unixODBC (i386 as well)
unixODBC-devel (i386 as well)
pdksh
Note: all of the packages above must be installed as 64-bit (x86_64); for the packages marked "(i386 as well)", the 32-bit packages must also be installed.
Installation procedure:
First check whether a package is already installed:
#rpm -q pdksh
If it is missing, search for it with yum:
#yum search pdksh
Then install the package that the search turns up:
#yum install pdksh.x86_64
Note: all of the system packages for CentOS 5.3 (64-bit) can be found online, so it is worth configuring DNS during network setup and pulling them straight from the network repositories.
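The check-then-install loop above can be sketched as follows; `missing_pkgs` is our name, and the query command is passed in as a parameter so the sketch can be dry-run on a machine without rpm:

```shell
# Print the names of required packages that the query command reports
# as missing. $1 is the query command ("rpm -q" on the real system);
# the remaining arguments are package names.
missing_pkgs() {
    checker=$1; shift
    for pkg in "$@"; do
        $checker "$pkg" >/dev/null 2>&1 || echo "$pkg"
    done
}

# On the real host:
#   missing_pkgs "rpm -q" binutils gcc gcc-c++ libaio sysstat pdksh
```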
5. Configure SSH trust
Configure SSH on every cluster node. Log in as the grid user on each node and perform the following:
#su - grid
$mkdir ~/.ssh
$chmod 700 ~/.ssh
$/usr/bin/ssh-keygen -t rsa  (the default settings can be accepted here)
The RSA public key is written to ~/.ssh/id_rsa.pub and the private key to ~/.ssh/id_rsa.
On tydic63, log in as the grid user, generate an authorized_keys file, and copy it to tydic64:
#su - grid
$cd ~/.ssh
$cat id_rsa.pub >> authorized_keys
$scp authorized_keys tydic64:/home/grid/.ssh/
Next, log in to tydic64 as the grid user and run:
#su - grid
$cd ~/.ssh
$cat id_rsa.pub >> authorized_keys
$scp authorized_keys tydic63:/home/grid/.ssh/
Now the authorized_keys file on both servers contains the public keys of all nodes.
To establish SSH user equivalence on every cluster member node, run the following on each node:
#su - grid
$ssh tydic63 date
$ssh tydic64 date
$exec /usr/bin/ssh-agent $SHELL
$/usr/bin/ssh-add
Repeat the same steps as the oracle user.
The grid and oracle users should now be able to use SSH and SCP between the two servers without a password.
Note: you can also skip configuring SSH here; when installing the grid and oracle software later, the installer has an option to configure it automatically (this has been verified). Personally I recommend configuring it up front and merely verifying it during installation; if the automatic configuration fails, fixing it afterwards can be troublesome.
6. Configure NTP
Edit /etc/sysconfig/ntpd and add the -x option:
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no
# Additional options for ntpdate
NTPDATE_OPTIONS=""
After the change, restart the NTP service:
#/sbin/service ntpd restart
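The OPTIONS edit above can be sanity-checked with a one-line match; `has_slew_option` is our name for this hypothetical helper:

```shell
# Check that an OPTIONS line from /etc/sysconfig/ntpd carries the -x
# (slewing) flag that Oracle clusterware expects.
has_slew_option() {
    echo "$1" | grep -q '^OPTIONS=.*-x'
}

# Typical use:
#   has_slew_option "$(grep '^OPTIONS=' /etc/sysconfig/ntpd)" || echo "add -x"
```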
7. Install ASMLib, partition the storage, and configure ASM
7.1 Install the ASMLib RPM packages
oracleasm-2.6.18-128.el5-2.0.5-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
oracleasm-support-2.1.3-1.el5.x86_64.rpm
These are the ASMLib RPM packages for Linux.
#rpm -Uvh oracleasm*.x86_64.rpm
7.2 Configure ASMLib
#/usr/sbin/oracleasm configure  (show the current configuration)
ORACLEASM_ENABLED=false
ORACLEASM_UID=
ORACLEASM_GID=
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
#/usr/sbin/oracleasm configure -i  (start the configuration)
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
When this completes, /etc/sysconfig/oracleasm and /dev/oracleasm are created and the ASMLib driver file system is mounted.
#/etc/init.d/oracleasm start
Creating /dev/oracleasm mount point: [ OK ]
#/etc/init.d/oracleasm enable
Writing Oracle ASM library driver configuration: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
Or run:
#/usr/sbin/oracleasm init
7.3 Partition the storage
#fdisk /dev/sdb
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-261, default 261):
Using default value 261
Command (m for help): w
The partition table has been altered!
Note 1: if a disk was previously used by ASM, it must be reformatted before being partitioned again.
Note 2: ASM imposes a minimum disk size; a disk that is too small will cause errors later during the grid installation (the exact threshold was not found).
#fdisk -l /dev/sdb
Check the partitioning result.
Repeat the steps to create a partition on each of the three disks.
7.4 Create the ASM disks
[root@tydic63 ~]# oracleasm createdisk CRSVOL1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@tydic63 ~]# oracleasm createdisk DATAVOL1 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@tydic63 ~]# oracleasm createdisk FRAVOL1 /dev/sdd1
Writing disk header: done
Instantiating disk: done
On tydic64, discover the three ASM disks created on tydic63:
[root@tydic64 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "CRSVOL1"
Instantiating disk "DATAVOL1"
Instantiating disk "FRAVOL1"
Confirm that the ASM disks are visible on both nodes:
[root@tydic63 ~]# oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1
[root@tydic64 ~]# oracleasm listdisks
CRSVOL1
DATAVOL1
FRAVOL1
Note 1: disk names consist of ASCII capital letters, digits, and underscores, and must begin with a letter.
Note 2: ASMLib can only be configured on disk partitions, not on whole disks.
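The naming rule in Note 1 can be enforced before running createdisk; `valid_asm_label` is a helper name we introduce:

```shell
# Validate a proposed ASMLib disk label: ASCII capital letters, digits,
# and underscores only, and it must begin with a letter.
valid_asm_label() {
    echo "$1" | grep -Eq '^[A-Z][A-Z0-9_]*$'
}

# Typical use before createdisk:
#   valid_asm_label CRSVOL1 && oracleasm createdisk CRSVOL1 /dev/sdb1
```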
8. Install the grid software (Oracle Grid Infrastructure for a Cluster)
8.1 Install the cvuqdisk package for Linux (on both nodes)
Install it as follows:
Start as the grid user:
$su -  (switch to the root user)
Set the environment variable:
#CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
Install the package:
#rpm -iv cvuqdisk-1.0.7-1.rpm
Preparing packages for installation...
cvuqdisk-1.0.7-1
8.2 Pre-installation environment check
Upload the grid and database software to the /public directory.
Log in as the grid user:
$cd /public/grid  (the directory holding the grid installation files)
$./runcluvfy.sh stage -pre crsinst -n tydic63,tydic64 -fixup -verbose
If the result is "Pre-check for cluster services setup was successful on all the nodes.", the environment is ready for installation.
If instead you see "Please run the following script on each node as "root" user to execute the fixups: '/tmp/CVU_11.2.0.1.0_grid/runfixup.sh'",
run that command as root, then re-run the check until the environment passes.
8.3 Install the grid software (Oracle Grid Infrastructure for a Cluster)
Install through an X Windows session.
Log in as the grid user:
$export DISPLAY=IP:0.0
$cd /public/grid  (the directory holding the grid installation files)
$./runInstaller
Select "Advanced Installation"
Select languages; add Simplified Chinese
Configure the SCAN parameters; GNS is not needed
Add the tydic64 node and test SSH; if SSH was configured correctly earlier, the test passes without issue. (The "SSH connectivity" option here can also configure SSH on the fly.)
If the /etc/hosts file is correct, the installer identifies the public and private networks automatically.
Choose ASM as the storage type for the OCR disk.
Create and select the ASM disk group for the OCR disk, configured as shown in the figure below.
Select "Use same passwords for these accounts"
Select "Do not use Intelligent Platform Management Interface (IPMI)"
Assign the ASM operating system groups.
Set the installation paths; if the environment variables were configured without mistakes, the auto-filled paths match them.
The OUI inventory path; the prerequisite checks begin.
Check results.
Install.
Run the scripts shown in the figure below as root on both nodes.
The orainstRoot.sh script completes without difficulty.
Output of root.sh:
tydic63
# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u09/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-02-08 15:39:09: Parsing the host name
2010-02-08 15:39:09: Checking for super user privileges
2010-02-08 15:39:09: User has super user privileges
Using configuration parameter file: /u09/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on centos-release-5-3.el5.centos.1
CRS-2672: Attempting to start 'ora.gipcd' on 'tydic63'
CRS-2672: Attempting to start 'ora.mdnsd' on 'tydic63'
CRS-2676: Start of 'ora.gipcd' on 'tydic63' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'tydic63'
CRS-2676: Start of 'ora.gpnpd' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'tydic63'
CRS-2676: Start of 'ora.cssdmonitor' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'tydic63'
CRS-2672: Attempting to start 'ora.diskmon' on 'tydic63'
CRS-2676: Start of 'ora.diskmon' on 'tydic63' succeeded
CRS-2676: Start of 'ora.cssd' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'tydic63'
CRS-2676: Start of 'ora.ctssd' on 'tydic63' succeeded
ASM created and started successfully.
DiskGroup CRS created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'tydic63'
CRS-2676: Start of 'ora.crsd' on 'tydic63' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 0c3ca42e13dc4f52bf3845185dc9c0cc.
Successfully replaced voting disk group with +CRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 0c3ca42e13dc4f52bf3845185dc9c0cc (ORCL:CRSVOL1) [CRS]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'tydic63'
CRS-2677: Stop of 'ora.crsd' on 'tydic63' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'tydic63'
CRS-2677: Stop of 'ora.asm' on 'tydic63' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'tydic63'
CRS-2677: Stop of 'ora.ctssd' on 'tydic63' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'tydic63'
CRS-2677: Stop of 'ora.cssdmonitor' on 'tydic63' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'tydic63'
CRS-2677: Stop of 'ora.cssd' on 'tydic63' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'tydic63'
CRS-2677: Stop of 'ora.gpnpd' on 'tydic63' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'tydic63'
CRS-2677: Stop of 'ora.gipcd' on 'tydic63' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'tydic63'
CRS-2677: Stop of 'ora.mdnsd' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'tydic63'
CRS-2676: Start of 'ora.mdnsd' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'tydic63'
CRS-2676: Start of 'ora.gipcd' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'tydic63'
CRS-2676: Start of 'ora.gpnpd' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'tydic63'
CRS-2676: Start of 'ora.cssdmonitor' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'tydic63'
CRS-2672: Attempting to start 'ora.diskmon' on 'tydic63'
CRS-2676: Start of 'ora.diskmon' on 'tydic63' succeeded
CRS-2676: Start of 'ora.cssd' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'tydic63'
CRS-2676: Start of 'ora.ctssd' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'tydic63'
CRS-2676: Start of 'ora.asm' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'tydic63'
CRS-2676: Start of 'ora.crsd' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'tydic63'
CRS-2676: Start of 'ora.evmd' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'tydic63'
CRS-2676: Start of 'ora.asm' on 'tydic63' succeeded
CRS-2672: Attempting to start 'ora.CRS.dg' on 'tydic63'
CRS-2676: Start of 'ora.CRS.dg' on 'tydic63' succeeded
tydic63 2010/02/08 15:43:23 /u09/app/11.2.0/grid/cdata/tydic63/backup_20100208_154323.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 18047 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u09/app/oraInventory
'UpdateNodeList' was successful.
tydic64
# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u09/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-02-08 15:43:47: Parsing the host name
2010-02-08 15:43:47: Checking for super user privileges
2010-02-08 15:43:47: User has super user privileges
Using configuration parameter file: /u09/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
ADVM/ACFS is not supported on centos-release-5-3.el5.centos.1
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node tydic63, number 1, and is terminating
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'tydic64'
CRS-2677: Stop of 'ora.cssdmonitor' on 'tydic64' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'tydic64'
CRS-2677: Stop of 'ora.gpnpd' on 'tydic64' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'tydic64'
CRS-2677: Stop of 'ora.gipcd' on 'tydic64' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'tydic64'
CRS-2677: Stop of 'ora.mdnsd' on 'tydic64' succeeded
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'tydic64'
CRS-2676: Start of 'ora.mdnsd' on 'tydic64' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'tydic64'
CRS-2676: Start of 'ora.gipcd' on 'tydic64' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'tydic64'
CRS-2676: Start of 'ora.gpnpd' on 'tydic64' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'tydic64'
CRS-2676: Start of 'ora.cssdmonitor' on 'tydic64' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'tydic64'
CRS-2672: Attempting to start 'ora.diskmon' on 'tydic64'
CRS-2676: Start of 'ora.diskmon' on 'tydic64' succeeded
CRS-2676: Start of 'ora.cssd' on 'tydic64' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'tydic64'
CRS-2676: Start of 'ora.ctssd' on 'tydic64' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'tydic64'
CRS-2676: Start of 'ora.asm' on 'tydic64' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'tydic64'
CRS-2676: Start of 'ora.crsd' on 'tydic64' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'tydic64'
CRS-2676: Start of 'ora.evmd' on 'tydic64' succeeded
tydic64 2010/02/08 15:46:08 /u09/app/11.2.0/grid/cdata/tydic64/backup_20100208_154608.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 19967 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u09/app/oraInventory
'UpdateNodeList' was successful.
After the scripts finish, continue. A single error appears at the very end; it can be ignored, as it is just a DNS validation error with no impact.
8.4 Use asmca to create the disk groups for data and the Fast Recovery Area
Select the disk groups tab, click Create, and create the RACDB_DATA and FRA disk groups.
9. Install the database software (Oracle Database)
Install through an X Windows session.
Log in as the oracle user:
$export DISPLAY=IP:0.0
$cd /public/database  (the directory holding the database installation files)
$./runInstaller
Deselect "I wish to receive security updates via My Oracle Support"; the email field can also be left blank.
Select "Install database software only"
Select "Real Application Clusters database installation"; both nodes are already selected automatically. (The "SSH connectivity" option here can also configure SSH on the fly.)
Select languages; add Simplified Chinese
Select Enterprise Edition, with the default components as shown in the figure.
The installation paths are filled in automatically from the environment variables.
Choose the database operating system groups; be sure to include the oper group, otherwise DBCA may later fail to recognize the ASM disk groups when creating the database.
Checks.
Results.
Click Finish to start the installation.
Run the script as root on both nodes to complete the Oracle software installation.
Output of root.sh:
# ./root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= ora11g
ORACLE_HOME= /u09/app/oracle/product/11.2.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
10. Create the database
Install through an X Windows session.
Log in as the oracle user:
$export DISPLAY=IP:0.0
$dbca
Select "Oracle Real Application Clusters database"
Select "Create a Database"
Select "Custom Database"
Enter the Oracle SID and select all nodes.
Select "Configure Enterprise Manager" / "Configure Database Control for local management"
Select "Use the Same Administrative Password for All Accounts"
Configure the data file location as shown in the figure below; RACDB_DATA is the ASM disk group created earlier.
Enter the ASMSNMP password (the password set during the grid installation earlier).
Configure the Fast Recovery Area, selecting the FRA disk group created earlier, sized at 85% of that disk group's capacity.
Continue with the default configuration.
Configure the SGA, character set, and so on. (An error appears when shrinking the SGA; it can be ignored.)
Configure the SYSTEM tablespace and the others.
Generate the database creation scripts and start.
After database creation is complete, run as the grid user:
$crs_stat -t
to check that the resource groups on all nodes are up.
Note that in 11g the listener.ora file lives under the grid user's $ORACLE_HOME/network/admin,
while the tnsnames.ora file lives under the oracle user's $ORACLE_HOME/network/admin.
Also, the location of the 11g alert log is configurable and is no longer fixed at $ORACLE_BASE/admin/<sid>/bdump.