Category: Oracle
2013-08-14 14:53:42
This article covers the installation of Oracle 11g RAC, the creation and migration of a 10g database in the 11g RAC environment, and how to upgrade such a database to 11g. It also analyzes and summarizes the 11g RAC architecture and some of the new features in 11g.
Oracle 11g RAC differs considerably from 10g RAC. In 11g, the CRS and ASM software are installed together under one dedicated user as what is called Grid Infrastructure, while the RDBMS software is installed under a different user. Here we name these two users grid and ora11g respectively.
If the server cluster on which you want to install Oracle 11g RAC already runs 10g RAC, all of the existing RAC configuration has to be removed from that environment before the 11g RAC installation can begin.
(mikixiyou document, original link: http://mikixiyou.iteye.com/blog/1558992 )
So this is not an upgrade, but a fresh installation of 11g RAC.
Under Linux, remove the 10g startup and configuration files directly with rm:
rm -f /etc/init.d/init.cssd
rm -f /etc/init.d/init.crs
rm -f /etc/init.d/init.crsd
rm -f /etc/init.d/init.evmd
rm -f /etc/rc2.d/K96init.crs
rm -f /etc/rc2.d/S96init.crs
rm -f /etc/rc3.d/K96init.crs
rm -f /etc/rc3.d/S96init.crs
rm -f /etc/rc5.d/K96init.crs
rm -f /etc/rc5.d/S96init.crs
rm -rf /etc/oracle/scls_scr
rm -f /etc/inittab.crs
cp /etc/inittab.orig /etc/inittab
Zero out the raw devices configured for the voting disk and OCR:
dd if=/dev/zero of=/dev/raw/raw1 bs=8192K count=10
dd if=/dev/zero of=/dev/raw/raw2 bs=8192K count=10
Then delete all remaining Oracle installation and configuration files to clean out the old environment completely.
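For completeness, a rough sketch of that cleanup, assuming the old 10g homes and inventory lived under /u01 (these paths are assumptions; verify them against your own 10g layout before deleting anything):
rm -rf /u01/app/oracle/product/10.2.0   # assumed location of the old 10g CRS/RDBMS homes
rm -rf /u01/app/oraInventory            # old central inventory
rm -f /etc/oratab /etc/oraInst.loc      # old registry files pointing at the removed homes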
As noted above, Oracle 11g RAC differs from 10g RAC in that CRS and ASM are bundled into a single piece of software called Grid Infrastructure.
Two servers attached to shared storage with concurrent read/write access form the basic RAC configuration.
The two servers are connected by a private interconnect (heartbeat) network.
The hosts file on both machines is configured as follows:
192.168.15.193 serv-scan
192.168.15.89 serv1
192.168.15.189 serv1-vip
10.100.15.89 serv1-priv
192.168.15.90 serv2
192.168.15.190 serv2-vip
10.100.15.90 serv2-priv
There is one important point to note here.
Compared with a 10g RAC configuration, 11g RAC adds one more IP address entry, called the SCAN IP. This is a new configuration requirement introduced in 11g.
I have a few things to say about the SCAN IP.
In my view this IP is of marginal value. Its main purpose is to reduce the client-side configuration changes needed when nodes are added to or removed from the cluster in a client load-balancing setup.
In our environment, where applications reach the database through application servers, those changes are not a big burden. Moreover, our server network has no DNS; adding DNS just for this feature would introduce one more point of failure, which is not worth it.
Fortunately, the SCAN IP can in practice be left unused. If it is defined in the hosts file but the address does not actually exist on the network, the verification step of the Grid Infrastructure installation raises a warning, but it can be ignored and the RAC installation can continue.
Operating system packages
I recommend opening the graphical package tool with system-config-packages under VNC and installing all of the development packages, which saves trouble later.
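If you would rather install them from the command line, a sketch of the packages 11gR2 typically requires on RHEL/OEL 5 x86-64 looks like this (check the official installation guide for the exact list for your release):
yum install -y binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
    gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel \
    libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel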
Creating the users
Create two users, grid and ora11g; the former is used to install Grid Infrastructure and the latter to install the RDBMS.
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -s /bin/bash -c "Grid Infrastructure Owner" grid
groupadd -g 1300 dba
groupadd -g 1301 oper
useradd -m -u 1120 -g oinstall -G dba,oper,asmdba -d /home/ora11g -s /bin/bash -c "Oracle Software Owner" ora11g
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01
mkdir -p /u01/app/ora11g
chown ora11g:oinstall /u01/app/ora11g
chmod -R 775 /u01
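Each owner also needs its shell environment set up. A minimal sketch of the two profiles on node 1 (the RDBMS home path and the SID names +ASM1/orcl1 are assumptions; use +ASM2 and the second instance name on node 2):
# ~grid/.bash_profile
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/bin:$PATH
# ~ora11g/.bash_profile
export ORACLE_BASE=/u01/app/ora11g
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_SID=orcl1
export PATH=$ORACLE_HOME/bin:$PATH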
Set up passwordless SSH between the cluster nodes (this needs to be done for both the grid and ora11g users on every node):
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t dsa
touch ~/.ssh/authorized_keys
ls -l ~/.ssh
# only the public keys from each node go into authorized_keys
ssh serv1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh serv2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys serv2:.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
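A quick check that user equivalence really works, run from each node as each installation owner (a simple sketch):
for host in serv1 serv1-priv serv2 serv2-priv; do
    ssh $host date    # must return immediately, with no password or host-key prompt
done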
Log in to the grid user through VNC, run the runInstaller program from the installation media, and follow the steps below.
I was unable to upload the installation screenshots, so they are omitted here.
The SCAN Name to enter is exactly the entry configured in the hosts file; the port can be customized.
This value ends up in the remote_listener initialization parameter of the database instances (its exact role was unclear to me at the time).
SQL> show parameter listener
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string (DESCRIPTION=(ADDRESS_LIST=(AD
DRESS=(PROTOCOL=TCP)(HOST=serv
1-vip)(PORT=1522))))
remote_listener string serv-scan:1590
For resolving the SCAN virtual IP, Oracle recommends two methods, DNS and GNS.
- Using DNS
To use the DNS method for defining your SCAN, the network administrator must create a single name that resolves to three separate IP addresses using round-robin algorithms. Regardless of how many systems are part of your cluster, Oracle recommends that 3 IP addresses are configured to allow for failover and load-balancing.
It is important that the IP addresses are on the same subnet as the public network for the server. The other two requirements are that the name (not including the domain suffix) are 15 characters or less in length and that the name can be resolved without using the domain suffix. Also, the IP addresses should not be specifically assigned to any of the nodes in the cluster.
You can test the DNS setup by running an nslookup on the scan name two or more times. Each time, the IP addresses should be returned in a different order:
nslookup mydatabase-scan
- Grid Naming Service (GNS)
Using GNS assumes that a server is running on the public network with enough available addresses to assign the required IP addresses and the SCAN VIP. Only one static IP address is required to be configured and it should be in the DNS domain.
Oracle's main goal in introducing SCAN is to reduce the JDBC connection configuration that clients need when accessing the database, at the cost of publishing the database addresses through DNS or GNS. I am not convinced by this.
In the end we use neither; we fake the DNS name resolution mechanism so that the Grid installation can continue.
The trick is as follows: add an entry to /etc/hosts with the SCAN IP address and name, then back up /usr/bin/nslookup as nslookup.original and replace its contents with:
/usr/bin@serv1=>servdb1$more nslookup
#!/bin/bash
# Fake nslookup: answer locally for the SCAN name defined in /etc/hosts,
# fall through to the real binary for everything else.
HOSTNAME=${1}
if [[ $HOSTNAME = "serv-scan" ]]; then
    echo "Server: 24.154.1.34"
    echo "Address: 24.154.1.34#53"
    echo "Non-authoritative answer:"
    echo "Name: serv-scan"
    echo "Address: 192.168.15.193"
else
    /usr/bin/nslookup.original $HOSTNAME
fi
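Remember to make the replacement executable and give it a quick test (the real binary stays available as nslookup.original):
chmod 755 /usr/bin/nslookup
nslookup serv-scan    # should answer with 192.168.15.193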
This sidesteps the SCAN mechanism.
If nslookup is left unmodified, the installer raises a warning, but the check can still be passed and the installation continued.
When configuring the OCR and voting disks, we choose ASM storage, so that the CRS configuration files and the database files are all kept in ASM.
The configuration files used to live on raw devices; now those same devices are simply added to a disk group named CRS.
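One point worth checking first (a sketch, assuming /dev/raw/raw1 and /dev/raw/raw2 are the disks offered to the installer): the devices must be readable and writable by the Grid owner, or the installer cannot use them. For example, via /etc/rc.local or a udev rule:
chown grid:asmadmin /dev/raw/raw1 /dev/raw/raw2
chmod 660 /dev/raw/raw1 /dev/raw/raw2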
At this step the ASM instance is already being created, whereas in 10g the ASM instance was not created until the DBCA stage.
Then comes the final verification step.
This step validates all of the system configuration. Here it detects that the NTP service is not running and reports a failure.
You can either start ntpd from the system services, or simply ignore the failure.
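If you do choose to run ntpd, note that the cluster verification also expects it to slew rather than step the clock; on RHEL/OEL that means adding -x to the daemon options (a sketch):
# in /etc/sysconfig/ntpd, make sure the -x (slew) flag is present:
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# then restart the service:
service ntpd restart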
A failure about SCAN may also appear here; it can be ignored in the same way.
Finally, the installer prompts you to run the shell scripts as root.
Node 1
[root@serv1 ~]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2012-04-24 11:04:37: Parsing the host name
2012-04-24 11:04:37: Checking for super user privileges
2012-04-24 11:04:37: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'serv1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'serv1'
CRS-2676: Start of 'ora.gipcd' on 'serv1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'serv1'
CRS-2676: Start of 'ora.gpnpd' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'serv1'
CRS-2676: Start of 'ora.cssdmonitor' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'serv1'
CRS-2672: Attempting to start 'ora.diskmon' on 'serv1'
CRS-2676: Start of 'ora.diskmon' on 'serv1' succeeded
CRS-2676: Start of 'ora.cssd' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'serv1'
CRS-2676: Start of 'ora.ctssd' on 'serv1' succeeded
ASM created and started successfully.
Disk Group CRS created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'serv1'
CRS-2676: Start of 'ora.crsd' on 'serv1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 1b0e7d8ac5134f48bfe705a6df385dd2 .
Successfully replaced voting disk group with +CRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 1b0e7d8ac5134f48bfe705a6df385dd2 (/dev/raw/raw1) [CRS]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'serv1'
CRS-2677: Stop of 'ora.crsd' on 'serv1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'serv1'
CRS-2677: Stop of 'ora.asm' on 'serv1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'serv1'
CRS-2677: Stop of 'ora.ctssd' on 'serv1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'serv1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'serv1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'serv1'
CRS-2677: Stop of 'ora.cssd' on 'serv1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'serv1'
CRS-2677: Stop of 'ora.gpnpd' on 'serv1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'serv1'
CRS-2677: Stop of 'ora.gipcd' on 'serv1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'serv1'
CRS-2677: Stop of 'ora.mdnsd' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'serv1'
CRS-2676: Start of 'ora.mdnsd' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'serv1'
CRS-2676: Start of 'ora.gipcd' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'serv1'
CRS-2676: Start of 'ora.gpnpd' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'serv1'
CRS-2676: Start of 'ora.cssdmonitor' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'serv1'
CRS-2672: Attempting to start 'ora.diskmon' on 'serv1'
CRS-2676: Start of 'ora.diskmon' on 'serv1' succeeded
CRS-2676: Start of 'ora.cssd' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'serv1'
CRS-2676: Start of 'ora.ctssd' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'serv1'
CRS-2676: Start of 'ora.asm' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'serv1'
CRS-2676: Start of 'ora.crsd' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'serv1'
CRS-2676: Start of 'ora.evmd' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'serv1'
CRS-2676: Start of 'ora.asm' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.CRS.dg' on 'serv1'
CRS-2676: Start of 'ora.CRS.dg' on 'serv1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'serv1'
CRS-2676: Start of 'ora.registry.acfs' on 'serv1' succeeded
serv1 2012/04/24 11:08:49 /u01/app/11.2.0/grid/cdata/serv1/backup_20120424_110849.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@serv1 ~]#
Node 2
[root@serv2 ~]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
……………………………………………………………………………………
……………………………………………………………………………………
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2012-04-24 11:21:47: Parsing the host name
2012-04-24 11:21:47: Checking for super user privileges
2012-04-24 11:21:47: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node serv1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'serv2'
CRS-2676: Start of 'ora.mdnsd' on 'serv2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'serv2'
CRS-2676: Start of 'ora.gipcd' on 'serv2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'serv2'
CRS-2676: Start of 'ora.gpnpd' on 'serv2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'serv2'
CRS-2676: Start of 'ora.cssdmonitor' on 'serv2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'serv2'
CRS-2672: Attempting to start 'ora.diskmon' on 'serv2'
CRS-2676: Start of 'ora.diskmon' on 'serv2' succeeded
CRS-2676: Start of 'ora.cssd' on 'serv2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'serv2'
CRS-2676: Start of 'ora.ctssd' on 'serv2' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'serv2'
CRS-2676: Start of 'ora.drivers.acfs' on 'serv2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'serv2'
CRS-2676: Start of 'ora.asm' on 'serv2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'serv2'
CRS-2676: Start of 'ora.crsd' on 'serv2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'serv2'
CRS-2676: Start of 'ora.evmd' on 'serv2' succeeded
serv2 2012/04/24 11:24:07 /u01/app/11.2.0/grid/cdata/serv2/backup_20120424_112407.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
After the scripts complete successfully, run crs_stat -t as the grid user; the output is as follows:
/home/grid@serv1=>+ASM1$crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.CRS.dg ora....up.type ONLINE ONLINE serv1
ora....ER.lsnr ora....er.type ONLINE ONLINE serv1
ora....N1.lsnr ora....er.type ONLINE ONLINE serv1
ora.asm ora.asm.type ONLINE ONLINE serv1
ora.eons ora.eons.type ONLINE ONLINE serv1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE serv1
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE serv1
ora....ry.acfs ora....fs.type ONLINE ONLINE serv1
ora.scan1.vip ora....ip.type ONLINE ONLINE serv1
ora....SM1.asm application ONLINE ONLINE serv1
ora....V1.lsnr application ONLINE ONLINE serv1
ora.serv1.gsd application OFFLINE OFFLINE
ora.serv1.ons application ONLINE ONLINE serv1
ora.serv1.vip ora....t1.type ONLINE ONLINE serv1
ora....SM2.asm application ONLINE ONLINE serv2
ora....V2.lsnr application ONLINE ONLINE serv2
ora.serv2.gsd application OFFLINE OFFLINE
ora.serv2.ons application ONLINE ONLINE serv2
ora.serv2.vip ora....t1.type ONLINE ONLINE serv2
/home/grid@serv1=>+ASM1$
The RDBMS software installation (done as the ora11g user) is straightforward and rarely reports errors, so I will not walk through it.
Create the database with dbca; I will not describe that process either.
Because the SCAN IP is not used, the local_listener and remote_listener initialization parameters need to be adjusted.
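A minimal sketch of that change, run as ora11g (the instance names orcl1/orcl2 and port 1522 are assumptions based on the earlier show parameter output; clearing remote_listener is one option when no SCAN listener exists):
sqlplus / as sysdba <<'EOF'
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=serv1-vip)(PORT=1522))))' sid='orcl1';
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=serv2-vip)(PORT=1522))))' sid='orcl2';
alter system set remote_listener='' sid='*';
EOF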
What if you want to upgrade an existing 10g RAC database to this 11g RAC environment?
First, the 10g database has to be migrated over and opened as-is. This means installing a 10g RDBMS home inside the 11g RAC environment so that the database can be opened there. One thing to note: the 11gR2 cluster configuration is dynamic, while databases from older releases such as 10.2 require a fixed cluster configuration, so the 11g cluster nodes have to be pinned. As root, run ./crsctl pin css -n node1 node2.
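To confirm the pin took effect, olsnodes can show each node's pinned state (run as the grid user; node names follow the earlier serv1/serv2 naming):
olsnodes -n -t
# expected output, one line per node, e.g.:
#   serv1   1   Pinned
#   serv2   2   Pinned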
Second, run the 11g pre-upgrade check against the 10g database. This step is mandatory; without it the upgrade cannot proceed.
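In 11gR2 that check is the pre-upgrade information tool utlu112i.sql; a sketch of running it in the 10g database (the script path assumes the 11g RDBMS home location used above):
sqlplus / as sysdba <<'EOF'
spool preupgrade_check.log
@/u01/app/ora11g/product/11.2.0/db_1/rdbms/admin/utlu112i.sql
spool off
EOF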
Finally, upgrade the database under the 11g RDBMS home, following the upgrade guide.
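In outline, the manual route looks like the sketch below; DBUA can do the same interactively, and for a RAC database cluster_database is normally set to FALSE before the upgrade and restored afterwards:
# with the environment pointing at the 11g home
sqlplus / as sysdba <<'EOF'
startup upgrade
@?/rdbms/admin/catupgrd.sql
EOF
# catupgrd.sql shuts the database down when it finishes; restart and recompile
sqlplus / as sysdba <<'EOF'
startup
@?/rdbms/admin/utlrp.sql
EOF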
The installation of Oracle 10g RAC is covered in http://mikixiyou.iteye.com/blog/1555489 for reference.