Category: LINUX | 2012-10-16 23:12:21

RHEL5.5_64bit+ASM+Oracle RAC

Reference: http://candon123.blog.51cto.com/704299/336002

1. Network topology

2. Environment:

1) Node01
   - FQDN: node01.studios.com (public), node01-priv (private), node01-vip.studios.com (VIP)
   - IP: 192.168.1.111 (public), 172.25.100.111 (private), 192.168.1.115 (VIP)
   - SCAN: oracle-scan.studios.com 192.168.1.120
   - Hardware: CPU i7-2600, 2 GB RAM (Oracle requirement), 120 GB disk
   - OS: RHEL 5.5 64-bit
   - App: ASM / Oracle Grid Infrastructure, Oracle 11gR2

2) Node02
   - FQDN: node02.studios.com (public), node02-priv (private), node02-vip.studios.com (VIP)
   - IP: 192.168.1.112 (public), 172.25.100.112 (private), 192.168.1.116 (VIP)
   - Hardware/OS/App: same as Node01

3) Openfiler (SAN)
   - FQDN: san.studios.com (public), san-priv.studios.com (private)
   - IP: 192.168.1.249 (public), 172.25.100.249 (private)
   - Hardware: 1 GB RAM, 20 GB + 3x80 GB disks
   - OS/App: Openfiler

4) DNS/YUM server
   - IP: 192.168.1.250
   - Hardware: 2 GB RAM, 120 GB disk
   - OS: RHEL 5.5 64-bit
   - App: VSFTPD, createrepo

5) Client
   - IP: 192.168.1.100
   - OS: Win7
   - App: Oracle client

3. OS installation

1) Install the DNS/YUM server
   1. OS install: disable the firewall (iptables) and disable SELinux.
   2. Set up a private yum repository:
      #rpm -ivh vsftpd*
      #rpm -ivh createrepo*
      #chkconfig vsftpd on
      #service vsftpd start
      #createrepo -s md5 /var/ftp/pub/rhel5.5/

2) Install Openfiler
Download:
After Openfiler is installed, log in to the web console:
user: openfiler  password: password

System -> Network Access Configuration

Volumes

Services: set iSCSI Target to enabled; iSCSI Initiator to enabled

With the DNS/YUM server and Openfiler in place, we can now move on to setting up and configuring node01 and node02.

3) node01 (node02) installation and configuration
   1. OS install: disable the firewall (iptables) and disable SELinux.
   2. Set up the yum repos (pointing at the private yum server):
      #vi /etc/yum.repos.d/rhel-source.repo
[rhel-source-studios]
name=Red Hat Enterprise Linux $releasever Beta - $basearch - Source
baseurl=ftp://192.168.1.250/pub/rhel5.5/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta,file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
#yum clean all
3. Set up /etc/hosts
#vi /etc/hosts

# Keep localhost on 127.0.0.1, but do not put the node name there
127.0.0.1 localhost.localdomain localhost
#::1 localhost6.localdomain6 localhost6
# Public network (eth0)
192.168.1.111 node01.studios.com node01
192.168.1.112 node02.studios.com node02
# Private network (eth1)
172.25.100.111 node01-priv.studios.com node01-priv
172.25.100.112 node02-priv.studios.com node02-priv
# Virtual IPs for the public network (eth0:1)
192.168.1.115 node01-vip.studios.com node01-vip
192.168.1.116 node02-vip.studios.com node02-vip
# SCAN IP
192.168.1.120 oracle-scan.studios.com oracle-scan
# SAN public IP (Openfiler eth0)
192.168.1.249 san.studios.com san
# SAN private IP (Openfiler eth1)
172.25.100.249 san-priv
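Since both nodes need identical hosts entries, the address plan above can also be generated mechanically. A minimal sketch, using this guide's names and IPs (the `gen_hosts` helper is hypothetical; adjust the ranges for your own plan):

```shell
# Generate the RAC-related /etc/hosts entries from the address plan:
# public .111/.112, private .111/.112, VIPs .115/.116, SCAN .120.
gen_hosts() {
  i=1
  for node in node01 node02; do
    echo "192.168.1.11${i} ${node}.studios.com ${node}"
    echo "172.25.100.11${i} ${node}-priv.studios.com ${node}-priv"
    echo "192.168.1.11$((i + 4)) ${node}-vip.studios.com ${node}-vip"
    i=$((i + 1))
  done
  echo "192.168.1.120 oracle-scan.studios.com oracle-scan"
}
gen_hosts
```

Appending the output to /etc/hosts on both nodes keeps them from drifting apart.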

4. Set up the SAN storage disks:
#yum -y install iscsi-initiator-utils*
#service iscsid start
#chkconfig iscsid on

Discover and log in to the iSCSI targets (use the IQN from your own discovery output):
#iscsiadm -m discovery -t sendtargets -p 192.168.1.249:3260
#iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.982fa0420fc2 -p 192.168.1.249:3260 -l

To log in again after a reboot, append the same commands to /etc/rc.local:
#echo "iscsiadm -m discovery -t sendtargets -p 192.168.1.249:3260" >>/etc/rc.local
#echo "iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.982fa0420fc2 -p 192.168.1.249:3260 -l" >>/etc/rc.local

#fdisk -l       (the new LUNs appear as /dev/sdb, /dev/sdc, /dev/sdd)
#fdisk /dev/sdb (partition each disk, producing /dev/sdb1, /dev/sdc1, /dev/sdd1)
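As an alternative sketch to the rc.local approach above: open-iscsi can log the node record in at boot by itself when `node.startup` is set to automatic. The IQN and portal below are the ones from this guide's discovery output; substitute your own:

```shell
# Mark the discovered target for automatic login at boot (run as root on
# node01 and node02). This builds and prints the command for review.
TARGET="iqn.2006-01.com.openfiler:tsn.982fa0420fc2"
PORTAL="192.168.1.249:3260"
CMD="iscsiadm -m node -T $TARGET -p $PORTAL --op update -n node.startup -v automatic"
echo "$CMD"
```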

4. Oracle environment configuration
1) Required packages for Oracle

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel-4.1.2
make-3.81
sysstat-7.0.2
unixODBC
unixODBC-devel
(list from the official documentation)

#yum -y install binutils compat-libstdc++ elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel

2) add users and groups:
groupadd -g 600 oinstall
groupadd -g 601 asmadmin
groupadd -g 602 asmdba
groupadd -g 603 asmoper
groupadd -g 604 dba
groupadd -g 605 oper
useradd -u 600 -g oinstall -G asmdba,dba,oper -m -d /home/oracle oracle
useradd -u 601 -g oinstall -G asmadmin,asmdba,asmoper -m -d /home/grid grid

#### Create directories for Oracle ####

mkdir -p /u01/app/11.2.0/grid

mkdir -p /u01/app/grid

mkdir -p /u01/app/oracle

chown -R grid:oinstall /u01

chown -R grid:oinstall /u01/app/11.2.0/grid

chown -R grid:oinstall /u01/app/grid

chown -R oracle:oinstall /u01/app/oracle

chmod -R 775 /u01/

3) Kernel parameters and user limits (node01 and node02):
(1) #vi /etc/sysctl.conf

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

#sysctl -p   (apply the settings)
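The shmall/shmmax pair above can be sanity-checked: kernel.shmall is counted in pages, so it must be at least shmmax divided by the page size for one maximum-size shared memory segment to fit. A quick sketch with this guide's values:

```shell
# Check that kernel.shmall (pages) covers kernel.shmmax (bytes).
shmmax=536870912        # 512 MB, as set in /etc/sysctl.conf above
shmall=2097152          # pages
page_size=4096          # getconf PAGE_SIZE on typical x86 systems
needed=$((shmmax / page_size))
if [ "$shmall" -ge "$needed" ]; then
  echo "shmall=$shmall covers shmmax ($needed pages needed)"
else
  echo "shmall too small: need at least $needed pages"
fi
```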

(2) /etc/security/limits.conf (add the limits for oracle and grid):

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536

(3) Append the PAM limits module to /etc/pam.d/login:
#echo "session required pam_limits.so" >>/etc/pam.d/login

(4) Add to /etc/profile:

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi
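The branch in that /etc/profile fragment exists because ksh spells the process-limit option differently from bash. A standalone sketch of the same logic (the `set_oracle_limits` helper is hypothetical and only prints what would be applied, so the branching is easy to inspect):

```shell
# Print the limit commands the profile fragment would run for a given
# user and login shell.
set_oracle_limits() {  # usage: set_oracle_limits <user> <shell>
  case "$1" in
    oracle|grid)
      if [ "$2" = "/bin/ksh" ]; then
        echo "ulimit -p 16384; ulimit -n 65536"   # ksh option names
      else
        echo "ulimit -u 16384 -n 65536"           # bash option names
      fi
      echo "umask 022"
      ;;
  esac
}
set_oracle_limits oracle /bin/bash
```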

(5) Set up the users' .bash_profile
For the oracle user (/home/oracle/.bash_profile)
(node02 is identical except for the changed SID.)

ORACLE_SID=orc1; export ORACLE_SID       # on node02: ORACLE_SID=orc2
ORACLE_UNQNAME=racdb; export ORACLE_UNQNAME
JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
ORACLE_PATH=/u01/app/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022

For the grid user (/home/grid/.bash_profile)
(node02 is identical except for the changed SID.)

ORACLE_SID=+ASM1; export ORACLE_SID      # on node02: ORACLE_SID=+ASM2
JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
ORACLE_PATH=/u01/app/oracle/common/oracle/sql; export ORACLE_PATH
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/app/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022

(6) Install ASMLib
Download oracleasm-support, oracleasmlib, and oracleasm matching your kernel version:
# uname -rm
2.6.18-194.el5 x86_64
#rpm -ivh oracleasm*
oracleasm-2.6.18-194.el5-2.0.5-1.el5.x86_64.rpm
oracleasm-support-2.1.7-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm

(7) Set up SSH user equivalence:

1) On node01:
#su - oracle
$mkdir ~/.ssh
$chmod 700 ~/.ssh
$/usr/bin/ssh-keygen -t rsa    (press Enter to accept the defaults)
$/usr/bin/ssh-keygen -t dsa    (press Enter to accept the defaults)

2) On node02:
#su - oracle
$mkdir ~/.ssh
$chmod 700 ~/.ssh
$/usr/bin/ssh-keygen -t rsa    (press Enter to accept the defaults)
$/usr/bin/ssh-keygen -t dsa    (press Enter to accept the defaults)

3) Back on node01, collect every public key and distribute the result:
$touch ~/.ssh/authorized_keys
$ssh node01 cat /home/oracle/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ssh node01 cat /home/oracle/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ssh node02 cat /home/oracle/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ssh node02 cat /home/oracle/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$scp ~/.ssh/authorized_keys node02:/home/oracle/.ssh/

4) Verify from each node (this also caches the host keys):
$ssh node01 date
$ssh node02 date
$ssh node01-priv date
$ssh node02-priv date

The above is done as the oracle user; repeat the same SSH setup for the grid user.
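Every host/user combination must work without a password prompt, or the Grid installer will fail mid-run. A small sketch that prints the checks to run from each node as each user (`BatchMode=yes` makes a lingering password prompt fail immediately instead of hanging):

```shell
# Print one non-interactive ssh check per cluster hostname.
print_ssh_checks() {
  for host in node01 node02 node01-priv node02-priv; do
    echo "ssh -o BatchMode=yes $host date"
  done
}
print_ssh_checks
```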

(8) Disable the local ntpd time sync (Oracle's Cluster Time Synchronization Service takes over), then reboot:

#service ntpd stop
#chkconfig ntpd off
#mv /etc/ntp.conf /etc/ntp.conf.backup

Reboot node01/node02 and check that all of the settings above survive the restart.

(9) ASM disk configuration (createdisk only needs to run on node01):

[root@node01 file]# oracleasm configure      # show the current configuration
ORACLEASM_ENABLED=false
ORACLEASM_UID=
ORACLEASM_GID=
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""

[root@node01 file]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

# oracleasm init

####### node01: create the ASM disks #######
#oracleasm createdisk CRSV /dev/sdc1
#oracleasm createdisk DATAV /dev/sdb1
#oracleasm createdisk FRAV /dev/sdd1
#oracleasm listdisks

####### node02: pick up the new disks #######
#oracleasm scandisks
#oracleasm listdisks
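The disk-name-to-device mapping above is worth keeping in one place, since it has to be reproduced exactly if the disks are ever re-stamped. A sketch as a lookup table (the `asm_device` helper is hypothetical, and the device names follow this guide's fdisk output; iSCSI device ordering can differ on your system):

```shell
# Map each ASM disk label to its partition, then print the createdisk plan.
asm_device() {
  case "$1" in
    CRSV)  echo /dev/sdc1 ;;
    DATAV) echo /dev/sdb1 ;;
    FRAV)  echo /dev/sdd1 ;;
    *)     echo "unknown disk: $1" >&2; return 1 ;;
  esac
}
for d in CRSV DATAV FRAV; do
  echo "oracleasm createdisk $d $(asm_device $d)"
done
```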

At this point the base environment for Oracle Grid Infrastructure is fully configured.

5. Install Grid Infrastructure (node01)

Upload the Oracle installation packages to node01 under /var/software:

linux_11gR2_grid.zip
linux_11gR2_database_1of2.zip
linux_11gR2_database_2of2.zip

#cd /var/software
#unzip linux_11gR2_grid.zip
#cd grid/rpm
#rpm -ivh cvuqdisk-1.0.7-1.rpm      (install on both node01 and node02)
#export CVUQDISK_GRP=oinstall
#echo "export CVUQDISK_GRP=oinstall" >>/etc/profile
#su - grid
$cd /var/software/grid/
$./runcluvfy.sh stage -pre crsinst -n node01,node02 -fixup -verbose    (checks the Grid installation prerequisites)

......
Fixup information has been generated for following node(s):
node2,node1
Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.1.0_grid/runfixup.sh'
Pre-check for cluster services setup was unsuccessful on all the nodes.

(Run /tmp/CVU_11.2.0.1.0_grid/runfixup.sh as root on node01 and node02.)
Then re-run: ./runcluvfy.sh stage -pre crsinst -n node01,node02 -fixup -verbose
Pre-check for cluster services setup was successful
(Once the pre-check succeeds, continue to the next step.)

Run the Grid Infrastructure installer:
[grid@node1 grid]$ ./runInstaller

Do not configure GNS; make the SCAN name match the entry in /etc/hosts. The cluster name can be anything you choose.

Assign the network interfaces: eth0 public, eth1 private. Choose ASM for storage.


When prompted, run the root scripts on both nodes:

[root@node01 ~]# cd /u01/app/oraInventory/
[root@node01 oraInventory]# ./orainstRoot.sh
[root@node02 ~]# cd /u01/app/oraInventory/
[root@node02 oraInventory]# ./orainstRoot.sh
[root@node01 ~]# cd /u01/app/11.2.0/grid/
[root@node01 grid]# ./root.sh
[root@node02 ~]# cd /u01/app/11.2.0/grid/
[root@node02 grid]# ./root.sh

Once the scripts finish on both nodes, the final line should read 'UpdateNodeList' was successful. Click OK and let the installer complete the remaining configuration.
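After root.sh completes, the cluster stack can be sanity-checked before continuing. A sketch that prints the standard 11.2 Clusterware checks to run as the grid user on node01:

```shell
# Print post-root.sh verification commands for review before running them.
post_install_checks() {
  echo "crsctl check cluster -all"   # CRS/CSS/EVM state on every node
  echo "olsnodes -n"                 # cluster node list with node numbers
}
post_install_checks
```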



At this point the Grid Infrastructure installation is complete.

6. Install Oracle 11g R2
1) Create the ASM disk groups:

#su - grid
$asmca

Create the DATA and FRA disk groups.

2) Install Oracle 11g R2 (node01):

#su - oracle
$cd /var/software
$unzip linux_11gR2_database_1of2.zip
$unzip linux_11gR2_database_2of2.zip
$cd database
[oracle@node1 database]$ ./runInstaller

When prompted, run root.sh as root on node01 and node02.

Create the database:
[oracle@node01 ~]$ dbca    (screenshots omitted)

Check the RAC status:

[grid@node01 ~]$ crsctl status resource -t
[grid@node02 ~]$ crsctl status resource -t

RAC startup order (shutdown is exactly the reverse):

[oracle@node01 ~]$ srvctl start nodeapps -n node01
[oracle@node01 ~]$ srvctl start nodeapps -n node02
[oracle@node01 ~]$ srvctl start asm -n node01
[oracle@node01 ~]$ srvctl start asm -n node02
[oracle@node01 ~]$ srvctl start instance -d oracledb -i orc1 -o mount
[oracle@node01 ~]$ srvctl start instance -d oracledb -i orc2 -o open

