Category: Oracle

2014-09-26 11:04:34

Original post: RAC Upgrade (11.2.0.1 to 11.2.0.3), author: hxl

Environment:
OS: Red Hat Linux AS 6
DB: 11.2.0.1

Upgrading from 11.2.0.1 to 11.2.0.3 is fairly straightforward. I use an out-of-place upgrade here, installing the new software into a new home. Because the old and new homes then both hold a full installation, disk usage is considerable, so leave plenty of free space. The upgrade is done in three parts, in this order: first grid, then rdbms, and finally the data dictionary.
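Before starting, it is worth confirming there really is enough headroom; a minimal pre-flight sketch using the paths from this environment:

# Out-of-place keeps the old and new homes on disk at the same time,
# so check free space under /u01 and /tmp on both nodes first.
df -h /u01 /tmp
du -sh /u01/app/grid/11.2.0    # rough size of the existing grid home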

--------------------------------------------------Upgrade grid-------------------------------------------------------
1. Unzip p10404530_112030_Linux-x86-64_3of7.zip
unzip p10404530_112030_Linux-x86-64_3of7.zip

2. Pre-upgrade checks
 [grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -n node1,node2 -rolling -src_crshome /u01/app/grid/11.2.0 -dest_crshome /u01/app/grid/11.2.0.3 -dest_version 11.2.0.3.0 -fixup -fixupdir /home/grid/fixup


Performing pre-checks for cluster services setup 


Checking node reachability...
Node reachability check passed from node "node1"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking CRS user consistency
CRS user consistency check successful

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.56.0"




Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "172.16.10.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "172.16.10.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...


Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "172.16.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "172.16.10.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking OCR integrity...

OCR integrity check passed

Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check failed
Check failed on nodes: 
        node2
Free disk space check passed for "node2:/u01/app/grid/11.2.0,node2:/tmp"
Free disk space check failed for "node1:/u01/app/grid/11.2.0,node1:/tmp"
Check failed on nodes: 
        node1
Check for multiple users with UID value 501 passed 
User existence check passed for "grid"
Group existence check passed for "oinstall"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
Check for Oracle patch "9413827 or 9706490" in home "/u01/app/grid/11.2.0" failed
Check failed on nodes: 
        node2,node1
There are no oracle patches required for home "/u01/app/grid/11.2.0".
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make"
Package existence check passed for "binutils"
Package existence check passed for "gcc(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "glibc(x86_64)"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "elfutils-libelf(x86_64)"
Package existence check passed for "elfutils-libelf-devel"
Package existence check passed for "glibc-common"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "glibc-headers"
Package existence check passed for "gcc-c++(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "pdksh"
Package existence check passed for "expat(x86_64)"
Check for multiple users with UID value 0 passed 
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed
Package existence check failed for "cvuqdisk"
Check failed on nodes: 
        node2,node1

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
NTP Configuration file check passed


Checking daemon liveness...
Liveness check failed for "ntpd"
Check failed on nodes: 
        node2
PRVF-5508 : NTP configuration file is present on at least one node on which NTP daemon or service is not running.
Clock synchronization check using Network Time Protocol(NTP) failed

Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: node2,node1

File "/etc/resolv.conf" is not consistent across nodes

UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations 

UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations 

Time zone consistency check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Checking Oracle Cluster Voting Disk configuration...

ASM Running check passed. ASM is running on all specified nodes

Oracle Cluster Voting Disk configuration check passed

Clusterware version consistency passed
Fixup information has been generated for following node(s):
node2,node1
Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.4.0_grid/runfixup.sh'

Pre-check for cluster services setup was unsuccessful on all the nodes.

Review the check output above and fix what you can before continuing with the grid installation (in this run the remaining swap, NTP, and DNS warnings were ignored, as the later "User ignored Prerequisites during installation" log line shows). The key finding is that patch 9413827 or 9706490 is missing from the old home; one of the two must be installed, and either will do. Here we install 9413827.

As prompted, run the fixup script /tmp/CVU_11.2.0.4.0_grid/runfixup.sh on node 1 and node 2.

[root@node1 ~]# /tmp/CVU_11.2.0.4.0_grid/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.4.0_grid/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.4.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.4.0_grid/orarun.log
Installing Package /tmp/CVU_11.2.0.4.0_grid//cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
[root@node1 ~]#

[root@node2 ~]# /tmp/CVU_11.2.0.4.0_grid/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.4.0_grid/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.4.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.4.0_grid/orarun.log
Installing Package /tmp/CVU_11.2.0.4.0_grid//cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
[root@node2 ~]#

3. Install patch 9413827
For the detailed installation procedure, see
http://blog.chinaunix.net/uid-77311-id-4130089.html
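As a rough sketch only (the zip name and the staging directory /soft/9413827 below are hypothetical, the patch README is authoritative, and some one-offs call for opatch napply plus manual rootcrs steps instead), a GI patch is typically applied per node with opatch auto run as root:

unzip p9413827_112010_Linux-x86-64.zip -d /soft
/u01/app/grid/11.2.0/OPatch/opatch auto /soft/9413827 -oh /u01/app/grid/11.2.0
# verify afterwards as the grid user:
# $ORACLE_HOME/OPatch/opatch lsinventory | grep 9413827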

4. Create the new home directories
Since this is an out-of-place upgrade, create the new installation directory on both nodes.
[root@node1 /]# mkdir -p /u01/app/grid/11.2.0.3
[root@node1 grid]# chown -R grid:oinstall  /u01/app/grid/11.2.0.3
[root@node2 /]# mkdir -p /u01/app/grid/11.2.0.3
[root@node2 soft]# chown grid:oinstall  /u01/app/grid/11.2.0.3

5. Install the grid software
[grid@node1 grid]$ unset ORACLE_HOME ORACLE_BASE ORACLE_SID
[grid@node1 grid]$ ./runInstaller

When the installer prompts for the root scripts, run rootupgrade.sh on the first node first.

While the script runs it automatically shuts down CRS on that node, but the other node stays available.
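While node 1 is being upgraded you can confirm from the other node that its resources are still up; a quick check, run as grid on node2:

/u01/app/grid/11.2.0/bin/crsctl status resource -t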

[root@node1 log]# /u01/app/grid/11.2.0.3/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/11.2.0.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0.3/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation

ASM upgrade has started on first node.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.racdb.kettle.svc' on 'node1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node1'
CRS-2677: Stop of 'ora.scan1.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'node2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node1' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'node2'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'node2' succeeded
CRS-2677: Stop of 'ora.racdb.kettle.svc' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.node1.vip' on 'node1'
CRS-2677: Stop of 'ora.node1.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.node1.vip' on 'node2'
CRS-2676: Start of 'ora.node1.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.racdb.db' on 'node2'
CRS-2676: Start of 'ora.racdb.db' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.racdb.kettle.svc' on 'node2'
CRS-2676: Start of 'ora.racdb.kettle.svc' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'node1'
CRS-2677: Stop of 'ora.CRS.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node1'
CRS-2673: Attempting to stop 'ora.REC.dg' on 'node1'
CRS-2677: Stop of 'ora.REC.dg' on 'node1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node1'
CRS-2673: Attempting to stop 'ora.eons' on 'node1'
CRS-2677: Stop of 'ora.ons' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node1'
CRS-2677: Stop of 'ora.net1.network' on 'node1' succeeded
CRS-2677: Stop of 'ora.eons' on 'node1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node1' has completed
CRS-2677: Stop of 'ora.crsd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'node1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2673: Attempting to stop 'ora.asm' on 'node1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node1' succeeded
CRS-2677: Stop of 'ora.asm' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'node1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
OLR initialization - successful
Replacing Clusterware entries in upstart
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Then run the same script on node 2:

[root@node2 /]# /u01/app/grid/11.2.0.3/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/11.2.0.3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0.3/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node2'
CRS-2673: Attempting to stop 'ora.crsd' on 'node2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node2'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.racdb.kettle.svc' on 'node2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'node2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'node2'
CRS-2677: Stop of 'ora.scan1.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'node1'
CRS-2676: Start of 'ora.scan1.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'node1'
CRS-2677: Stop of 'ora.racdb.kettle.svc' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.racdb.db' on 'node2'
CRS-2672: Attempting to start 'ora.eons' on 'node1'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'node1' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.racdb.db' on 'node2' succeeded
CRS-2674: Start of 'ora.eons' on 'node1' failed
CRS-2672: Attempting to start 'ora.racdb.db' on 'node1'
CRS-5017: The resource action "ora.racdb.db start" encountered the following error:
ORA-00845: MEMORY_TARGET not supported on this system
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/11.2.0.3/log/node1/agent/crsd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.racdb.db' on 'node1' failed
CRS-2679: Attempting to clean 'ora.racdb.db' on 'node1'
CRS-2681: Clean of 'ora.racdb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'node2'
CRS-2672: Attempting to start 'ora.eons' on 'node1'
CRS-2673: Attempting to stop 'ora.REC.dg' on 'node2'
CRS-2677: Stop of 'ora.DATA.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.REC.dg' on 'node2' succeeded
CRS-2674: Start of 'ora.eons' on 'node1' failed
CRS-2672: Attempting to start 'ora.racdb.db' on 'node1'
CRS-5017: The resource action "ora.racdb.db start" encountered the following error:
ORA-00845: MEMORY_TARGET not supported on this system
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/11.2.0.3/log/node1/agent/crsd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.racdb.db' on 'node1' failed
CRS-2679: Attempting to clean 'ora.racdb.db' on 'node1'
CRS-2681: Clean of 'ora.racdb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2672: Attempting to start 'ora.eons' on 'node1'
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2674: Start of 'ora.eons' on 'node1' failed
CRS-2672: Attempting to start 'ora.racdb.db' on 'node1'
CRS-5017: The resource action "ora.racdb.db start" encountered the following error:
ORA-00845: MEMORY_TARGET not supported on this system
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/11.2.0.3/log/node1/agent/crsd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.racdb.db' on 'node1' failed
CRS-2679: Attempting to clean 'ora.racdb.db' on 'node1'
CRS-2681: Clean of 'ora.racdb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.node2.vip' on 'node2'
CRS-2677: Stop of 'ora.node2.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.node2.vip' on 'node1'
CRS-2676: Start of 'ora.node2.vip' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.eons' on 'node1'
CRS-2674: Start of 'ora.eons' on 'node1' failed
CRS-2672: Attempting to start 'ora.racdb.db' on 'node1'
CRS-5017: The resource action "ora.racdb.db start" encountered the following error:
ORA-00845: MEMORY_TARGET not supported on this system
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/11.2.0.3/log/node1/agent/crsd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.racdb.db' on 'node1' failed
CRS-2679: Attempting to clean 'ora.racdb.db' on 'node1'
CRS-2681: Clean of 'ora.racdb.db' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node2'
CRS-2673: Attempting to stop 'ora.eons' on 'node2'
CRS-2677: Stop of 'ora.ons' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node2'
CRS-2677: Stop of 'ora.net1.network' on 'node2' succeeded
CRS-2677: Stop of 'ora.eons' on 'node2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node2' has completed
CRS-2677: Stop of 'ora.crsd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node2'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'node2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node2'
CRS-2673: Attempting to stop 'ora.evmd' on 'node2'
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node2'
CRS-2677: Stop of 'ora.cssdmonitor' on 'node2' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'node2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node2' succeeded
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node2'
CRS-2677: Stop of 'ora.cssd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'node2'
CRS-2673: Attempting to stop 'ora.gipcd' on 'node2'
CRS-2677: Stop of 'ora.gipcd' on 'node2' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'node2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
OLR initialization - successful
Replacing Clusterware entries in upstart
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Oracle Clusterware operating version was successfully set to 11.2.0.3.0

ASM upgrade has finished on last node.

Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded


Once the script has completed on both nodes, the grid upgrade is done. At this point, update the grid user's ORACLE_HOME environment variable on each node to point to the new home:

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

export EDITOR=vi
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/grid
export ORACLE_HOME=/u01/app/grid/11.2.0.3
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin:/bin:/sbin
export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
PATH=$PATH:$HOME/bin

export PATH

Finally, verify the upgrade:

[grid@node1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
[grid@node1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
[grid@node1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [node1] is [11.2.0.3.0]
Grid is now at 11.2.0.3, so the grid upgrade is complete. Next, upgrade the rdbms.


--------------------------------------------------Upgrade rdbms and data dictionary-------------------------------------------------------
1. Create the new installation directories
The rdbms is also upgraded out of place, so create new directories in the same way:
[root@node1 oracle]# mkdir -p /u01/product/oracle/11.2.0.3/db_1
[root@node1 oracle]# chown -R oracle:oinstall /u01/product/oracle/11.2.0.3/db_1
[root@node2 oracle]# mkdir -p /u01/product/oracle/11.2.0.3/db_1
[root@node2 oracle]# chown -R oracle:oinstall /u01/product/oracle/11.2.0.3/db_1

2. Unzip the upgrade packages
unzip "p10404530_112030_Linux-x86-64_1of7.zip"
unzip "p10404530_112030_Linux-x86-64_2of7.zip"

3. Log in as the oracle user on one node and run runInstaller
When the installer prompts, run the root script on node 1 and then node 2; after it completes on both, click OK and the window for upgrading the data dictionary pops up automatically.
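The root script lives in the new RDBMS home, so with the directories created above it would be run roughly like this (as root, node 1 first; the exact path is whatever the installer dialog shows):

[root@node1 ~]# /u01/product/oracle/11.2.0.3/db_1/root.sh
[root@node2 ~]# /u01/product/oracle/11.2.0.3/db_1/root.sh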
The dictionary upgrade failed here with ORA-00845 (the same error seen during rootupgrade.sh above): /dev/shm must be larger than MEMORY_TARGET, and the error is raised when it is smaller. The fix is to enlarge /dev/shm on each node:

mount -o remount,size=2G /dev/shm

and make the change persistent in /etc/fstab:
[root@node1 u01]# vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Feb 18 16:49:37 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=df2aa512-50a8-4caa-bdea-985b760067fa /                       ext4    defaults        1 1
UUID=a98f2bf9-f6aa-4cfa-a853-b3ed66eb1422 swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults,size=2g        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/sdi1               /u02                    ext4    defaults        1 2
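After the remount, confirm the new size and make sure it exceeds the instance's MEMORY_TARGET; a quick check on each node:

df -h /dev/shm                        # should now report 2.0G
# and as oracle in SQL*Plus:
# SQL> show parameter memory_target   -- must be smaller than /dev/shm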


Once both nodes have been updated, click Retry to continue.

After the upgrade finishes, verify the results.

1. Check for invalid objects
SQL> select distinct object_name FROM dba_invalid_objects;
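If this returns rows, the shipped recompile script usually clears them. Note that dba_invalid_objects is not a stock 11.2 view (it appears to exist in this environment); the stock equivalent and the recompile step look like this:

SQL> select count(*) from dba_objects where status = 'INVALID';
SQL> @?/rdbms/admin/utlrp.sql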

2. Check the version and status of each component

SQL> Select Comp_Name, Version, Status From Sys.Dba_Registry;
COMP_NAME                                VERSION         STATUS
---------------------------------------- --------------- --------
OWB                                      11.2.0.1.0      VALID
Oracle Application Express               3.2.1.00.10     VALID
Oracle Enterprise Manager                11.2.0.3.0      VALID
OLAP Catalog                             11.2.0.3.0      VALID
Spatial                                  11.2.0.3.0      VALID
Oracle Multimedia                        11.2.0.3.0      VALID
Oracle XML Database                      11.2.0.3.0      VALID
Oracle Text                              11.2.0.3.0      VALID
Oracle Expression Filter                 11.2.0.3.0      VALID
Oracle Rules Manager                     11.2.0.3.0      VALID
Oracle Workspace Manager                 11.2.0.3.0      VALID
Oracle Database Catalog Views            11.2.0.3.0      VALID
Oracle Database Packages and Types       11.2.0.3.0      VALID
JServer JAVA Virtual Machine             11.2.0.3.0      VALID
Oracle XDK                               11.2.0.3.0      VALID
Oracle Database Java Packages            11.2.0.3.0      VALID
OLAP Analytic Workspace                  11.2.0.3.0      VALID
Oracle OLAP API                          11.2.0.3.0      VALID
Oracle Real Application Clusters         11.2.0.3.0      VALID

SQL>



That completes the whole upgrade. The data dictionary upgrade is the most time-consuming part; here it took about four hours.

Note:
If you prefer not to use the DBUA GUI for the data dictionary upgrade, you can run the following scripts instead.

1. Start the database in upgrade mode
startup upgrade
2. Run the upgrade script
@/u01/product/oracle/11.2.0.3/db_1/rdbms/admin/catupgrd.sql
3. Restart the database and run the post-upgrade scripts
startup
@/u01/product/oracle/11.2.0.3/db_1/rdbms/admin/utlu112s.sql  -- display the upgrade summary
@/u01/product/oracle/11.2.0.3/db_1/rdbms/admin/catuppst.sql  -- run post-upgrade actions not covered by catupgrd.sql
@/u01/product/oracle/11.2.0.3/db_1/rdbms/admin/utlrp.sql     -- recompile invalid packages and Java code
