2005-10-22 00:26:28
1 Preparation
1.1 Back up all user data
1.2 Back up the Volume Manager configuration
# vxdg list > vxdg_list
# vxdisk -s list > vxdisk_s_list
# vxprint -ht > vxprint_ht
# vxprint -g dg_name -hmvps > dg_name_dump
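The backup commands above can be wrapped in one small script that dumps every disk group, rather than repeating the `dg_name` command by hand. This is a sketch, not part of the original procedure: the backup directory name is an assumption, and the `vxdg list` parsing assumes the usual one-header-line output format.

```shell
# Sketch: collect the VxVM configuration into a dated backup directory.
# BACKUP_DIR is an illustrative assumption; adjust to your site layout.
BACKUP_DIR=/var/tmp/vxvm_backup_$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"
vxdg list      > "$BACKUP_DIR/vxdg_list"
vxdisk -s list > "$BACKUP_DIR/vxdisk_s_list"
vxprint -ht    > "$BACKUP_DIR/vxprint_ht"
# The first line of `vxdg list` is a header; field 1 of each
# remaining line is the disk group name.
for dg in $(vxdg list | awk 'NR > 1 {print $1}'); do
    vxprint -g "$dg" -hmvps > "$BACKUP_DIR/${dg}_dump"
done
```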
1.3 Log in to a node as superuser
1.4 Check cluster status
# scstat
1.5 Disable each resource in cluster
# scrgadm -pv | grep "Res enabled"
# scswitch -n -j resource
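The per-resource disable in step 1.5 can be looped instead of run once per resource. The parsing below assumes `scrgadm -pv` prints "Res enabled" lines in the form `(rg_name:res_name) Res enabled: True`; verify this on your cluster before relying on it.

```shell
# Sketch: disable every currently enabled resource.
# Assumes "Res enabled" lines look like:
#   (rg_name:res_name) Res enabled:    True
scrgadm -pv | grep "Res enabled" | grep -i true |
sed -n 's/^(\([^:]*\):\([^)]*\)).*/\2/p' | sort -u |
while read res; do
    scswitch -n -j "$res"
done
```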
1.6 Switch each resource group offline
# scswitch -F -g resource_group
1.7 Move each resource group into the unmanaged state
# scswitch -u -g resource_group
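Steps 1.6 and 1.7 act on the same set of resource groups, so they can be combined in one loop. The `grep "Res Group name"` / `awk` parsing of `scrgadm -pv` output is an assumption; listing the group names by hand is just as valid.

```shell
# Sketch: for each resource group, switch it offline (step 1.6),
# then move it to the unmanaged state (step 1.7).
for rg in $(scrgadm -pv | grep "Res Group name" | awk '{print $NF}' | sort -u); do
    scswitch -F -g "$rg"
    scswitch -u -g "$rg"
done
```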
1.8 Check the results of steps 1.5, 1.6, and 1.7
# scstat -g
1.9 Shutdown cluster
# scshutdown
2 Upgrading Volume Manager on an encapsulated boot disk
2.1 Boot the node to single-user non-cluster mode
# reboot -- -xs
2.2 Mount Volume Manager 3.5 CDROM
2.3 Check whether the upgrade can proceed without problems
# /cdrom/volume_manager/scripts/upgrade_start -check
2.4 Begin upgrade
# /cdrom/volume_manager/scripts/upgrade_start
2.5 Reboot node
# reboot -- -xs
2.6 Mount /opt manually if it is on its own partition
2.7 Remove VxVM 3.2 patches and packages
# patchrm vxvm_patch_id
# pkgrm VRTSvmsa VRTSvmdoc VRTSvmdev VRTSvmman VRTSvxvm VRTSlic
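A quick way to discover what to put in place of the `vxvm_patch_id` placeholder in step 2.7 is to list the installed patches that touch the VRTSvxvm package. The `showrev -p` line format (`Patch: <id> ... Packages: ...`) is assumed here; double-check each ID against the VxVM 3.2 patch documentation before running patchrm.

```shell
# Sketch: print the installed patch IDs that apply to VRTSvxvm.
# Field 2 of a "Patch: 110435-08 ... Packages: VRTSvxvm" line
# is the patch ID.
showrev -p | grep VRTSvxvm | awk '{print $2}' | sort -u
```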
2.8 Reboot node
# reboot -- -xs
2.9 Add VxVM 3.5 license package
# cd /cdrom/volume_manager/pkgs
# pkgadd -d . VRTSvlic
2.10 Add VxVM 3.5 package
# pkgadd -d . VRTSvxvm
Note: If warnings are displayed that include the string /etc/vx, ignore them and continue.
2.11 Complete the upgrade
# /cdrom/volume_manager/scripts/upgrade_finish
2.12 Reboot and reconfigure node
# reboot -- -xr
2.13 Install additional packages
# cd /cdrom/volume_manager/pkgs
# pkgadd -d . VRTSvmdoc VRTSvmman
# pkgadd -a ../scripts/VRTSobadm -d . VRTSob VRTSobgui
# pkgadd -d . VRTSfspro VRTSvmpro
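Before moving on to the cluster recovery in section 3, it is worth confirming that every package added in steps 2.9 through 2.13 is registered. A minimal check, relying only on `pkginfo` exiting non-zero for a missing package:

```shell
# Sketch: report any VxVM 3.5 package that failed to register.
for pkg in VRTSvlic VRTSvxvm VRTSvmdoc VRTSvmman \
           VRTSob VRTSobgui VRTSfspro VRTSvmpro; do
    pkginfo "$pkg" > /dev/null 2>&1 || echo "missing: $pkg"
done
```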
3 Recover cluster resource and resource group
3.1 Check that all disk groups are OK
# vxdg list
# vxprint -ht
3.2 Check resource group state
# scstat
# scrgadm -pv
3.3 Move each resource group into the managed state
# scswitch -o -g resource_group
3.4 Switch each resource group online
# scswitch -Z -g resource_group
3.5 Enable each resource in cluster
# scswitch -e -j resource
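Steps 3.3 through 3.5 reverse the shutdown sequence from section 1 and can be scripted the same way. Both parsing patterns below are assumptions about `scrgadm -pv` output (`... Res Group name: <rg>` and `(rg:res) Res enabled: ...` lines) and should be verified on your cluster first.

```shell
# Sketch: manage and bring online each resource group (steps 3.3, 3.4),
# then re-enable every resource (step 3.5).
for rg in $(scrgadm -pv | grep "Res Group name" | awk '{print $NF}' | sort -u); do
    scswitch -o -g "$rg"
    scswitch -Z -g "$rg"
done
for res in $(scrgadm -pv | grep "Res enabled" | sed -n 's/^(\([^:]*\):\([^)]*\)).*/\2/p' | sort -u); do
    scswitch -e -j "$res"
done
```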
3.6 Check resource group state again
# scstat
# scrgadm -pv