
Category: System Administration

2011-10-22 23:45:32

A few days ago, while testing DRD, I ran into quite a few errors. Today, working overtime, I upgraded the DRD version and verified the procedure end to end; it worked, so here is a quick write-up.

• DRD overview:

DRD (Dynamic Root Disk) clones the running system image, giving you a point-in-time backup of the operating system. If the current OS fails, DRD lets you quickly restore a working environment from the clone.
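As a rough sketch of the whole cycle walked through below (the disk path is the one from this test, not a universal value; the helper functions are invented here and only print the commands for review):

```shell
# Hedged sketch of the DRD clone-then-activate cycle. TARGET is the spare
# disk used in this test; substitute your own unused disk.
TARGET=/dev/dsk/c2t0d0

# Helpers that only PRINT the key commands, so the sequence is explicit
# and reviewable before anything is run for real on an HP-UX host.
clone_cmd()    { printf 'drd clone -v -x overwrite=true -t %s' "$1"; }
activate_cmd() { printf 'drd activate -x alternate_bootdisk=%s' "$1"; }

clone_cmd "$TARGET"; echo
activate_cmd "$TARGET"; echo
# Step 3 (run manually on the real system): cd / && shutdown -ry 0
```

The final reboot must be issued from a directory on the root volume; the transcript near the end of this post shows `shutdown` refusing to run from /tmp.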

• Test environment:

HP-UX 11.23 (11i v2), Itanium, rx6600

The system already has a root mirror; the mirror disks are c2t1d0 and c2t3d0.

The disk used for DRD is c2t0d0.

• DRD version before the upgrade:

1. Check the DRD version and disk information

-bash-4.2# swlist -l product |grep -i drd

  DRD                   A.3.0.0.1027   Dynamic Root Disk

 

-bash-4.2# ioscan -fnCdisk          

Class        I  H/W Path        Driver         S/W State   H/W Type

Description

============================================================================

disk   5  0/0/2/1.0.16.0.0  sdisk    CLAIMED     DEVICE  TEAC    DVD-ROM DW-224EV

                               /dev/dsk/c0t0d0   /dev/rdsk/c0t0d0

disk   2  0/4/1/0.0.0.0.0   sdisk  CLAIMED  DEVICE  HP EG0146FARTR   (newly inserted disk)

                              /dev/dsk/c2t0d0     /dev/rdsk/c2t0d0

                               /dev/dsk/c2t0d0s1   /dev/rdsk/c2t0d0s1

                               /dev/dsk/c2t0d0s2   /dev/rdsk/c2t0d0s2

                               /dev/dsk/c2t0d0s3   /dev/rdsk/c2t0d0s3

 

disk    3  0/4/1/0.0.0.1.0   sdisk   CLAIMED DEVICE   HP EG0146FARTR (primary mirror disk)

                               /dev/dsk/c2t1d0     /dev/rdsk/c2t1d0 

                               /dev/dsk/c2t1d0s1   /dev/rdsk/c2t1d0s1

                               /dev/dsk/c2t1d0s2   /dev/rdsk/c2t1d0s2

                               /dev/dsk/c2t1d0s3   /dev/rdsk/c2t1d0s3

disk   7  0/4/1/0.0.0.3.0   sdisk    CLAIMED    DEVICE   HP DG0146FAMWL  (secondary mirror disk)

                               /dev/dsk/c2t3d0     /dev/rdsk/c2t3d0 

                               /dev/dsk/c2t3d0s1   /dev/rdsk/c2t3d0s1

                               /dev/dsk/c2t3d0s2   /dev/rdsk/c2t3d0s2

                               /dev/dsk/c2t3d0s3   /dev/rdsk/c2t3d0s3

2. Run a DRD clone, which fails

 

-bash-4.2# drd clone -v -x overwrite=true -t  /dev/dsk/c2t0d0

 

=======  10/19/11 11:48:36 EAT  BEGIN Clone System Image (user=root)

         (jobid=rx6600a)

 

       * Reading Current System Information

       * Selecting System Image To Clone

       * Selecting Target Disk

       * The disk "/dev/dsk/c2t0d0" contains data which will be overwritten.

       * Selecting Volume Manager For New System Image

       * Analyzing For System Image Cloning

       * Creating New File Systems

       * Copying File Systems To New System Image

       * Making New System Image Bootable

ERROR:   Making the file system bootable on clone fails.

         - Mounting the file system fails.

         - Validating the DRD registry fails.

         - The device special file "/dev/dsk/c2t2d0" cannot be identified in

           the system configuration information.

       * Making New System Image Bootable failed with 1 error.

       * Unmounting New System Image Clone

       * System image: "sysimage_001" on disk "/dev/dsk/c2t0d0"

ERROR:   Unmounting the file system fails.

         - Unmounting the clone image fails.

         - The "umount" command returned  "1". The "sync" command returned

           "0". The error messages produced are the following: "umount:

cannot

           unmount /dev/drd00/lvol3 : Invalid argument

           umount: return error 1.

           "

       * Unmounting New System Image Clone failed with 1 error.

       * Cleaning up after clone errors.

 

=======  10/19/11 12:19:10 EAT  END Clone System Image failed with 2 errors.

         (user=root)  (jobid=rx6600a)

 

-bash-4.2#

 

3. The clone failed; suspecting that device special files were never created for the newly recognized disk, run insf -e

-bash-4.2# insf -e   

insf: Installing special files for asio0 instance 2 address 0/0/1/2

insf: Installing special files for sdisk instance 5 address 0/0/2/1.0.16.0.0

insf: Installing special files for sctl instance 3 address 0/0/2/1.0.16.7.0

insf: Installing special files for sdisk instance 2 address 0/4/1/0.0.0.0.0

insf: Installing special files for sdisk instance 3 address 0/4/1/0.0.0.1.0

insf: Installing special files for sdisk instance 7 address 0/4/1/0.0.0.3.0

insf: Installing special files for ipmi instance 0 address 250/0

……

 

4. Run the DRD clone again

-bash-4.2#  drd clone -v -x overwrite=true -t  /dev/dsk/c2t0d0

 

=======  10/20/11 09:09:00 EAT  BEGIN Clone System Image (user=root)

         (jobid=rx6600a)

       * Reading Current System Information

       * Selecting System Image To Clone

       * Selecting Target Disk

ERROR:   Selection of the target disk fails.

         - Selecting the target disk fails.

         - Validation of the disk "/dev/dsk/c2t0d0" fails with the following

           error(s):

         - The disk "/dev/dsk/c2t0d0" is in use by a volume group on the

           system.

       * Selecting Target Disk failed with 1 error.

=======  10/20/11 09:09:25 EAT  END Clone System Image failed with 1 error.

         (user=root)  (jobid=rx6600a)

 

5. The previous attempt failed, but it left behind a new DRD-related VG, which must be removed.

-bash-4.2# vgchange -a n /dev/drd00    # deactivate this VG

Volume group "/dev/drd00" has been successfully changed.

-bash-4.2# vgexport /dev/drd00

-bash-4.2# pvcreate -f /dev/rdsk/c2t0d0

Physical volume "/dev/rdsk/c2t0d0" has been successfully created.

-bash-4.2#  drd clone -v -x overwrite=true -t  /dev/dsk/c2t0d0

 

=======  10/20/11 10:49:29 EAT  BEGIN Clone System Image (user=root)

         (jobid=rx6600a)

 

       * Reading Current System Information

       * Selecting System Image To Clone

       * Selecting Target Disk

       * The disk "/dev/dsk/c2t0d0" contains data which will be overwritten.

       * Selecting Volume Manager For New System Image

       * Analyzing For System Image Cloning

       * Creating New File Systems

       * Copying File Systems To New System Image

       * Making New System Image Bootable

ERROR:   Making the file system bootable on clone fails.

         - Mounting the file system fails.

         - Validating the DRD registry fails.

         - The device special file "/dev/dsk/c2t2d0" cannot be identified in

           the system configuration information.

       * Making New System Image Bootable failed with 1 error.

       * Unmounting New System Image Clone

       * System image: "sysimage_001" on disk "/dev/dsk/c2t0d0"

ERROR:   Unmounting the file system fails.

         - Unmounting the clone image fails.

         - The "umount" command returned  "1". The "sync" command returned

           "0". The error messages produced are the following: "umount:

cannot

           unmount /dev/drd00/lvol3 : Invalid argument

           umount: return error 1.

           "

       * Unmounting New System Image Clone failed with 1 error.

       * Cleaning up after clone errors.

=======  10/20/11 11:19:54 EAT  END Clone System Image failed with 2 errors.

         (user=root)  (jobid=rx6600a)

 

I suspected the DRD version was too old, so I fetched the latest version from the HP website (about 30 MB; registering an account is required, but the download is free).

• After upgrading DRD:

1. Upgrade DRD

# swinstall -s /tmp/DRD_1123_WEB1107.depot -x autoreboot=true \*

….

-bash-4.2# swlist -l product |grep -i drd

  DRD                   B.1123.A.3.9.432 Dynamic Root Disk

-bash-4.2# pvcreate -f /dev/rdsk/c2t0d0                     

Physical volume "/dev/rdsk/c2t0d0" has been successfully created.

 

-bash-4.2# drd clone -v -x overwrite=true -t  /dev/dsk/c2t0d0

=======  10/22/11 13:48:42 EAT  BEGIN Clone System Image (user=root)

         (jobid=rx6600a)

 

       * Reading Current System Information

       * Selecting System Image To Clone

       * Selecting Target Disk

NOTE:    There may be LVM 2 volumes configured that will not be recognized.

ERROR:   Selection of the target disk fails.

         - Selecting the target disk fails.

         - Validation of the disk "/dev/dsk/c2t0d0" fails with the following

           error(s):

         - The disk "/dev/dsk/c2t0d0" is in use on the system.

       * Selecting Target Disk failed with 1 error.

       * DRD operation failed, contents of /var/opt/drd/tmp copied to

         /var/opt/drd/save.

 

=======  10/22/11 13:48:51 EAT  END Clone System Image failed with 1 error.

         (user=root)  (jobid=rx6600a)

 

 

 

 

-bash-4.2# vgdisplay 

--- Volume groups ---

VG Name                     /dev/vg00

VG Write Access             read/write    

VG Status                   available                

Max LV                      255   

Cur LV                      8     

Open LV                     8     

Max PV                      16    

Cur PV                      2     

Act PV                      2     

Max PE per PV               4356        

VGDA                        4  

PE Size (Mbytes)            32             

Total PE                    8692   

Alloc PE                    2208   

Free PE                     6484   

Total PVG                   0       

Total Spare PVs             0             

Total Spare PVs in use      0                    

 

VG Name                     /dev/drd00

VG Write Access             read/write    

VG Status                   available                

Max LV                      255   

Cur LV                      8     

Open LV                     8     

Max PV                      16    

Cur PV                      1     

Act PV                      1     

Max PE per PV               4356        

VGDA                        2  

PE Size (Mbytes)            32             

Total PE                    4346   

Alloc PE                    1104   

Free PE                     3242   

Total PVG                   0       

Total Spare PVs             0             

Total Spare PVs in use      0                    

 

Deactivate the DRD VG:

-bash-4.2# vgchange -a n /dev/drd00

Volume group "/dev/drd00" has been successfully changed.

===

PS: If you have not deactivated this VG first, you will see the following error:

-bash-4.2# vgexport /dev/drd00

vgexport: Volume group "/dev/drd00" is still active.

vgexport: Couldn't export volume group "/dev/drd00".

===
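The cleanup order matters: deactivate, then export, then re-initialize the physical volume. A small hypothetical helper (the function name is invented; the VG and disk names are the ones from this test) that prints the plan in the required order:

```shell
# Hypothetical helper: emit the cleanup commands for a leftover DRD volume
# group in the order required -- deactivate first (or vgexport refuses),
# then export, then re-init the disk. It only prints the commands.
drd_vg_cleanup_plan() {
  vg=$1
  disk=$2
  printf 'vgchange -a n %s\n' "$vg"   # deactivate, or vgexport will fail
  printf 'vgexport %s\n' "$vg"        # drop the VG configuration
  printf 'pvcreate -f %s\n' "$disk"   # re-init the disk for the next clone
}

drd_vg_cleanup_plan /dev/drd00 /dev/rdsk/c2t0d0
```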

 

Remove the DRD VG information above:

-bash-4.2# vgexport /dev/drd00 

-bash-4.2# drd clone -v -x overwrite=true -t  /dev/dsk/c2t0d0

 

=======  10/22/11 15:19:55 EAT  BEGIN Clone System Image (user=root)

         (jobid=rx6600a)

 

       * Reading Current System Information

       * Selecting System Image To Clone

       * Selecting Target Disk

NOTE:    There may be LVM 2 volumes configured that will not be recognized.

       * The disk "/dev/dsk/c2t0d0" contains data which will be overwritten.

       * Selecting Volume Manager For New System Image

       * Analyzing For System Image Cloning

       * Creating New File Systems

       * Copying File Systems To New System Image

       * Making New System Image Bootable

       * Unmounting New System Image Clone

       * System image: "sysimage_001" on disk "/dev/dsk/c2t0d0"

 

=======  10/22/11 15:52:04 EAT  END Clone System Image succeeded. (user=root)

         (jobid=rx6600a)

 

-bash-4.2# drd status

=======  10/24/11 11:41:24 EAT  BEGIN Displaying DRD Clone Image Information
         (user=root)  (jobid=rx6600a)


       * Clone Disk:               /dev/dsk/c2t0d0
       * Clone EFI Partition:      AUTO file present, Boot loader present
       * Clone Rehost Status:      SYSINFO.TXT not present
       * Clone Creation Date:      10/22/11 15:20:07 EAT
       * Last Sync Date:           None
       * Clone Mirror Disk:        None
       * Mirror EFI Partition:     None
       * Original Disk:            /dev/dsk/c2t1d0
       * Original EFI Partition:   AUTO file present, Boot loader present
       * Original Rehost Status:   SYSINFO.TXT not present
       * Booted Disk:              Clone Disk (/dev/dsk/c2t0d0)
       * Activated Disk:           Clone Disk (/dev/dsk/c2t0d0)

=======  10/24/11 11:41:39 EAT  END Displaying DRD Clone Image Information
         succeeded. (user=root)  (jobid=rx6600a)

-bash-4.2#
-bash-4.2#

 

• Set the boot path for the next boot:

Check the current boot path:

-bash-4.2# lvlnboot -v

Boot Definitions for Volume Group /dev/vg00:

Physical Volumes belonging in Root Volume Group:

        /dev/dsk/c2t1d0s2 (0/4/1/0.0.0.1.0) -- Boot Disk

        /dev/dsk/c2t3d0s2 (0/4/1/0.0.0.3.0) -- Boot Disk

Boot: lvol1     on:     /dev/dsk/c2t1d0s2

                        /dev/dsk/c2t3d0s2

Root: lvol3     on:     /dev/dsk/c2t1d0s2

                        /dev/dsk/c2t3d0s2

Swap: lvol2     on:     /dev/dsk/c2t1d0s2

                        /dev/dsk/c2t3d0s2

Dump: lvol2     on:     /dev/dsk/c2t3d0s2, 0

 

-bash-4.2# setboot

Primary bootpath : 0/4/1/0.0.0.1.0

HA Alternate bootpath : 0/4/1/0.0.0.3.0

Alternate bootpath : 0/4/2/0

Autoboot is ON (enabled)

 

-bash-4.2# drd activate -x alternate_bootdisk=/dev/dsk/c2t0d0

 

=======  10/22/11 15:59:10 EAT  BEGIN Activate Inactive System Image

         (user=root)  (jobid=rx6600a)

 

       * Checking for Valid Inactive System Image

       * Reading Current System Information

       * Locating Inactive System Image

       * Determining Bootpath Status

       * Primary bootpath : /dev/dsk/c2t1d0 before activate.

       * Primary bootpath : /dev/dsk/c2t0d0 after activate.

       * Alternate bootpath : unknown before activate.

       * Alternate bootpath : /dev/dsk/c2t0d0 after activate.

       * HA Alternate bootpath : /dev/dsk/c2t3d0 before activate.

       * HA Alternate bootpath : /dev/dsk/c2t3d0 after activate.

       * Activating Inactive System Image

 

=======  10/22/11 15:59:50 EAT  END Activate Inactive System Image succeeded.

         (user=root)  (jobid=rx6600a)

 

-bash-4.2#

-bash-4.2# setboot                                          

Primary bootpath : 0/4/1/0.0.0.0.0

HA Alternate bootpath : 0/4/1/0.0.0.3.0

Alternate bootpath : 0/4/1/0.0.0.0.0   (newly set by drd activate)

 

Autoboot is ON (enabled)

-bash-4.2#

• Booting from the DRD disk succeeds:

-bash-4.2# shutdown -ry 0

Shutdown cannot be run from a mounted file system -- exiting shutdown.

Change directories to the root volume ("/" will work) and try again.

-bash-4.2# pwd

/tmp

-bash-4.2# cd /

-bash-4.2# shutdown -ry 0

 

SHUTDOWN PROGRAM

10/22/11 16:02:34 EAT

 

Broadcast Message from root (console) Sat Oct 22 16:02:34...

SYSTEM BEING BROUGHT DOWN NOW ! ! !

 

 

/sbin/auto_parms: DHCP access is disabled (see /etc/auto_parms.log)

……    

 

Loading.: HP-UX Alternate Boot: 0/4/1/0.0.0.0.0

Starting: HP-UX Alternate Boot: 0/4/1/0.0.0.0.0

 

(C) Copyright 1999-2006,2009 Hewlett-Packard Development Company, L.P.

All rights reserved

 

HP-UX Boot Loader for IPF  --  Revision 2.030

 

Press Any Key to interrupt Autoboot

\EFI\HPUX\AUTO ==> boot vmunix -lq

Seconds left till autoboot -   0

AUTOBOOTING...> System Memory = 16353 MB

loading section 0

............................................................. (complete)

loading section 1

................ (complete)

loading symbol table

loading System Directory (boot.sys) to MFS

....

loading MFSFILES directory (bootfs) to MFS

...............

Launching /stand/vmunix

SIZE: Text:31143K + Data:7903K + BSS:7897K = Total:46944K

 

Console is on a Serial Device

Booting kernel..    

 

• DRD command reference:

drd activate -x alternate_bootdisk=/dev/disk/disk1

 

drd deactivate -x alternate_bootdisk=/dev/dsk/c1t1d0

drd deactivate -x alternate_bootdisk=/dev/disk/disk1

 

drd runcmd swinstall -s patchsvr:/var/opt/patches PHCO_0001

 

To set the inactive system image as the primary boot disk:

 

           drd activate

 

drd activate -x reboot=true
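Putting the commands above together, a hedged sketch of patching the inactive clone offline (the helper name is invented; the depot location and patch ID are the illustrative examples from the man page below, not values to copy blindly):

```shell
# Sketch: patch the inactive system image without touching the running OS.
# This helper only prints the three-step plan; run the printed commands on
# a real HP-UX host.
patch_clone_plan() {
  depot=$1
  patch=$2
  printf 'drd runcmd swinstall -p -s %s %s\n' "$depot" "$patch"  # preview
  printf 'drd runcmd swinstall -s %s %s\n' "$depot" "$patch"     # install
  printf 'drd activate -x reboot=true\n'                         # boot clone
}

patch_clone_plan patchsvr:/var/opt/patches PHCO_0001
```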

 

-bash-4.2# man drd

 

 drd(1M)                               drd(1M)

 

 NAME

      drd  - manage an inactive system image

 

 SYNOPSIS

      drd  [-?] [-x -?]

 

      drd clone [-?] [-p] [-v] -t target_device_file [-x option=value]

           [-x -?] [-X option_file]

 

      drd mount  [-?] [-p] [-v] [-x option=value] [-x -?] [-X option_file]

 

      drd umount  [-?] [-p] [-v] [-x option=value] [-x -?] [-X option_file]

 

      drd runcmd  [-?] [-v] [-x option=value] [-x -?] [-X option_file] cmd

          

 

      drd activate  [-?] [-p] [-v] [-x option=value] [-x -?] [-X option_file]

 

      drd deactivate  [-?] [-p] [-v] [-x option=value] [-x -?] [-X

           option_file]


 DESCRIPTION

      The drd command provides a command line interface to Dynamic Root Disk

      (DRD) tools.  The drd command has five major modes of operation:

 

           clone

                Clones a booted system to an inactive system image.  The drd

                clone mode copies the LVM volume group or VxVM disk group

                containing the volume on which the root file system ("/") is

                mounted.

 

           mount

                Mounts all file systems in an inactive system image.  The

                mount point of the root file system is either

                /var/opt/drd/mnts/sysimage_000 or

                /var/opt/drd/mnts/sysimage_001.  If the inactive system

                image was created by the most recent drd clone command, the

                mount point of the root file system is

                /var/opt/drd/mnts/sysimage_001.  If the inactive system

                image was the booted system when the most recent drd clone

                command was run, the mount point of the root file system is

                /var/opt/drd/mnts/sysimage_000.

 

           umount

                Unmounts all file systems in the inactive system image

                previously mounted by a drd mount command.

 

           runcmd

                Runs a command on an inactive system image.  Only a select

                group of commands may be run by the runcmd mode.  These are

                commands which have been verified to have no effect on the

                booted system when executed by drd runcmd.  Such commands

                are referred to as DRD-Safe.  The commands kctune,

                swinstall, swjob, swlist, swmodify, swremove, swverify, and

                view are currently certified DRD-Safe.  An attempt to

                execute any other command will result in a runcmd error.  In

                addition, not every software package may safely be processed

                by sw* commands.  The DRD-Safe SW-DIST commands are aware of

                running in a DRD session and will reject any unsafe

                packages.  For more information about DRD-Safe packages, see

                drd-runcmd(1M).


           activate

                Sets the inactive system image to be the primary boot disk

                the next time the system is booted.

 

           deactivate

                Sets the active system image (i.e. the booted image) to be

                the primary boot disk the next time the system is booted.

 

    Options

      drd recognizes the following options:

 

           -?   Displays the usage message.

 

           -x -?

                Displays the list of possible -x (extended) options.

 

    Return Values

      drd major modes return the following values:

 

           0    Success.

           1    Error.

           2    Warning.


    EXAMPLES

      To display drd usage information:

 

           drd -?

 

      To display drd extended option usage:

 

           drd -x -?

 

      To display usage for the drd clone command:

 

           drd clone -?

 

      To clone the root LVM volume group or VxVM disk group  to a physical

      device:

 


 

           For 11iv2:

 

           drd clone -t /dev/dsk/c1t1d0

 

           For 11iv3:

 

           drd clone -t /dev/disk/disk1

 

      To preview the clone of the root LVM volume group or VxVM disk group

      to a physical device:

 

      For 11iv2:

 

           drd clone -p -t /dev/dsk/c1t15d0

 

      For 11iv3:

 

           drd clone -p -t /dev/disk/disk7

 

      To display all drd clone extended options:

 

           drd clone -x -?

 

      To mount the inactive system image:

 

           drd mount

 

      If the system image mounted was created by the most recent drd clone

      command, the root file system will be mounted at

      /var/opt/drd/mnts/sysimage_001.

 

      If the system image was booted when the most recent drd clone command

      was run, the root file system will be mounted at

      /var/opt/drd/mnts/sysimage_000.

 

      To display all drd mount extended options:

 

           drd mount -x -?

 

      To unmount the inactive system image:

 

           drd umount

 

      To display all drd umount extended options:

 

           drd umount -x -?

 

      To see the software that is installed on the inactive system image

      (without any need to mount the image first):


 

           drd runcmd swlist

 

      See drd-runcmd(1m) for more information about the runcmd mode of drd.

 

      To install PHCO_0001 from the depot /var/opt/patches, located on the

      system patchsvr:

 

           drd runcmd swinstall -s patchsvr:/var/opt/patches PHCO_0001

 

      To run a preview installation of PHCO_0001 from the depot

      /var/opt/patches, located on the system patchsvr:

 

           drd runcmd swinstall -p -s patchsvr:/var/opt/patches PHCO_0001

 

      To verify all software on the inactive system image:

 

           drd runcmd swverify

 

      To remove PHKL_9999 from the inactive system image:

 

           drd runcmd swremove PHKL_9999


      To view the swagent log on the inactive system image:

 

           drd runcmd view /var/adm/sw/swagent.log

 

      To display all drd runcmd extended options:

 

           drd runcmd -x -?

 

      To set the inactive system image as the primary boot disk:

 

           drd activate

 

      To set the inactive system image as the primary boot disk and a

      different disk as the alternate boot disk:

 

      For 11iv2:

 

           drd activate -x alternate_bootdisk=/dev/dsk/c1t1d0

 

      For 11iv3:

 

           drd activate -x alternate_bootdisk=/dev/disk/disk1


      To boot to the inactive system image immediately:


           drd activate -x reboot=true


      To display all drd activate extended options:


           drd activate -x -?


      To restore the active (booted) system image as the primary boot disk:


           drd deactivate


      To restore the active (booted) system image as the primary boot disk

      and set a different disk as the alternate boot disk:


      For 11iv2:


           drd deactivate -x alternate_bootdisk=/dev/dsk/c1t1d0


      For 11iv3:


           drd deactivate -x alternate_bootdisk=/dev/disk/disk1


      To display all drd deactivate extended options:


           drd deactivate -x -?


 AUTHOR

      drd was developed by Hewlett-Packard Development Company, L.P.


 FILES

      /var/opt/drd/drd.log               Log file.


 SEE ALSO

      drd-clone(1M), drd-mount(1M), drd-umount(1M), drd-runcmd(1M), drd-

      activate(1M), drd-deactivate(1M)


      Dynamic Root Disk Administrator's Guide, available at

     

