2015-06-12 11:10:20

Applies to:

Solaris SPARC Operating System - Version 10 3/05 and later
Solaris x64/x86 Operating System - Version 10 3/05 and later
All Platforms


How to back up and restore a Solaris ZFS root pool

The following procedure can be used to back up and restore a ZFS root pool (rpool) using the tools provided in Solaris 10 and later. Readers are advised to become comfortable with this procedure and attempt a restore before deploying it in production environments. The official procedures for Solaris 10, Solaris 11 and Solaris 11.1 can be found in the ZFS administration guide (see the note below for the correct links).



  1. Advisory patches
    At the time of writing, the reader is advised to install Kernel Feature Patch 139555-08 (SPARC) or 139556-08 (x86) to address the following issue:

    CR #6794452
    Synopsis: zfs receive cannot restore rpool

    Without this patch it may not be possible to restore a recursively sent root pool, i.e. streams created with the -R switch to zfs send.
  2. Recovery Media
    You should use at least the same release of Solaris 10 that you are trying to restore from, ie: if the root pool backup streams were from Solaris 10 10/08 then you must boot from at least a Solaris 10 10/08 media in order to perform the restore.  This is because ZFS pools and filesystems have version numbers that can be upgraded by updating to a more recent release of Solaris 10 or by installing the kernel feature patch associated with that release, eg:

    Solaris 10 10/08 (SPARC) ships with kernel feature patch 137137-09 which allows zpool version 10 functionality.

    In order to understand a ZFS pool at version 10, the system must boot from a kernel at 137137-09 or later.
    As such, booting from a Solaris 10 5/08 DVD would not provide a kernel recent enough to restore a version 10 pool.


In this procedure, it is assumed the root pool will be called 'rpool' as is the given standard during installation. 
It also assumes a simple set of filesystems, as created by a default installation: rpool, rpool/ROOT, rpool/ROOT/s10u7, rpool/export and rpool/export/home, plus the rpool/dump and rpool/swap volumes.

This may need to be adjusted depending upon the filesystems that were created as part of the Solaris installation.  Furthermore, it does not take into account any boot environments or cloned filesystems created by Live Upgrade.  Each filesystem is sent individually, where possible, so that a single filesystem can be restored from its own stream file if necessary.

Backing up a Solaris ZFS root pool

Take a copy of the properties that are set in the rpool and all filesystems, plus volumes that are associated with it:

# zpool get all rpool
# zfs get all rpool
# zfs get all rpool/ROOT
# zfs get all rpool/ROOT/s10u7
# zfs get all rpool/export
# zfs get all rpool/export/home
# zfs get all rpool/dump
# zfs get all rpool/swap


(repeat for all ZFS filesystems)

Save this data for reference in case it is required later on.
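The per-dataset capture above can be scripted with a small POSIX shell loop. This is a sketch, not part of the original procedure: the dataset list is an assumption based on the example layout, and the echo makes it a dry run (remove it on a live system to actually save the properties):

```shell
# Dry-run sketch: save the properties of each dataset to a file under
# /backup, flattening '/' to '.' in the file name.  The dataset list is
# an assumption based on the example layout; adjust it to match
# 'zfs list -r rpool' on the system being backed up.
for fs in rpool rpool/ROOT rpool/ROOT/s10u7 rpool/export rpool/export/home rpool/dump rpool/swap
do
    out="/backup/props.$(echo "$fs" | tr '/' '.')"
    echo "zfs get all $fs > $out"
done
```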
Now snapshot the rpool with a suitable snapshot name:

# zfs snapshot -r rpool@backup

This will create a recursive snapshot of all descendants, including rpool/export, rpool/export/home as well as rpool/dump and rpool/swap (volumes). 


Note, once the command:

# zfs snapshot -r rpool@backup

is executed and the system is actively using swap, it is possible that rpool might run out of space before the user is able to delete rpool/swap@backup.

In some situations, one may see:

# zfs snapshot -r rpool@today
cannot create snapshot 'rpool/swap@today': out of space
no snapshots were created


swap and dump are not required to be included in the backup, so the corresponding snapshots should be destroyed:

# zfs destroy rpool/swap@backup
# zfs destroy rpool/dump@backup

Then for each filesystem, send the data to a backup file/location.  Make sure that there is sufficient capacity in your backup location as "zfs send" does not understand when the destination becomes full (eg: a multi-volume tape).  In this example /backup is an NFS mounted filesystem from a suitably capacious server:

# zfs send -v rpool@backup > /backup/rpool.dump
# zfs send -v rpool/ROOT@backup > /backup/rpool.ROOT.dump
# zfs send -vR rpool/ROOT/s10u7@backup > /backup/rpool.ROOT.s10u7.dump
# zfs send -v rpool/export@backup > /backup/rpool.export.dump
# zfs send -v rpool/export/home@backup > /backup/rpool.export.home.dump

These dump files can then be archived onto non-volatile storage for safe keeping, eg: magnetic tape.
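Before committing the streams to tape, it may be worth confirming that every expected stream file exists and is non-empty, since a truncated or missing stream file renders a restore impossible. A sketch, with file names assumed from the example commands above:

```shell
# Sketch: sanity-check that each stream file exists and is non-empty.
# File names are assumed from the example zfs send commands above.
for name in rpool rpool.ROOT rpool.ROOT.s10u7 rpool.export rpool.export.home
do
    f="/backup/${name}.dump"
    if [ -s "$f" ]; then
        echo "OK: $f"
    else
        echo "MISSING OR EMPTY: $f" >&2
    fi
done
```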

Restoring a Solaris ZFS root pool

If it is necessary to rebuild/restore a root pool, locate the known good copies of the zfs streams that were created in the Backing Up section of this document and make sure these are readily available.  In order to restore the root pool, first boot from a Solaris 10 DVD or network (jumpstart) into single user mode.  Depending upon whether the booted root filesystem is writable, it may be necessary to tell ZFS to use a temporary location for the mountpoint.   


A root pool at the time of writing must:

  • Live on a disk with an SMI disk label
  • Be composed of a single slice, not an entire disk
    (USE the cXtYdZs0 syntax and NOT cXtYdZ which would use the entire disk and EFI label)
  • Ensure that the pool is created with the same version as the original rpool.
    You can find the pool version from the output of 'zpool upgrade' (or 'zpool get version rpool').
    There is also a matrix for ZFS pool and filesystem versions and the Oracle Solaris releases that support them:
    ZFS Filesystem and Zpool Version Matrix [ID 1359758.1].

    Use -o version= in the zpool create command.

In this example, disk c3t1d0s0 contains an SMI label, where slice 0 is using the entire capacity of the disk.  Change the controller and target numbers accordingly.   

# zpool create -fo altroot=/var/tmp/rpool -o cachefile=/etc/zfs/zpool.cache -m legacy rpool c3t1d0s0


Ensure the dump files are available for reading.  If these exist on tape, then a possible location would be /dev/rmt/0n, however in this example the dump files are made available by mounting up the backup filesystem from an NFS server.

Once the dump files are available, restore the filesystems that make up the root pool.  If Kernel Feature Patch 139555/139556-08 is installed you can use the flags '-Fdu' to ensure that the filesystems are not mounted.  It is important to restore these in the correct hierarchical order: 

# zfs receive -Fd rpool < /backup/rpool.dump
# zfs receive -Fd rpool < /backup/rpool.ROOT.dump
# zfs receive -Fd rpool < /backup/rpool.ROOT.s10u7.dump
# zfs receive -Fd rpool < /backup/rpool.export.dump
# zfs receive -Fd rpool < /backup/rpool.export.home.dump
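The five receives above can equally be driven by a loop, as long as the list preserves the required parent-before-child order. A dry-run sketch (file names assumed from the backup section; remove the echo to perform the actual restore):

```shell
# Dry-run sketch: restore the streams in hierarchical (parent-first)
# order.  'echo' only prints the commands; remove it on a live system.
for name in rpool rpool.ROOT rpool.ROOT.s10u7 rpool.export rpool.export.home
do
    echo "zfs receive -Fd rpool < /backup/${name}.dump"
done
```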


This will restore the filesystems, but remember an rpool will also have a dump and swap device, made up from zvols.  See the ZFS administration guide for the exact steps; the volumes are created with zfs create -V, for example (sizes here are illustrative only):

# zfs create -V 2G rpool/dump
# zfs create -V 2G -b 8k rpool/swap

(adjust the sizes of the dump and swap volumes according to the configuration of the system being restored; the swap volume block size given to -b should match the system page size, i.e. 8k on SPARC and 4k on x86).
Now, to make the disk ZFS-bootable, a boot block must be installed on SPARC, or the correct GRUB on x86, so pick one of the following according to the platform being restored:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c3t1d0s0    (SPARC)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0                   (x86)
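If the restore is scripted, the right command can be chosen from the processor type reported by uname -p. The bootcmd helper below is hypothetical, and the device path is simply the example disk from above:

```shell
# Hypothetical helper: print the boot-block installation command for a
# given processor type ('sparc' gets installboot; anything else is
# treated as x86 and gets installgrub).  Device path is the example disk.
bootcmd() {
    case "$1" in
        sparc) echo "installboot -F zfs /usr/platform/\$(uname -i)/lib/fs/zfs/bootblk /dev/rdsk/c3t1d0s0" ;;
        *)     echo "installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0" ;;
    esac
}
bootcmd "$(uname -p)"
```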


Once the boot block has been installed, it is then necessary to set the bootable dataset/filesystem within this rpool.
To do this, run the following zpool command (this makes rpool/ROOT/s10u7 the bootable dataset):

# zpool set bootfs=rpool/ROOT/s10u7 rpool  


Set the failmode property of the rpool to continue.  This differs from "data" ZFS pools, which use wait by default, so it is important to set this correctly:

# zpool set failmode=continue rpool


Check whether canmount is set to noauto on the boot environment:

# zfs get canmount rpool/ROOT/s10u7

If it is not, set it:

# zfs set canmount=noauto rpool/ROOT/s10u7


Temporarily disable the canmount property for the following filesystems, to prevent these from mounting when it comes to setting the mountpoint property:

# zfs set canmount=noauto rpool
# zfs set canmount=noauto rpool/export


Set the mountpoint properties for the various filesystems:

# zfs set mountpoint=/ rpool/ROOT/s10u7
# zfs set mountpoint=/rpool rpool


Set the dump device to the restored dump volume (if swap is not active, also check that /etc/vfstab references /dev/zvol/dsk/rpool/swap):

# dumpadm -d /dev/zvol/dsk/rpool/dump



Please see the official documentation about how to backup or restore a ZFS rpool in:

     Solaris 10: Recovering the ZFS Root Pool or Root Pool Snapshots

     Solaris 11: Archiving Snapshots and Root Pool Recovery

     Solaris 11.1: Archiving Snapshots and Root Pool Recovery  

   Note: Solaris 11.1 introduces GRUB 2 on x86, so the boot loader is installed differently: the bootadm command must be used to install GRUB 2. See the Solaris 11.1 documentation for details.
