Solstice DiskSuite[TM] (SDS) is Sun's volume management software, which can be used to configure software RAID devices. This document describes what to back up in order to save the SDS configuration and, in the event of a disaster, how to use that information to restore it.
NOTE: This information has been based on SDS 4.2.1. In other versions of the software, the information may differ.
There are a number of different aspects to SDS's configuration :
CONFIGURATION INFORMATION            | WHERE THIS INFORMATION IS RECORDED
location of metadb                   | /etc/system and /etc/lvm/mddb.cf
local metadevice configuration       | metadb and /etc/lvm/md.cf
metaset's name, hosts, and disks     | metadb
metaset's metadevice configuration   | metaset's disks' private slice
When a system boots, DiskSuite first needs to determine the location of the metadb. This information is saved on the root disk in /etc/system and /etc/lvm/mddb.cf, e.g. :
/etc/system:
set md:mddb_bootlist1="dad:7:16 dad:7:1050 dad:7:2084"
/etc/lvm/mddb.cf:
#metadevice database location file do not hand edit
#driver minor_t daddr_t checksum
dad 7 16 -278
dad 7 1050 -1312
dad 7 2084 -2346
Once the metadb have been located, they can be accessed to determine the configuration of any local metadevice.
The metadb also contain basic information on any metasets configured. This includes the metaset's name, the names of hosts that can access that metaset, and the disks used within that metaset.
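To see where this information lives on a running system, the standard status commands can be used. The sketch below shows the commands only; their output varies from system to system :
# metadb -i      (list the metadb replica locations and their status flags)
# metaset        (list any metasets, their hosts and their disks)
# metastat -p    (print the local metadevice configuration in md.tab format)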
When a disk is added to a metaset, SDS will initialise the disk's VTOC (partition information). The following SunSolve document has more information on this :
Document ID:14861 Title:Solstice DiskSuite[TM] - disks added to diskset get repartitioned automatically
The 'private' slice 7 contains the configuration information for the metadevices created within that metaset.
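As an illustration, the repartitioning can be inspected with prtvtoc on one of the metaset's disks (the disk name below is an example only) :
# prtvtoc /dev/rdsk/c2t16d0s2    (slice 7 is the small 'private' slice at the start of the disk; slice 0 holds the data)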
Different commands change different parts of the configuration :
metadb : will change /etc/system and /etc/lvm/mddb.cf. When adding a new metadb, it will also copy all the metadb information into the new metadb.
metainit, metareplace, metaclear : will change the configuration information for a metadevice in all of the metadb, and also update /etc/lvm/md.cf accordingly.
For metasets, the corresponding commands take the "-s <setname>" option :
metaset -s <setname>
metastat -s <setname>
metainit, metareplace, metaclear -s <setname>
To back up the SDS configuration, save the following (see the combined example after this list) :
files : /etc/system, /etc/lvm/mddb.cf, /etc/lvm/md.cf
metadb raw devices : use dd to back up one of the slices listed in 'metadb'
example:
dd if=/dev/rdsk/c0t0d0s7 of=/opt/sun/sdsbackup/metadb.c0t0d0s7.dd bs=2048k
command outputs : "metaset", for the list of disks in each metaset, and "metastat -p" and "metastat -p -s <setname>", for the metadevice configuration
Recovering /etc/system or /etc/lvm/mddb.cf
If there is a problem with the metadb information in /etc/system, add the line back into /etc/system based on /etc/lvm/mddb.cf.
For example:
/etc/lvm/mddb.cf:
#metadevice database location file do not hand edit
#driver minor_t daddr_t checksum
dad 7 16 -278
dad 7 1050 -1312
dad 7 2084 -2346
Then, you can add the line into /etc/system:
set md:mddb_bootlist1="dad:7:16 dad:7:1050 dad:7:2084"
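As a sketch only (assuming the standard mddb.cf layout shown above: driver, minor, block, checksum), the bootlist line can be regenerated from /etc/lvm/mddb.cf rather than typed by hand; verify the result against a known-good backup before editing /etc/system :
# awk '!/^#/ && NF >= 3 { l = l s $1 ":" $2 ":" $3; s = " " } END { printf("set md:mddb_bootlist1=\"%s\"\n", l) }' /etc/lvm/mddb.cf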
If there is a problem with /etc/lvm/mddb.cf or both files, they will need to be restored from backup.
Recovering metadb
In the event that all copies of the metadb have a problem, restore one of the metadb slices from the dd backup.
Example:
dd if=/opt/sun/sdsbackup/metadb.c0t0d0s7.dd of=/dev/rdsk/c0t0d0s7 bs=2048k
Alternatively, initialise an empty metadb, and recreate all the local metadevices manually.
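A minimal sketch of that alternative, assuming the metadb slice c0t0d0s7 and the backup files used elsewhere in this document; edit the saved configuration first so that mirrors are defined with a single submirror and RAID5 devices carry the -k option (see the Important notes below) :
# metadb -a -f -c 3 c0t0d0s7                    (create fresh, empty metadb replicas)
# cp /opt/sun/sdsbackup/md.cf /etc/lvm/md.tab   (reuse the saved configuration as an md.tab)
# metainit -n -a                                (dry run: check what would be created)
# metainit -a                                   (recreate the metadevices listed in md.tab)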
Recovering metadevice configuration
When the metadevice configuration is lost, the recovery steps must be carried out with great care, since mistakes can cause data loss. All recovery steps should be based on an original backup of the /etc/lvm/md.cf file or on saved "metastat -p" output.
Example:
# more md.cf (from backup)
# metadevice configuration file
# do not hand edit
d20 -m d21 d22 1
d21 1 1 c0t0d0s1
d22 1 1 c0t1d0s1
d81 -r c2t16d0s2 c2t17d0s2 c2t18d0s2 c2t19d0s2 c2t20d0s2 -i 32b
Steps to recover d20:
# metainit d21 1 1 c0t0d0s1
# metainit d22 1 1 c0t1d0s1
# metainit d20 -m d21
Confirm that the data on d20 is correct; if there is no problem, do the following:
# metattach d20 d22
If the d20 data is not OK, try the other half of the mirror:
# metaclear d20
# metainit d20 -m d22
Test the d20 data again; once the data is confirmed OK:
# metattach d20 d21
Steps to recover d81 (RAID 5):
# metainit d81 -r -k c2t16d0s2 c2t17d0s2 c2t18d0s2 c2t19d0s2 c2t20d0s2 -i 32b
Example for a metaset:
# metastat -s testset -p (from backup)
d10 -m d11 d12 1
d11 1 1 c0t0d0s0
d12 1 1 c0t1d0s0
Recover d10:
# metainit -s testset d11 1 1 c0t0d0s0
# metainit -s testset d12 1 1 c0t1d0s0
# metainit -s testset d10 -m d11
Confirm d10's data, then follow the same steps as for d20 (attach the second submirror, or try the other submirror if the data is bad).
Important :
RAID5 metadevices should be recreated using the metainit "-k" option so that it does not re-initialize the data.
Mirror metadevices should be recreated with only one submirror; once the data is confirmed to be OK, the second submirror can be metattach'ed.
Recovering metasets
If only one disk in a metaset fails, delete the disk from the metaset and re-add it. This will re-initialise the 'private' slice and copy the metaset's configuration information onto it. Then use metareplace to recover any affected metadevices.
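A minimal sketch of the single-disk case, using the testset example above and a hypothetical failed disk c0t1d0; the device and metadevice names are illustrative only :
# metaset -s testset -d c0t1d0              (remove the failed disk from the metaset)
# metaset -s testset -a c0t1d0              (re-add the repaired or replaced disk; its private slice is re-initialised)
# metareplace -s testset -e d10 c0t1d0s0    (re-enable the affected component of the mirror d10)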
If the whole metaset is lost, re-initialise the disks into an empty metaset and recreate all the metaset's metadevices manually. There is no equivalent of /etc/lvm/md.cf for metasets, so this requires that the output of "metastat -s <setname> -p" has been saved beforehand.
Recovering soft partitions
Because soft partitions simply divide an existing metadevice into smaller pieces and do not initialise any data, it is sufficient to back up the commands used to create the soft partitions. After recovering the underlying (non-soft-partition) metadevices, run the same commands again to recreate the soft partitions.
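A minimal sketch, assuming a hypothetical soft partition d100 of 1 Gbyte that was originally created on top of the d20 mirror from the earlier example; the names and size are illustrative only :
# metainit d100 -p d20 1g        (re-run the original creation command after d20 has been recovered)
# metastat -p d100               (compare the reported extents with the saved "metastat -p" backup)
Recreating the soft partitions in the same order and with the same sizes as originally should place them over the same extents as before; if the reported offsets differ from the saved output, clear the soft partition and investigate before using it.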