Solstice Disksuite
Location of components
----------------------
command line utilities in /usr/opt/SUNWmd/sbin
driver modules in /kernel/drv and /kernel/misc
daemons in /usr/opt/SUNWmd/sbin
administrative files in /etc/opt/SUNWmd
metadevices are named /dev/md/{dsk|rdsk}/dn, with n from 0 to 127 by default
the packages are SUNWmd and SUNWmdg (the GUI)
Useful options
--------------
The -f option can be used with most commands to force the operation. This is needed when doing an operation on a mounted filesystem.
md.tab file
-----------
The /etc/opt/SUNWmd/md.tab file can be used to configure ODS automatically.
# metastat -p
This will output your configuration in md.tab format
# metainit -a
This command reads the md.tab file and sets up the configuration accordingly
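For reference, a minimal md.tab sketch (the device names below are illustrative, matching the examples in the following sections); each line uses the same syntax as the arguments of the corresponding metainit command:
{{{
# /etc/opt/SUNWmd/md.tab -- hypothetical example entries
# a three-way concatenation
d1 3 1 c0t1d0s2 1 c1t1d0s2 1 c2t1d0s2
# a three-way stripe with a 16k interlace
d2 1 3 c0t1d0s2 c1t1d0s2 c2t1d0s2 -i 16k
# a hot spare pool containing one slice
hsp001 c0t1d0s4
}}}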
Creating replicas
-----------------
# metadb -a -f c0t3d0s7
Before you can use Disksuite software, you must create the metadevice state database. The replica can exist on a dedicated disk partition or within a concat, stripe, or logging metadevice.
There MUST be at least 3 replicas or the DiskSuite software cannot operate correctly.
See the documentation for more info on replicas.
metadb modifies /etc/system and /etc/opt/SUNWmd/mddb.cf.
You can also modify the md.tab file to create a configuration.
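A minimal sketch of an initial replica setup spread across two disks (slice names are illustrative); the -c option sets the number of replicas placed on each slice:
# metadb -a -f -c 3 c0t3d0s7 c1t3d0s7
creates three replicas on each of the two slices, six in total
# metadb
verifies that the replicas are in place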
Concatenation
-------------
# metainit d1 3 1 c0t1d0s2 1 c1t1d0s2 1 c2t1d0s2
where d1 is the metadevice, 3 is the number of components to concatenate and 1 is the number of components per device
Simple Stripe
-------------
# metainit d2 1 3 c0t1d0s2 c1t1d0s2 c2t1d0s2 -i 16k
where d2 is the metadevice, 1 is the number of components to concatenate and 3 is the number of slices to stripe across. -i 16k indicates the amount of data to write to each disk in the stripe before moving to the next one.
Concat/Stripe
-------------
# metainit d3 3 3 c0t1d0s2 c1t1d0s2 c2t1d0s2 -i 16k 3 c3t1d0s2 c4t1d0s2 c5t1d0s2 -i 16k 3 c6t1d0s2 c7t1d0s2 c8t1d0s2 -i 16k
Here there are three stripes concatenated together. d3 is the metadevice.
The first 3 is the number of components to concatenate.
The second and subsequent 3's indicate the number of slices to
stripe across. The options there are as in the simple stripe.
Extending a metadevice
----------------------
# metattach d1 c3t1d0s2
extends a metadevice by concatenating a slice to the end. It does not add a filesystem.
# growfs /dev/md/rdsk/d1
If the metadevice is not mounted, the above command extends the filesystem to include the added section. You cannot shrink this filesystem later.
# growfs -M /export/home /dev/md/rdsk/d1
If the metadevice is mounted, the above command will extend the filesystem to include the concatenated section. Again, you cannot shrink the filesystem later.
Removing a metadevice
---------------------
# metaclear d3
d3 is the metadevice.
# metaclear -a -f
clears all metadevices. Don't do this unless you want to blow away your entire configuration.
The devices cannot be open for use, i.e. mounted.
Viewing your configuration and status
-------------------------------------
# metastat
shows the configuration and status of all metadevices
# metastat d3
will tell the configuration and status of just metadevice d3
# metadb
tells the location and status of locally configured replicas
Hot Spare pools
---------------
# metainit hsp001
sets up a pool called hsp001. It contains no disks yet.
# metahs -a hsp001 c0t1d0s4
adds a slice to the hot spare pool.
NOTE: it is advisable to add disks/slices to the pool in order of smallest to largest.
This way the smallest hotspare capable of replacing a disk will kick in.
# metahs -a all c1t1d0s4
adds a slice to all pools
# metaparam -h hsp001 d1
makes a hot spare pool available to the metadevice d1 {submirror or RAID5}
# metahs -e c1t1d0s4
re-enables a hot spare that was previously unavailable
# metahs -r hsp001 c1t1d0s4 c2t1d0s4
replaces the first disk listed with the second
# metahs -d all c1t1d0s4
removes a disk from all hot spare pools
# metahs -d hsp001 c1t1d0s4
removes a slice from hsp001
# metahs -d hsp001
removes a hot spare pool
# metahs -i
# metastat
Either command tells you the status of the hot spares.
Mirrors
-------
# metainit d0 -m d1
makes a one-way mirror. d0 is the device to mount, but d1 is the only one associated with an actual device.
A "one-way mirror" is not really a mirror yet. There's only one place where the data is actually stored, namely d1.
# metattach d0 d2
attaches d2 to the d0 mirror. Now there are 2 places where the data are stored, d1 and d2. But you mount the metadevice d0.
# metadetach d0 d1
detaches d1 from the d0 mirror
# metaoffline d0 d2
# metaonline d0 d2
suspends/resumes use of d2 device on d0 mirror
# metareplace d0 c1t0d0s2 c4t1d0s2
replaces first disk listed with second on the d0 mirror
# metareplace -e d0 c1t1d0s2
re-enables a disk that has been errored.
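Putting these pieces together, a sketch of building a two-way mirror from scratch (slice names are illustrative):
# metainit d1 1 1 c1t0d0s0
# metainit d2 1 1 c2t0d0s0
creates the two submirrors, each a one-slice concat
# metainit d0 -m d1
creates a one-way mirror on d1
# metattach d0 d2
attaches d2; DiskSuite resyncs it from d1
# newfs /dev/md/rdsk/d0
# mount /dev/md/dsk/d0 /mnt
the filesystem lives on the mirror device d0, never on the submirrors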
Mirroring root
--------------
You must take a few extra steps to mirror the root partition
# metainit -f d1 1 1 c0t3d0s0
creates a submirror from the existing root slice (-f is needed because the slice is mounted). Create a second submirror on another disk, make a one-way mirror with metainit d0 -m d1, run metaroot d0 to update /etc/vfstab and /etc/system, reboot, and then attach the second submirror. A full step-by-step root-mirror procedure appears later in this document.
Shared disksets
---------------
You can do almost everything the same way, except specify -s <setname>.
Metadevices in a shared diskset are named /dev/md/<setname>/{dsk|rdsk}/d<n>, and hot spare pools within a shared diskset are named <setname>/hsp<nnn>.
Disksets are only supported on SSA disks, and disks are repartitioned when put into a diskset unless slice 2 is zeroed out and slice 7 has cylinders 0-4 or 0-5 allocated to it for the diskset metadb.
# metaset -s <setname> -a -h <hostname> ...
adds hosts to a set
# metaset -s <setname> -a <drivename> ...
adds drives to a set. Notice we do not specify a slice.
# metaset -s <setname> -d -h <hostname>
# metaset -s <setname> -d <drivename>
removes hosts and drives from a set
# metaset -s <setname> -t
takes control of a diskset. The -f option will force control but will panic the other machine, unless it has already released the set.
# metaset -s <setname> -r
releases control of a diskset
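A hypothetical end-to-end sketch, assuming two hosts named hostA and hostB and a diskset named blue:
# metaset -s blue -a -h hostA hostB
# metaset -s blue -a c2t0d0 c2t1d0
# metaset -s blue -t
# metainit -s blue d0 1 1 c2t0d0s0
metadevices created inside the set show up as /dev/md/blue/{dsk|rdsk}/d0
# metaset -s blue -r
releases the set again when you are done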
Troubleshooting info to gather
------------------------------
output from following...
# metastat
# metadb -i
# prtvtoc (run against the relevant devices)
# mount
plus the contents of /var/adm/messages
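As a convenience, the commands above can be wrapped in a small script (the output path and the c*t*d*s2 glob are illustrative):
{{{
#!/bin/sh
# gather DiskSuite troubleshooting output into one file
OUT=/var/tmp/disksuite-info.$$
{
  echo "=== metastat ===" ; metastat
  echo "=== metadb -i ===" ; metadb -i
  echo "=== mount ===" ; mount
  echo "=== prtvtoc ==="
  for d in /dev/rdsk/c*t*d*s2; do
    echo "--- $d ---"; prtvtoc "$d"
  done
  echo "=== /var/adm/messages (tail) ==="
  tail -200 /var/adm/messages
} > "$OUT" 2>&1
echo "Troubleshooting output saved to $OUT"
}}}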
Preparing a slice for the metadb:
# halt
OK boot -s
# swap -l
# swap -d /dev/dsk/slice
# swap -l
# format
# prtvtoc /dev/rdsk/slice
# newfs /dev/rdsk/<new-slice>    (the newly created slice)
# swap -a /dev/dsk/slice    (re-activate the swap slice)
# exit
Disk layout:
0: c1t0d0
1: c1t1d0
2: c1t2d0
3: c1t3d0
4: c1t4d0
5: c1t5d0    Note: disks 0 and 1 are mirrored; disks 2, 3, 4 and 5 make up the RAID5.
Partition layout of the system disk (c1t0d0) and its mirror disk (c1t1d0):
boot disk slice (submirror)   mirror   mirror disk slice (submirror)   mount point    size
c1t0d0s0 (d10)                d0       c1t1d0s0 (d20)                  /              1024M
c1t0d0s1 (d11)                d1       c1t1d0s1 (d21)                  swap           8192M
c1t0d0s2                      -        c1t1d0s2                        overlap        69999M
c1t0d0s3 (d13)                d3       c1t1d0s3 (d23)                  /usr           4096M
c1t0d0s4 (d14)                d4       c1t1d0s4 (d24)                  /opt           10240M
c1t0d0s5 (d15)                d5       c1t1d0s5 (d25)                  /var           2048M
c1t0d0s6 (d16)                d6       c1t1d0s6 (d26)                  /export/home   free
c1t0d0s7                      -        c1t1d0s7                        /metaDB        50M
After the OS installation is complete, install the DiskSuite 4.2.1 software from the Solaris 8 Software 2 of 2 CD (run /cdrom/cdrom0/Solaris_8/EA/installer &); the default installation options are fine.
Log in as root and run the following command:
#prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
fmthard: New volume table of contents now in place
The command above copies the boot disk's partition table to the second disk (c1t1d0) so the two disks match.
{{{
# prtvtoc /dev/rdsk/c0t0d0s2 > boot-vtoc.tab
# fmthard -s boot-vtoc.tab /dev/rdsk/c0t1d0s2
These commands also copy the boot disk's partition table to the second disk (an equivalent two-step method).
}}}
#umount /metaDB
#rm -r /metaDB
#vi /etc/vfstab
Comment out or delete the following line, as shown:
#/dev/dsk/c1t0d0s7 /dev/rdsk/c1t0d0s7 /metaDB ufs 1 yes -
Part 1: RAID1
Mirror each partition in turn:
1 First create the replicas; these are used internally by DiskSuite.
#metadb -a -f -c 3 c1t0d0s7 c1t1d0s7
#metadb
2 Creating a mirror from swap
#metainit -f d11 1 1 c1t0d0s1
#metainit d21 1 1 c1t1d0s1
#metainit d1 -m d11
#vi /etc/vfstab
/dev/dsk/c1t0d0s1 - - swap - no -
should be changed to:
/dev/md/dsk/d1 - - swap - no -
#reboot
#metattach d1 d21
3 Creating a mirror from /usr
#metainit -f d13 1 1 c1t0d0s3
#metainit d23 1 1 c1t1d0s3
#metainit d3 -m d13
#vi /etc/vfstab
/dev/dsk/c1t0d0s3 /dev/rdsk/c1t0d0s3 /usr ufs 1 yes -
should be changed to:
/dev/md/dsk/d3 /dev/md/rdsk/d3 /usr ufs 1 yes -
#reboot
#metattach d3 d23
4 Creating a mirror from /opt
#metainit -f d14 1 1 c1t0d0s4
#metainit d24 1 1 c1t1d0s4
#metainit d4 -m d14
#vi /etc/vfstab
/dev/dsk/c1t0d0s4 /dev/rdsk/c1t0d0s4 /opt ufs 1 yes -
should be changed to:
/dev/md/dsk/d4 /dev/md/rdsk/d4 /opt ufs 1 yes -
#reboot
#metattach d4 d24
5 Creating a mirror from /var
#metainit -f d15 1 1 c1t0d0s5
#metainit d25 1 1 c1t1d0s5
#metainit d5 -m d15
#vi /etc/vfstab
/dev/dsk/c1t0d0s5 /dev/rdsk/c1t0d0s5 /var ufs 1 yes -
should be changed to:
/dev/md/dsk/d5 /dev/md/rdsk/d5 /var ufs 1 yes -
#reboot
#metattach d5 d25
6 Creating a mirror from /export/home
.. (same steps as above, using d16 and d26 as submirrors and d6 as the mirror for c1t0d0s6 / c1t1d0s6)
7 Creating a mirror from /
#metainit -f d10 1 1 c1t0d0s0
#metainit d20 1 1 c1t1d0s0
#metainit d0 -m d10
#metaroot d0
#lockfs -fa
#reboot
#metattach d0 d20
#metastat    (check the mirror resync progress)
After the mirrors have finished syncing, the following steps are still needed:
Update the EEPROM:
ok devalias    (list the current boot device aliases)
ok nvalias rootdisk /pci@8,600000/SUNW,plc@4/fp@0,0/disk@0,0
ok nvalias mirrdisk /pci@8,600000/SUNW,plc@4/fp@0,0/disk@1,0
ok setenv boot-device rootdisk mirrdisk
The corresponding eeprom settings:
boot-device=rootdisk mirrdisk
use-nvramrc?=true
nvramrc=nvalias mirrdisk /pci@8,600000/SUNW,plc@4/fp@0,0/disk@1,0
nvalias rootdisk /pci@8,600000/SUNW,plc@4/fp@0,0/disk@0,0
#ls -l /dev/dsk/c1t0d0s0
the link target corresponds to /pci@8,600000/SUNW,plc@4/fp@0,0/disk@0,0
#ls -l /dev/dsk/c1t1d0s0
the link target corresponds to /pci@8,600000/SUNW,plc@4/fp@0,0/disk@1,0
Test:
ok boot rootdisk
ok boot mirrdisk    (both should boot normally)
Maintenance:
Make the mirror disk bootable so the system can fail over to it:
#installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t1d0s0
Then, at the ok prompt, change the boot settings so the mirror disk is listed as an alternate boot device:
ok setenv boot-device rootdisk mirrdisk
ok reset-all
If the primary disk c1t0d0 fails, halt the system, replace the disk, and then do the following:
ok boot mirrdisk -s
#metadb -d c1t0d0s7
#prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
#metadb -a -f -c 3 c1t0d0s7
#halt
ok boot mirrdisk
#metareplace -e d0 c1t0d0s0
d0:device c1t0d0s0 is enabled
#metareplace -e d1 c1t0d0s1
#metareplace -e d3 c1t0d0s3
#metareplace -e d4 c1t0d0s4
#metareplace -e d5 c1t0d0s5
#metareplace -e d6 c1t0d0s6
(a -f option is also available)
#metastat    (check the resync progress)
#metastat d0    (and likewise for each of the other metadevices)
If a metadevice is still syncing, a percentage complete is shown.
Okay: normal state, the RAID is usable.
Maintenance: a single disk has failed and needs attention.
Maintenance / Last Erred: more than one disk has failed, and the data may no longer be valid.
#metadb -i    (query the status of the state database replicas)
The DiskSuite configuration file is /etc/lvm/md.tab (you can record the RAID setup by hand in this file).
To check the file without applying it: # metainit -n -a
To create every RAID defined in the file: # metainit -a
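A sketch of what a hand-written md.tab fragment for the configuration in this document could look like (each line mirrors the arguments of the corresponding metainit command):
{{{
# root mirror: submirrors d10 and d20, mirror d0
d10 1 1 c1t0d0s0
d20 1 1 c1t1d0s0
# one-way mirror; attach d20 afterwards with metattach d0 d20
d0 -m d10
# RAID5 across the four data disks
d55 -r c1t2d0s2 c1t3d0s2 c1t4d0s2 c1t5d0s2
# hot spare pool
hsp001 c1t4d0s6
}}}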
Recovering when vfstab is broken:
#fsck /dev/md/rdsk/d0
#mount -o ro /dev/md/dsk/d0 /
#metaroot d0
#reboot
Part 2: RAID5
#metainit d55 -r c1t2d0s2 c1t3d0s2 c1t4d0s2 c1t5d0s2
#metastat d55    (check the RAID5 initialization progress)
When the initialization has finished:
#reboot
#newfs /dev/md/rdsk/d55
#mkdir /raid5
#vi /etc/vfstab
Add the following line:
/dev/md/dsk/d55 /dev/md/rdsk/d55 /raid5 ufs 2 yes -
#reboot
Maintenance:
Recovering from a single failed disk in the RAID5:
Example: c1t4d0 has failed; after replacing it with a new disk, do the following:
ok boot -r
#metareplace -e d55 c1t4d0s2
#metastat d55
To tear down the RAID5:
#umount /raid5
#metaclear d55
#vi /etc/vfstab
Remove the following line:
/dev/md/dsk/d55 /dev/md/rdsk/d55 /raid5 ufs 2 yes -
Part 3: Creating hot spares
Create a hot spare pool:
#metainit hsp001 c1t4d0s6    (hsp001 followed by the device name)
Associate (bind) the hot spare pool with the RAID devices:
#metaparam -h hsp001 d10
#metaparam -h hsp001 d20
Detach (unbind) the hot spare pool:
#metaparam -h none d10
#metaparam -h none d20
Grow the hot spare pool (add a disk to it):
#metahs -a hsp001 c1t5d0s6
#metastat
Part 4: RAID 0+1
An example:
#metainit d1 1 3 c0t1d0s0 c0t2d0s0 c0t3d0s0
#metainit d2 1 3 c0t4d0s0 c0t5d0s0 c0t6d0s0
#metainit d0 -m d1
#metattach d0 d2
Part 5: RAID 1+0
An example:
#metainit d1 3 1 c0t1d0s0 c0t2d0s0 c0t3d0s0
#metainit d2 3 1 c0t4d0s0 c0t5d0s0 c0t6d0s0
#metainit d0 -m d1
#metattach d0 d2