Solaris ZFS supplementary notes
Recovering a destroyed pool:
1. Destroy a storage pool:
[root@node01 /]#zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zfspool   118G   282K   118G   0%  ONLINE  -
[root@node01 /]#zpool destroy zfspool
[root@node01 /]#zpool list
no pools available
2. Recover the storage pool:
[root@node01 /]#zpool import -D
  pool: zfspool
    id: 12556987331220532754
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

        zfspool     ONLINE
          mirror    ONLINE
            c4t0d0  ONLINE
            c4t3d0  ONLINE
            c4t2d0  ONLINE
            c0t1d0  ONLINE
        spares
          c4t4d0
          c4t26d0
[root@node01 /]#zpool list
no pools available
The -D option only lists destroyed pools; as the empty zpool list above shows, nothing has been imported yet. Next, use the -Df options to actually recover the pool (-f forces the import):
[root@node01 /]#zpool list
no pools available
[root@node01 /]#zpool import -Df zfspool
[root@node01 /]#zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zfspool   118G   290K   118G   0%  ONLINE  -
[root@node01 /]#zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zfspool             5.00G   112G  24.5K  /zfspool
zfspool/u01         5.00G   112G    26K  /u01
zfspool/u01/zfsvol  22.5K   117G  22.5K  -
zfspool/u02           26K   112G    26K  /u02/
zfspool/u03         26.5K   112G  26.5K  /u03
zfspool/u04         24.5K   112G  24.5K  /u04
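A destroyed pool can also be imported by the numeric identifier shown in the zpool import -D listing, or given a different name on import by passing a new name as the last argument. A sketch of both forms, not captured in the original session (newpool is a hypothetical name):
[root@node01 /]#zpool import -Df 12556987331220532754
[root@node01 /]#zpool import -Df zfspool newpool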
Understanding the devices in a storage pool:
[root@node01 /]#zpool create -f zfspool c4t0d0
[root@node01 /]#zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zfspool  16.8G    88K  16.7G   0%  ONLINE  -
This creates a new storage pool, zfspool, containing just one physical disk, c4t0d0. The following shows how to add devices to expand zfspool.
Add a virtual device to zfspool:
[root@node01 /]#zpool add zfspool c4t3d0
[root@node01 /]#zpool list
NAME      SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zfspool  33.5G    91K  33.5G   0%  ONLINE  -
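Note that the add above grows the pool as a plain stripe, so the new top-level device carries no redundancy. To expand a pool while keeping redundancy, a mirrored vdev can be added instead. A sketch, assuming c4t2d0 and c0t1d0 are unused disks:
[root@node01 /]#zpool add zfspool mirror c4t2d0 c0t1d0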
Create a two-way mirrored storage pool:
[root@node01 /]#zpool create -f zfspool2 mirror c4t4d0 c4t26d0
[root@node01 /]#zpool list
NAME       SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zfspool   33.5G    91K  33.5G   0%  ONLINE  -
zfspool2  16.8G    89K  16.7G   0%  ONLINE  -
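zpool add can also attach hot spares to an existing pool; ZFS will pull a spare in automatically if a mirror member fails. A sketch, with c0t1d0 standing in for any unused disk:
[root@node01 /]#zpool add zfspool2 spare c0t1d0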
Taking devices offline and bringing them online:
[root@node01 /]#zpool status zfspool2
  pool: zfspool2
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zfspool2     ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c4t4d0   ONLINE       0     0     0
            c4t26d0  ONLINE       0     0     0

errors: No known data errors
[root@node01 /]#zpool offline zfspool2 c4t4d0
Bringing device c4t4d0 offline
c4t4d0 is now offline, as shown below:
[root@node01 /]#zpool status zfspool2
  pool: zfspool2
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        zfspool2     DEGRADED     0     0     0
          mirror     DEGRADED     0     0     0
            c4t4d0   OFFLINE      0     0     0
            c4t26d0  ONLINE       0     0     0

errors: No known data errors
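One detail worth noting: an offline setting is persistent by default and survives a reboot. The -t flag takes a device offline only temporarily, so it returns to service at the next reboot. A sketch, not part of the original session:
[root@node01 /]#zpool offline -t zfspool2 c4t4d0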
Now bring c4t4d0 back online:
[root@node01 /]#zpool online zfspool2 c4t4d0
Bringing device c4t4d0 online
[root@node01 /]#zpool status zfspool2
  pool: zfspool2
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Nov 6 10:38:07 2008
config:

        NAME         STATE     READ WRITE CKSUM
        zfspool2     ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c4t4d0   ONLINE       0     0     0
            c4t26d0  ONLINE       0     0     0

errors: No known data errors
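The scrub line above shows that ZFS resilvered the mirror automatically once the device came back. The integrity of all pool data can also be verified explicitly at any time with a scrub; a sketch:
[root@node01 /]#zpool scrub zfspool2
[root@node01 /]#zpool status zfspool2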
Clearing errors:
[root@node01 /]#zpool offline zfspool2 c4t4d0
Bringing device c4t4d0 offline
Next, clear the error counters for the offline device. Note that zpool clear only resets the read/write/checksum error counts; it does not bring an offline device back online, as the status below shows:
[root@node01 /]#zpool clear zfspool2 c4t4d0
[root@node01 /]#zpool status zfspool2
  pool: zfspool2
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: resilver completed with 0 errors on Thu Nov 6 10:38:07 2008
config:

        NAME         STATE     READ WRITE CKSUM
        zfspool2     DEGRADED     0     0     0
          mirror     DEGRADED     0     0     0
            c4t4d0   OFFLINE      0     0     0
            c4t26d0  ONLINE       0     0     0

errors: No known data errors
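When no device is named, zpool clear resets the error counters of every device in the pool. A sketch:
[root@node01 /]#zpool clear zfspool2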
Next, replace the offline device:
[root@node01 /]#zpool replace zfspool2 c4t4d0 c4t0d0
This replaces the failed device c4t4d0 with the device c4t0d0; ZFS then resilvers the mirror onto the new disk.
[root@node01 /]#zpool status zfspool2
  pool: zfspool2
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Nov 6 10:43:08 2008
config:

        NAME         STATE     READ WRITE CKSUM
        zfspool2     ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c4t0d0   ONLINE       0     0     0
            c4t26d0  ONLINE       0     0     0

errors: No known data errors
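If the failed disk is physically swapped for a new one at the same location, zpool replace can instead be run with a single device argument, and ZFS resilvers onto the replacement in place. A sketch:
[root@node01 /]#zpool replace zfspool2 c4t0d0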
Viewing the status of a storage pool:
[root@node01 /]#zpool list -o name,size,capacity zfspool2
NAME       SIZE  CAP
zfspool2  16.8G   0%
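The -o option accepts any pool property, health included. The configuration commands run against a pool are also logged and can be reviewed with zpool history. A sketch of both, not captured in the original session:
[root@node01 /]#zpool list -o name,size,capacity,health zfspool2
[root@node01 /]#zpool history zfspool2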
Viewing storage pool I/O statistics:
[root@node01 /]#zpool iostat zfspool2
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfspool2     163K  16.7G      0      0     76  2.14K
[root@node01 /]#zpool iostat zfspool2 2
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfspool2     163K  16.7G      0      0     75  2.12K
zfspool2     163K  16.7G      0      0      0      0
zfspool2     163K  16.7G      0      0      0      0
zfspool2     163K  16.7G      0      0      0      0
zfspool2     163K  16.7G      0      0      0      0
zfspool2     163K  16.7G      0      0      0      0
^C
[root@node01 /]#zpool iostat -v zfspool2
                  capacity     operations    bandwidth
pool            used  avail   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
zfspool2        163K  16.7G      0      0     73  2.07K
  mirror        163K  16.7G      0      0     73  2.07K
    c4t0d0         -      -      0      1  1.58K  13.5K
    c4t26d0        -      -      0      0  1.66K  8.67K
-------------  -----  -----  -----  -----  -----  -----
[root@node01 /]#zpool status -x
all pools are healthy
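An interval and a count can also be given together, in which case iostat prints the requested number of samples and exits on its own instead of running until Ctrl-C. A sketch printing five samples at two-second intervals:
[root@node01 /]#zpool iostat -v zfspool2 2 5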