1. Create the RAID partitions
[root@ora /]# fdisk /dev/sda
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130):
Using default value 130
Change the partition type to Linux RAID autodetect (type fd):
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): l
0 Empty 1e Hidden W95 FAT1 75 PC/IX be Solaris boot
1 FAT12 24 NEC DOS 80 Old Minix bf Solaris
2 XENIX root 39 Plan 9 81 Minix / old Lin c1 DRDOS/sec (FAT-
3 XENIX usr 3c PartitionMagic 82 Linux swap c4 DRDOS/sec (FAT-
4 FAT16 <32M 40 Venix 80286 83 Linux c6 DRDOS/sec (FAT-
5 Extended 41 PPC PReP Boot 84 OS/2 hidden C: c7 Syrinx
6 FAT16 42 SFS 85 Linux extended da Non-FS data
7 HPFS/NTFS 4d QNX4.x 86 NTFS volume set db CP/M / CTOS / .
8 AIX 4e QNX4.x 2nd part 87 NTFS volume set de Dell Utility
9 AIX bootable 4f QNX4.x 3rd part 8e Linux LVM df BootIt
a OS/2 Boot Manag 50 OnTrack DM 93 Amoeba e1 DOS access
b W95 FAT32 51 OnTrack DM6 Aux 94 Amoeba BBT e3 DOS R/O
c W95 FAT32 (LBA) 52 CP/M 9f BSD/OS e4 SpeedStor
e W95 FAT16 (LBA) 53 OnTrack DM6 Aux a0 IBM Thinkpad hi eb BeOS fs
f W95 Ext'd (LBA) 54 OnTrackDM6 a5 FreeBSD ee EFI GPT
10 OPUS 55 EZ-Drive a6 OpenBSD ef EFI (FAT-12/16/
11 Hidden FAT12 56 Golden Bow a7 NeXTSTEP f0 Linux/PA-RISC b
12 Compaq diagnost 5c Priam Edisk a8 Darwin UFS f1 SpeedStor
14 Hidden FAT16 <3 61 SpeedStor a9 NetBSD f4 SpeedStor
16 Hidden FAT16 63 GNU HURD or Sys ab Darwin boot f2 DOS secondary
17 Hidden HPFS/NTF 64 Novell Netware b7 BSDI fs fd Linux raid auto
18 AST SmartSleep 65 Novell Netware b8 BSDI swap fe LANstep
1b Hidden W95 FAT3 70 DiskSecure Mult bb Boot Wizard hid ff BBT
1c Hidden W95 FAT3
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@ora /]# fdisk /dev/sdb
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-130, default 130):
Using default value 130
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@ora /]#
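The interactive fdisk sessions above can be scripted. A minimal sketch using sfdisk, assuming the same layout as above (one primary partition spanning each disk, type fd); with DRY_RUN=1 it only prints the commands instead of rewriting the partition tables:

```shell
# Sketch: create one full-disk primary partition of type fd on each disk.
# DRY_RUN=1 prints the commands; set DRY_RUN=0 and run as root to apply.
DRY_RUN=1
spec=",,fd"                      # start,size,type: whole disk, type fd
for disk in /dev/sda /dev/sdb; do
  if [ "$DRY_RUN" = 1 ]; then
    echo "echo '$spec' | sfdisk $disk"
  else
    echo "$spec" | sfdisk "$disk"
  fi
done
```

The empty start and size fields tell sfdisk to use the defaults, matching the "default value" answers given to fdisk above.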
2. Create the RAID 1 array
[root@ora /]# mdadm -Cv /dev/md1 -l1 -n2 /dev/sda1 /dev/sdb1
mdadm: size set to 1044096K
mdadm: array /dev/md1 started.
3. Create the filesystem
[root@ora /]# mkfs.ext3 /dev/md1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
130560 inodes, 261024 blocks
13051 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16320 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information:
done
This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Disable the periodic filesystem checks (mount-count based and time based):
[root@ora /]# tune2fs -c 0 -i 0 /dev/md1
4. Check the array status
[root@ora /]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb1[1] sda1[0]
1044096 blocks [2/2] [UU]
unused devices: <none>
[root@ora /]# mdadm -Ds /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Tue Feb 3 11:24:42 2009
Raid Level : raid1
Array Size : 1044096 (1019.63 MiB 1069.15 MB)
Device Size : 1044096 (1019.63 MiB 1069.15 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Feb 3 11:38:57 2009
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : d9960b10:40065e3b:c489a6b6:cff2ef48
Events : 0.18
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
[root@ora /]#
Note: after the array is created, the disks go through an initial synchronization. The sync does not block reads or writes; when the system is busy, application processes take priority over it. With large disks the sync percentage is clearly visible in /proc/mdstat.
If the partitions live on a shared storage enclosure in a dual-host setup, be especially careful: build the RAID on one host only, and boot the second host only after the sync has finished. If both hosts come up while the sync is still running, and one of them has not completed it, both will start synchronizing the disks at the same time; the two hosts then write to the disks concurrently, corrupting the filesystem and losing data. The safe procedure is to keep only one host up while the RAID syncs; once the sync completes, the second host can be booted. It does not need to sync again and will automatically find the mirrored RAID disks.
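The rule above can be enforced in a startup script. A hypothetical helper (the function name and the optional file argument are my own) that checks /proc/mdstat for an in-progress resync or recovery:

```shell
# Hypothetical helper: succeed (exit 0) only when no resync/recovery is
# running. Pass a file argument to test against saved mdstat output;
# with no argument it reads the real /proc/mdstat.
md_sync_done() {
  mdstat_file=${1:-/proc/mdstat}
  ! grep -Eq 'resync|recovery' "$mdstat_file"
}

# On the first host: block until the initial sync finishes, after which
# it is safe to boot the second host.
# while ! md_sync_done; do sleep 10; done
```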
5. Mount the array
[root@ora /]# mount /dev/md1 /mnt
6. Test stopping and starting
[root@ora /]# umount /mnt
Stop the array:
[root@ora /]# mdadm -S /dev/md1
Use mdadm -E /dev/sdMN to find out which array a partition belongs to:
[root@ora /]# mdadm -E /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 00.90.00
UUID : d9960b10:40065e3b:c489a6b6:cff2ef48
Creation Time : Tue Feb 3 11:24:42 2009
Raid Level : raid1
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Update Time : Tue Feb 3 11:27:37 2009
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : ea63b01f - correct
Events : 0.8
Number Major Minor RaidDevice State
this 0 8 1 0 active sync /dev/sda1
0 0 8 1 0 active sync /dev/sda1
1 1 8 17 1 active sync /dev/sdb1
Start the array:
[root@ora /]# mdadm -A /dev/md1 /dev/sda1 /dev/sdb1
mdadm: /dev/md1 has been started with 2 drives.
[root@ora /]# mdadm -Ds /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Tue Feb 3 11:24:42 2009
Raid Level : raid1
Array Size : 1044096 (1019.63 MiB 1069.15 MB)
Device Size : 1044096 (1019.63 MiB 1069.15 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Feb 3 12:31:12 2009
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : d9960b10:40065e3b:c489a6b6:cff2ef48
Events : 0.20
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
[root@ora /]#
7. Save the configuration
[root@ora /]# echo DEVICE /dev/sda1 /dev/sdb1 > /etc/mdadm.conf
[root@ora /]# mdadm -Ds >> /etc/mdadm.conf
[root@ora /]# cat /etc/mdadm.conf
DEVICE /dev/sda1 /dev/sdb1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=d9960b10:40065e3b:c489a6b6:cff2ef48
devices=/dev/sda1,/dev/sdb1
[root@ora /]# mdadm -As /dev/md1
mdadm: /dev/md1 has been started with 2 drives.
The system automatically looks up the array in the RAID configuration file.
[root@ora /]# mdadm -Ds /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Tue Feb 3 11:24:42 2009
Raid Level : raid1
Array Size : 1044096 (1019.63 MiB 1069.15 MB)
Device Size : 1044096 (1019.63 MiB 1069.15 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Feb 3 11:38:06 2009
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : d9960b10:40065e3b:c489a6b6:cff2ef48
Events : 0.16
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
[root@ora /]# mount /dev/md1 /mnt
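To mount the array at boot as well, an /etc/fstab entry like the following can be added (a sketch; /mnt is just the mount point used above). The array has to be assembled from /etc/mdadm.conf before fstab is processed, which the distribution's init scripts are normally expected to handle:

```
/dev/md1   /mnt   ext3   defaults   0   2
```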
8. Simulate a disk failure
Mark sdb1 as faulty:
[root@ora /]# mdadm /dev/md1 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md1
[root@ora /]# mdadm -Ds /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Tue Feb 3 11:24:42 2009
Raid Level : raid1
Array Size : 1044096 (1019.63 MiB 1069.15 MB)
Device Size : 1044096 (1019.63 MiB 1069.15 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Feb 3 11:32:59 2009
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
UUID : d9960b10:40065e3b:c489a6b6:cff2ef48
Events : 0.11
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 0 0 - removed
2 8 17 - faulty /dev/sdb1
Remove sdb1:
[root@ora /]# mdadm /dev/md1 -r /dev/sdb1
mdadm: hot removed /dev/sdb1
[root@ora /]# mdadm -Ds /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Tue Feb 3 11:24:42 2009
Raid Level : raid1
Array Size : 1044096 (1019.63 MiB 1069.15 MB)
Device Size : 1044096 (1019.63 MiB 1069.15 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Feb 3 11:33:15 2009
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : d9960b10:40065e3b:c489a6b6:cff2ef48
Events : 0.12
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 0 0 - removed
Re-add sdb1:
[root@ora /]# mdadm /dev/md1 -a /dev/sdb1
mdadm: hot added /dev/sdb1
[root@ora /]# mdadm -Ds /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Tue Feb 3 11:24:42 2009
Raid Level : raid1
Array Size : 1044096 (1019.63 MiB 1069.15 MB)
Device Size : 1044096 (1019.63 MiB 1069.15 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Feb 3 11:33:35 2009
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 57% complete
(The disks re-synchronize here. In a dual-host setup, this sync must again complete on one host before the other host is booted.)
UUID : d9960b10:40065e3b:c489a6b6:cff2ef48
Events : 0.13
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 0 0 - removed
2 8 17 1 spare rebuilding /dev/sdb1
[root@ora /]# mdadm -Ds /dev/md1
/dev/md1:
Version : 00.90.01
Creation Time : Tue Feb 3 11:24:42 2009
Raid Level : raid1
Array Size : 1044096 (1019.63 MiB 1069.15 MB)
Device Size : 1044096 (1019.63 MiB 1069.15 MB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Tue Feb 3 11:33:40 2009
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : d9960b10:40065e3b:c489a6b6:cff2ef48
Events : 0.14
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
[root@ora /]#
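While the re-added disk rebuilds, the percentage in the Rebuild Status line can be polled from a script. A hypothetical helper (the function name is my own) that extracts it from `mdadm -D` output, reading a file argument or stdin:

```shell
# Hypothetical helper: print the rebuild percentage from `mdadm -D`
# output; print 100 when no "Rebuild Status" line is present (rebuild
# finished or never started).
rebuild_pct() {
  awk -F': *' '/Rebuild Status/ { sub(/%.*/, "", $2); print $2; found=1 }
               END { if (!found) print 100 }' "$@"
}

# e.g. loop until the rebuild completes:
# while [ "$(mdadm -D /dev/md1 | rebuild_pct)" -lt 100 ]; do sleep 10; done
```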
Other examples:
Create a RAID 0 array:
mdadm -C /dev/md0 -l0 -n2 /dev/sda1 /dev/sdb1
Stop md0:
mdadm -S /dev/md0
Start md0:
mdadm -A /dev/md0 /dev/sda1 /dev/sdb1
Create a RAID 5 array (three active devices plus one spare):
mdadm -Cv /dev/md5 -l5 -n3 /dev/sda1 /dev/sdb1 /dev/sdc1 -x1 /dev/sdd1
mdadm /dev/md5 -f /dev/sda1
mdadm /dev/md5 -r /dev/sda1
cat /proc/mdstat (the spare sdd1 is automatically pulled in to rebuild the RAID 5)
mdadm -Ds /dev/md5
Creating arrays
mdadm supports the LINEAR, RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6 and MULTIPATH array modes.
The command has the general form:
mdadm [mode] <raid-device> [options] <component-devices>
[mode] is the operating mode.
<raid-device> is the software RAID device to operate on, e.g. /dev/md1.
[options] can be given in long or short form.
<component-devices> are the disk partitions that make up the array; shell globbing can be used to abbreviate the list.
[mode] is one of seven:
Assemble: assemble the components of a previously created array into an active array.
Build: build a legacy array whose component devices have no superblocks.
Create: create a new array; each component device gets a superblock.
Manage: manage an array, for example adding or removing devices.
Misc: operate on a single device of an array, for example erasing its superblock, or stop an active array.
Follow or Monitor: monitor the state of RAID 1, 4, 5, 6 and multipath arrays.
Grow: change the capacity of an array or the number of devices in it.
Available [options]:
-A, --assemble: assemble a previously defined array.
-B, --build: build a legacy array without superblocks.
-C, --create: create a new array.
-Q, --query: examine a device and report whether it is an md device or a component of an md array.
-D, --detail: print the details of one or more md devices.
-E, --examine: print the contents of the md superblock on a device.
-F, --follow, --monitor: select Monitor mode.
-G, --grow: change the size or shape of an active array.
-h, --help: show help; placed after one of the options above, show help for that option.
--help-options: show help on options.
-V, --version: print version information.
-v, --verbose: be more verbose.
-b, --brief: print less detail; used with --detail and --examine.
-f, --force: force the operation.
-c, --config=: specify the configuration file; the default is /etc/mdadm/mdadm.conf.
-s, --scan: scan the configuration file or /proc/mdstat for missing information; the configuration file is /etc/mdadm/mdadm.conf.
Options for create or build:
-c, --chunk=: specify the chunk size in kibibytes; the default is 64.
--rounding=: specify the rounding factor for a linear array (equivalent to the chunk size).
-l, --level=: set the RAID level.
With --create: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, multipath, mp.
With --build: linear, raid0, 0, stripe.
-p, --parity=: set the RAID 5 parity algorithm: left-asymmetric, left-symmetric, right-asymmetric, right-symmetric, la, ra, ls, rs; the default is left-symmetric.
--layout=: same as --parity.
-n, --raid-devices=: specify the number of active devices in the array; this number can later only be changed with --grow.
-x, --spare-devices=: specify the number of spare devices in the initial array.
-z, --size=: the amount of space to use from each device when creating a RAID 1/4/5/6.
--assume-clean: currently only meaningful with --build.
-R, --run: when part of the array appears to belong to another array or to contain a filesystem, mdadm normally asks for confirmation; this option skips the confirmation.
-f, --force: normally mdadm refuses to create an array from a single device, and when creating a RAID 5 it starts with one device marked as a missing drive; this option overrides both behaviours.
-a, --auto{=no,yes,md,mdp,part,p}{NN}:
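As a combined example of the create options above (hypothetical device names; the command is only echoed here so it can be inspected before being run as root):

```shell
# Build the create command from the options described above.
cmd="mdadm --create /dev/md5 --level=5 --chunk=64 \
--parity=left-symmetric --raid-devices=3 --spare-devices=1 \
/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1"
echo "$cmd"
```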