Category: LINUX
2008-04-04 20:51:17
The current situation looks like this:
[root@server1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
4.1G 2.0G 2.0G 51% /
/dev/sda1 190M 13M 168M 7% /boot
tmpfs 151M 0 151M 0% /dev/shm
[root@server1 ~]# fdisk -l
Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0008b885
Device Boot Start End Blocks Id System
/dev/sda1 * 1 25 200781 83 Linux
/dev/sda2 26 652 5036377+ 8e Linux LVM
Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/dm-0: 4462 MB, 4462739456 bytes
255 heads, 63 sectors/track, 542 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 637 MB, 637534208 bytes
255 heads, 63 sectors/track, 77 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x30307800
Disk /dev/dm-1 doesn't contain a valid partition table
[root@server1 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name VolGroup00
PV Size 4.80 GB / not usable 22.34 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 153
Free PE 1
Allocated PE 152
PV UUID op2n3N-rck1-Pywc-9wTY-EUxQ-KUcr-2YeRJ0
[root@server1 ~]# vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.78 GB
PE Size 32.00 MB
Total PE 153
Alloc PE / Size 152 / 4.75 GB
Free PE / Size 1 / 32.00 MB
VG UUID jJj1DQ-SvKY-6hdr-3MMS-8NOd-pb3l-lS7TA1
[root@server1 ~]# lvdisplay
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID yt5b4f-m2XC-F3aP-032r-ulAT-Re5P-lmh6hy
LV Write Access read/write
LV Status available
# open 1
LV Size 4.16 GB
Current LE 133
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID VrPqpP-40ym-55Gs-ShVm-Hlzs-Jzot-oYnonY
LV Write Access read/write
LV Status available
# open 1
LV Size 608.00 MB
Current LE 19
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1
[root@server1 ~]#
2 Installing mdadm
The most important tool for setting up RAID is mdadm. Let's install it like this:
yum install mkinitrd mdadm
Afterwards, we load a few kernel modules (to avoid a reboot):
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
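The modules loaded here will be gone again after a reboot; that is fine for this guide because the initrd we rebuild later takes care of the RAID modules at boot time. If you nevertheless want them loaded automatically on every boot, here is a minimal sketch, assuming a Fedora/RHEL-style init that executes /etc/rc.modules if it exists:
cat > /etc/rc.modules <<'EOF'
#!/bin/sh
# load the RAID1 personality at boot time
modprobe raid1
EOF
chmod +x /etc/rc.modules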
Now run
cat /proc/mdstat
The output should look as follows:
[root@server1 ~]# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: &lt;none&gt;
[root@server1 ~]#
3 Preparing /dev/sdb
To create a RAID1 array on our already running system, we must prepare the /dev/sdb hard drive for RAID1, then copy the contents of our /dev/sda hard drive to it, and finally add /dev/sda to the RAID1 array.
First, we copy the partition table from /dev/sda to /dev/sdb so that both disks have exactly the same layout:
sfdisk -d /dev/sda | sfdisk /dev/sdb
The output should be as follows:
[root@server1 ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb
Checking that no-one is using this disk right now ...
OK
Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0
Device Boot Start End #sectors Id System
/dev/sdb1 * 63 401624 401562 83 Linux
/dev/sdb2 401625 10474379 10072755 8e Linux LVM
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Successfully wrote the new partition table
Re-reading the partition table ...
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
[root@server1 ~]#
The command
fdisk -l
should now show that both HDDs have the same layout:
[root@server1 ~]# fdisk -l
Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0008b885
Device Boot Start End Blocks Id System
/dev/sda1 * 1 25 200781 83 Linux
/dev/sda2 26 652 5036377+ 8e Linux LVM
Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 25 200781 83 Linux
/dev/sdb2 26 652 5036377+ 8e Linux LVM
Disk /dev/dm-0: 4462 MB, 4462739456 bytes
255 heads, 63 sectors/track, 542 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 637 MB, 637534208 bytes
255 heads, 63 sectors/track, 77 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x30307800
Disk /dev/dm-1 doesn't contain a valid partition table
[root@server1 ~]#
Next we must change the partition type of our two partitions on /dev/sdb to Linux raid autodetect:
fdisk /dev/sdb
[root@server1 ~]# fdisk /dev/sdb
Command (m for help): <- m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
Command (m for help): <- t
Partition number (1-4): <- 1
Hex code (type L to list codes): <- L
0 Empty 1e Hidden W95 FAT1 80 Old Minix be Solaris boot
1 FAT12 24 NEC DOS 81 Minix / old Lin bf Solaris
2 XENIX root 39 Plan 9 82 Linux swap / So c1 DRDOS/sec (FAT-
3 XENIX usr 3c PartitionMagic 83 Linux c4 DRDOS/sec (FAT-
4 FAT16 <32M 40 Venix 80286 84 OS/2 hidden C: c6 DRDOS/sec (FAT-
5 Extended 41 PPC PReP Boot 85 Linux extended c7 Syrinx
6 FAT16 42 SFS 86 NTFS volume set da Non-FS data
7 HPFS/NTFS 4d QNX4.x 87 NTFS volume set db CP/M / CTOS / .
8 AIX 4e QNX4.x 2nd part 88 Linux plaintext de Dell Utility
9 AIX bootable 4f QNX4.x 3rd part 8e Linux LVM df BootIt
a OS/2 Boot Manag 50 OnTrack DM 93 Amoeba e1 DOS access
b W95 FAT32 51 OnTrack DM6 Aux 94 Amoeba BBT e3 DOS R/O
c W95 FAT32 (LBA) 52 CP/M 9f BSD/OS e4 SpeedStor
e W95 FAT16 (LBA) 53 OnTrack DM6 Aux a0 IBM Thinkpad hi eb BeOS fs
f W95 Ext'd (LBA) 54 OnTrackDM6 a5 FreeBSD ee EFI GPT
10 OPUS 55 EZ-Drive a6 OpenBSD ef EFI (FAT-12/16/
11 Hidden FAT12 56 Golden Bow a7 NeXTSTEP f0 Linux/PA-RISC b
12 Compaq diagnost 5c Priam Edisk a8 Darwin UFS f1 SpeedStor
14 Hidden FAT16 <3 61 SpeedStor a9 NetBSD f4 SpeedStor
16 Hidden FAT16 63 GNU HURD or Sys ab Darwin boot f2 DOS secondary
17 Hidden HPFS/NTF 64 Novell Netware b7 BSDI fs fd Linux raid auto
18 AST SmartSleep 65 Novell Netware b8 BSDI swap fe LANstep
1b Hidden W95 FAT3 70 DiskSecure Mult bb Boot Wizard hid ff BBT
1c Hidden W95 FAT3 75 PC/IX
Hex code (type L to list codes): <- fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): <- t
Partition number (1-4): <- 2
Hex code (type L to list codes): <- fd
Changed system type of partition 2 to fd (Linux raid autodetect)
Command (m for help): <- w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@server1 ~]#
The command
fdisk -l
should now show that /dev/sdb1 and /dev/sdb2 are of the type Linux raid autodetect:
[root@server1 ~]# fdisk -l
Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0008b885
Device Boot Start End Blocks Id System
/dev/sda1 * 1 25 200781 83 Linux
/dev/sda2 26 652 5036377+ 8e Linux LVM
Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 * 1 25 200781 fd Linux raid autodetect
/dev/sdb2 26 652 5036377+ fd Linux raid autodetect
Disk /dev/dm-0: 4462 MB, 4462739456 bytes
255 heads, 63 sectors/track, 542 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 637 MB, 637534208 bytes
255 heads, 63 sectors/track, 77 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x30307800
Disk /dev/dm-1 doesn't contain a valid partition table
[root@server1 ~]#
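If you prefer not to step through fdisk interactively, the partition type can also be changed non-interactively with sfdisk; a minimal sketch, assuming your sfdisk version supports the --change-id option:
sfdisk --change-id /dev/sdb 1 fd
sfdisk --change-id /dev/sdb 2 fd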
To make sure that there are no remains from previous RAID installations on /dev/sdb, we run the following commands:
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
If there are no remains from previous RAID installations, each of the above commands will throw an error like this one (which is nothing to worry about):
[root@server1 ~]# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1
[root@server1 ~]#
Otherwise the commands will not display anything at all.
4 Creating Our RAID Arrays
Now let's create our RAID arrays /dev/md0 and /dev/md1. /dev/sdb1 will be added to /dev/md0 and /dev/sdb2 to /dev/md1. /dev/sda1 and /dev/sda2 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following two commands:
mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2
The command
cat /proc/mdstat
should now show that you have two degraded RAID arrays ([_U] or [U_] means that an array is degraded while [UU] means that the array is ok):
[root@server1 ~]# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb2[1]
5036288 blocks [2/1] [_U]
md0 : active raid1 sdb1[1]
200704 blocks [2/1] [_U]
unused devices: &lt;none&gt;
[root@server1 ~]#
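Besides /proc/mdstat, you can also query each array directly; mdadm --detail prints the state, the member devices, and the UUID of an array:
mdadm --detail /dev/md0
mdadm --detail /dev/md1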
Next we create a filesystem (ext3) on our non-LVM RAID array /dev/md0:
mkfs.ext3 /dev/md0
Now we come to our LVM RAID array /dev/md1. To prepare it for LVM, we run:
pvcreate /dev/md1
Then we add /dev/md1 to our volume group VolGroup00:
vgextend VolGroup00 /dev/md1
The output of
pvdisplay
should now be similar to this:
[root@server1 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name VolGroup00
PV Size 4.80 GB / not usable 22.34 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 153
Free PE 1
Allocated PE 152
PV UUID op2n3N-rck1-Pywc-9wTY-EUxQ-KUcr-2YeRJ0
--- Physical volume ---
PV Name /dev/md1
VG Name VolGroup00
PV Size 4.80 GB / not usable 22.25 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 153
Free PE 153
Allocated PE 0
PV UUID pS3xiy-AEnZ-p3Wf-qY2D-cGus-eyGl-03mWyg
[root@server1 ~]#
The output of
vgdisplay
should be as follows:
[root@server1 ~]# vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 9.56 GB
PE Size 32.00 MB
Total PE 306
Alloc PE / Size 152 / 4.75 GB
Free PE / Size 154 / 4.81 GB
VG UUID jJj1DQ-SvKY-6hdr-3MMS-8NOd-pb3l-lS7TA1
[root@server1 ~]#
Next we create /etc/mdadm.conf as follows:
mdadm --examine --scan > /etc/mdadm.conf
Display the contents of the file:
cat /etc/mdadm.conf
In the file you should now see details about our two (degraded) RAID arrays:
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=7d2bf9c3:7cd9df21:f782dab8:9212d7cb
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=d93a2387:6355b5c5:25ed3e50:2a0e4f96
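Some setups like to have an explicit DEVICE line at the top of /etc/mdadm.conf; if you want one, create the file like this instead (the scan output is simply appended):
echo "DEVICE partitions" > /etc/mdadm.conf
mdadm --examine --scan >> /etc/mdadm.conf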
Next we modify /etc/fstab. Replace LABEL=/boot with /dev/md0 so that the file looks as follows:
vi /etc/fstab
/dev/VolGroup00/LogVol00 /          ext3    defaults        1 1
/dev/md0                 /boot      ext3    defaults        1 2
tmpfs                    /dev/shm   tmpfs   defaults        0 0
devpts                   /dev/pts   devpts  gid=5,mode=620  0 0
sysfs                    /sys       sysfs   defaults        0 0
proc                     /proc      proc    defaults        0 0
/dev/VolGroup00/LogVol01 swap       swap    defaults        0 0
Next replace /dev/sda1 with /dev/md0 in /etc/mtab:
vi /etc/mtab
/dev/mapper/VolGroup00-LogVol00 / ext3 rw 0 0
proc /proc proc rw 0 0
sysfs /sys sysfs rw 0 0
devpts /dev/pts devpts rw,gid=5,mode=620 0 0
/dev/md0 /boot ext3 rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
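If you prefer to make these two substitutions non-interactively instead of using vi, here is a minimal sketch, assuming the lines look exactly as shown above (back up both files first):
cp /etc/fstab /etc/fstab.orig
cp /etc/mtab /etc/mtab.orig
# replace LABEL=/boot with /dev/md0 in /etc/fstab
sed -i 's|^LABEL=/boot|/dev/md0|' /etc/fstab
# replace /dev/sda1 with /dev/md0 in /etc/mtab
sed -i 's|^/dev/sda1 |/dev/md0 |' /etc/mtab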
Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback=1 right after default=0:
vi /boot/grub/menu.lst
[...]
default=0
fallback=1
[...]
This makes sure that if the first kernel entry (counting starts with 0, so the first entry is 0) fails to boot, the second entry (fallback=1) will be booted instead.
In the same file, go to the bottom where you should find some kernel stanzas. Copy the first of them, paste it above the first existing stanza, and replace root (hd0,0) with root (hd1,0) in the copy:
[...]
title Fedora (2.6.23.1-42.fc8)
        root (hd1,0)
        kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.23.1-42.fc8.img
title Fedora (2.6.23.1-42.fc8)
        root (hd0,0)
        kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.23.1-42.fc8.img
The whole file should look something like this:
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
#          initrd /initrd-version.img
#boot=/dev/sda
default=0
fallback=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Fedora (2.6.23.1-42.fc8)
        root (hd1,0)
        kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.23.1-42.fc8.img
title Fedora (2.6.23.1-42.fc8)
        root (hd0,0)
        kernel /vmlinuz-2.6.23.1-42.fc8 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.23.1-42.fc8.img
root (hd1,0) refers to /dev/sdb which is already part of our RAID arrays. We will reboot the system in a few moments; the system will then try to boot from our (still degraded) RAID arrays; if it fails, it will boot from /dev/sda (-> fallback=1).
Next we adjust our ramdisk to the new situation:
mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_orig
mkinitrd /boot/initrd-`uname -r`.img `uname -r`
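If you want to check that the new ramdisk really contains the RAID driver, you can list its contents; a minimal sketch, assuming the gzip-compressed cpio format that Fedora 8 uses for its initrd:
zcat /boot/initrd-`uname -r`.img | cpio -it | grep raid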
5 Moving Our Data To The RAID Arrays
Now that we've modified all configuration files, we can copy the contents of /dev/sda to /dev/sdb (including the configuration changes we've made in the previous chapter).
To move the contents of our LVM partition /dev/sda2 to our LVM RAID array /dev/md1, we use the pvmove command:
pvmove /dev/sda2 /dev/md1
This can take some time, so please be patient.
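pvmove prints a progress percentage from time to time; if you want more frequent updates, the same command can be started with an explicit reporting interval (in seconds):
pvmove -i 10 /dev/sda2 /dev/md1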
Afterwards, we remove /dev/sda2 from the volume group VolGroup00...
vgreduce VolGroup00 /dev/sda2
... and tell the system to not use /dev/sda2 anymore for LVM:
pvremove /dev/sda2
The output of
pvdisplay
should now be as follows:
[root@server1 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name VolGroup00
PV Size 4.80 GB / not usable 22.25 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 153
Free PE 1
Allocated PE 152
PV UUID pS3xiy-AEnZ-p3Wf-qY2D-cGus-eyGl-03mWyg
[root@server1 ~]#
Next we change the partition type of /dev/sda2 to Linux raid autodetect and add /dev/sda2 to the /dev/md1 array:
fdisk /dev/sda
[root@server1 ~]# fdisk /dev/sda
Command (m for help): <- t
Partition number (1-4): <- 2
Hex code (type L to list codes): <- fd
Changed system type of partition 2 to fd (Linux raid autodetect)
Command (m for help): <- w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@server1 ~]#
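The warning about error 16 only means that the kernel keeps using the old partition table until the next reboot. If the parted package is installed, you can try to make the kernel re-read the table right away with partprobe; this is optional and may still fail while partitions on the disk are in use, in which case the change simply takes effect at the next reboot:
partprobe /dev/sda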
mdadm --add /dev/md1 /dev/sda2
Now take a look at
cat /proc/mdstat
... and you should see that the RAID array /dev/md1 is being synchronized:
[root@server1 ~]# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[2] sdb2[1]
5036288 blocks [2/1] [_U]
[=====>...............] recovery = 28.8% (1454272/5036288) finish=2.8min speed=21132K/sec
md0 : active raid1 sdb1[1]
200704 blocks [2/1] [_U]
unused devices: &lt;none&gt;
[root@server1 ~]#
(You can run
watch cat /proc/mdstat
to get an ongoing output of the process. To leave watch, press CTRL+C.)
Wait until the synchronization has finished. The output should then look like this:
[root@server1 ~]# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[0] sdb2[1]
5036288 blocks [2/2] [UU]
md0 : active raid1 sdb1[1]
200704 blocks [2/1] [_U]
unused devices: &lt;none&gt;
[root@server1 ~]#
Now let's mount /dev/md0:
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
You should now find the array in the output of
mount
[root@server1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/md0 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/md0 on /mnt/md0 type ext3 (rw)
[root@server1 ~]#
Now we copy the contents of /dev/sda1 to /dev/md0 (which is mounted on /mnt/md0):
cd /boot
cp -dpRx . /mnt/md0
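Optionally, you can check that the copy is complete before moving on; diff prints nothing if both trees are identical (differences in lost+found can be ignored):
diff -r /boot /mnt/md0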
6 Preparing GRUB
Afterwards we must install the GRUB bootloader on the second hard drive /dev/sdb:
grub
On the GRUB shell, type in the following commands:
root (hd0,0)
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0x83
grub>
setup (hd0)
grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
grub>
root (hd1,0)
grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd
grub>
setup (hd1)
grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"... 16 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+16 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
grub>
quit
Now, back on the normal shell, we reboot the system and hope that it boots ok from our RAID arrays:
reboot
7 Preparing /dev/sda
If all goes well, you should now find /dev/md0 in the output of
df -h
[root@server1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
4.1G 2.0G 1.9G 51% /
/dev/md0 190M 16M 165M 9% /boot
tmpfs 151M 0 151M 0% /dev/shm
[root@server1 ~]#
The output of
cat /proc/mdstat
should be as follows:
[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1]
200704 blocks [2/1] [_U]
md1 : active raid1 sda2[0] sdb2[1]
5036288 blocks [2/2] [UU]
unused devices: &lt;none&gt;
[root@server1 ~]#
The outputs of pvdisplay, vgdisplay, and lvdisplay should be as follows:
pvdisplay
[root@server1 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/md1
VG Name VolGroup00
PV Size 4.80 GB / not usable 22.25 MB
Allocatable yes
PE Size (KByte) 32768
Total PE 153
Free PE 1
Allocated PE 152
PV UUID pS3xiy-AEnZ-p3Wf-qY2D-cGus-eyGl-03mWyg
[root@server1 ~]#
vgdisplay
[root@server1 ~]# vgdisplay
--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 4.78 GB
PE Size 32.00 MB
Total PE 153
Alloc PE / Size 152 / 4.75 GB
Free PE / Size 1 / 32.00 MB
VG UUID jJj1DQ-SvKY-6hdr-3MMS-8NOd-pb3l-lS7TA1
[root@server1 ~]#
lvdisplay
[root@server1 ~]# lvdisplay
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID yt5b4f-m2XC-F3aP-032r-ulAT-Re5P-lmh6hy
LV Write Access read/write
LV Status available
# open 1
LV Size 4.16 GB
Current LE 133
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:0
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID VrPqpP-40ym-55Gs-ShVm-Hlzs-Jzot-oYnonY
LV Write Access read/write
LV Status available
# open 1
LV Size 608.00 MB
Current LE 19
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 253:1
[root@server1 ~]#
Now we must change the partition type of /dev/sda1 to Linux raid autodetect as well:
fdisk /dev/sda
[root@server1 ~]# fdisk /dev/sda
Command (m for help): <- t
Partition number (1-4): <- 1
Hex code (type L to list codes): <- fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): <- w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@server1 ~]#
Now we can add /dev/sda1 to the /dev/md0 RAID array:
mdadm --add /dev/md0 /dev/sda1
Now take a look at
cat /proc/mdstat
[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0] sdb1[1]
200704 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
5036288 blocks [2/2] [UU]
unused devices: &lt;none&gt;
[root@server1 ~]#
Then adjust /etc/mdadm.conf to the new situation:
mdadm --examine --scan > /etc/mdadm.conf
/etc/mdadm.conf should now look something like this:
cat /etc/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=7d2bf9c3:7cd9df21:f782dab8:9212d7cb
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=d93a2387:6355b5c5:25ed3e50:2a0e4f96
Reboot the system:
reboot
It should boot without problems.
That's it - you've successfully set up software RAID1 on your running LVM system!
8 Testing
Now let's simulate a hard drive failure. It doesn't matter if you select /dev/sda or /dev/sdb here. In this example I assume that /dev/sdb has failed.
To simulate the hard drive failure, you can either shut down the system and remove /dev/sdb from the system, or you (soft-)remove it like this:
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2
Shut down the system:
shutdown -h now
Then put in a new /dev/sdb drive (if you simulate a failure of /dev/sda, you should now put /dev/sdb in /dev/sda's place and connect the new HDD as /dev/sdb!) and boot the system. It should still start without problems.
Now run
cat /proc/mdstat
and you should see that we have a degraded array:
[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0]
200704 blocks [2/1] [U_]
md1 : active raid1 sda2[0]
5036288 blocks [2/1] [U_]
unused devices: &lt;none&gt;
[root@server1 ~]#
The output of
fdisk -l
should look as follows:
[root@server1 ~]# fdisk -l
Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0008b885
Device Boot Start End Blocks Id System
/dev/sda1 * 1 25 200781 fd Linux raid autodetect
/dev/sda2 26 652 5036377+ fd Linux raid autodetect
Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/md1: 5157 MB, 5157158912 bytes
2 heads, 4 sectors/track, 1259072 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/dm-0: 4462 MB, 4462739456 bytes
255 heads, 63 sectors/track, 542 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 637 MB, 637534208 bytes
255 heads, 63 sectors/track, 77 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x30307800
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/md0: 205 MB, 205520896 bytes
2 heads, 4 sectors/track, 50176 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table
[root@server1 ~]#
Now we copy the partition table of /dev/sda to /dev/sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb
(If you get an error, you can try the --force option:
sfdisk -d /dev/sda | sfdisk --force /dev/sdb
)
[root@server1 ~]# sfdisk -d /dev/sda | sfdisk /dev/sdb
Checking that no-one is using this disk right now ...
OK
Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0
Device Boot Start End #sectors Id System
/dev/sdb1 * 63 401624 401562 fd Linux raid autodetect
/dev/sdb2 401625 10474379 10072755 fd Linux raid autodetect
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Successfully wrote the new partition table
Re-reading the partition table ...
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
[root@server1 ~]#
Afterwards we remove any remains of a previous RAID array from /dev/sdb...
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
... and add /dev/sdb to the RAID array:
mdadm -a /dev/md0 /dev/sdb1
mdadm -a /dev/md1 /dev/sdb2
Now take a look at
cat /proc/mdstat
[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1] sda1[0]
200704 blocks [2/2] [UU]
md1 : active raid1 sdb2[2] sda2[0]
5036288 blocks [2/1] [U_]
[=====>...............] recovery = 29.8% (1502656/5036288) finish=18.8min speed=3116K/sec
unused devices: &lt;none&gt;
[root@server1 ~]#
Wait until the synchronization has finished:
[root@server1 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[1] sda1[0]
200704 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
5036288 blocks [2/2] [UU]
unused devices: &lt;none&gt;
[root@server1 ~]#
Then run
grub
and install the bootloader on both HDDs:
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
That's it. You've just replaced a failed hard drive in your RAID1 array.
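In day-to-day operation you probably do not want to discover a failed disk by accident; mdadm can watch the arrays and send mail when one of them degrades. A minimal sketch, assuming local mail delivery to root works on this system:
# tell mdadm where to send alerts, then start monitor mode as a daemon
echo "MAILADDR root" >> /etc/mdadm.conf
mdadm --monitor --scan --daemonise --delay=1800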