Logical volume management (LVM) lets a system abstract physical volume management into a higher-level and usually simpler paradigm. With LVM, all physical disks and partitions, whatever their size and however scattered, can be abstracted and viewed as a single storage source. For example, in the physical-to-logical mapping shown in Figure 1, how could the user create a filesystem of, say, 150GB, when the biggest disk is only 80GB?



Figure 1. Physical-to-logical mapping

By aggregating partitions and whole disks into a virtual disk, LVM can sum small storage spaces into a bigger, consolidated one. This virtual disk, in LVM terms, is called a volume group.

And the possibility of having a filesystem bigger than your biggest disk isn't the only magic feature of this high-level paradigm of storage management. With LVM, you can also:

  • Add disks/partitions to your disk-pool and extend existing filesystems online
  • Replace two 80GB disks with one 160GB disk without the need to bring the system offline or manually move data between disks
  • Shrink filesystems and remove disks from the pool when their storage space is no longer necessary
  • Perform consistent backups using snapshots (more on this later in the article)

LVM2 refers to a new userspace toolset that provides logical volume management facilities in Linux. It is fully backwards-compatible with the original LVM toolset. In this article, you'll see the most useful features of LVM2 as well as some other possible uses to simplify your system administration tasks. (By the way, if you're looking for a more basic guide to LVM, try the LVM HowTo listed in the Resources section below.)

Let's look at how the LVM is organized.

LVM is structured around three elements:

  • Volumes: physical and logical volumes and volume groups
  • Extents: physical and logical extents
  • Device mapper: the Linux kernel module

Linux LVM is organized into physical volumes (PVs), volume groups (VGs), and logical volumes (LVs). Physical volumes are physical disks or physical disk partitions (as in /dev/hda or /dev/hdb1). A volume group is an aggregation of physical volumes. And a volume group can be logically partitioned into logical volumes.

Figure 2 shows a three-disk layout.



Figure 2. Physical to logical volume mapping

All four partitions in physical disk 0 (/dev/hda[1-4]), as well as the whole of physical disk 1 (/dev/hdb) and physical disk 2 (/dev/hdd), were added as PVs to volume group VG0.

The volume group is where the magic of n-to-m mapping happens (that is, n PVs can be seen as m LVs). After assigning PVs to the volume group, you can create a logical volume of any size (up to the VG size). In the example in Figure 2, a logical volume named LV0 was created, leaving some free space for other LVs (or for later growth of LV0).

Logical volumes are the LVM equivalent of physical disk partitions; for all practical purposes, they are physical disk partitions.

So, after the creation of an LV, you can use it with whatever filesystem you prefer and mount it under some mount point to start using it. Figure 3 shows a formatted logical volume, LV0, mounted under /var.



Figure 3. Physical volumes to filesystem mapping

In order to do the n-to-m, physical-to-logical volume mapping, PVs and LVs must share a common quantum size for their basic blocks; these are called physical extents (PEs) and logical extents (LEs). Despite the n-physical to m-logical volume mapping, PEs and LEs always map 1-to-1.

With LVM2, there is no limit on the maximum number of extents per PV/LV. The default extent size is 4MB, and there is no need to change this for most configurations, because extent size has no impact on I/O performance. A high extent count can, however, slow down the LVM tools, so using bigger extents keeps the extent count low. Be aware that different extent sizes can't be mixed in a single VG, and that changing the extent size is the one unsafe operation in LVM: it can destroy data. The best advice is to stick with the extent size chosen at initial setup.

Different extent sizes also mean different VG granularity. For instance, if you choose an extent size of 4GB, you can only shrink or extend LVs in steps of 4GB.
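If you know up front that a VG will hold very large LVs, you can pick a bigger extent size when the VG is created and keep the extent count low. A minimal sketch, assuming the -s option of vgcreate (which sets the physical extent size) and illustrative device names:

#create a VG with 16MB extents instead of the default 4MB
vgcreate -s 16M big-volume /dev/sdb1 /dev/sdc1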

Figure 4 shows the same layout used in the previous examples, this time with the PEs and LEs drawn in (the free space inside VG0 also consists of free extents, even though they're not shown).



Figure 4. Physical to logical extent mapping

Also note the extent allocation policy in Figure 4. LVM2 doesn't always allocate PEs contiguously; for more details, see the Linux man page on lvm (see the Resources below for a link). The system administrator can set different allocation policies, but that isn't normally necessary, since the default one (called the normal allocation policy) uses common-sense rules such as not placing parallel stripes on the same physical volume.

If you decide to create a second LV (LV1), the final PE distribution may look like the one shown in Figure 5.



Figure 5. Physical to logical extent mapping with two LVs

Device mapper (also known as dm_mod) is a Linux kernel module (it can also be built in), upstream since kernel 2.6.9. Its job, as the name says, is to map devices, and it is required by LVM2.

In most major distributions, Device mapper is installed by default and is loaded automatically at boot time or when the LVM2/EVMS packages are installed or enabled (EVMS is an alternative tool; more on that in Resources). If it isn't, try modprobe dm_mod and then check your distro's documentation for how to enable it at boot time.

When creating VGs and LVs, you can give them a meaningful name (as opposed to the previous examples where, for didactic purposes, the names VG0, LV0, and LV1 were used). It is the Device mapper's job to map these names correctly to the physical devices. Using the previous examples, the Device mapper would create the following device nodes in the /dev filesystem:

  • /dev/mapper/VG0-LV0
    • /dev/VG0/LV0 is a link to the above
  • /dev/mapper/VG0-LV1
    • /dev/VG0/LV1 is a link to the above

(Notice the naming standard: /dev/{vg_name}/{lv_name} -> /dev/mapper/{vg_name}-{lv_name}.)

As opposed to a physical disk, there is no raw access to a volume group (there is no such thing as a /dev/mapper/VG0 node, and you can't dd if=/dev/VG0 of=/dev/VG1). You deal with volume groups through the lvm(8) family of commands.
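For a quick overview of the objects LVM currently knows about, the toolset also provides the reporting commands pvs, vgs, and lvs, which print one summary line per PV, VG, and LV; dmsetup ls lists the Device-mapper nodes behind them. A minimal sketch:

#one-line summaries of PVs, VGs, and LVs
pvs
vgs
lvs
#list the Device-mapper devices backing the LVs
dmsetup ls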

Some common tasks you'll perform with LVM2 are system verification (is LVM2 installed?) and volume creation, extension, and management.


Checking for LVM2

Verify whether your distro's LVM2 package is installed. If not, install it (always giving preference to your distribution's own packages).

The Device mapper module must be loaded at system startup. Check whether it is currently loaded with lsmod | grep dm_mod. If it isn't, you might need to install and configure additional packages (your distro's documentation can show you how to enable LVM2).

If you're just testing things (or maybe rescuing a system), use these basic commands to start using LVM2:



Listing 1. Manually starting LVM2
#this should load the Device-mapper module
modprobe dm_mod

#this should find all the PVs in your physical disks
pvscan

#this should activate all the Volume Groups
vgchange -ay

If you plan to have your root filesystem inside an LVM LV, take extra care with the initial-ramdisk image. Again, the distros usually take care of this—when installing the LVM2 package, they usually rebuild or update the initrd image with the appropriate kernel modules and activation scripts. But you may want to browse through your distro documentation and make sure that LVM2 root filesystems are supported.

Note that the initial-ramdisk image usually activates LVM only when it detects that the root filesystem is under a VG. That's usually done by parsing the root= kernel parameter. Different distros have different ways to determine whether the root filesystem path is or is not inside a volume group. Consult your distro documentation for details. If unsure, check your initrd or initramdisk configuration.


Creating PVs and VGs

Using your favorite partitioner (fdisk, parted, gparted), create a new partition for LVM usage. Although LVM supports using an entire disk, doing so is not recommended: other operating systems may see such a disk as uninitialized and wipe it out! It's better to create a single partition covering the whole disk.

Most partitioners default to creating new partitions with the 0x83 (Linux) partition ID. You can keep the default, but for organization purposes it is better to change it to 0x8e (Linux LVM).
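With fdisk, for instance, the partition ID is changed interactively with the t command. A sketch of the dialog (the partition number is illustrative):

#inside an interactive 'fdisk /dev/hda' session:
#  t     - change a partition ID
#  2     - partition number (example)
#  8e    - new ID: Linux LVM
#  w     - write the table and exit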

After you've created a partition, you should see one (or more) Linux LVM partitions in your partition table:

root@klausk:/tmp/a# fdisk -l

Disk /dev/hda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1        1623    13036716    7  HPFS/NTFS
/dev/hda2            1624        2103     3855600   8e  Linux LVM
/dev/hda3            2104        2740     5116702+  83  Linux
/dev/hda4            3000        9729    54058725    5  Extended
/dev/hda5            9569        9729     1293232+  82  Linux swap / Solaris
/dev/hda6            3000        4274    10241374+  83  Linux
/dev/hda7            4275        5549    10241406   83  Linux
/dev/hda8            5550        6824    10241406   83  Linux
/dev/hda9            6825        8099    10241406   83  Linux
/dev/hda10           8100        9568    11799711   8e  Linux LVM

Partition table entries are not in disk order
root@klausk:/tmp/a#

Now initialize each partition with pvcreate:



Listing 2. Initializing partitions as PVs
root@klausk:/tmp/a# pvcreate /dev/hda2 /dev/hda10
  Physical volume "/dev/hda2" successfully created
  Physical volume "/dev/hda10" successfully created
root@klausk:/tmp/a#

With the PVs initialized, create the VG and assign the PVs to it in a single step with vgcreate:



Listing 3. Creating the volume group
root@klausk:~# vgcreate test-volume /dev/hda2 /dev/hda10
  Volume group "test-volume" successfully created
root@klausk:~#

The command above creates a volume group called test-volume using /dev/hda2 and /dev/hda10 as its initial PVs.

After creating the VG test-volume, use the vgdisplay command to review general information about it:



Listing 4. Displaying volume group information
root@klausk:/dev# vgdisplay -v test-volume
    Using volume group(s) on command line
    Finding volume group "test-volume"
  --- Volume group ---
  VG Name               test-volume
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               14.93 GB
  PE Size               4.00 MB
  Total PE              3821
  Alloc PE / Size       0 / 0   
  Free  PE / Size       3821 / 14.93 GB
  VG UUID               lk8oco-ndQA-yIMZ-ZWhu-LtYX-T2D7-7sGKaV
   
  --- Physical volumes ---
  PV Name               /dev/hda2     
  PV UUID               8LTWlw-p1OJ-dF6w-ZfMI-PCuo-8CiU-CT4Oc6
  PV Status             allocatable
  Total PE / Free PE    941 / 941
   
  PV Name               /dev/hda10     
  PV UUID               vC9Lwb-wvgU-UZnF-0YcE-KMBb-rCmU-x1G3hw
  PV Status             allocatable
  Total PE / Free PE    2880 / 2880
   
root@klausk:/dev# 

In Listing 4, note that there are two PVs assigned to this VG, with a total size of 14.93GB (that is, 3,821 PEs of 4MB each), and that all of them are still free for use.

Now that the volume group is ready to use, use it like a virtual disk to create/remove/resize partitions (LVs)—note that the Volume Group is an abstract entity, only seen by the LVM toolset. Create a new logical volume using lvcreate:



Listing 5. Creating a logical volume
root@klausk:/# lvcreate -L 5G -n data test-volume
  Logical volume "data" created
root@klausk:/#

Listing 5 creates a 5GB LV named data. After data has been created, you can check for its device node:


Listing 6. Checking the LV's device nodes
root@klausk:/# ls -l /dev/mapper/test--volume-data 
brw-rw---- 1 root disk 253, 4 2006-11-28 17:48 /dev/mapper/test--volume-data
root@klausk:/# ls -l /dev/test-volume/data 
lrwxrwxrwx 1 root root 29 2006-11-28 17:48 /dev/test-volume/data -> 
/dev/mapper/test--volume-data
root@klausk:/# 

You can also check for the LV properties with the lvdisplay command:



Listing 7. Displaying logical volume information
root@klausk:~# lvdisplay /dev/test-volume/data 
  --- Logical volume ---
  LV Name                /dev/test-volume/data
  VG Name                test-volume
  LV UUID                FZK4le-RzHx-VfLz-tLjK-0xXH-mOML-lfucOH
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                5.00 GB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:4
   
root@klausk:~#

As you probably noticed, for all practical purposes the LV name/path is /dev/{VG_name}/{LV_name}, as in /dev/test-volume/data. Don't use the /dev/mapper/{VG_name}-{LV_name} node directly: besides being merely the target of the /dev/{VG_name}/{LV_name} link, it is not what the LVM commands expect, since the majority of them take a target in the /dev/{vg_name}/{lv_name} form. (Note also that hyphens inside a VG or LV name are doubled in the mapper name, which is why the node above is called test--volume-data.)

Finally, with the logical volume ready, format it with whatever filesystem you prefer and then mount it under the desired mount point:



Listing 8. Formatting and mounting the LV
root@klausk:~# mkfs.reiserfs /dev/test-volume/data 
root@klausk:~# mkdir /data
root@klausk:~# mount -t reiserfs /dev/test-volume/data /data/
root@klausk:~# df -h /data
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/test--volume-data
                      5.0G   33M  5.0G   1% /data
root@klausk:~# 
      

You may also want to edit your fstab(5) file to automatically mount the filesystem at boot time:



Listing 9. Sample fstab entry
#mount Logical Volume 'data' under /data
/dev/test-volume/data   /data   reiserfs        defaults        0 2
 

The logical volume behaves like a block device for all purposes, including use as a raw partition for databases. That is, in fact, a standard best practice if you want to perform consistent backups of a database using LVM snapshots.


Extending a logical volume

This is the easy part: if you have enough free space in the volume group, just use lvextend to extend the volume; there's no need to unmount it first. Afterwards, also extend the filesystem inside the logical volume (they are two separate things, remember). Depending on the filesystem you're using, this too can be done online (that is, while mounted!).

If you don't have enough space in your VG, you'll need to add additional physical disks first. To do that:

  • Use a free partition to create a physical volume. It is recommended that you change the partition type to 0x8e (Linux LVM) for easy identification of LVM partitions/disks. Use pvcreate to initialize the physical volume: pvcreate /dev/hda3.
  • Then, use vgextend to add it to the existing VG: vgextend test-volume /dev/hda3.

You can also initialize and add several physical volumes at once:

pvcreate /dev/hda2 /dev/hda3 /dev/hda5
vgextend test-volume /dev/hda2 /dev/hda3 /dev/hda5

Once you've added the PVs and have sufficient space to grow your logical volume, use lvextend to extend it: lvextend -L 8G /dev/test-volume/data. This command extends the /dev/test-volume/data LV to a total size of 8GB.

There are several useful parameters for lvextend:

  • You can use -L +5G to grow your LV by 5GB (a relative size).
  • You can specify where you want this new extension to be placed (in terms of PVs); just append the PV you want to use to the command.
  • You can also specify the absolute or relative size in extents, with the lowercase -l option.

Take a look at lvextend(8) for more details.
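For instance, the variations above could look like the following sketch (the PV name and the extent count are illustrative):

#grow by 5GB (relative size), taking the new extents from /dev/hda10
lvextend -L +5G /dev/test-volume/data /dev/hda10
#grow by 1280 extents (the lowercase -l option works in extents)
lvextend -l +1280 /dev/test-volume/data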

After extending the LV, don't forget to also extend the filesystem (so you can actually use the extra space). This can be done online (with the filesystem mounted), depending on the filesystem.

For example, you can resize a reiserfs(v3) filesystem with resize_reiserfs (which can be used on a mounted filesystem, by the way): resize_reiserfs /dev/test-volume/data.
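Putting the two steps together, a typical online grow could look like the sketch below, assuming a reiserfs v3 filesystem (for ext3 you would call resize2fs instead, and online growing depends on your kernel and filesystem support):

#extend the LV from 5GB to 8GB
lvextend -L 8G /dev/test-volume/data
#grow the filesystem to fill the LV
#(without an explicit -s size, resize_reiserfs uses the whole device)
resize_reiserfs /dev/test-volume/data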


Managing volumes

To manage volumes, you need to know how to reduce LVs and how to remove PVs.

Reducing logical volumes
You can reduce an LV in the same way you extend one, using the lvreduce command. From the LVM side, this operation can always be done with the volume online. One caveat: the majority of filesystems don't support online filesystem shrinking. Listing 10 demonstrates a sample procedure:



Listing 10. Reducing a logical volume
#unmount LV
umount /path/to/mounted-volume
#shrink filesystem to 4G
resize_reiserfs -s 4G /dev/test-volume/data
#reduce LV
lvreduce -L 4G /dev/test-volume/data

Be careful with sizes and units: the filesystem must not be larger than the LV!
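If your LVM2 version supports it, the -r/--resizefs flag of lvreduce (and lvresize) can combine the two steps of Listing 10, calling fsadm to shrink the filesystem before reducing the LV. A hedged sketch:

#shrink the filesystem and the LV to 4GB in one command
#(-r delegates the filesystem resize to fsadm)
lvreduce -r -L 4G /dev/test-volume/data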

Removing physical volumes
Imagine the following situation: you have a volume group with two 80GB disks, and you want to upgrade to 160GB disks. With LVM, you can remove a PV from a VG the same way it was added (that is, online!). Note, though, that you can't remove PVs that are in use by an LV. For those situations there is a great utility called pvmove that frees up PVs online so you can replace them easily. In a hot-swap environment, you can even swap all the disks with no downtime at all!

pvmove's only requirement is enough free contiguous extents elsewhere in the VG to hold the extents being moved out of the PV. There's no easy way to directly determine the largest set of free contiguous PEs, but you can use pvdisplay -m to display the PV allocation map:



Listing 11. Displaying the PV allocation map
#shows the allocation map
pvdisplay -m
  --- Physical volume ---
  PV Name               /dev/hda6
  VG Name               test-volume
  PV Size               4.91 GB / not usable 1.34 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              1200
  Free PE               0
  Allocated PE          1200
  PV UUID               BA99ay-tOcn-Atmd-LTCZ-2KQr-b4Z0-CJ0FjO

  --- Physical Segments ---
  Physical extent 0 to 2367:
    Logical volume      /dev/test-volume/data
    Logical extents     5692 to 8059
  Physical extent 2368 to 2499:
    Logical volume      /dev/test-volume/data
    Logical extents     5560 to 5691

  --- Physical volume ---
  PV Name               /dev/hda7
  VG Name               test-volume
  PV Size               9.77 GB / not usable 1.37 MB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              2500
  Free PE               1220
  Allocated PE          1280
  PV UUID               Es9jwb-IjiL-jtd5-TgBx-XSxK-Xshj-Wxnjni

  --- Physical Segments ---
  Physical extent 0 to 1279:
    Logical volume      /dev/test-volume/LV0
    Logical extents     0 to 1279
  Physical extent 1280 to 2499:
    FREE

In Listing 11, note that /dev/hda7 has 2,499 − 1,280 + 1 = 1,220 free contiguous extents (matching its Free PE count), meaning that we can move up to 1,220 extents from another PV to /dev/hda7.

If you want to free a PV for replacement purposes, it's a good idea to disable its allocation so that you can be sure it remains free until you remove it from the volume group. Issue this before moving out the data:



Listing 12. Disabling allocation on a PV
#Disable /dev/hda6 allocation
pvchange -xn /dev/hda6

Recall from Listing 11 that the PV /dev/hda6 is 1,200 extents large and has no free extents. To move the data out of this PV, issue the following:



Listing 13. Moving extents out of a PV
#Move allocated extents out of /dev/hda6
pvmove -i 10 /dev/hda6

The -i 10 parameter in Listing 13 tells pvmove to report its status every 10 seconds. Depending on how much data has to be moved, this operation can take several minutes (or hours). It can also be run in the background with the -b parameter; in that case, status is reported to the syslog.

In case you just don't have enough free contiguous extents to use in a pvmove operation, remember that you can always add one or more disks/partitions to a VG, thus adding a contiguous space, free for pvmove use.
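Once pvmove has emptied the PV, the replacement is completed with vgreduce and pvremove, both described in the list that follows. A minimal sketch using the earlier example names:

#remove the now-empty PV from the volume group
vgreduce test-volume /dev/hda6
#wipe the LVM metadata so the disk can be safely pulled or reused
pvremove /dev/hda6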

Other useful LVM operations
Consult the man pages for more details on these other useful LVM operations:

  • pvresize extends a PV if the underlying partition has been extended; it can also shrink the PV if the allocation map permits.
  • pvremove destroys PVs (wipes their metadata clean). Use it only after the PV has been removed from its VG with vgreduce.
  • vgreduce removes unallocated PVs from a volume group, reducing the VG.
  • vgmerge merges two different VGs into one. The target VG can be online!
  • vgsplit splits a volume group.
  • vgchange changes attributes and permissions of a VG.
  • lvchange changes attributes and permissions of an LV.
  • lvconvert converts between a linear volume and a mirror or snapshot and vice versa.
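As a quick illustration of vgmerge and vgsplit, here is a sketch with hypothetical VG names (vgmerge requires the source VG to be inactive, and both VGs must share the same extent size):

#deactivate the source VG, then merge it into vg00
vgchange -an vg01
vgmerge vg00 vg01
#split /dev/hda10 (and any LVs living entirely on it) into a new VG
vgsplit vg00 vg-new /dev/hda10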

Backing up with Snapshots

A consistent backup is achieved when no data changes between the start and the end of the backup process. That can be hard to guarantee without stopping the system for as long as the copy takes.

Linux LVM implements a feature called Snapshots that does exactly what the name says: it's like taking a picture of a logical volume at a given moment in time. A Snapshot provides you with two copies of the same LV: one can be used for backup purposes while the other continues in operation.

The two great advantages of Snapshots are:

  1. Snapshot creation is instantaneous; no need to stop a production environment.
  2. Two copies are made, but not at twice the size. A Snapshot will use only the space needed to accommodate the difference between the two LVs.

This is accomplished by having an exception list that is updated every time something changes between the LVs (formally known as CoW, Copy-on-Write).

To create a new Snapshot LV, use the same lvcreate command, specifying the -s parameter and an origin LV. The -L size in this case specifies the size of the exception table, which determines how much change the Snapshot can accommodate before losing consistency.



Listing 14. Creating a Snapshot LV
#create a Snapshot LV called 'snap' from origin LV 'test'
lvcreate -s -L 2G -n snap /dev/test-volume/test

Use lvdisplay to query Snapshot-specific information such as CoW size and CoW usage:



Listing 15. Displaying Snapshot LV information
lvdisplay /dev/vg00/snap

  --- Logical volume ---
  LV Name                /dev/test-volume/snap
  VG Name                vg00
  LV UUID                QHVJYh-PR3s-A4SG-s4Aa-MyWN-Ra7a-HL47KL
  LV Write Access        read/write
  LV snapshot status     active destination for /dev/test-volume/test
  LV Status              available
  # open                 0
  LV Size                4.00 GB
  Current LE             1024
  COW-table size         2.00 GB
  COW-table LE           512
  Allocated to snapshot  54.16%
  Snapshot chunk size    8.00 KB
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           254:5

Notice in Listing 15 that the CoW table is 2GB large, 54.16% of which is already in use.

For all intents and purposes, the Snapshot is a copy of the original LV. If it contains a filesystem, it can be mounted:

#mount snapshot volume
mount -o ro /dev/test-volume/snap /mnt/snap

In this snippet, the ro flag mounts it read-only. You can also make the Snapshot read-only at the LVM level by appending -p r to the lvcreate command.

Once the filesystem has been mounted, you can proceed with backup using tar, rsync, or whatever backup tool is desired. If the LV doesn't contain a filesystem, or if a raw backup is desired, it's also possible to use dd directly on the device node.

Once the copy process finishes and the Snapshot is no longer needed, simply unmount and scrap it using lvremove:

#remove snapshot
lvremove /dev/test-volume/snap
 

For consistency, if a database sits on top of the LV and a consistent backup is desired, remember to flush the tables and create the Snapshot while holding a read lock (in this lovely sample pseudo-code):

SQL> flush tables read lock
{create Snapshot}
SQL> release read lock
{start copy process from the snapshot LV}
 
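With MySQL, for instance, the pseudo-code could translate into the sketch below. This is an assumption-laden example: FLUSH TABLES WITH READ LOCK only holds while the session stays open, so the Snapshot is created from inside the same mysql session through the client's shell escape:

#inside a single mysql session, so the read lock stays held:
#  mysql> FLUSH TABLES WITH READ LOCK;
#  mysql> \! lvcreate -s -L 2G -n db-snap /dev/test-volume/db
#  mysql> UNLOCK TABLES;
#then run the copy process against /dev/test-volume/db-snap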

The script in Listing 16 is taken directly from my laptop, where I make daily backups using rsync to a remote server. It is not intended for enterprise use; an incremental backup with history would make more sense there. The concept remains the same, though.



Listing 16. Snapshot backup script
#!/bin/sh

# we need the dm-snapshot module
modprobe dm-snapshot
if [ -e /dev/test-volume/home-snap ]
then
  # remove left-overs, if any (ignore a failed umount if nothing was mounted)
  umount -f /mnt/home-snap || true
  lvremove -f /dev/test-volume/home-snap
fi
# create snapshot, 1GB CoW space
# that should be sufficient for accommodating changes during copy
lvcreate -vs -p r -n home-snap -L 1G /dev/test-volume/home
mkdir -p /mnt/home-snap
# mount recently-created snapshot as read-only
mount -o ro /dev/test-volume/home-snap /mnt/home-snap
# magical rsync command
rsync -avhzPCi --delete -e "ssh -i /home/klausk/.ssh/id_rsa" \
      --filter '- .Trash/' --filter '- *~' \
      --filter '- .local/share/Trash/' \
      --filter '- *.mp3' --filter '- *Cache*' --filter '- *cache*' \
      /mnt/home-snap/klausk backuphost.domain.net:backupdir/
# unmount and scrap snapshot LV
umount /mnt/home-snap
lvremove -f /dev/test-volume/home-snap

In special cases where the rate of change can't be estimated or the copy process takes a long time, a script could query the Snapshot's CoW usage with lvdisplay and extend the LV on demand, as sketched below. In extreme cases, you could opt for a Snapshot the same size as the original LV; that way, the changes can never outgrow the Snapshot.
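Such a watchdog could look like the sketch below. The snap_percent reporting field of lvs, the 80% threshold, and the 1GB growth step are all assumptions to adapt to your setup:

#!/bin/sh
#hypothetical watchdog: grow the snapshot before its CoW table fills up
SNAP=/dev/test-volume/home-snap
while lvdisplay $SNAP >/dev/null 2>&1; do
    #integer part of the 'Allocated to snapshot' percentage
    USED=$(lvs --noheadings -o snap_percent $SNAP | tr -d ' ' | cut -d. -f1)
    if [ "${USED:-0}" -ge 80 ]; then
        lvextend -L +1G $SNAP
    fi
    sleep 60
done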


Other tricks

I'll wrap up with a few other nifty sysadmin tricks you can do with LVM2, including on-demand virtualization, improving fault tolerance with mirroring, and transparently encrypting a block device.

On-demand virtualization

With LVM2, Snapshots are not restricted to read-only. Once a Snapshot has been made, you can mount it and read from and write to it like a regular block device.

Because popular virtualization systems like Xen, VMware, QEMU, and KVM can use block devices as guest images, it's possible to create full copies of those images and use them as on-demand, small-footprint virtual machines, with the added advantages of rapid deployment (creating a Snapshot usually takes no more than a few seconds) and space savings (guests share most of their data with the original image).

General guidelines for doing this include the following steps:

  1. Create a logical volume for the original image.
  2. Install a guest virtual machine using the LV as the disk image.
  3. Suspend or freeze the virtual machine. The memory image can be a regular file, kept in the same place as the Snapshots.
  4. Create a read-write Snapshot of the original LV.
  5. Spawn a new virtual machine using the Snapshot volume as the disk image. Change network/console settings if necessary.
  6. Log on to the newly created machine and change its network settings/hostname.

After completing these steps, you can provide the user with access information to the newly created virtual machine. If another virtual machine is required, repeat steps 4 through 6 (which means no need to reinstall a machine!). Alternatively, you can automate these steps with a script.
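Steps 4 and 5 might boil down to something like this sketch, where the names, the CoW size, and the QEMU invocation are illustrative assumptions:

#read-write Snapshot of the master image, with 10GB of CoW space
lvcreate -s -L 10G -n guest1-disk /dev/test-volume/master-image
#spawn a guest using the Snapshot as its disk image (QEMU example)
qemu -hda /dev/test-volume/guest1-disk -m 512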

After you're finished using it, you can stop the virtual machine and scrap the Snapshot if desired.

Mirroring and multipath

Recent LVM2 developments allow a logical volume to sport high-availability features by keeping two or more mirrors, each of which can be placed on a different physical volume (or a different device). When an I/O error is detected on a device, dmeventd can take the failing PV offline without service interruption. Refer to the lvcreate(8), lvconvert(8), and lvchange(8) man pages for more info.

For hardware that supports it, dm_multipath can access a single device over different channels, providing fail-over in case a channel goes down. Refer to the dm_multipath and multipathd documentation for more details.

Transparent device encryption

You can transparently encrypt a block device or a logical volume with dm_crypt. Refer to the dm_crypt documentation and the cryptsetup(8) man page for more info.
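A typical setup layers the encryption on top of an LV. A hedged sketch using cryptsetup with LUKS (the names are illustrative):

#initialize LUKS encryption on the LV (this destroys its contents!)
cryptsetup luksFormat /dev/test-volume/secret
#open it; the clear-text device shows up under /dev/mapper/secret
cryptsetup luksOpen /dev/test-volume/secret secret
#from here on it behaves like any block device
mkfs.ext3 /dev/mapper/secret
mount /dev/mapper/secret /mnt/secret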


Resources

Learn

  • The LVM HowTo, available from the Linux Documentation Project, is a more basic guide to LVM.
  • The lvm man page documents the toolset, including the extent allocation policies.
  • EVMS is an alternative volume-management toolset.

Get products and technologies

  • Order the SEK for Linux, a two-DVD set containing the latest IBM trial software for Linux from DB2®, Lotus®, Rational®, Tivoli®, and WebSphere®.

  • With IBM trial software, available for download directly from developerWorks, build your next development project on Linux.
