
Category: LINUX

2010-04-06 13:37:51

Issue

In a high-availability cluster, shared storage is commonly used to ensure that all nodes have access to the same data. When the file system residing on that storage is not cluster-aware, such as ext3, there is a risk of file system corruption because two or more nodes could mount it at the same time.

Environment

Red Hat Enterprise Linux 4 and 5

Resolution

As of Red Hat Enterprise Linux 4.5, there is support in rgmanager for highly-available LVM volumes (HA-LVM) in a failover configuration without the need for a clustered logical volume manager (clvm) or cluster-aware file system.

 

When using LVM with local file-based locking (locking_type set to 1 in /etc/lvm/lvm.conf), a volume group (VG) must never be active on more than one node at a time. This is a requirement because local locking cannot ensure consistency when more than one node updates the volume group metadata simultaneously.

 

HA-LVM is a resource agent for the rgmanager daemon that ensures this mutual exclusion for volume group access. Internally, LVM tagging is used to control ownership of HA-LVM resources and to prevent a VG from being activated on more than one node at a time.
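For example, once a service owns an HA-LVM volume group, the owning node's hostname appears as a tag on that VG. A quick way to inspect this (shared_vg is only a placeholder name, matching the examples later in this article) is:

# vgs -o vg_name,vg_tags shared_vg

The node that currently owns the service should be listed in the VG Tags column.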

 

HA-LVM permits the configuration of services using resources located on shared storage visible to all nodes in a cluster, in cases where the additional complexity and cost of a cluster-aware file system and volume manager are not required.

 

It is advisable to equip hosts using HA-LVM with redundant paths to the shared storage devices (multipathing), so that access to the volume group data is preserved in the event of a partial failure of storage components.
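For instance, on a host running device-mapper-multipath, the physical volume can be created on the multipath device rather than on an individual path (the device name /dev/mapper/mpath0 is only an example; actual names vary by system):

# multipath -ll
# pvcreate /dev/mapper/mpath0

This keeps the volume group reachable if one of the underlying paths fails.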

 

To set up LVM Failover, perform the following procedure:

 

1. Ensure that the parameter locking_type in the global section of /etc/lvm/lvm.conf is set to the value '1'.
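The relevant portion of /etc/lvm/lvm.conf should then look similar to the following (other settings omitted):

global {
    # type 1 is local file-based locking
    locking_type = 1
}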

 

2. Create the logical volume and file system using standard LVM2 and file system commands. For example (shared_vg and ha_lv are example names for the volume group and logical volume):

# pvcreate /dev/sd[cde]1

# vgcreate shared_vg /dev/sd[cde]1

# lvcreate -L 10G -n ha_lv shared_vg

# mkfs.ext3 /dev/shared_vg/ha_lv

 

3. Edit /etc/cluster/cluster.conf to include the newly created logical volume as a resource in one of your services. Alternatively, configuration tools such as Conga or system-config-cluster may be used to create these entries. Below is a sample resource manager (rm) section from /etc/cluster/cluster.conf; the names used (neo-01, neo-02, shared_vg, ha_lv) are examples:

 

<rm>
   <failoverdomains>
       <failoverdomain name="FD" ordered="1" restricted="0">
          <failoverdomainnode name="neo-01" priority="1"/>
          <failoverdomainnode name="neo-02" priority="2"/>
       </failoverdomain>
   </failoverdomains>
   <resources>
       <lvm name="lvm" vg_name="shared_vg" lv_name="ha_lv"/>
       <fs name="FS" device="/dev/shared_vg/ha_lv" force_fsck="0" force_unmount="1" fstype="ext3" mountpoint="/mnt" options="" self_fence="0"/>
   </resources>
   <service autostart="1" domain="FD" name="serv" recovery="relocate">
       <lvm ref="lvm"/>
       <fs ref="FS"/>
   </service>
</rm>
 

 

Note: If there are multiple logical volumes in the volume group, then the logical volume name (lv_name) in the lvm resource should be left blank or unspecified. The ability to have multiple logical volumes in a single HA-LVM volume group became available as of Red Hat Enterprise Linux 4.7 (rgmanager-1.9.80-1) and 5.2 (rgmanager-2.0.38-2). Also note that in an HA-LVM configuration, a volume group may only be used by a single service.
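For instance, if shared_vg contained several logical volumes, the lvm resource shown earlier would be written without the lv_name attribute, so that the whole volume group fails over as a single unit:

       <lvm name="lvm" vg_name="shared_vg"/>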

 

4. Edit the volume_list field in /etc/lvm/lvm.conf. Include the name of your root volume group and, preceded by @, your hostname as it is listed in /etc/cluster/cluster.conf. This string MUST match the node name given in cluster.conf. Below is a sample entry from /etc/lvm/lvm.conf:

volume_list = [ "VolGroup00", "@neo-01" ]

 

This tag will be used to activate the shared VGs or LVs. DO NOT include in volume_list the names of any volume groups that are to be shared using HA-LVM.
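With volume_list in place, a quick sanity check (using the example names from above) is to try activating the shared volume group by hand on a node that does not currently own the service. LVM should refuse, because shared_vg is neither listed in volume_list nor tagged with that node's name:

# vgchange -ay shared_vg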

 

5. Update the initrd on all of your cluster nodes so that the new volume_list setting also takes effect at boot time. To do this, run the following command on each node:

# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
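If you prefer to run this from a single node, something along these lines works, assuming passwordless ssh between the nodes and the example node names neo-01 and neo-02:

# for node in neo-01 neo-02; do ssh $node 'mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)'; done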

 

6. Reboot all nodes to ensure the correct initrd is in use.
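After the reboot, a reasonable final check is to confirm that the service has started and that the shared volume group is active (and tagged) on only one node, for example:

# clustat

# vgs -o vg_name,vg_tags shared_vg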
