Category: LINUX

2010-06-05 02:52:07

VMware fence agent & Red Hat Cluster Suite

Last updated 04-May-2010

Update history:

  • 2010-05-04: fence_vmware in RHEL 5.5 is the former fence_vmware_ng (same syntax, just renamed fence_vmware), so fence_vmware_ng is no longer needed.
  • 2010-01-15: fence_vmware from RHEL 5.3 and 5.4 does not work with ESX 4.0. A working agent is included.
  • 2009-10-07: Tested on ESX 4.0.0 and vCenter 4.0.0.
  • 2009-01-19: Fixed fence_vmware_ng. The old version could fail if somebody powered on the VM (for example, the VMware cluster itself) before the agent did; this led to an error and a failed fence operation. Now only a warning is displayed and fencing is considered successful.
  • 2009-01-15: New fence_vmware_ng. The status operation (and therefore whole fencing) is faster when many VMs are registered in VMware. The default type esx is now really the default.

We have two agents for fencing VMware virtual machines.

  • The first, named fence_vmware, is in the RHEL 5.3 and 5.4/STABLE2 branches. It is designed and tested against VMware ESX Server (not ESXi!) and Server 1.x. It is replaced by the new agent (also named fence_vmware) in RHEL 5.5/STABLE3.
  • The second, with the old name fence_vmware_ng (currently fence_vmware), is in the master/STABLE3 branch. It is designed and tested against VMware ESX/ESXi/VC and Server 2.x and 1.x. This is what replaced the old fence_vmware (in master/STABLE3 it is actually named fence_vmware).

Fence_vmware

It is a union of two older agents: fence_vmware_vix and fence_vmware_vi.

VI (in the following text, VI API means not only the original VI Perl API, whose last version is 1.6, but the VMware vSphere SDK for Perl too) is the VMware API for controlling their main business-class products (ESX/VC). This API is fully cluster aware (VMware cluster), so the agent can fence guest machines physically running on an ESX host but managed by VC, and it keeps working without any reconfiguration if a guest is migrated to another ESX.

VIX is a newer API that works with the VMware "low-end" products (Server 2.x, 1.x), but there is some support for ESX/ESXi 3.5 update 2 and VC 2.5 update 2. This API is NOT cluster aware and is recommended only for Server 2.x and 1.x. But if you use only a single ESX/ESXi host, don't have a VMware cluster and never use migration, you can use this API too.
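Whichever API is used underneath, the agent is wired into Red Hat Cluster Suite like any other fence device. Below is a minimal, hypothetical cluster.conf fragment; the host name, credentials, device name and per-node VM name are placeholders, and the attribute names simply follow the usual fencing-device conventions rather than anything documented above:

   <!-- hypothetical fragment: one fence_vmware device pointing at vCenter/ESX,
        reused by every node with its own VM name as "port" -->
   <fencedevices>
           <fencedevice agent="fence_vmware" name="vmware_fence"
                        ipaddr="vcenter.example.com" login="fenceuser" passwd="secret"/>
   </fencedevices>

   <clusternode name="node1.example.com" nodeid="1">
           <fence>
                   <method name="1">
                           <device name="vmware_fence" port="node1-vm"/>
                   </method>
           </fence>
   </clusternode>

Each node reuses the same fence device and differs only in the port value (the VM name as registered in VMware).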

If you are using RHEL 5.5/RHEL 6, just install the fence_agents package and you are ready to use fence_vmware. For distributions with an older fence_agents, you can get the agent from the GIT (RHEL 5.5/STABLE3/master) repository and use it (please make sure to use the current library, fencing.py, too).
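As a rough sanity check, you can confirm that both the agent and the shared fencing library it imports are in place; the paths below are the usual RHEL locations and may differ on other distributions:

   # assumed locations: agent in /usr/sbin, shared library under /usr/share/fence
   ls -l /usr/sbin/fence_vmware /usr/share/fence/fencing.py
   fence_vmware -h | head -n 5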

Pre-req

VI Perl API and/or VIX API installed on every node in the cluster. This is a big difference from the older agent, where you didn't need to install anything; in return, the new agent's configuration is a little less painful (and brings many bonuses).
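A quick way to check that the VI Perl API is usable on a node is to try loading its runtime module from the command line (VMware::VIRuntime is the module the agent's Perl helper relies on); this is just a sketch:

   # prints the message only if the VI Perl API can be loaded;
   # otherwise perl reports "Can't locate VMware/VIRuntime.pm ..."
   perl -MVMware::VIRuntime -e 'print "VI Perl API looks usable\n"'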

Running

If you run fence_vmware with -h, you will see something like this:

Options:
   -o <action>      Action: status, reboot (default), off or on
   -a <ip>          IP address or hostname of fencing device
   -l <name>        Login name
   -p <password>    Login password or passphrase
   -S <script>      Script to run to retrieve password
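A typical invocation against a vCenter or ESX host looks roughly like the following; the host name, credentials and the -n option used here for the VM name are illustrative assumptions, so check the full help/man page of your agent version for the exact option set:

   # check the VM's power state first, then fence (reboot) it
   fence_vmware -a vcenter.example.com -l fenceuser -p secret -n node1-vm -o status
   fence_vmware -a vcenter.example.com -l fenceuser -p secret -n node1-vm -o reboot

Running the agent by hand like this is a convenient way to verify credentials and the VM name before wiring them into cluster.conf.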
				
			  

netyu 2010-06-05 02:54:49

This is very old news, but one of the neat things about ESX is that you can build clusters for testing and don’t have to shell out for a split-bus DAS or SAN space. My colleague, who’s been using ESX for good and evil (testing Veritas Cluster Server), needed to build one in our dev environment the other day. Last time I did this was to cluster VirtualCenter 2.0.2 and I was a bit rusty on the process. So, for my reference, do this: Create a shared vmdk to act as the quorum disk using vmkfstools.
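For reference, creating such a shared disk is usually a one-liner with vmkfstools; the datastore path, size and eagerly zeroed thick format below are assumptions for illustration, not details taken from the comment:

   # hypothetical example: 1 GB eagerly zeroed thick vmdk on a shared datastore for the quorum disk
   vmkfstools -c 1024m -d eagerzeroedthick /vmfs/volumes/shared_datastore/quorum/quorum.vmdk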
