Category: LINUX
2010-06-05 02:52:07
Last updated 04-May-2010
Updates history:
2010-05-04: fence_vmware in RHEL 5.5 is now the former fence_vmware_ng (same syntax, but shipped under the name fence_vmware), so a separate fence_vmware_ng is no longer needed.
2009-01-15: New fence_vmware_ng. The status operation (and therefore the whole fencing operation) is faster when many VMs are registered in VMware. The default type, esx, is now really the default.
We have two agents for fencing VMware virtual machines. The new one (now shipped as fence_vmware, formerly fence_vmware_ng) is a union of two older agents, fence_vmware_vix and fence_vmware_vi.
VI (in the following text, "VI API" means not only the original VI Perl API, last version 1.6, but also the VMware vSphere SDK for Perl) is the VMware API for controlling their main business-class products (ESX/VirtualCenter). This API is fully cluster aware (VMware cluster), so the agent can fence guest machines that physically run on an ESX host but are managed by VirtualCenter, and it keeps working without any reconfiguration when a guest is migrated to another ESX host.
VIX is a newer API that works with VMware's "low-end" products (Server 2.x and 1.x), with some support for ESX/ESXi 3.5 Update 2 and VC 2.5 Update 2. This API is NOT cluster aware and is recommended only for Server 2.x and 1.x. However, if you run only one ESX/ESXi host, have no VMware Cluster, and never use migration, you can use this API too.
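To make the difference concrete, here is a hedged sketch of how the agent might be pointed at each back end. The host names, credentials and VM name are placeholders, and the type option is assumed to be spelled --vmware_type (with esx as the default, as noted in the updates history above):

  # Default type (esx): go through the VI API to an ESX host or VirtualCenter
  fence_vmware -a vcenter.example.com -l fenceuser -p secret -n guest01 -o status

  # VIX back end for VMware Server 2.x (option name is an assumption)
  fence_vmware -a vmserver.example.com -l fenceuser -p secret -n guest01 --vmware_type=server2 -o status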
If you are using RHEL 5.5/RHEL 6, just install the fence-agents package and you are ready to use fence_vmware. For distributions with an older fence-agents, you can get the agent from the GIT repository (RHEL 5.5/STABLE3/master branches) and use it from there (please make sure to use the current fencing.py library too).
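For example (a minimal sketch; the package name on RHEL is fence-agents, and the repository URL depends on where your distribution hosts the cluster sources):

  # RHEL 5.5 / RHEL 6
  yum install fence-agents

  # Older distributions: copy fence_vmware together with the current fencing.py
  # from the GIT repository (RHEL 5.5 / STABLE3 / master branch) onto every node.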
The VI Perl API and/or VIX API must be installed on every node in the cluster. This is a big difference from the older agent, which needed nothing installed, but the new agent's configuration is a little less painful (and it has many bonuses).
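A quick sanity check on each node might look like this (module and tool names are assumptions: the vSphere SDK for Perl provides the VMware::VIRuntime module, and the VIX package ships the vmrun utility):

  # VI Perl API / vSphere SDK for Perl present?
  perl -MVMware::VIRuntime -e 'print "VI Perl API OK\n"'

  # VIX API present?
  which vmrun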
If you run fence_vmware with -h you will see something like this:
Options:
  -o   Action: status, reboot (default), off or on
  -a   IP address or hostname of fencing device
  -l   Login name
  -p   Login password or passphrase
  -S
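As a usage sketch (addresses, credentials and the VM name are placeholders, and -n is assumed to take the virtual machine name as the "port", as in other fence agents):

  # Check whether guest01 is powered on
  fence_vmware -a esx01.example.com -l fenceuser -p secret -n guest01 -o status

  # Fence it (reboot is the default action when -o is omitted)
  fence_vmware -a esx01.example.com -l fenceuser -p secret -n guest01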
netyu 2010-06-05 02:54:49
This is very old news, but one of the neat things about ESX is that you can build clusters for testing and don’t have to shell out for a split-bus DAS or SAN space. My colleague, who’s been using ESX for good and evil (testing Veritas Cluster Server), needed to build one in our dev environment the other day. The last time I did this was to cluster VirtualCenter 2.0.2 and I was a bit rusty on the process. So, for my reference, do this: create a shared vmdk to act as the quorum disk using vmkfstools.
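For reference, creating such a shared vmdk might look roughly like this (the datastore path and size are examples, and the .vmx lines are the usual shared-SCSI-bus settings for clustering guests on a single ESX host, so treat this as a sketch rather than a recipe):

  # On the ESX host: create an eagerly zeroed disk for the quorum device
  vmkfstools -c 1G -d eagerzeroedthick /vmfs/volumes/datastore1/shared/quorum.vmdk

  # In each cluster VM's .vmx, attach it on a separate, shared SCSI controller:
  #   scsi1.present = "TRUE"
  #   scsi1.sharedBus = "virtual"
  #   scsi1:0.present = "TRUE"
  #   scsi1:0.fileName = "/vmfs/volumes/datastore1/shared/quorum.vmdk"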