5. Configure MC/ServiceGuard:
1. Generate the cmclconf.ascii file on hsedb1 (the file below defines all four nodes, so all four are queried):
[hsedb1:/]# cmquerycl -v -C /etc/cmcluster/cmclconf.ascii -n hsedb1 -n hsedb2 -n hsedb3 -n hsedb4
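If cmquerycl cannot reach the other nodes, Serviceguard node-level security is the usual cause. A minimal sketch of the bootstrap security file, assuming the default /etc/cmcluster/cmclnodelist mechanism is used (the same file must exist on every node):
[hsedb1:/]# cat /etc/cmcluster/cmclnodelist
hsedb1 root
hsedb2 root
hsedb3 root
hsedb4 root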
2. Edit the cmclconf.ascii file (the values modified for this cluster are the uncommented parameter lines below):
[hsedb1:/etc/cmcluster]# vi cmclconf.ascii
# **********************************************************************
# ********* HIGH AVAILABILITY CLUSTER CONFIGURATION FILE ***************
# ***** For complete details about cluster parameters and how to *******
# ***** set them, consult the Serviceguard manual. *********************
# **********************************************************************
# Enter a name for this cluster. This name will be used to identify the
# cluster when viewing or manipulating it.
CLUSTER_NAME cluster1
# Cluster Lock Parameters
# The cluster lock is used as a tie-breaker for situations
# in which a running cluster fails, and then two equal-sized
# sub-clusters are both trying to form a new cluster. The
# cluster lock may be configured using only one of the
# following alternatives on a cluster:
# the LVM lock disk
# the lock LUN
# the quorum server
#
#
# Consider the following when configuring a cluster.
# For a two-node cluster, you must use a cluster lock. For
# a cluster of three or four nodes, a cluster lock is strongly
# recommended. For a cluster of more than four nodes, a
# cluster lock is recommended. If you decide to configure
# a lock for a cluster of more than four nodes, it must be
# a quorum server.
# Lock Disk Parameters. Use the FIRST_CLUSTER_LOCK_VG and
# FIRST_CLUSTER_LOCK_PV parameters to define a lock disk.
# The FIRST_CLUSTER_LOCK_VG is the LVM volume group that
# holds the cluster lock. This volume group should not be
# used by any other cluster as a cluster lock device.
# LUN lock disk parameters. Use the CLUSTER_LOCK_LUN parameter
# to define the device on a per node basis. The device may only
# be used for this purpose and by only a single cluster.
#
# Example for a FC storage array cluster disk
# CLUSTER_LOCK_LUN /dev/dsk/c1t2d3s1
# For 11.31 and later versions of HP-UX
# CLUSTER_LOCK_LUN /dev/disk/disk4_p2
# Quorum Server Parameters. Use the QS_HOST, QS_ADDR, QS_POLLING_INTERVAL,
# and QS_TIMEOUT_EXTENSION parameters to define a quorum server. The QS_HOST
# and QS_ADDR are either the host name or IP address of the system that is
# running the quorum server process. More than one IP address can be
# configured for the quorum server. When one subnet fails, Serviceguard
# uses the next available subnet to communicate with the quorum server.
# QS_HOST is used to specify the quorum server and QS_ADDR can be used to
# specify additional IP addresses for the quorum server. The QS_HOST entry
# must be specified (only once) before any other QS parameters. Only
# one QS_ADDR entry is used to specify the additional IP address.
# QS_HOST and QS_ADDR must not resolve to the same IP address.
# Otherwise cluster configuration will fail. All subnets must be up
# when you use cmapplyconf and cmquerycl to configure the cluster.
# The QS_POLLING_INTERVAL is the interval (in microseconds) at which
# Serviceguard checks to make sure the quorum server is running. You can use
# the optional QS_TIMEOUT_EXTENSION to increase the time interval (in
# microseconds) after which the quorum server is marked DOWN.
#
# The default quorum server timeout is calculated from the
# Serviceguard cluster parameters, including NODE_TIMEOUT and
# HEARTBEAT_INTERVAL. If you are experiencing quorum server
# timeouts, you can adjust these parameters, or you can include
# the QS_TIMEOUT_EXTENSION parameter.
#
# The value of QS_TIMEOUT_EXTENSION will directly affect the amount of
# time it takes for cluster reformation in the event of failure. For
# example, if QS_TIMEOUT_EXTENSION is set to 10 seconds, the cluster
# reformation will take 10 seconds longer than if the QS_TIMEOUT_EXTENSION
# was set to 0. This delay applies even if there is no delay in contacting
# the Quorum Server. The recommended value for QS_TIMEOUT_EXTENSION is 0,
# which is used as the default, and the maximum supported value is 300000000
# (5 minutes).
#
# For example, to configure a quorum server running on node "qs_host"
# with the additional IP address "qs_addr" and with 120 seconds for the
# QS_POLLING_INTERVAL and to add 2 seconds to the system assigned value
# for the quorum server timeout, enter
#
# QS_HOST qs_host
# QS_ADDR qs_addr
# QS_POLLING_INTERVAL 120000000
# QS_TIMEOUT_EXTENSION 2000000
FIRST_CLUSTER_LOCK_VG /dev/vg_ops
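# (Here the lock VG is /dev/vg_ops, the same shared volume group that is
# configured as OPS_VOLUME_GROUP at the end of this file.)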
# Definition of nodes in the cluster.
# Repeat node definitions as necessary for additional nodes.
# NODE_NAME is the specified nodename in the cluster.
# It must match the hostname, and neither may contain the full domain name.
# Each NETWORK_INTERFACE, if configured with IPv4 address,
# must have ONLY one IPv4 address entry with it which could
# be either HEARTBEAT_IP or STATIONARY_IP.
# Each NETWORK_INTERFACE, if configured with IPv6 address(es)
# can have multiple IPv6 address entries (up to a maximum of 2,
# only one IPv6 address entry belonging to site-local scope
# and only one belonging to global scope) which must be all
# STATIONARY_IP. They cannot be HEARTBEAT_IP.
NODE_NAME hsedb1
NETWORK_INTERFACE lan0
HEARTBEAT_IP 10.22.133.81
NETWORK_INTERFACE lan2
HEARTBEAT_IP 192.168.3.81
# CLUSTER_LOCK_LUN
FIRST_CLUSTER_LOCK_PV /dev/dsk/c6t0d4
# Route information
# route id 1: 10.22.133.81
# route id 2: 192.168.3.81
# Warning: There are no standby network interfaces for lan0.
# Warning: There are no standby network interfaces for lan2.
NODE_NAME hsedb2
NETWORK_INTERFACE lan0
HEARTBEAT_IP 10.22.133.82
NETWORK_INTERFACE lan2
HEARTBEAT_IP 192.168.3.82
# CLUSTER_LOCK_LUN
FIRST_CLUSTER_LOCK_PV /dev/dsk/c6t0d4
# Route information
# route id 1: 10.22.133.82
# route id 2: 192.168.3.82
# Warning: There are no standby network interfaces for lan0.
# Warning: There are no standby network interfaces for lan2.
NODE_NAME hsedb3
NETWORK_INTERFACE lan0
HEARTBEAT_IP 10.22.133.83
NETWORK_INTERFACE lan2
HEARTBEAT_IP 192.168.3.83
# CLUSTER_LOCK_LUN
FIRST_CLUSTER_LOCK_PV /dev/dsk/c6t0d3
# Route information
# route id 1: 10.22.133.83
# route id 2: 192.168.3.83
# Warning: There are no standby network interfaces for lan0.
# Warning: There are no standby network interfaces for lan2.
NODE_NAME hsedb4
NETWORK_INTERFACE lan0
HEARTBEAT_IP 10.22.133.84
NETWORK_INTERFACE lan2
HEARTBEAT_IP 192.168.3.84
# CLUSTER_LOCK_LUN
FIRST_CLUSTER_LOCK_PV /dev/dsk/c6t0d3
# Route information
# route id 1: 10.22.133.84
# route id 2: 192.168.3.84
# Warning: There are no standby network interfaces for lan0.
# Warning: There are no standby network interfaces for lan2.
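# NOTE: FIRST_CLUSTER_LOCK_PV must name the same physical lock disk on
# every node; only the device file differs per host (seen as c6t0d4
# from hsedb1/hsedb2 and as c6t0d3 from hsedb3/hsedb4).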
# Cluster Timing Parameters (microseconds).
# The NODE_TIMEOUT parameter defaults to 2000000 (2 seconds).
# This value is recommended for installations in which the highest
# priority is to reform the cluster as fast as possible in
# case of failure. But this value can sometimes lead to reformations
# caused by short-lived system hangs or network load spikes. If your
# highest priority is to minimize reformations, consider using
# a higher setting. For a significant portion of installations,
# a setting of 5000000 to 8000000 (5 to 8 seconds) is appropriate.
# The maximum value recommended for NODE_TIMEOUT is 30000000
# (30 seconds).
HEARTBEAT_INTERVAL 2000000
NODE_TIMEOUT 5000000
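# NOTE: NODE_TIMEOUT is raised from the 2-second default to 5 seconds, the
# low end of the 5-8 second range recommended above, trading slightly slower
# reformation for fewer reformations caused by transient hangs or load spikes.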
# Configuration/Reconfiguration Timing Parameters (microseconds).
AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000
# Network Monitor Configuration Parameters.
# The NETWORK_FAILURE_DETECTION parameter determines how LAN card failures are detected.
# If set to INONLY_OR_INOUT, a LAN card will be considered down when its inbound
# message count stops increasing or when both inbound and outbound
# message counts stop increasing.
# If set to INOUT, both the inbound and outbound message counts must
# stop increasing before the card is considered down.
NETWORK_FAILURE_DETECTION INOUT
# Package Configuration Parameters.
# Enter the maximum number of packages which will be configured in the cluster.
# You cannot add packages beyond this limit.
# This parameter is required.
MAX_CONFIGURED_PACKAGES 150
# Access Control Policy Parameters.
#
# Three entries set the access control policy for the cluster:
# First line must be USER_NAME, second USER_HOST, and third USER_ROLE.
# Enter a value after each.
#
# 1. USER_NAME can either be ANY_USER, or a maximum of
# 8 login names from the /etc/passwd file on user host.
# The following special characters are NOT supported for USER_NAME
# ' ', '/', '\', '*'
# 2. USER_HOST is where the user can issue Serviceguard commands.
# If using Serviceguard Manager, it is the COM server.
# Choose one of these three values: ANY_SERVICEGUARD_NODE, or
# (any) CLUSTER_MEMBER_NODE, or a specific node. For node,
# use the official hostname from domain name server, and not
# an IP address or fully qualified name.
# 3. USER_ROLE must be one of these three values:
# * MONITOR: read-only capabilities for the cluster and packages
# * PACKAGE_ADMIN: MONITOR, plus administrative commands for packages
# in the cluster
# * FULL_ADMIN: MONITOR and PACKAGE_ADMIN plus the administrative
# commands for the cluster.
#
# Access control policy does not set a role for configuration
# capability. To configure, a user must log on to one of the
# cluster's nodes as root (UID=0). Access control
# policy cannot limit root users' access.
#
# MONITOR and FULL_ADMIN can only be set in the cluster configuration file,
# and they apply to the entire cluster. PACKAGE_ADMIN can be set in the
# cluster or a package configuration file. If set in the cluster
# configuration file, PACKAGE_ADMIN applies to all configured packages.
# If set in a package configuration file, PACKAGE_ADMIN applies to that
# package only.
#
# Conflicting or redundant policies will cause an error while applying
# the configuration, and stop the process. The maximum number of access
# policies that can be configured in the cluster is 200.
#
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN
# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM nor VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02
# List of OPS Volume Groups.
# Formerly known as DLM Volume Groups, these volume groups
# will be used by OPS or RAC cluster applications via
# the vgchange -a s command. (Note: the name DLM_VOLUME_GROUP
# is also still supported for compatibility with earlier versions.)
# For example:
# OPS_VOLUME_GROUP /dev/vgdatabase
# OPS_VOLUME_GROUP /dev/vg02
OPS_VOLUME_GROUP /dev/vg_ops
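After saving the file, the configuration would normally be checked, applied to all nodes, and the cluster started. A minimal sketch of those follow-up commands, assuming the cluster has not yet been started and that the lock VG must be briefly activated so the lock disk can be initialized:
[hsedb1:/etc/cmcluster]# cmcheckconf -v -C /etc/cmcluster/cmclconf.ascii
[hsedb1:/etc/cmcluster]# vgchange -a y /dev/vg_ops   # activate the lock VG before applying
[hsedb1:/etc/cmcluster]# cmapplyconf -v -C /etc/cmcluster/cmclconf.ascii
[hsedb1:/etc/cmcluster]# vgchange -a n /dev/vg_ops   # deactivate; Serviceguard manages it from here on
[hsedb1:/etc/cmcluster]# cmruncl -v
[hsedb1:/etc/cmcluster]# cmviewcl -v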