Category: System Administration
2012-07-16 18:54:13
On the disk array, carve out two disk spaces: one for the data volume group vgora, and another, 1 GB in size, for creating vglock.
Creating the MC/SG cluster:
First, halt and delete the existing cluster (if there is one):
#cmhaltcl -v -f
#cmdeleteconf
Confirm the deletion. This removes the binary configuration file while keeping the original ASCII configuration files.
Skip this step on a freshly installed machine.
Modifying the MC/SG configuration:
1. Configure the network- and disk-related parts:
vi /.rhosts --------- do this on both hosts
nodedb1 root
nodedb2 root
vi /etc/hosts --------- do this on both hosts
10.0.10.193 nodedb1
10.0.10.195 nodedb2
Ideally the hosts file should contain no other entries for the names nodedb1 and nodedb2.
Check the configuration with nslookup:
nslookup nodedb1
nslookup nodedb2
nslookup 10.0.10.193, and so on.
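The duplicate-entry check above can also be scripted. A minimal POSIX-shell sketch (the helper name check_host_entries and the throwaway demo file are illustrative, not part of the original setup):

```shell
# check_host_entries FILE NAME...: report how many times each NAME appears
# as a whole word in FILE (a hosts-style file). More than one match per
# name usually means a stale or conflicting entry.
check_host_entries() {
    file=$1; shift
    for h in "$@"; do
        n=$(grep -c -w "$h" "$file") || n=0
        echo "$h: $n"
    done
}

# Illustrative run against a throwaway hosts file:
tmp=$(mktemp)
printf '10.0.10.193 nodedb1\n10.0.10.195 nodedb2\n' > "$tmp"
check_host_entries "$tmp" nodedb1 nodedb2
rm -f "$tmp"
```

Any count other than 1 for nodedb1 or nodedb2 is worth investigating before configuring the cluster.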
Copy from nodedb1 to nodedb2:
cp /.rhosts /etc/hosts.equiv
rcp /etc/hosts.equiv nodedb2:/etc/hosts.equiv
Create the data volume group and vglock (/dev/disk/disk44). In fact, the data volume group itself could also be used directly as the lock VG.
First create the data volume group vgora (/dev/rdisk/disk32):
#pvcreate /dev/rdisk/disk32
#mkdir /dev/vgora
#mknod /dev/vgora/group c 64 0x010000
nodedb1@[/etc#]vgcreate /dev/vgora /dev/disk/disk32
Volume group "/dev/vgora" has been successfully created.
Volume Group configuration for /dev/vgora has been saved in /etc/lvmconf/vgora.conf
Create the logical volumes the database needs in volume group vgora:
lvcreate -L 1024 -n ora_ocr1 /dev/vgora
lvcreate -L 1024 -n ora_ocr2 /dev/vgora
lvcreate -L 1024 -n ora_vote1 /dev/vgora
lvcreate -L 1024 -n ora_vote2 /dev/vgora
lvcreate -L 1024 -n ora_vote3 /dev/vgora
lvcreate -L 500 -n democtl1.ctl /dev/vgora
lvcreate -L 500 -n democtl2.ctl /dev/vgora
lvcreate -L 500 -n democtl3.ctl /dev/vgora
lvcreate -L 1000 -n demo1log1.log /dev/vgora
lvcreate -L 1000 -n demo1log2.log /dev/vgora
lvcreate -L 1000 -n demo1log3.log /dev/vgora
lvcreate -L 1000 -n demo2log1.log /dev/vgora
lvcreate -L 1000 -n demo2log2.log /dev/vgora
lvcreate -L 1000 -n demo2log3.log /dev/vgora
lvcreate -L 5000 -n demosystem.dbf /dev/vgora
lvcreate -L 5000 -n demosysaux.dbf /dev/vgora
lvcreate -L 10000 -n demotemp.dbf /dev/vgora
lvcreate -L 5000 -n demousers.dbf /dev/vgora
lvcreate -L 5 -n demospfile1.ora /dev/vgora
lvcreate -L 5 -n pwdfile.ora /dev/vgora
lvcreate -L 20000 -n demoundotbs1.dbf /dev/vgora
lvcreate -L 20000 -n demoundotbs2.dbf /dev/vgora
lvcreate -L 20000 -n data01 /dev/vgora
lvcreate -L 20000 -n data02 /dev/vgora
lvcreate -L 20000 -n data03 /dev/vgora
lvcreate -L 20000 -n data04 /dev/vgora
lvcreate -L 20000 -n data05 /dev/vgora
lvcreate -L 20000 -n data06 /dev/vgora
lvcreate -L 20000 -n data07 /dev/vgora
lvcreate -L 20000 -n data08 /dev/vgora
lvcreate -L 20000 -n data09 /dev/vgora
lvcreate -L 20000 -n data10 /dev/vgora
lvcreate -L 20000 -n data11 /dev/vgora
lvcreate -L 20000 -n data12 /dev/vgora
lvcreate -L 20000 -n data13 /dev/vgora
lvcreate -L 20000 -n data14 /dev/vgora
lvcreate -L 20000 -n data15 /dev/vgora
lvcreate -L 10000 -n data16 /dev/vgora
lvcreate -L 10000 -n data17 /dev/vgora
lvcreate -L 10000 -n data18 /dev/vgora
lvcreate -L 10000 -n data19 /dev/vgora
lvcreate -L 10000 -n data20 /dev/vgora
lvcreate -L 10000 -n data21 /dev/vgora
lvcreate -L 10000 -n data22 /dev/vgora
lvcreate -L 10000 -n data23 /dev/vgora
lvcreate -L 10000 -n data24 /dev/vgora
lvcreate -L 10000 -n data25 /dev/vgora
lvcreate -L 10000 -n data26 /dev/vgora
lvcreate -L 10000 -n data27 /dev/vgora
lvcreate -L 10000 -n data28 /dev/vgora
lvcreate -L 10000 -n data29 /dev/vgora
lvcreate -L 10000 -n data30 /dev/vgora
lvcreate.sh: END
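The thirty dataNN volumes above differ only in name and size (20000 MB for data01-data15, 10000 MB for data16-data30), so the commands can be generated rather than typed. A sketch that prints them for review (gen_data_lvs is an illustrative helper name; pipe its output to sh to actually create the volumes):

```shell
# gen_data_lvs: print the lvcreate commands for data01..data30.
# Sizes mirror the list above: 20000 MB for data01-15, 10000 MB for data16-30.
gen_data_lvs() {
    i=1
    while [ "$i" -le 30 ]; do
        name=$(printf 'data%02d' "$i")
        if [ "$i" -le 15 ]; then size=20000; else size=10000; fi
        echo "lvcreate -L $size -n $name /dev/vgora"
        i=$((i + 1))
    done
}

# Print the commands for review first; run `gen_data_lvs | sh` to execute.
gen_data_lvs
```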
Export the map of vgora to /tmp:
nodedb1@[/tmp#]vgexport -p -v -s -m /tmp/vgora.map /dev/vgora
Import vgora into nodedb2:
nodedb2@[/#] cd /tmp
nodedb2@[/tmp#]rcp nodedb1:/tmp/vgora.map .
nodedb2@[/tmp#]mkdir /dev/vgora
nodedb2@[/tmp#]mknod /dev/vgora/group c 64 0x010000
nodedb2@[/tmp#]remsh nodedb1 ll /dev/vgora/group
crw-rw-rw- 1 root sys 64 0x010000 Jun 6 11:58 /dev/vgora/group
nodedb2@[/tmp#]ll /dev/vgora/group
crw-rw-rw- 1 root sys 64 0x010000 Jun 6 13:13 /dev/vgora/group
nodedb2@[/tmp#]vgimport -s -v -m /tmp/vgora.map /dev/vgora
Beginning the import process on Volume Group "/dev/vgora".
Logical volume "/dev/vgora/ora_ocr1" has been successfully created
with lv number 1.
…
Logical volume "/dev/vgora/data30" has been successfully created
with lv number 56.
vgimport: Volume group "/dev/vgora" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.
Then create vglock on nodedb1:
nodedb1@[/tmp#]mkdir /dev/vglock
nodedb1@[/tmp#]mknod /dev/vglock/group c 64 0x020000
nodedb1@[/tmp#]vgcreate /dev/vglock /dev/disk/disk44
Volume group "/dev/vglock" has been successfully created.
Volume Group configuration for /dev/vglock has been saved in /etc/lvmconf/vglock.conf
Export it and import it on nodedb2 as well:
nodedb1@[/tmp#]vgexport -p -v -s -m /tmp/vglock.map /dev/vglock
nodedb2@[/#] cd /tmp
nodedb2@[/tmp#]rcp nodedb1:/tmp/vglock.map .
nodedb2@[/tmp#]mkdir /dev/vglock
nodedb2@[/tmp#]mknod /dev/vglock/group c 64 0x020000
nodedb2@[/tmp#]remsh nodedb1 ll /dev/vglock/group
crw-rw-rw- 1 root sys 64 0x020000 Jun 6 11:58 /dev/vglock/group
nodedb2@[/tmp#]ll /dev/vglock/group
crw-rw-rw- 1 root sys 64 0x020000 Jun 6 13:13 /dev/vglock/group
nodedb2@[/tmp#]vgimport -s -v -m /tmp/vglock.map /dev/vglock
Check /etc/lvmtab on both sides to make sure the physical disks are consistent; if they are not, re-import on both nodes using vglock.map.
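One way to compare the physical-disk lists is to dump each node's listing to a file (for example with strings /etc/lvmtab), copy one over with rcp, and diff them. A generic sketch with fabricated listings (same_pv_list is an illustrative helper, not an HP-UX command):

```shell
# same_pv_list FILE1 FILE2: succeed only if two saved PV listings match.
# The listings would come from e.g. `strings /etc/lvmtab` on each node;
# the file names and contents below are placeholders.
same_pv_list() {
    if diff "$1" "$2" > /dev/null; then
        echo "PV lists match"
    else
        echo "PV lists differ: re-import using the .map file"
        return 1
    fi
}

# Illustrative run with fabricated listings:
a=$(mktemp); b=$(mktemp)
printf '/dev/vgora\n/dev/disk/disk32\n' > "$a"
printf '/dev/vgora\n/dev/disk/disk32\n' > "$b"
same_pv_list "$a" "$b"
rm -f "$a" "$b"
```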
2. Modify the parameter files:
vi /etc/lvmrc --------- do this on both hosts, so that shared volume groups are not activated at boot. vglock can actually be used without ever being activated.
AUTO_VG_ACTIVE=0
To have the cluster start automatically:
vi /etc/rc.config.d/cmcluster --------- do this on both hosts
AUTOSTART_CMCLD=1
3. Create the cluster configuration file on nodedb1:
#cmquerycl -n nodedb1 -n nodedb2 -v -C /etc/cmcluster/cmcl.ascii
4. Edit the cluster configuration file.
The parameters that need changing are marked with inline comments below (originally highlighted in red).
# **********************************************************************
# ********* HIGH AVAILABILITY CLUSTER CONFIGURATION FILE ***************
# ***** For complete details about cluster parameters and how to *******
# ***** set them, consult the Serviceguard manual. *********************
# **********************************************************************
# Enter a name for this cluster. This name will be used to identify the
# cluster when viewing or manipulating it.
CLUSTER_NAME demodb_cluster
# The HOSTNAME_ADDRESS_FAMILY parameter specifies the Internet Protocol address
# family to which Serviceguard will attempt to resolve cluster node names and
# quorum server host names.
# If the parameter is set to IPV4, Serviceguard will attempt to resolve the names
# to IPv4 addresses only. This is the default value.
# If the parameter is set to IPV6, Serviceguard will attempt to resolve the names
# to IPv6 addresses only. No IPv4 addresses need be configured on the system or
# listed in the /etc/hosts file except for IPv4 loopback address.
# If the parameter is set to ANY, Serviceguard will attempt to resolve the names
# to both IPv4 and IPv6 addresses. The /etc/hosts file on each node must contain
# entries for all IPv4 and IPv6 addresses used throughout the cluster including
# all STATIONARY_IP and HEARTBEAT_IP addresses as well as any other addresses
HOSTNAME_ADDRESS_FAMILY IPV4
# Cluster Lock Parameters
# The cluster lock is used as a tie-breaker for situations
# in which a running cluster fails, and then two equal-sized
# sub-clusters are both trying to form a new cluster. The
# cluster lock may be configured using only one of the
# following alternatives on a cluster:
# the LVM lock disk
# the lock LUN
# the quorum server
#
#
# Consider the following when configuring a cluster.
# For a two-node cluster, you must use a cluster lock. For
# a cluster of three or four nodes, a cluster lock is strongly
# recommended. For a cluster of more than four nodes, a
# cluster lock is recommended. If you decide to configure
# a lock for a cluster of more than four nodes, it must be
# a quorum server.
# Lock Disk Parameters. Use the FIRST_CLUSTER_LOCK_VG and
# FIRST_CLUSTER_LOCK_PV parameters to define a lock disk.
# The FIRST_CLUSTER_LOCK_VG is the LVM volume group that
# holds the cluster lock. This volume group should not be
# used by any other cluster as a cluster lock device.
# LUN lock disk parameters. Use the CLUSTER_LOCK_LUN parameter
# to define the device on a per node basis. The device may only
# be used for this purpose and by only a single cluster.
#
# Example for a FC storage array cluster disk
# CLUSTER_LOCK_LUN /dev/dsk/c1t2d3s1
# For 11.31 and later versions of HP-UX with cluster device files
# CLUSTER_LOCK_LUN /dev/cdisk/disk22
# For 11.31 and later versions of HP-UX without cluster device files
# CLUSTER_LOCK_LUN /dev/disk/disk4_p2
# Quorum Server Parameters. Use the QS_HOST, QS_ADDR, QS_POLLING_INTERVAL,
# and QS_TIMEOUT_EXTENSION parameters to define a quorum server. The QS_HOST
# and QS_ADDR are either the host name or IP address of the system that is
# running the quorum server process. More than one IP address can be
# configured for the quorum server. When one subnet fails, Serviceguard
# uses the next available subnet to communicate with the quorum server.
# QS_HOST is used to specify the quorum server and QS_ADDR can be used to
# specify additional IP addresses for the quorum server. The QS_HOST entry
# must be specified (only once) before any other QS parameters. Only
# one QS_ADDR entry is used to specify the additional IP address.
# Both QS_HOST and QS_ADDR should not resolve to the same IP address.
# Otherwise cluster configuration will fail. All subnets must be up
# when you use cmapplyconf and cmquerycl to configure the cluster.
# The QS_POLLING_INTERVAL is the interval (in microseconds) at which
# Serviceguard checks to be sure the quorum server is running.
# The optional QS_TIMEOUT_EXTENSION (in microseconds) is used to increase
# the time allocated for quorum server response. The default quorum
# server timeout is calculated primarily from MEMBER_TIMEOUT parameter.
# For cluster of up to 4 nodes it is 0.2*MEMBER_TIMEOUT. It increases
# as number of nodes increases and reaches to 0.5*MEMBER_TIMEOUT for
# 16 nodes
#
# If quorum server is configured on busy network or if quorum server
# polling is experiencing timeouts (syslog messages) or if quorum server
# is used for large number of clusters, such default time (as mentioned
# above) might not be sufficient. In such cases this parameter should be
# used to provide more time for the quorum server.
# Also this parameter deserves more consideration if small values
# for MEMBER_TIMEOUT are used.
#
# The value of QS_TIMEOUT_EXTENSION will directly affect the amount of
# time it takes for cluster reformation in the event of node failure. For
# example, if QS_TIMEOUT_EXTENSION is set to 10 seconds, the cluster
# reformation will take 10 seconds longer than if the QS_TIMEOUT_EXTENSION
# was set to 0. This delay applies even if there is no delay in contacting
# the Quorum Server.
#
# The recommended value for QS_TIMEOUT_EXTENSION is 0 (the default value),
# and the maximum supported value is 300000000 (5 minutes).
#
# For example, to configure a quorum server running on node "qs_host"
# with the additional IP address "qs_addr" and with 120 seconds for the
# QS_POLLING_INTERVAL and to add 2 seconds to the system assigned value
# for the quorum server timeout, enter
#
# QS_HOST qs_host
# QS_ADDR qs_addr
# QS_POLLING_INTERVAL 120000000
# QS_TIMEOUT_EXTENSION 2000000
FIRST_CLUSTER_LOCK_VG /dev/vglock
# Definition of nodes in the cluster.
# Repeat node definitions as necessary for additional nodes.
# NODE_NAME is the specified nodename in the cluster.
# It must match the hostname and both cannot contain full domain name.
# Each NETWORK_INTERFACE, if configured with IPv4 address,
# must have ONLY one IPv4 address entry with it which could
# be either HEARTBEAT_IP or STATIONARY_IP.
# Each NETWORK_INTERFACE, if configured with IPv6 address(es)
# can have multiple IPv6 address entries(up to a maximum of 2,
# only one IPv6 address entry belonging to site-local scope
# and only one belonging to global scope) which could be
# either HEARTBEAT_IP or STATIONARY_IP.
# Note: This configuration contains IPv4 STATIONARY_IP or HEARTBEAT_IP
# addresses. To obtain an IPv6-only cluster on supported platforms,
# comment out any IPv4 STATIONARY_IPs or HEARTBEAT_IPs.
# If this leaves any NETWORK_INTERFACE without any STATIONARY_IP or
# HEARTBEAT_IP, comment out the NETWORK_INTERFACE as well.
# Modify the resulting configuration as necessary to meet the
# heartbeat requirements and recommendations for a Serviceguard
# configuration. These are spelled out in chapter 4 of the Managing
# Serviceguard manual.
#
# Node capacity parameters. Use the CAPACITY_NAME and CAPACITY_VALUE
# parameters to define a capacity for the node. Node capacities correspond to
# package weights; node capacity is checked against the corresponding
# package weight to determine if the package can run on that node.
#
# CAPACITY_NAME specifies a name for the capacity.
# The capacity name can be any string that starts and ends with an
# alphanumeric character, and otherwise contains only alphanumeric characters,
# dot (.), dash (-), or underscore (_). Maximum string
# length is 39 characters. Duplicate capacity names are not allowed.
#
# CAPACITY_VALUE specifies a value for the CAPACITY_NAME that precedes
# it. This is a floating point value between 0 and 1000000. Capacity values
# are arbitrary as far as Serviceguard is concerned; they have meaning only in
# relation to the corresponding package weights.
# Node capacity definition is optional, but if CAPACITY_NAME is specified,
# CAPACITY_VALUE must also be specified; CAPACITY_NAME must come first.
# To specify more than one capacity, repeat this process for each capacity.
# NOTE: If a given capacity is not defined for a node, Serviceguard assumes
# that capacity is infinite on that node. For example, if pkgA, pkgB, and pkgC
# each specify a weight of 1000000 for WEIGHT_NAME "memory", and CAPACITY_NAME
# "memory" is not defined for node1, then all three packages are eligible
# to run at the same time on node1, assuming all other requirements are met.
#
# Cmapplyconf will fail if any node defines a capacity and
# any package has min_package_node as the failover policy or
# has automatic as the failback policy.
# You can define a maximum of 4 capacities.
#
# NOTE: Serviceguard supports a capacity with the reserved name
# "package_limit". This can be used to limit the number of packages
# that can run on a node. If you use "package_limit", you cannot
# define any other capacities for this cluster, and the default
# weight for all packages is 1.
#
# Example:
# CAPACITY_NAME package_limit
# CAPACITY_VALUE 4
#
# This allows a maximum of four packages to run on this node,
# assuming each has the default weight of one.
#
# For all capacities other than "package_limit", the default weight for
# all packages is zero
#
NODE_NAME nodedb1
NETWORK_INTERFACE lan901
HEARTBEAT_IP 10.0.10.193 # (was STATIONARY_IP)
NETWORK_INTERFACE lan902
HEARTBEAT_IP 192.168.100.1
# CLUSTER_LOCK_LUN
FIRST_CLUSTER_LOCK_PV /dev/dsk/c5t1d0 # (equivalent to /dev/disk/disk44)
# Route information
# route id 1: 192.168.0.10
# route id 2: 10.0.10.193
# route id 3: 192.168.100.1
# CAPACITY_NAME
# CAPACITY_VALUE
# Warning: There are no standby network interfaces for lan0.
# Link Aggregate lan901 contains the following port(s): lan2
# Warning: There are no standby network interfaces for lan901.
# Link Aggregate lan902 contains the following port(s): lan4
# Warning: There are no standby network interfaces for lan902.
NODE_NAME nodedb2
NETWORK_INTERFACE lan901
HEARTBEAT_IP 10.0.10.195
NETWORK_INTERFACE lan902
HEARTBEAT_IP 192.168.100.2
# CLUSTER_LOCK_LUN
FIRST_CLUSTER_LOCK_PV /dev/dsk/c5t1d0 # (equivalent to /dev/disk/disk44)
# Route information
# route id 2: 10.0.10.195
# route id 3: 192.168.100.2
# CAPACITY_NAME
# CAPACITY_VALUE
# Link Aggregate lan901 contains the following port(s): lan2
# Warning: There are no standby network interfaces for lan901.
# Link Aggregate lan902 contains the following port(s): lan4
# Warning: There are no standby network interfaces for lan902.
# Cluster Timing Parameters (microseconds).
# The MEMBER_TIMEOUT parameter defaults to 14000000 (14 seconds).
# If a heartbeat is not received from a node within this time, it is
# declared dead and the cluster reforms without that node.
# A value of 10 to 25 seconds is appropriate for most installations.
# For installations in which the highest priority is to reform the cluster
# as fast as possible, a setting of as low as 3 seconds is possible.
# When a single heartbeat network with standby interfaces is configured,
# MEMBER_TIMEOUT cannot be set below 14 seconds if the network interface
# type is Ethernet, or 22 seconds if the network interface type is
# InfiniBand (HP-UX only).
# Note that a system hang or network load spike whose duration exceeds
# MEMBER_TIMEOUT will result in one or more node failures.
# The maximum value recommended for MEMBER_TIMEOUT is 60000000
# (60 seconds).
MEMBER_TIMEOUT 14000000
# Configuration/Reconfiguration Timing Parameters (microseconds).
AUTO_START_TIMEOUT 600000000
NETWORK_POLLING_INTERVAL 2000000
# You can use the optional CONFIGURED_IO_TIMEOUT_EXTENSION parameter
# to increase the amount of time (in microseconds) that Serviceguard
# will wait to ensure that all pending I/O on a failed node has ceased.
# To ensure data integrity, you must set this parameter in the following
# cases: for an extended-distance cluster using software mirroring across
# data centers over links between iFCP switches; and for any cluster in
# which packages use NFS mounts. See the section on cluster configuration
# parameters in the 'Managing Serviceguard' manual for more information.
# CONFIGURED_IO_TIMEOUT_EXTENSION 0
# Network Monitor Configuration Parameters.
# The NETWORK_FAILURE_DETECTION parameter determines how LAN card failures are detected.
# If set to INONLY_OR_INOUT, a LAN card will be considered down when its inbound
# message count stops increasing or when both inbound and outbound
# message counts stop increasing.
# If set to INOUT, both the inbound and outbound message counts must
# stop increasing before the card is considered down.
NETWORK_FAILURE_DETECTION INOUT
# NETWORK_AUTO_FAILBACK
# When set to YES a recovery of the primary LAN interface will cause failback
# from the standby LAN interface to the primary.
# When set to NO a recovery of the primary LAN interface will do nothing and
# the standby LAN interface will continue to be used until cmmodnet -e lanX
# is issued for the primary LAN interface.
NETWORK_AUTO_FAILBACK YES
# IP Monitor Configuration Parameters.
# The following set of three parameters can be repeated as necessary.
# SUBNET is the subnet to be configured whether or not to be monitored
# at IP layer.
# IP_MONITOR is set to ON if the subnet is to be monitored at IP layer.
# IP_MONITOR is set to OFF if the subnet is not to be monitored at IP layer.
# POLLING_TARGET is the IP address to which polling messages are sent
# from each network interface in the subnet.
# Each SUBNET can have multiple polling targets, so multiple
# POLLING_TARGET entries can be specified. If no POLLING_TARGET is
# specified, peer interfaces in the subnet will be polling targets for each other.
# Only subnets with a gateway that is configured to accept
# ICMP Echo Request messages will be included by default with IP_MONITOR
# set to ON, and with its gateway listed as a POLLING_TARGET.
SUBNET 10.0.10.0
IP_MONITOR OFF # (was ON)
SUBNET 192.168.100.0
IP_MONITOR OFF
# Package Configuration Parameters.
# Enter the maximum number of packages which will be configured in the cluster.
# You can not add packages beyond this limit.
# This parameter is required.
MAX_CONFIGURED_PACKAGES 300
# Optional package default weight parameters. Use WEIGHT_NAME and
# WEIGHT_DEFAULT parameters to define a default value for this weight
# for all packages except system multi-node packages.
# Package weights correspond to node capacities; node capacity
# is checked against the corresponding package weight to determine
# if the package can run on that node.
#
# WEIGHT_NAME
# specifies a name for a weight that corresponds to a
# capacity specified earlier in this file. Weight is defined for
# a package, whereas capacity is defined for a node. For any given
# weight/capacity pair, WEIGHT_NAME, CAPACITY_NAME (and weight_name
# in the package configuration file) must be the same. The rules for
# forming all three are the same. See the discussion of the capacity
# parameters earlier in this file.
# NOTE: A weight (WEIGHT_NAME/WEIGHT_DEFAULT) has no meaning on a node
# unless a corresponding capacity (CAPACITY_NAME/CAPACITY_VALUE) is
# defined for that node.
# For example, if CAPACITY_NAME "memory" is not defined for
# node1, then node1's "memory" capacity is assumed to be infinite.
# Now even if pkgA, pkgB, and pkgC each specify the maximum weight
# of 1000000 for WEIGHT_NAME "memory", all three packages are eligible
# to run at the same time on node1, assuming all other requirements are met.
#
# WEIGHT_DEFAULT specifies a default weight for this WEIGHT_NAME.
# This is a floating point value between 0 and 1000000.
# Package weight default values are arbitrary as far as Serviceguard is
# concerned; they have meaning only in relation to the corresponding node
# capacities.
#
# The package weight default parameters are optional. If they are not
# specified, a default value of zero will be assumed. If defined,
# WEIGHT_DEFAULT must follow WEIGHT_NAME. To specify more than one package
# weight, repeat this process for each weight.
# Note: for the reserved weight "package_limit", the default weight is
# always one. This default cannot be changed in the cluster configuration file,
# but it can be overridden in the package configuration file.
#
# For any given package and WEIGHT_NAME, you can override the WEIGHT_DEFAULT
# set here by setting weight_value to a different value for the corresponding
# weight_name in the package configuration file.
#
# Cmapplyconf will fail if you define a default for a weight and no node
# in the cluster specifies a capacity of the same name.
# You can define a maximum of 4 weight defaults
#
# Example: The following example defines a default for "processor" weight
# of 0.1 for the package:
#
# WEIGHT_NAME processor
# WEIGHT_DEFAULT 0.1
#
# WEIGHT_NAME
# WEIGHT_DEFAULT
# Access Control Policy Parameters.
#
# Three entries set the access control policy for the cluster:
# First line must be USER_NAME, second USER_HOST, and third USER_ROLE.
# Enter a value after each.
#
# 1. USER_NAME can either be ANY_USER, or a maximum of
# 8 login names from the /etc/passwd file on user host.
# The following special characters are NOT supported for USER_NAME
# ' ', '/', '\', '*'
# 2. USER_HOST is where the user can issue Serviceguard commands.
# If using Serviceguard Manager, it is the COM server.
# Choose one of these three values: ANY_SERVICEGUARD_NODE, or
# (any) CLUSTER_MEMBER_NODE, or a specific node. For node,
# use the official hostname from domain name server, and not
# an IP addresses or fully qualified name.
# 3. USER_ROLE must be one of these three values:
# * MONITOR: read-only capabilities for the cluster and packages
# * PACKAGE_ADMIN: MONITOR, plus administrative commands for packages
# in the cluster
# * FULL_ADMIN: MONITOR and PACKAGE_ADMIN plus the administrative
# commands for the cluster.
#
# Access control policy does not set a role for configuration
# capability. To configure, a user must log on to one of the
# cluster's nodes as root (UID=0). Access control
# policy cannot limit root users' access.
#
# MONITOR and FULL_ADMIN can only be set in the cluster configuration file,
# and they apply to the entire cluster. PACKAGE_ADMIN can be set in the
# cluster or a package configuration file. If set in the cluster
# configuration file, PACKAGE_ADMIN applies to all configured packages.
# If set in a package configuration file, PACKAGE_ADMIN applies to that
# package only.
#
# MONITOR is set by default in a new cluster configuration as of Serviceguard
# release A.11.19.00. This is to support cluster discovery from other HP
# Administration products such as Systems Insight Manager (HP SIM) and
# Distributed Systems Administration (DSAU) tools. Removing MONITOR is allowed
# as an online configuration change within Serviceguard. However removing MONITOR
# will break cluster management for HP SIM and HP VSE products
#
# Conflicting or redundant policies will cause an error while applying
# the configuration, and stop the process. The maximum number of access
# policies that can be configured in the cluster is 200.
#
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN
USER_NAME ANY_USER
USER_HOST ANY_SERVICEGUARD_NODE
USER_ROLE MONITOR
# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM or VxVM Disk Groups should be used here.
# For example:
# VOLUME_GROUP /dev/vgdatabase
# VOLUME_GROUP /dev/vg02
VOLUME_GROUP /dev/vglock
# List of OPS Volume Groups.
# Formerly known as DLM Volume Groups, these volume groups
# will be used by OPS or RAC cluster applications via
# the vgchange -a s command. (Note: the name DLM_VOLUME_GROUP
# is also still supported for compatibility with earlier versions.)
# For example:
# OPS_VOLUME_GROUP /dev/vgdatabase
# OPS_VOLUME_GROUP /dev/vg02
OPS_VOLUME_GROUP /dev/vgora
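Since the template is mostly comments, it helps to review only the lines that will actually take effect before running cmcheckconf. A generic sketch (show_active_settings is an illustrative helper, demonstrated on a fabricated fragment, not the full file above):

```shell
# show_active_settings FILE: print the non-comment, non-blank lines of a
# Serviceguard ASCII configuration file, i.e. the settings that will apply.
show_active_settings() {
    grep -v '^[[:space:]]*#' "$1" | grep -v '^[[:space:]]*$'
}

# Illustrative run on a tiny fabricated fragment:
f=$(mktemp)
cat > "$f" <<'EOF'
# Enter a name for this cluster.
CLUSTER_NAME demodb_cluster
# Cluster lock
FIRST_CLUSTER_LOCK_VG /dev/vglock
EOF
show_active_settings "$f"
rm -f "$f"
```

On the real file this would be run as show_active_settings /etc/cmcluster/cmcl.ascii.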
After editing, verify the configuration.
5. Verify the cluster configuration file on nodedb1:
nodedb1@[/etc/cmcluster#]cmcheckconf -v -C /etc/cmcluster/cmcl.ascii
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cmcl.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 9 devices on node nodedb1
Found 9 devices on node nodedb2
Analysis of 18 devices should take approximately 2 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 3 volume groups on node nodedb1
Found 3 volume groups on node nodedb2
Analysis of 6 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Begin file consistency checking
/etc/nsswitch.conf not found
-rw-r--r-- 1 root root 1585 Mar 31 2010 /etc/cmcluster/cmclfiles2check
-r--r--r-- 1 root root 524 Oct 22 2009 /etc/cmcluster/cmignoretypes.conf
-r-------- 1 bin bin 118 Oct 22 2009 /etc/cmcluster/cmknowncmds
-rw-r--r-- 1 root root 667 Oct 22 2009 /etc/cmcluster/cmnotdisk.conf
-rw-r--r-- 1 root sys 762 Jun 6 13:51 /etc/hosts
-r--r--r-- 1 bin bin 12662 May 17 13:28 /etc/services
/etc/nsswitch.conf not found
-rw-r--r-- 1 root root 1585 Mar 31 2010 /etc/cmcluster/cmclfiles2check
-r--r--r-- 1 root root 524 Oct 22 2009 /etc/cmcluster/cmignoretypes.conf
-r-------- 1 bin bin 118 Oct 22 2009 /etc/cmcluster/cmknowncmds
-rw-r--r-- 1 root root 667 Oct 22 2009 /etc/cmcluster/cmnotdisk.conf
-rw-r--r-- 1 root sys 763 Jun 6 13:52 /etc/hosts
-r--r--r-- 1 bin bin 12662 May 17 17:22 /etc/services
cksum: can't open /etc/nsswitch.conf: No such file or directory
1244500118 1585 /etc/cmcluster/cmclfiles2check
1002189587 762 /etc/hosts
2206705817 12662 /etc/services
61360265 524 /etc/cmcluster/cmignoretypes.conf
344617849 118 /etc/cmcluster/cmknowncmds
1390752988 667 /etc/cmcluster/cmnotdisk.conf
cksum: can't open /etc/nsswitch.conf: No such file or directory
1244500118 1585 /etc/cmcluster/cmclfiles2check
1243335535 763 /etc/hosts
2206705817 12662 /etc/services
61360265 524 /etc/cmcluster/cmignoretypes.conf
344617849 118 /etc/cmcluster/cmknowncmds
1390752988 667 /etc/cmcluster/cmnotdisk.conf
ERROR: /etc/cmcluster/cmclfiles2check permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmclfiles2check owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmclfiles2check checksum could not be checked on nodes nodedb1 nodedb2:
/etc/cmcluster/cmclfiles2check is the same across nodes nodedb1 nodedb2
ERROR: /etc/hosts permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/hosts owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/hosts checksum could not be checked on nodes nodedb1 nodedb2:
/etc/hosts is the same across nodes nodedb1 nodedb2
ERROR: /etc/nsswitch.conf permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/nsswitch.conf owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/nsswitch.conf checksum could not be checked on nodes nodedb1 nodedb2:
/etc/nsswitch.conf is the same across nodes nodedb1 nodedb2
ERROR: /etc/services permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/services owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/services checksum could not be checked on nodes nodedb1 nodedb2:
/etc/services is the same across nodes nodedb1 nodedb2
ERROR: /etc/cmcluster/cmignoretypes.conf permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmignoretypes.conf owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmignoretypes.conf checksum could not be checked on nodes nodedb1 nodedb2:
/etc/cmcluster/cmignoretypes.conf is the same across nodes nodedb1 nodedb2
ERROR: /etc/cmcluster/cmknowncmds permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmknowncmds owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmknowncmds checksum could not be checked on nodes nodedb1 nodedb2:
/etc/cmcluster/cmknowncmds is the same across nodes nodedb1 nodedb2
ERROR: /etc/cmcluster/cmnotdisk.conf permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmnotdisk.conf owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmnotdisk.conf checksum could not be checked on nodes nodedb1 nodedb2:
/etc/cmcluster/cmnotdisk.conf is the same across nodes nodedb1 nodedb2
Command 'cat /etc/cmcluster/cmclfiles2check | /usr/sbin/cmcompare -W -v -n nodedb1 -n nodedb2' exited with status 2
WARNING: Unable to check consistency of all files listed in /etc/cmcluster/cmclfiles2check
Maximum configured packages parameter is 300.
Configuring 0 package(s).
Creating the cluster configuration for cluster demodb_cluster
Adding node nodedb1 to cluster demodb_cluster
Adding node nodedb2 to cluster demodb_cluster
cmcheckconf: Verification completed with no errors found.
Use the cmapplyconf command to apply the configuration
The ERROR lines above are nothing to worry about; the file-consistency check simply could not complete (note the missing /etc/nsswitch.conf).
6. After the check passes, distribute the configuration to all nodes in the cluster.
First activate vglock on nodedb1:
nodedb1@[/etc/cmcluster#]vgchange -a y /dev/vglock
Activated volume group.
Volume group "/dev/vglock" has been successfully changed.
(To clear the cluster attribute: #vgchange -c n /dev/vglock)
Otherwise cmapplyconf reports an error.
Then:
#cmapplyconf -v -C /etc/cmcluster/cmcl.ascii
After this step, every node in the cluster has the generated binary configuration file /etc/cmcluster/cmclconfig.
You can then run the following on nodedb1:
#vgchange -a n /dev/vglock
Start the cluster:
nodedb1@[/etc/cmcluster#]cmruncl
cmruncl: Validating network configuration...
cmruncl: Network validation complete
cmruncl: Validating cluster lock disk .... Done
Waiting for cluster to form .... done
Cluster successfully formed.
Check the syslog files on all nodes in the cluster to verify that no warnings occurred during startup.
7. Create the package directory on both machines (Oracle-related work); do this on both:
#mkdir /etc/cmcluster/orapkg1
8. Create and edit the package configuration file; doing this on the first machine is enough.
nodedb1@[/etc/cmcluster#]cmmakepkg -p /etc/cmcluster/orapkg1/ora1.ascii
Edit /etc/cmcluster/orapkg1/ora1.ascii as follows; the changed parts were originally marked in red:
# **********************************************************************
# ****** HIGH AVAILABILITY PACKAGE CONFIGURATION FILE (template) *******
# **********************************************************************
# ******* Note: This file MUST be edited before it can be used. ********
# * For complete details about package parameters and how to set them, *
# * consult the Serviceguard manual.
# **********************************************************************
#
# "PACKAGE_NAME" is the name that is used to identify the package.
#
# This name will be used to identify the package when viewing or
# manipulating it. Package names must be unique within a cluster.
#
#
# Legal values for PACKAGE_NAME:
# Any string that starts and ends with an alphanumeric character, and
# contains only alphanumeric characters, dot(.), dash(-), or underscore(_)
# in between.
# Maximum length is 39 characters.
#
PACKAGE_NAME orapkg
# "PACKAGE_TYPE" is the type of package.
#
# The PACKAGE_TYPE attribute specifies the desired behavior for this
# package. Legal values and their meaning are described below:
#
# FAILOVER package runs on one node at a time and if a failure
# occurs it can switch to an alternate node.
#
# MULTI_NODE package runs on multiple nodes at the same time and
# can be independently started and halted on
# individual nodes. Failures of package components such
# as services, EMS resources or subnets, will cause
# the package to be halted only on the node on which the
# failure occurred. Relocatable IP addresses cannot be
# assigned to "multi_node" packages.
#
# SYSTEM_MULTI_NODE
# package runs on all cluster nodes at the same time.
# It cannot be started and halted on individual nodes.
# Both "NODE_FAIL_FAST_ENABLED" and "AUTO_RUN"
# must be set to "YES" for this type of package. All
# "SERVICES" must have "SERVICE_FAIL_FAST_ENABLED" set
# to "YES". SYSTEM_MULTI_NODE packages are only
# supported for use by applications provided by
# Hewlett-Packard.
#
#
# Since "MULTI_NODE" and "SYSTEM_MULTI_NODE" packages can run on more
# than one node at a time and do not failover in the event of a
# package failure, the following parameters cannot be
# specified when configuring packages of these types:
#
# FAILOVER_POLICY
# FAILBACK_POLICY
#
# Since an IP address cannot be assigned to more than one node at
# a time, relocatable IP addresses cannot be assigned to
# "MULTI_NODE" packages. If volume groups are used in a
# "MULTI_NODE" package, they must be activated in shared mode,
# leaving the application responsible for data integrity.
#
# Shared access requires a shared volume manager.
#
# The default value for "PACKAGE_TYPE" is "FAILOVER".
#
# Legal values for PACKAGE_TYPE: FAILOVER, MULTI_NODE, SYSTEM_MULTI_NODE.
PACKAGE_TYPE MULTI_NODE
# (Because Oracle runs as RAC, MC/SG only activates the shared disks; the shared
# disks do not fail over and no package IP is used, so the package just runs on
# the specified nodes without switching. Hence PACKAGE_TYPE is MULTI_NODE.)
# "NODE_NAME" specified which nodes this package can run on.
#
# Enter the names of the nodes configured to run this package, repeat
# this line for each cluster member node configured to run this package.
#
# NOTE: The order in which the nodes are specified here determines the
# order of priority when Serviceguard is deciding where to run the
# package.
#
# Example : NODE_NAME first_priority_node
# NODE_NAME second_priority_node
#
# If all nodes in the cluster can run the package, and order is not
# important, specify "NODE_NAME *".
#
# Example : NODE_NAME *
#
# Legal values for NODE_NAME:
# "*", or any node name in the cluster.
# Node name is any string that starts and ends with an alphanumeric
# character, and contains only alphanumeric characters, dot(.), dash(-),
# or underscore(_) in between.
# Maximum name length is 39 characters.
#
NODE_NAME nodedb1
NODE_NAME nodedb2
# (Because Oracle runs as RAC and MC/SG only activates the shared disks without
# failover, both machines must run this package at the same time.)
# "AUTO_RUN" defines whether the package is to be started when the
# cluster is started, and if it will fail over automatically.
#
# Possible values are "YES" and "NO".
# The default for "AUTO_RUN" is "YES", meaning that the package will be
# automatically started when the cluster is started, and that, in the
# event of a failure the package will be started on an adoptive node. If
# "AUTO_RUN" is "NO", the package is not started when the cluster
# is started, and must be started with the cmrunpkg command.
#
# "AUTO_RUN" replaces "PKG_SWITCHING_ENABLED".
#
# Legal values for AUTO_RUN: YES, NO.
AUTO_RUN YES
# "NODE_FAIL_FAST_ENABLED" will cause node to fail if package fails.
#
# Possible values are "YES" and "NO".
# The default for "NODE_FAIL_FAST_ENABLED" is "NO". In the event of
# failure, if "NODE_FAIL_FAST_ENABLED" is set to "YES", Serviceguard
# will halt the node on which the package is running. All
# "SYSTEM_MULTI_NODE" packages must have "NODE_FAIL_FAST_ENABLED" set to
# "YES".
#
#
# Legal values for NODE_FAIL_FAST_ENABLED: YES, NO.
NODE_FAIL_FAST_ENABLED NO
# "RUN_SCRIPT" is the script that starts a package.
# "HALT_SCRIPT" is the script that stops a package.
#
# Enter the complete path for the run and halt scripts. The scripts must
# be located in directory with "cmcluster" in the path name. In most cases
# the run script and halt script specified here will be the same script,
# the package control script generated by the cmmakepkg command. This
# control script handles the run(ning) and halt(ing) of the package.
#
# Legal values for RUN_SCRIPT:
# Full path name for the run script with "cmcluster" in the path name.
# The maximum length for the path name is MAXPATHLEN characters long.
#
RUN_SCRIPT /etc/cmcluster/orapkg1/cm1.cntl
# Legal values for HALT_SCRIPT:
# Full path name for the halt script with "cmcluster" in the path name.
# The maximum length for the path name is MAXPATHLEN characters.
#
HALT_SCRIPT /etc/cmcluster/orapkg1/cm1.cntl
# "RUN_SCRIPT_TIMEOUT" is the number of seconds allowed for the package to start.
# "HALT_SCRIPT_TIMEOUT" is the number of seconds allowed for the package to halt.
#
#
# If the start or halt function has not completed in the specified
# number of seconds, the function will be terminated. The default for
# each script timeout is "NO_TIMEOUT". Adjust the timeouts as necessary
# to permit full execution of each function.
#
# Note: The "HALT_SCRIPT_TIMEOUT" should be greater than the sum of
# all "SERVICE_HALT_TIMEOUT" values specified for all services.
#
# Legal values for RUN_SCRIPT_TIMEOUT: NO_TIMEOUT, (value > 0).
RUN_SCRIPT_TIMEOUT NO_TIMEOUT
# Legal values for HALT_SCRIPT_TIMEOUT: NO_TIMEOUT, (value > 0).
HALT_SCRIPT_TIMEOUT NO_TIMEOUT
# "SUCCESSOR_HALT_TIMEOUT" limits the amount of time Serviceguard waits
# for packages that depend on this package ("successor packages") to
# halt, before running the halt script of this package.
#
# SUCCESSOR_HALT_TIMEOUT limits the amount of time
# Serviceguard waits for successors of this package to
# halt, before running the halt script of this package.
# This is an optional parameter.
# Permissible values are 0 - 4294 (specifying the maximum
# number of seconds Serviceguard will wait).
# The default value is "NO_TIMEOUT", which means Serviceguard
# will wait for as long as it takes for the successor package to halt.
# A timeout of 0 indicates that this package will halt without
# waiting for successor packages to halt.
# Example:
# SUCCESSOR_HALT_TIMEOUT NO_TIMEOUT
# SUCCESSOR_HALT_TIMEOUT 60
#
# Legal values for SUCCESSOR_HALT_TIMEOUT: NO_TIMEOUT, ( (value >= 0) && (value <= 4294) ).
SUCCESSOR_HALT_TIMEOUT NO_TIMEOUT
# "SCRIPT_LOG_FILE" is the full path name for the package control script
# log file. The maximum length of the path name is MAXPATHLEN characters long.
#
# If not set, the script output is sent to a file named by appending
# ".log" to the script path.
#
# Legal values for SCRIPT_LOG_FILE:
#SCRIPT_LOG_FILE
# "FAILOVER_POLICY" is the policy to be applied when package fails.
#
# This policy will be used to select a node whenever the package needs
# to be started or restarted. The default policy is "CONFIGURED_NODE".
# This policy means Serviceguard will select nodes in priority order
# from the list of "NODE_NAME" entries.
#
# An alternative policy is "SITE_PREFERRED". This policy means
# that when selecting nodes from the list of "NODE_NAME" entries,
# Serviceguard will give priority to nodes that belong to the site the
# package last ran on, over those that belong to a different site. When
# all nodes belonging to the same site where the package last ran are
# unable to run the package, the package will automatically fail over to
# the other site.
#
# An alternative policy is "SITE_PREFERRED_MANUAL". This policy can
# be used only in a Metrocluster environment. This policy means
# that when selecting nodes from the list of "node_name" entries,
# Serviceguard selects a node that belongs to the site that the package
# last ran on. When all nodes belonging to the same site where the package
# last ran are unable to run the package, the package will not automatically
# fail over to the other site. In such situations, manual intervention is
# needed to start the package on either the same site or on another site.
#
# Another policy is "MIN_PACKAGE_NODE". This policy means
# Serviceguard will select from the list of "NODE_NAME" entries the
# node, which is running fewest packages when this package needs to
# start.
#
# Legal values for FAILOVER_POLICY: CONFIGURED_NODE, MIN_PACKAGE_NODE, SITE_PREFERRED, SITE_PREFERRED_MANUAL.
FAILOVER_POLICY CONFIGURED_NODE
# (Because Oracle runs as RAC, MC/SG only activates the shared disks; there is
# no disk or IP failover, so this package just runs on its configured nodes.)
# "FAILBACK_POLICY" is the action to take when a package is not running
# on its primary node.
#
# This policy will be used to determine what action to take when a
# package is not running on its primary node and its primary node is
# capable of running the package. The default policy is "MANUAL". The
# "MANUAL" policy means no attempt will be made to move the package back
# to its primary node when it is running on an adoptive node.
#
# The alternative policy is "AUTOMATIC". This policy means Serviceguard
# will attempt to move the package back to its primary node as soon as
# the primary node is capable of running the package.
#
#
# Legal values for FAILBACK_POLICY: MANUAL, AUTOMATIC.
FAILBACK_POLICY MANUAL
# "PRIORITY" specifies the PRIORITY of the package.
#
# This is an optional parameter. Valid values are a number between
# 1 and 3000 or NO_PRIORITY. Default is NO_PRIORITY.
# A smaller number indicates higher priority. A package with a
# numerical priority has higher priority than a package with NO_PRIORITY.
#
# If a number is specified, it must be unique in the cluster.
# To help assign unique priorities, HP recommends you use
# priorities in increments of 10. This will allow you
# to add new packages without having to reassign priorities.
#
# Multi-node and System multi node packages cannot be assigned a priority.
#
# This parameter is used only when a weight has been defined for a package,
# a package depends on other packages,
# or other packages depend on this package, but can be specified even
# when no weights or dependencies have yet been configured.
# If priority is not configured, the package is assigned the default
# priority value, NO_PRIORITY.
#
# Serviceguard gives preference to running the higher priority package.
# This means that, if necessary, Serviceguard will halt a package (or
# halt and restart on another node) in order to run a higher priority
# package. The reason may be:
# * the node's capacity would otherwise be exceeded
# * there is a direct or indirect dependency between the lower and
# higher priority packages.
#
# For example, suppose package pkg1 depends on package pkg2
# to be up on the same node, both have package switching enabled
# and both are currently up on node node1. If pkg1 needs to
# fail over to node2, it will also need pkg2 to move to node2.
# If pkg1 has higher priority than pkg2, it can force pkg2 to
# move to node2. Otherwise, pkg1 cannot fail over because pkg2 is
# running on node1.
# Examples of package priorities and failover results:
#
# pkg1 priority pkg2 priority results
# 10 20 pkg1 is higher; fails over
# 20 10 pkg1 is lower; will not fail over
# any number NO_PRIORITY pkg1 is higher; fails over
# NO_PRIORITY NO_PRIORITY equal priority; will not fail over
# NO_PRIORITY any number pkg1 is lower; will not fail over
#
# Legal values for PRIORITY: NO_PRIORITY, ( (value >= 1) && (value <= 3000) ).
PRIORITY NO_PRIORITY
# The package dependency parameters are "DEPENDENCY_NAME",
# "DEPENDENCY_CONDITION" and "DEPENDENCY_LOCATION".
#
# Dependencies are used to describe the relationship between two packages.
# To define a dependency, "DEPENDENCY_NAME" and "DEPENDENCY_CONDITION"
# are required and "DEPENDENCY_LOCATION is optional.
#
# "DEPENDENCY_NAME" must be a unique identifier for the dependency.
#
# "DEPENDENCY_CONDITION" describes what must be true for
# the dependency to be satisfied.
#
# The syntax is:
#     DEPENDENCY_CONDITION <package_name> = <package_status>
#
# The valid values for <package_status> are "up" and "down":
#
# "up" means that this package requires the package identified
# by "PACKAGE_NAME" to be up (status reported by cmviewcl is "up").
#
# If "up" is specified, the dependency rules are as follows:
#
# * A multi-node package can depend only on another multi-
# node or system multi-node package.
#
# * A failover package whose FAILOVER_POLICY is
# MIN_PACKAGE_NODE can depend only on a multi-node or
# system multi-node package.
#
# * A failover package whose FAILOVER_POLICY is
# CONFIGURED_NODE can depend on a multi-node or system
# multi-node package, or another failover package whose
# FAILOVER_POLICY is CONFIGURED_NODE.
#
# "down" means that this package requires the package
# identified by "package name" to be down (status reported by
# cmviewcl is "down"). This is known as an exclusion dependency.
#
# This means that only one of these packages can be running at
# any given time.
#
# If "down" value is specified, the exclusion dependency must be
# mutual; that is, if pkgA depends on pkgB to be down, pkgB must
# also depend on pkgA to be down.
#
# This means that in order to create an exclusion dependency
# between two packages, you must apply both packages to the
# cluster configuration at the same time.
#
# An exclusion dependency is allowed only between failover
# packages with configured_node as failover policy, and at least one
# of the packages must specify a priority.
#
# "DEPENDENCY_LOCATION"
# This describes where the condition must be satisfied.
#
# This parameter is optional. If it is not specified, the default
# value "same_node" will be used.
#
# The possible values for this attribute depend on the
# dependency condition.
#
# If an "up" dependency is specified, the possible values
# are "same_node", "any_node", and "different_node".
#
# "same_node" means the dependency must be satisfied on
# the same node.
#
# "any_node" means the dependency can be satisfied on
# any node in the cluster.
#
# "different_node" means the dependency must be satisfied
# on a node other than the dependent package's node.
#
# If a "down" dependency is specified, the possible values
# are "same_node" and "all_nodes".
#
# "same_node" means the package depended on must be down on
# the same node.
#
# "all_nodes" means the package depended on must be down on
# all nodes in the cluster.
#
# NOTE:
# Within a package, you cannot specify more than one dependency on the
# same package. For example, pkg1 cannot have one same_node and one
# any_node dependency on pkg2.
#
# When a package requires that another package be up and the
# DEPENDENCY_LOCATION is any_node or different_node, the priority of
# the package depended on must be higher than or equal to that of the dependent
# package and its dependents. For example, if pkg1 has a same_node
# dependency on pkg2 and pkg2 has an any_node dependency on pkg3,
# the priority of pkg3 must be higher or equal to the priority of
# pkg1 and pkg2.
#
# In a CFS cluster, the dependencies among the mount point, disk group,
# and system multi-node packages are automatically created by the commands
# that construct those packages.
#
# Example 1 : To specify a "same_node" dependency between pkg1 and pkg2:
# pkg1's ascii configuration file:
#
# DEPENDENCY_NAME pkg2_dep
# DEPENDENCY_CONDITION pkg2 = up
# DEPENDENCY_LOCATION same_node
#
# Example 2 : To specify a "same_node" exclusion dependency between
# pkg1 and pkg2:
#
# pkg1's ascii configuration file:
#
# DEPENDENCY_NAME pkg2_dep
# DEPENDENCY_CONDITION pkg2 = down
# DEPENDENCY_LOCATION same_node
#
# pkg2's ascii configuration file:
#
# DEPENDENCY_NAME pkg1_dep
# DEPENDENCY_CONDITION pkg1 = down
# DEPENDENCY_LOCATION same_node
#
#
# Note that pkg1 and pkg2 must be applied at the same time.
#
# Legal values for DEPENDENCY_NAME:
# Any string that starts and ends with an alphanumeric character, and
# contains only alphanumeric characters, dot(.), dash(-), or underscore(_)
# in the middle.
# Maximum string length is 39 characters.
#
# Legal values for DEPENDENCY_CONDITION:
# Legal values for DEPENDENCY_LOCATION: same_node, any_node, different_node, all_nodes.
#DEPENDENCY_NAME
#DEPENDENCY_CONDITION
#DEPENDENCY_LOCATION
# The package weight parameters are the "WEIGHT_NAME" and "WEIGHT_VALUE".
#
#
# These optional attributes provide additional data which the
# Serviceguard package manager uses when selecting a node on which to
# place the package. As with all attribute names, they are case
# insensitive.
#
# A package can use this mechanism to define up to four arbitrary
# weight names with corresponding values that are meant to represent
# the runtime resource consumption of the package. In the cluster
# configuration file, you configure capacity limits for the named
# weights on the cluster nodes. During package placement,
# the package manager will ensure the total value of any given named
# weight does not exceed the capacity limit configured for the node.
#
# The "WEIGHT_NAME" is a string of up to 39 characters.
# The "WEIGHT_VALUE" specifies a value for the named weight that
# precedes it. This is an unsigned floating point value between 0 and
# 1000000 with at most three digits after the decimal point.
#
# If "WEIGHT_NAME" is specified, "WEIGHT_VALUE" must also be specified
# and "WEIGHT_NAME" must come first. To specify more than one weight,
# repeat this process.
#
# You can define weights either individually within each
# package configuration file, or by means of a default value
# in the cluster configuration file that applies to all configured
# packages (except system multi-node packages). If a particular
# weight name is defined in both the cluster and package configuration
# files, the value specified in the package configuration file takes
# precedence. This allows you to set an overall default, but to
# override it for a particular package.
#
# For example, if you specify WEIGHT_NAME "memory" with WEIGHT_DEFAULT
# 1000 in the cluster configuration file, and you do not specify a weight
# value for "memory" in the package configuration file for pkgA, pkgA's
# "memory" weight will be 1000. If you define a weight value of 2000 for
# "memory" in the configuration file for pkgA, pkgA's "memory" weight
# will be 2000.
#
# If no WEIGHT_NAME/WEIGHT_DEFAULT value is specified in the cluster
# configuration file for a given CAPACITY, and WEIGHT_NAME and WEIGHT_VALUE
# are not specified in this package configuration file for that CAPACITY,
# then the WEIGHT_VALUE for this package is set to zero or one depending
# on the capacity name. If the capacity name is the reserved capacity
# "package_limit", the WEIGHT_VALUE for this package is set to one;
# otherwise, the WEIGHT_VALUE is set to zero.
# For example, if you specify CAPACITY "memory" and do not specify
# a WEIGHT_DEFAULT for "memory" in the cluster configuration file,
# and do not specify weight "memory" in the package configuration
# file for pkgA, then pkgA's "memory" weight will be zero.
#
# Note that cmapplyconf will fail if you define a weight in the package
# configuration file and no node in the cluster configuration file
# specifies a capacity of the same name.
#
# Weight can be assigned only to multi-node packages, and failover packages
# with CONFIGURED_NODE as the FAILOVER_POLICY and MANUAL as the FAILBACK POLICY.
#
# For more information on how to configure default weights and
# node capacities, see the cmquerycl man page, the cluster configuration
# template file, and the Managing Serviceguard manual.
#
# Example :
# WEIGHT_NAME package_limit
# WEIGHT_VALUE 10
#
# This overrides the default value of 1 and sets the weight for this
# package to 10
#
# Legal values for WEIGHT_NAME:
# Any string that starts and ends with an alphanumeric character, and
# contains only alphanumeric characters, dot(.), dash(-), or underscore(_)
# in the middle.
# Maximum string length is 39 characters.
#
# Legal values for WEIGHT_VALUE:
# Any unsigned floating point string. Only 3 digits after the decimal point
# are significant. Maximum string length is 11 characters.
#
#WEIGHT_NAME
#WEIGHT_VALUE
# "LOCAL_LAN_FAILOVER_ALLOWED" will allow LANs to be switched locally.
#
# Possible values are "YES" and "NO".
# The default for "LOCAL_LAN_FAILOVER_ALLOWED" is "YES". In the event of a
# failure, this permits Serviceguard to switch LANs locally
# (transfer to a standby LAN card). Adjust as necessary.
#
# "LOCAL_LAN_FAILOVER_ALLOWED" replaces "NET_SWITCHING_ENABLED".
#
# Legal values for LOCAL_LAN_FAILOVER_ALLOWED: YES, NO.
LOCAL_LAN_FAILOVER_ALLOWED YES
# "MONITORED_SUBNET" specifies the addresses of subnets that are to be
# monitored for this package.
#
# Enter the network subnet name that is to be monitored for this package.
# Repeat this line as necessary for additional subnet names. If any of
# the subnets defined goes down, the package will be switched to another
# node that is configured for this package and has all the defined subnets
# available.
#
# "MONITORED_SUBNET" replaces "SUBNET".
#
# The MONITORED_SUBNET names can be IPv4 or IPv6, or a mix of both.
#
# Example :
# MONITORED_SUBNET 192.10.25.0 # (netmask=255.255.255.0)
# MONITORED_SUBNET 2001::/64 # (netmask=ffff:ffff:ffff:ffff::)
# MONITORED_SUBNET 2001:: # (netmask=ffff:ffff:ffff:ffff::)
#
# Legal values for MONITORED_SUBNET:
# "MONITORED_SUBNET_ACCESS" defines how the MONITORED_SUBNET is
# configured in the cluster.
#
#
# MONITORED_SUBNET_ACCESS defines whether access to a MONITORED_SUBNET
# is configured on all of the nodes that can run this package, or only
# on some. Possible values are "PARTIAL" and "FULL". "PARTIAL" means
# that the MONITORED_SUBNET is expected to be configured on one or more
# of the nodes this package can run on, but not all. "FULL" means that
# the MONITORED_SUBNET is expected to be configured on all the nodes
# that this package can run on. "FULL" is the default. (Specifying
# "FULL" is equivalent to not specifying the monitored_subnet_access at
# all.)
#
# The MONITORED_SUBNET_ACCESS is defined per MONITORED_SUBNET entry.
#
# Example :
# MONITORED_SUBNET 192.10.25.0
# MONITORED_SUBNET_ACCESS PARTIAL # 192.10.25.0 is available on one
# # or more nodes of the cluster,
# # but not all.
#
# MONITORED_SUBNET 192.10.26.0 # no MONITORED_SUBNET_ACCESS entry,
# # hence this subnet is available
# # on all nodes of the cluster.
# MONITORED_SUBNET 2001::/64
# MONITORED_SUBNET_ACCESS FULL # 2001::/64 is available on all
# # nodes of the cluster.
#
# Legal values for MONITORED_SUBNET_ACCESS: PARTIAL, FULL.
#MONITORED_SUBNET
#MONITORED_SUBNET_ACCESS
# "CLUSTER_INTERCONNECT_SUBNET" specifies subnets that are to be monitored for
# an SGeRAC multi-node package.
#
#
# This parameter requires an IPV4 or IPV6 address. CLUSTER_INTERCONNECT_SUBNETs
# can be configured only for multi_node packages in SGeRAC configurations.
#
#
# Legal values for CLUSTER_INTERCONNECT_SUBNET:
#CLUSTER_INTERCONNECT_SUBNET
# "SERVICE_NAME" is a long lived (daemon) executable which
# Serviceguard will monitor while the package is up.
#
# "SERVICE_NAME", "SERVICE_FAIL_FAST_ENABLED" and "SERVICE_HALT_TIMEOUT"
# specify a service for this package.
#
# The value for "SERVICE_FAIL_FAST_ENABLED" can be either "yes" or
# "no". The default is "no". If "SERVICE_FAIL_FAST_ENABLED" is set to
# "yes", and the service fails, Serviceguard will halt the node on which
# the service is running.
#
#
# "SERVICE_HALT_TIMEOUT" is a number of seconds. This timeout is used
# to determine the length of time the Serviceguard will wait for the
# service to halt before a SIGKILL signal is sent to force the
# termination of the service. In the event of a service halt,
# Serviceguard will first send a SIGTERM signal to terminate the
# service. If the service does not halt, Serviceguard will wait for the
# specified "SERVICE_HALT_TIMEOUT", then send the SIGKILL signal to
# force the service to terminate. This timeout value should be large
# enough to allow all cleanup processes associated with the service to
# complete. If the "SERVICE_HALT_TIMEOUT" is not specified, a zero
# timeout will be assumed, meaning the cluster software will not wait at
# all before sending the SIGKILL signal to halt the service.
#
#
# Example:
# SERVICE_NAME service_1a
# SERVICE_FAIL_FAST_ENABLED no
# SERVICE_HALT_TIMEOUT 300
#
# SERVICE_NAME service_1b
# SERVICE_FAIL_FAST_ENABLED no
# SERVICE_HALT_TIMEOUT 300
#
# SERVICE_NAME service_1c
# SERVICE_FAIL_FAST_ENABLED no
# SERVICE_HALT_TIMEOUT 300
#
# Note: No environmental variables will be passed to the service command, this
# includes the PATH variable. Absolute path names are required for the
# service command definition. Default shell is /usr/bin/sh.
#
# Legal values for SERVICE_NAME:
# Any string that starts and ends with an alphanumeric character, and
# contains only alphanumeric characters, dot(.), dash(-), or underscore(_)
# in between.
# Maximum string length is 39 characters.
#
# Legal values for SERVICE_FAIL_FAST_ENABLED: yes, no.
# Legal values for SERVICE_HALT_TIMEOUT: (value >= 0).
#SERVICE_NAME
#SERVICE_FAIL_FAST_ENABLED
#SERVICE_HALT_TIMEOUT
# Event Monitoring Service Resource Dependencies
#
# Event monitoring service resource dependencies are specified with the
# following parameters: "RESOURCE_NAME", "RESOURCE_POLLING_INTERVAL",
# "RESOURCE_START" and "RESOURCE_UP_VALUE".
#
# To define a package resource dependency, a "RESOURCE_NAME" line with
# a fully qualified resource path name, and one or more
# "RESOURCE_UP_VALUE" lines are required. "RESOURCE_POLLING_INTERVAL" and
# the "RESOURCE_START" are optional, and will default as described
# below if not specified.
#
# The "RESOURCE_POLLING_INTERVAL" indicates how often, in seconds, the
# resource is to be monitored. The default is 60 seconds.
#
# The "RESOURCE_START" option can be set to either "automatic" or "deferred".
# The default is "automatic". "automatic" means Serviceguard will
# start up resource monitoring for this resource automatically when the
# node starts up. If "deferred" is specified, Serviceguard will not
# attempt to start this resource at node start up. User
# should specify all the "deferred" resources in the package run script
# so that these "deferred" resources will be started up from the package
# run script during package run time.
#
# "RESOURCE_UP_VALUE" requires an operator and a value. This defines
# the resource 'UP' condition. The operators are =, !=, >, <, >=,
# and <=, depending on the type of value. Values can be string or
# numeric. If the type is string, then only = and != are valid
# operators. If the string contains white space, it must be enclosed
# in quotes. String values are case sensitive. For example,
#
# Resource is up when its value is
# --------------------------------
# RESOURCE_UP_VALUE = UP "UP"
# RESOURCE_UP_VALUE != DOWN Any value except "DOWN"
# RESOURCE_UP_VALUE = "On Course" "On Course"
#
# If the type is numeric, then it can specify a threshold, or a range to
# define a resource up condition. If it is a threshold, then any operator
# may be used. If a range is to be specified, then only > or >= may be used
# for the first operator, and only < or <= may be used for the second operator.
# For example,
# Resource is up when its value is
# --------------------------------
# RESOURCE_UP_VALUE = 5 5 (threshold)
# RESOURCE_UP_VALUE > 5.1 greater than 5.1 (threshold)
# RESOURCE_UP_VALUE > -5 and < 10 between -5 and 10 (range)
#
# Note that "and" is required between the lower limit and upper limit to
# specify a range. The upper limit must be greater than the lower
# limit. If "RESOURCE_UP_VALUE" is repeated within a "RESOURCE_NAME"
# block, then they are inclusively OR'd together. (Additional package
# resource dependencies are defined by repeating the entire
# "RESOURCE_NAME" block.)
#
# Example : RESOURCE_NAME /net/interfaces/lan/status/lan0
# RESOURCE_POLLING_INTERVAL 120
# RESOURCE_START automatic
# RESOURCE_UP_VALUE = running
# RESOURCE_UP_VALUE = online
#
# Means that the value of resource /net/interfaces/lan/status/lan0
# will be checked every 120 seconds, and is considered to
# be 'up' when its value is "running" or "online".
#
# Uncomment the following lines to specify package resource dependencies.
#
# Legal values for RESOURCE_NAME:
# Legal values for RESOURCE_POLLING_INTERVAL: ( (value > 0) && (value <= 86400) ).
# Legal values for RESOURCE_START: automatic, deferred.
# Legal values for RESOURCE_UP_VALUE:
#RESOURCE_NAME
#RESOURCE_POLLING_INTERVAL
#RESOURCE_START
#RESOURCE_UP_VALUE
# "STORAGE_GROUP" specifies CVM specific disk group used in this package.
#
# WARNING: "STORAGE_GROUP" is intended to support CVM 3.5 only. This
# parameter has been deprecated. It will be obsoleted in a future
# Serviceguard release! For CVM 4.1 or later disk groups, Please replace
# it by configuring package dependency on SG-CFS-pkg inside this package.
#
# Enter the names of the storage groups configured for this package.
# Repeat this line as necessary for additional storage groups.
#
# Storage groups are only used with CVM disk groups. Neither
# VxVM disk groups nor LVM volume groups should be listed here.
# By specifying a CVM disk group with the "STORAGE_GROUP" keyword
# this package will not run until the CVM system multi node package is
# running and thus the CVM shared disk groups are ready for
# activation.
#
# Example : STORAGE_GROUP "dg01"
# STORAGE_GROUP "dg02"
# STORAGE_GROUP "dg03"
# STORAGE_GROUP "dg04"
#
# Legal values for STORAGE_GROUP:
# Any string that starts and ends with an alphanumeric character, and
# contains only alphanumeric characters, dot(.), dash(-), or underscore(_)
# in the middle.
# Maximum string length is 39 characters.
#
#STORAGE_GROUP
# Access Control Policy Parameters.
#
# "USER_NAME", "USER_HOST" and "USER_ROLE" specify who can administer
# this package.
#
# Three entries set the access control policy for the package: the
# first line must be "USER_NAME", the second "USER_HOST", and the third "USER_ROLE".
# Enter a value after each.
#
# 1. "USER_NAME" can either be "ANY_USER", or a maximum of
# 8 login names from the /etc/passwd file on user host.
# 2. "USER_HOST" is where the user can issue Serviceguard commands.
# Choose one of these three values: "ANY_SERVICEGUARD_NODE",
# or (any) "CLUSTER_MEMBER_NODE", or a specific node. For node,
# use the name portion of the official hostname supplied by the
# domain name server, not the IP addresses or fully qualified name.
# 3. "USER_ROLE" must be "PACKAGE_ADMIN". This role grants permission
# to "monitor", plus administrative commands for the package.
#
# These policies do not affect root users. Access Policies defined in
# this file must not conflict with policies defined in the cluster
# configuration file.
#
# Example: to configure a role for user john from node noir to
# administer the package, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE package_admin
#
# Legal values for USER_NAME:
# A string of tokens each of which starts with an alphanumeric character and contains
# only alphanumeric and underscore(_) characters. The tokens must be separated by a space
# or a tab character.
# Maximum length of each user_name is 39 characters.
#
# Legal values for USER_HOST:
# Any string that starts and ends with an alphanumeric character, and
# contains only alphanumeric characters, dot(.), dash(-), or underscore(_)
# in the middle.
# Maximum length is 39 characters.
#
# Legal values for USER_ROLE: package_admin.
#USER_NAME
#USER_HOST
#USER_ROLE
9. Create the package control script, which in practice is the script that activates the shared volume group vgora:
#cmmakepkg -s /etc/cmcluster/orapkg1/cm1.cntl
Edit /etc/cmcluster/orapkg1/cm1.cntl:
# @(#) A.11.20.00 Date: 05/17/10 $
# **********************************************************************
# * *
# * HIGH AVAILABILITY PACKAGE CONTROL SCRIPT (template) *
# * *
# * Note: This file MUST be edited before it can be used. *
# * *
# **********************************************************************
# The environment variables PACKAGE, NODE, SG_PACKAGE,
# SG_NODE and SG_SCRIPT_LOG_FILE are set by Serviceguard
# at the time the control script is executed.
# Do not set these environment variables yourself!
# The package may fail to start or halt if the values for
# these environment variables are altered.
# NOTE: Starting from 11.17, all environment variables set by
# Serviceguard implicitly at the time the control script is
# executed will contain the prefix "SG_". Do not set any variable
# with the defined prefix, or the control script may not
# function as it should.
. ${SGCONFFILE:=/etc/cmcluster.conf}
# UNCOMMENT the variables as you set them.
# Set PATH to reference the appropriate directories.
PATH=$SGSBIN:/usr/bin:/usr/sbin:/etc:/bin
# VOLUME GROUP ACTIVATION:
# Specify the method of activation for volume groups.
# Leave the default (VGCHANGE="vgchange -a e") if you want volume
# groups activated in exclusive mode. This assumes the volume groups have
# been initialized with 'vgchange -c y' at the time of creation.
#
# Uncomment the first line (VGCHANGE="vgchange -a e -q n"), and comment
# out the default, if you want to activate volume groups in exclusive mode
# and ignore the disk quorum requirement. Since the disk quorum ensures
# the integrity of the LVM configuration, it is normally not advisable
# to override the quorum.
#
# Uncomment the second line (VGCHANGE="vgchange -a e -q n -s"), and comment
# out the default, if you want to activate volume groups in exclusive mode,
# ignore the disk quorum requirement, and disable the mirror
# resynchronization. Note it is normally not advisable to override the
# quorum.
#
# Uncomment the third line (VGCHANGE="vgchange -a s"), and comment
# out the default, if you want volume groups activated in shared mode.
# This assumes the volume groups have already been marked as sharable
# and a part of a Serviceguard cluster with 'vgchange -c y -S y'.
#
# Uncomment the fourth line (VGCHANGE="vgchange -a s -q n"), and comment
# out the default, if you want to activate volume groups in shared mode
# and ignore the disk quorum requirement. Note it is normally not
# advisable to override the quorum.
#
# Uncomment the fifth line (VGCHANGE="vgchange -a y") if you wish to
# use non-exclusive activation mode. Single node cluster configurations
# must use non-exclusive activation.
#
# VGCHANGE="vgchange -a e -q n"
# VGCHANGE="vgchange -a e -q n -s"
# VGCHANGE="vgchange -a s"
# VGCHANGE="vgchange -a s -q n"
# VGCHANGE="vgchange -a y"
VGCHANGE="vgchange -a s" # 因为是共享卷组。
# CVM DISK GROUP ACTIVATION:
# Specify the method of activation for CVM disk groups.
# Leave the default
# (CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=exclusivewrite")
# if you want disk groups activated in the exclusive write mode.
#
# Uncomment the first line
# (CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=readonly"),
# and comment out the default, if you want disk groups activated in
# the readonly mode.
#
# Uncomment the second line
# (CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=sharedread"),
# and comment out the default, if you want disk groups activated in the
# shared read mode.
#
# Uncomment the third line
# (CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=sharedwrite"),
# and comment out the default, if you want disk groups activated in the
# shared write mode.
#
# CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=readonly"
# CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=sharedread"
# CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=sharedwrite"
CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=exclusivewrite"
# VOLUME GROUPS
# Specify which volume groups are used by this package. Uncomment VG[0]=""
# and fill in the name of your first volume group. You must begin with
# VG[0], and increment the list in sequence.
#
# For example, if this package uses your volume groups vg01 and vg02, enter:
# VG[0]=vg01
# VG[1]=vg02
#
# The volume group activation method is defined above. The filesystems
# associated with these volume groups are specified below.
#
VG[0]="vgora"
# CVM DISK GROUPS
# Specify which cvm disk groups are used by this package. Uncomment
# CVM_DG[0]="" and fill in the name of your first disk group. You must
# begin with CVM_DG[0], and increment the list in sequence.
#
# For example, if this package uses your disk groups dg01 and dg02, enter:
# CVM_DG[0]=dg01
# CVM_DG[1]=dg02
#
# The cvm disk group activation method is defined above. The filesystems
# associated with these volume groups are specified below in the CVM_*
# variables.
#
#CVM_DG[0]=""
# NOTE: Do not use CVM and VxVM disk group parameters to reference
# devices used by CFS (cluster file system). CFS resources are
# controlled by the Disk Group and Mount Multi-node packages.
#
# VxVM DISK GROUPS
# Specify which VxVM disk groups are used by this package. Uncomment
# VXVM_DG[0]="" and fill in the name of your first disk group. You must
# begin with VXVM_DG[0], and increment the list in sequence.
#
# For example, if this package uses your disk groups dg01 and dg02, enter:
# VXVM_DG[0]=dg01
# VXVM_DG[1]=dg02
#
# The cvm disk group activation method is defined above.
#
#VXVM_DG[0]=""
#
# NOTE: A package could have LVM volume groups, CVM disk groups and VxVM
# disk groups.
#
# NOTE: When VxVM is initialized it will store the hostname of the
# local node in its volboot file in a variable called 'hostid'.
# The Serviceguard package control scripts use both the values of
# the hostname(1m) command and the VxVM hostid. As a result
# the VxVM hostid should always match the value of the
# hostname(1m) command.
#
# If you modify the local host name after VxVM has been
# initialized and such that hostname(1m) does not equal uname -n,
# you need to use the vxdctl(1m) command to set the VxVM hostid
# field to the value of hostname(1m). Failure to do so will
# result in the package failing to start.
# VXVM DISK GROUP IMPORT RETRY
# For packages using VXVM disk groups, if the import of a VXVM
# disk group fails then this parameter allows you to specify if you want
# to retry the import of disk group. Setting this parameter to "YES" will
# execute the command "vxdisk scandisks" to scan for potentially missing
# disks that might have caused the datagroup import to fail. This command
# can take a long time on a system which has a large IO subsystem.
# The use of this parameter is recommended in a Metrocluster with EMC SRDF
# environment.
# The legal values are "YES" and "NO". The default value is "NO"
VXVM_DG_RETRY="NO"
# VOLUME GROUP AND DISK GROUP DEACTIVATION RETRY COUNT
# Specify the number of deactivation retries for each disk group and volume
# group at package shutdown. The default is 2.
DEACTIVATION_RETRY_COUNT=2
# RAW DEVICES
# If you are using raw devices for your application, this parameter allows
# you to specify if you want to kill the processes that are accessing the
# raw devices at package halt time. If raw devices are still being accessed
# at package halt time, volume group or disk group deactivation can fail,
# causing the package halt to also fail. This problem usually happens when
# the application does not shut down properly.
# Note that if you are using Oracle's Cluster Ready Service, killing this
# service could cause the node to reboot.
# The legal values are "YES" and "NO". The default value is "NO".
# The value that is set for this parameter affects all raw devices associated
# with the LVM volume groups and CVM disk groups defined in the package.
KILL_PROCESSES_ACCESSING_RAW_DEVICES="NO"
# FILESYSTEMS
# Filesystems are defined as entries specifying the logical volume, the
# mount point, the mount, umount and fsck options and type of the file system.
# Each filesystem will be fsck'd prior to being mounted. The filesystems
# will be mounted in the order specified during package startup and will
# be unmounted in reverse order during package shutdown. Ensure that
# volume groups referenced by the logical volume definitions below are
# included in volume group definitions above.
#
# Specify the filesystems which are used by this package. Uncomment
# LV[0]=""; FS[0]=""; FS_MOUNT_OPT[0]=""; FS_UMOUNT_OPT[0]=""; FS_FSCK_OPT[0]=""
# FS_TYPE[0]="" and fill in the name of your first logical volume,
# filesystem, mount, umount and fsck options and filesystem type
# for the file system. You must begin with LV[0], FS[0],
# FS_MOUNT_OPT[0], FS_UMOUNT_OPT[0], FS_FSCK_OPT[0], FS_TYPE[0]
# and increment the list in sequence.
#
# Note: The FS_TYPE parameter lets you specify the type of filesystem to be
# mounted. Specifying a particular FS_TYPE will improve package failover time.
# The FSCK_OPT and FS_UMOUNT_OPT parameters can be used to include the
# -s option with the fsck and umount commands to improve performance for
# environments that use a large number of filesystems. (An example of a
# large environment is given below following the description of the
# CONCURRENT_MOUNT_AND_UMOUNT_OPERATIONS parameter.)
#
# Example: If a package uses two JFS filesystems, pkg01a and pkg01b,
# which are mounted on LVM logical volumes lvol1 and lvol2 for read and
# write operation, you would enter the following:
# LV[0]=/dev/vg01/lvol1; FS[0]=/pkg01a; FS_MOUNT_OPT[0]="-o rw";
# FS_UMOUNT_OPT[0]=""; FS_FSCK_OPT[0]=""; FS_TYPE[0]="vxfs"
#
# LV[1]=/dev/vg01/lvol2; FS[1]=/pkg01b; FS_MOUNT_OPT[1]="-o rw"
# FS_UMOUNT_OPT[1]=""; FS_FSCK_OPT[1]=""; FS_TYPE[1]="vxfs"
#
#Nested mount points may also be configured
#
#LV[0]=""; FS[0]=""; FS_MOUNT_OPT[0]=""; FS_UMOUNT_OPT[0]=""; FS_FSCK_OPT[0]=""
#FS_TYPE[0]=""
#
# VOLUME RECOVERY
#
# When mirrored VxVM volumes are started during the package control
# bring up, if recovery is required the default behavior is for
# the package control script to wait until recovery has been
# completed.
#
# To allow mirror resynchronization to occur in parallel with
# the package startup, uncomment the line
# VXVOL="vxvol -g \$DiskGroup -o bg startall" and comment out the default.
#
# VXVOL="vxvol -g \$DiskGroup -o bg startall"
VXVOL="vxvol -g \$DiskGroup startall" # Default
# FILESYSTEM UNMOUNT COUNT
# Specify the number of unmount attempts for each filesystem during package
# shutdown. The default is set to 1.
FS_UMOUNT_COUNT=1
# FILESYSTEM MOUNT RETRY COUNT.
# Specify the number of mount retries for each filesystem.
# The default is 0. During startup, if a mount point is busy
# and FS_MOUNT_RETRY_COUNT is 0, package startup will fail and
# the script will exit with 1. If a mount point is busy and
# FS_MOUNT_RETRY_COUNT is greater than 0, the script will attempt
# to kill the user responsible for the busy mount point
# and then mount the file system. It will attempt to kill user and
# retry mount, for the number of times specified in FS_MOUNT_RETRY_COUNT.
# If the mount still fails after this number of attempts, the script
# will exit with 1.
# NOTE: If the FS_MOUNT_RETRY_COUNT > 0, the script will execute
# "fuser -ku" to freeup busy mount point.
FS_MOUNT_RETRY_COUNT=0
#
# Configuring the concurrent operations below can be used to improve the
# performance for starting up or halting a package. The maximum value for
# each concurrent operation parameter is 1024. Set these values carefully.
# The performance could actually decrease if the values are set too high
# for the system resources available on your cluster nodes. Some examples
# of system resources that can affect the optimum number of concurrent
# operations are: number of CPUs, amount of available memory, the kernel
# configuration for nfile and nproc. In some cases, if you set the number
# of concurrent operations too high, the package may not be able to start
# or to halt. For example, if you set CONCURRENT_VGCHANGE_OPERATIONS=5
# and the node where the package is started has only one processor, then
# running concurrent volume group activations will not be beneficial.
# It is suggested that the number of concurrent operations be tuned
# carefully, increasing the values a little at a time and observing the
# effect on the performance, and the values should never be set to a value
# where the performance levels off or declines. Additionally, the values
# used should take into account the node with the least resources in the
# cluster, and how many other packages may be running on the node.
# For instance, if you tune the concurrent operations for a package so
# that it provides optimum performance for the package on a node while
# no other packages are running on that node, the package performance
# may be significantly reduced, or may even fail when other packages are
# already running on that node.
#
# CONCURRENT VGCHANGE OPERATIONS
# Specify the number of concurrent volume group activations or
# deactivations to allow during package startup or shutdown.
# Setting this value to an appropriate number may improve the performance
# while activating or deactivating a large number of volume groups in the
# package. If the specified value is less than 1, the script defaults it
# to 1 and proceeds with a warning message in the package control script
# logfile.
CONCURRENT_VGCHANGE_OPERATIONS=1
#
# USE MULTI-THREADED VGCHANGE
# Specify whether multi-threaded vgchange is to be used if available.
# 0 means that the multi-threaded option is not to be used and 1 means
# that the multi-threaded option is to be used. The default is set to 0.
# Multi-threaded vgchange has potential performance benefits.
# If the activation order of the paths defined in lvmtab is important then
# multi-threaded vgchange should not be used. If mirrored volume groups
# are synced during activation then using multi-threaded vgchange may
# worsen performance.
# Using the multi-threaded vgchange option can improve the activation
# performance of volume groups with multiple disks.
# CONCURRENT_VGCHANGE_OPERATIONS option is beneficial when multiple
# volume groups need to be activated. To get the best performance for
# volume group activation, use the multi-threaded vgchange option in
# combination with the CONCURRENT_VGCHANGE_OPERATIONS option.
ENABLE_THREADED_VGCHANGE=0
# CONCURRENT FSCK OPERATIONS
# Specify the number of concurrent fsck to allow during package startup.
# Setting this value to an appropriate number may improve the performance
# while checking a large number of file systems in the package. If the
# specified value is less than 1, the script defaults it to 1 and proceeds
# with a warning message in the package control script logfile.
CONCURRENT_FSCK_OPERATIONS=1
# CONCURRENT MOUNT AND UMOUNT OPERATIONS
# Specify the number of concurrent mounts and umounts to allow during
# package startup or shutdown.
# Setting this value to an appropriate number may improve the performance
# while mounting or un-mounting a large number of file systems in the package.
# If the specified value is less than 1, the script defaults it to 1 and
# proceeds with a warning message in the package control script logfile.
CONCURRENT_MOUNT_AND_UMOUNT_OPERATIONS=1
# Example: If a package uses 50 JFS filesystems, pkg01aa through pkg01bx,
# which are mounted on the 50 logical volumes lvol1..lvol50 for read and write
# operation, you may enter the following:
#
# CONCURRENT_FSCK_OPERATIONS=50
# CONCURRENT_MOUNT_AND_UMOUNT_OPERATIONS=50
#
# LV[0]=/dev/vg01/lvol1; FS[0]=/pkg01aa; FS_MOUNT_OPT[0]="-o rw";
# FS_UMOUNT_OPT[0]="-s"; FS_FSCK_OPT[0]="-s"; FS_TYPE[0]="vxfs"
#
# LV[1]=/dev/vg01/lvol2; FS[1]=/pkg01ab; FS_MOUNT_OPT[1]="-o rw"
# FS_UMOUNT_OPT[1]="-s"; FS_FSCK_OPT[1]="-s"; FS_TYPE[1]="vxfs"
# : : :
# : : :
# : : :
# LV[49]=/dev/vg01/lvol50; FS[49]=/pkg01bx; FS_MOUNT_OPT[49]="-o rw"
# FS_UMOUNT_OPT[49]="-s"; FS_FSCK_OPT[49]="-s"; FS_TYPE[49]="vxfs"
#
# IP ADDRESSES
# Specify the IP and Subnet address pairs which are used by this package.
# You could specify IPv4 or IPv6 IP and subnet address pairs.
# Uncomment IP[0]="" and SUBNET[0]="" and fill in the name of your first
# IP and subnet address. You must begin with IP[0] and SUBNET[0] and
# increment the list in sequence.
#
# For example, if this package uses an IP of 192.10.25.12 and a subnet of
# 192.10.25.0 enter:
# IP[0]=192.10.25.12
# SUBNET[0]=192.10.25.0
# (netmask=255.255.255.0)
#
# Hint: Run "netstat -i" to see the available subnets in the Network field.
#
# For example, if this package uses an IPv6 IP of 2001::1/64
# The address prefix identifies the subnet as 2001::/64 which is an available
# subnet.
# enter:
# IP[0]=2001::1
# SUBNET[0]=2001::/64
# (netmask=ffff:ffff:ffff:ffff::)
# Alternatively the IPv6 IP/Subnet pair can be specified without the prefix
# for the IPv6 subnet.
# IP[0]=2001::1
# SUBNET[0]=2001::
# (netmask=ffff:ffff:ffff:ffff::)
#
# Hint: Run "netstat -i" to see the available IPv6 subnets by looking
# at the address prefixes
# IP/Subnet address pairs for each IP address you want to add to a subnet
# interface card. Must be set in pairs, even for IP addresses on the same
# subnet.
#
#IP[0]=""
#SUBNET[0]=""
# SERVICE NAMES AND COMMANDS.
# Specify the service name, command, and restart parameters which are
# used by this package. Uncomment SERVICE_NAME[0]="", SERVICE_CMD[0]="",
# SERVICE_RESTART[0]="" and fill in the name of the first service, command,
# and restart parameters. You must begin with SERVICE_NAME[0], SERVICE_CMD[0],
# and SERVICE_RESTART[0] and increment the list in sequence.
#
# For example:
# SERVICE_NAME[0]=pkg1a
# SERVICE_CMD[0]="/usr/bin/X11/xclock -display 192.10.25.54:0"
# SERVICE_RESTART[0]="" # Will not restart the service.
#
# SERVICE_NAME[1]=pkg1b
# SERVICE_CMD[1]="/usr/bin/X11/xload -display 192.10.25.54:0"
# SERVICE_RESTART[1]="-r 2" # Will restart the service twice.
#
# SERVICE_NAME[2]=pkg1c
# SERVICE_CMD[2]="/usr/sbin/ping"
# SERVICE_RESTART[2]="-R" # Will restart the service an infinite
# number of times.
#
# Note: No environmental variables will be passed to the command, this
# includes the PATH variable. Absolute path names are required for the
# service command definition. Default shell is /usr/bin/sh.
#
#SERVICE_NAME[0]=""
#SERVICE_CMD[0]=""
#SERVICE_RESTART[0]=""
# DEFERRED_RESOURCE NAME
# Specify the full path name of the 'DEFERRED' resources configured for
# this package. Uncomment DEFERRED_RESOURCE_NAME[0]="" and fill in the
# full path name of the resource.
#
#DEFERRED_RESOURCE_NAME[0]=""
# DTC manager information for each DTC.
# Example: DTC[0]=dtc_20
#DTC_NAME[0]=
# HA_NFS_SCRIPT_EXTENSION
# If the package uses HA NFS, this variable can be used to alter the
# name of the HA NFS script. If not set, the name of this script is
# assumed to be "hanfs.sh". If set, the "sh" portion of the default
# script name is replaced by the value of this variable. So if
# HA_NFS_SCRIPT_EXTENSION is set to "package1.sh", for example, the name
# of the HA NFS script becomes "hanfs.package1.sh". In any case,
# the HA NFS script must be placed in the same directory as the package
# control script. This allows multiple packages to be run out of the
# same directory, as needed by SGeSAP.
#HA_NFS_SCRIPT_EXTENSION=""
# Setting the log file
log_file=${SG_SCRIPT_LOG_FILE:-$0.log}
# START OF CUSTOMER DEFINED FUNCTIONS
# This function is a place holder for customer define functions.
# You should define all actions you want to happen here, before the service is
# started. You can create as many functions as you need.
function customer_defined_run_cmds
{
# ADD customer defined run commands.
: # do nothing instruction, because a function must contain some command.
test_return 51
}
# This function is a place holder for customer define functions.
# You should define all actions you want to happen here, after the service is
# halted.
function customer_defined_halt_cmds
{
# ADD customer defined halt commands.
: # do nothing instruction, because a function must contain some command.
test_return 52
}
# END OF CUSTOMER DEFINED FUNCTIONS
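In summary, only two settings in this control script deviate from the cmmakepkg template defaults; everything else is left as generated. A minimal sketch of the changed lines (values taken from the edits above):

```shell
# The two cm1.cntl settings changed from the template defaults.
# Shared (-a s) activation is used because the Oracle RAC instances on
# both nodes access the raw logical volumes in vgora concurrently.
VGCHANGE="vgchange -a s"
VG[0]="vgora"
```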
10. Check the cluster configuration:
#cmcheckconf -v -C /etc/cmcluster/cmcl.ascii -P /etc/cmcluster/orapkg1/ora1.ascii
Begin cluster verification...
Checking cluster file: /etc/cmcluster/cmcl.ascii
Checking nodes ... Done
Checking existing configuration ... Done
Gathering storage information
Found 10 devices on node nodedb1
Found 9 devices on node nodedb2
Analysis of 19 devices should take approximately 3 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 4 volume groups on node nodedb1
Found 3 volume groups on node nodedb2
Analysis of 7 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing (this may take a while)
Completed network probing
Cluster demodb_cluster is an existing cluster
Begin file consistency checking
/etc/nsswitch.conf not found
-rw-r--r-- 1 root root 1585 Mar 31 2010 /etc/cmcluster/cmclfiles2check
-r--r--r-- 1 root root 524 Oct 22 2009 /etc/cmcluster/cmignoretypes.conf
-r-------- 1 bin bin 118 Oct 22 2009 /etc/cmcluster/cmknowncmds
-rw-r--r-- 1 root root 667 Oct 22 2009 /etc/cmcluster/cmnotdisk.conf
-rw-r--r-- 1 root sys 753 Jun 7 17:08 /etc/hosts
-r--r--r-- 1 bin bin 12662 May 17 13:28 /etc/services
/etc/nsswitch.conf not found
-rw-r--r-- 1 root root 1585 Mar 31 2010 /etc/cmcluster/cmclfiles2check
-r--r--r-- 1 root root 524 Oct 22 2009 /etc/cmcluster/cmignoretypes.conf
-r-------- 1 bin bin 118 Oct 22 2009 /etc/cmcluster/cmknowncmds
-rw-r--r-- 1 root root 667 Oct 22 2009 /etc/cmcluster/cmnotdisk.conf
-rw-r--r-- 1 root sys 763 Jun 6 13:52 /etc/hosts
-r--r--r-- 1 bin bin 12662 May 17 17:22 /etc/services
cksum: can't open /etc/nsswitch.conf: No such file or directory
1244500118 1585 /etc/cmcluster/cmclfiles2check
3382448570 753 /etc/hosts
2206705817 12662 /etc/services
61360265 524 /etc/cmcluster/cmignoretypes.conf
344617849 118 /etc/cmcluster/cmknowncmds
1390752988 667 /etc/cmcluster/cmnotdisk.conf
cksum: can't open /etc/nsswitch.conf: No such file or directory
1244500118 1585 /etc/cmcluster/cmclfiles2check
1243335535 763 /etc/hosts
2206705817 12662 /etc/services
61360265 524 /etc/cmcluster/cmignoretypes.conf
344617849 118 /etc/cmcluster/cmknowncmds
1390752988 667 /etc/cmcluster/cmnotdisk.conf
ERROR: /etc/cmcluster/cmclfiles2check permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmclfiles2check owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmclfiles2check checksum could not be checked on nodes nodedb1 nodedb2:
/etc/cmcluster/cmclfiles2check is the same across nodes nodedb1 nodedb2
ERROR: /etc/hosts permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/hosts owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/hosts checksum could not be checked on nodes nodedb1 nodedb2:
/etc/hosts is the same across nodes nodedb1 nodedb2
ERROR: /etc/nsswitch.conf permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/nsswitch.conf owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/nsswitch.conf checksum could not be checked on nodes nodedb1 nodedb2:
/etc/nsswitch.conf is the same across nodes nodedb1 nodedb2
ERROR: /etc/services permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/services owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/services checksum could not be checked on nodes nodedb1 nodedb2:
/etc/services is the same across nodes nodedb1 nodedb2
ERROR: /etc/cmcluster/cmignoretypes.conf permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmignoretypes.conf owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmignoretypes.conf checksum could not be checked on nodes nodedb1 nodedb2:
/etc/cmcluster/cmignoretypes.conf is the same across nodes nodedb1 nodedb2
ERROR: /etc/cmcluster/cmknowncmds permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmknowncmds owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmknowncmds checksum could not be checked on nodes nodedb1 nodedb2:
/etc/cmcluster/cmknowncmds is the same across nodes nodedb1 nodedb2
ERROR: /etc/cmcluster/cmnotdisk.conf permissions could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmnotdisk.conf owner could not be checked on nodes nodedb1 nodedb2:
ERROR: /etc/cmcluster/cmnotdisk.conf checksum could not be checked on nodes nodedb1 nodedb2:
/etc/cmcluster/cmnotdisk.conf is the same across nodes nodedb1 nodedb2
Command 'cat /etc/cmcluster/cmclfiles2check | /usr/sbin/cmcompare -W -v -n nodedb1 -n nodedb2' exited with status 2
WARNING: Unable to check consistency of all files listed in /etc/cmcluster/cmclfiles2check
/etc/cmcluster/orapkg1/ora1.ascii: A legacy package is being used.
Package orapkg already exists. It will be modified.
WARNING: Incorrect permissions for /etc/cmcluster/orapkg1 (40777). Directory must be executable for owner, and not writable by group and others on node nodedb1.
WARNING: Incorrect permissions for /etc/cmcluster/orapkg1 (40777). Directory must be executable for owner, and not writable by group and others on node nodedb2.
Maximum configured packages parameter is 300.
Configuring 1 package(s).
Modifying configuration on node nodedb1
Modifying configuration on node nodedb2
Modifying the cluster configuration for cluster demodb_cluster
Modifying the package configuration for package orapkg.
cmcheckconf: Verification completed with no errors found.
Use the cmapplyconf command to apply the configuration
11. Distribute the configuration files to all nodes:
#cmhaltcl -v -f
#cmapplyconf -v -C /etc/cmcluster/cmcl.ascii -P /etc/cmcluster/orapkg1/ora1.ascii
12. Starting and stopping MC/SG:
Start the cluster and view the status of both nodes:
nodedb1@[/etc/cmcluster/orapkg1#]cmruncl -v
nodedb1@[/etc/cmcluster/orapkg1#]cmviewcl -v
CLUSTER           STATUS
demodb_cluster    up

  NODE            STATUS       STATE
  nodedb1         up           running

  Cluster_Lock_LVM:
  VOLUME_GROUP    PHYSICAL_VOLUME    STATUS
  /dev/vglock     /dev/dsk/c5t1d0    up

  Network_Parameters:
  INTERFACE    STATUS    PATH        NAME
  PRIMARY      up        LinkAgg1    lan901
  PRIMARY      up        LinkAgg2    lan902

  NODE            STATUS       STATE
  nodedb2         up           running

  Cluster_Lock_LVM:
  VOLUME_GROUP    PHYSICAL_VOLUME    STATUS
  /dev/vglock     /dev/dsk/c5t1d0    up

  Network_Parameters:
  INTERFACE    STATUS    PATH        NAME
  PRIMARY      up        LinkAgg1    lan901
  PRIMARY      up        LinkAgg2    lan902

MULTI_NODE_PACKAGES

  PACKAGE      STATUS       STATE        AUTO_RUN    SYSTEM
  orapkg       up           running      enabled     no

  NODE_NAME    STATUS       STATE        SWITCHING
  nodedb1      up           running      enabled

  NODE_NAME    STATUS       STATE        SWITCHING
  nodedb2      up           running      enabled

  Other_Attributes:
  ATTRIBUTE_NAME    ATTRIBUTE_VALUE
  Style             legacy
  Priority          no_priority
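When scripting around these commands, the package state can be pulled out of cmviewcl's tabular output. A rough sketch (the pkg_is_up helper is illustrative, not a Serviceguard command; on the cluster you would pipe real `cmviewcl -v` output into it):

```shell
# Check whether a named package is reported "up" in cmviewcl output.
# On the cluster: cmviewcl -v | pkg_is_up orapkg
pkg_is_up() {
  awk -v pkg="$1" '$1 == pkg && $2 == "up" { found = 1 } END { exit !found }'
}

# Demo against a captured fragment of the cmviewcl -v output shown above.
sample="PACKAGE STATUS STATE AUTO_RUN SYSTEM
orapkg up running enabled no"

if printf '%s\n' "$sample" | pkg_is_up orapkg; then
  echo "orapkg is up"
else
  echo "orapkg is not up"
fi
```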
Stopping:
First halt the application package orapkg (in effect, just deactivating the shared volume group vgora); this can be done from either node:
nodedb2@[/etc/cmcluster#]cmhaltpkg orapkg
Halting package orapkg on node nodedb2
Successfully halted package orapkg on node nodedb2
One or more packages or package instances have been halted.
cmhaltpkg: Completed successfully on all packages specified
Then halt the cluster, again from either node:
nodedb2@[/etc/cmcluster#]cmhaltcl -v
Node nodedb1 is already halted.
Disabling all packages from starting on nodes to be halted.
Disabling all packages from running on nodedb2.
Warning: Do not modify or enable packages until the halt operation is completed.
Waiting for nodes to halt ..... done
Successfully halted all nodes specified.
Halt operation complete.
The main error messages are written to:
/var/adm/syslog/syslog.log
Detailed error messages are under /etc/cmcluster, e.g. /etc/cmcluster/orapkg1/cm1.cntl.log.
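For quick triage of those logs, something like the following can help (a sketch; scan_log is a hypothetical helper and the sample log line is fabricated for the demo — on the cluster, pass the real paths shown above):

```shell
# Print ERROR/WARNING lines from a log, or a note if the log is clean.
scan_log() {
  grep -E 'ERROR|WARNING' "$1" || echo "no errors or warnings in $1"
}

# Demo with a temporary fabricated log; on the cluster run e.g.
#   scan_log /var/adm/syslog/syslog.log
#   scan_log /etc/cmcluster/orapkg1/cm1.cntl.log
tmp=$(mktemp)
printf '%s\n' \
  "cmcld: WARNING: lan901 link down" \
  "cmcld: Cluster demodb_cluster started" > "$tmp"
scan_log "$tmp"
rm -f "$tmp"
```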