Posted: 2007-08-14 11:20:52

Solaris 9 + DiskSuite + Sun Cluster 3.1 + Oracle 10g Two-Node Cluster Installation Guide (original)

Please credit the source when reposting.

1. Hardware Configuration
Sun Fire V890 server                                  2
2 Gb single-port PCI Fibre Channel HBA                2
Quad-port 10/100/1000 autosensing Ethernet adapter    2
EMC CX500 storage array
2. Software Configuration
Solaris 9 (9/05)
Sun Cluster 3.1 (9/04)
Oracle 10g (10.2.0.1.0)
Solaris patches: 9_Recommended (17/04/06)
3. Operating System Installation
① Partition the system disk (146 GB):
Slice  Mount point        Size
0      /                  123006 MB
1      swap               16009 MB
2      (entire disk)
3      (unused)
4      (unused)
5      (unused)
6      /globaldevices     516 MB
7                         109 MB
② Hostnames, IP addresses, and netmasks
    sys-1  192.168.22.14   255.255.255.0
    Note: IPMP test addresses 192.168.22.16 and 192.168.22.18
          Oracle service address 192.168.22.17
    sys-2  192.168.22.15   255.255.255.0
    Note: IPMP test addresses 192.168.22.19 and 192.168.22.20
          Oracle service address 192.168.22.17
    Heartbeat interfaces: ce0 and ce1
③ Install patches
    Apply the 9_Recommended patch cluster (17/04/06).
④ Tune kernel parameters
  Add the following to /etc/system on both sys-1 and sys-2:
set shmsys:shminfo_shmmax=4294967295
set semsys:seminfo_semmap=1024
set semsys:seminfo_semmni=2048
set semsys:seminfo_semmns=2048
set semsys:seminfo_semmsl=2048
set semsys:seminfo_semmnu=2048
set semsys:seminfo_semume=200
set shmsys:shminfo_shmmin=200
set shmsys:shminfo_shmmni=200
set shmsys:shminfo_shmseg=200
set semsys:seminfo_semvmx=32767
set noexec_user_stack=1
set noexec_user_stack_log=1
set ce:ce_reclaim_pending=1
set ce:ce_taskq_disable=1
Note: set ce:ce_reclaim_pending=1 works around a bug that affects ce interfaces in NAFO groups.
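A typo in /etc/system can leave a node unbootable (boot -a recovers, but it is easier to check first). The check below is a self-contained sketch: `system.demo` and the shortened key list are stand-ins; point the loop at the real /etc/system with the full list on each node.

```shell
# Sketch: flag any required tunable missing from an /etc/system-style file.
# system.demo is a stand-in for /etc/system on sys-1 and sys-2.
cat > system.demo <<'EOF'
set shmsys:shminfo_shmmax=4294967295
set semsys:seminfo_semmni=2048
set noexec_user_stack=1
EOF
missing=0
for key in shminfo_shmmax seminfo_semmni noexec_user_stack; do
    grep -q "$key" system.demo || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all tunables present"
```

With the sample file above, every key is found and the script prints "all tunables present".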
⑤ Install the SG-XPCI1FC-QF2 HBA driver
    Download SAN_4.4.9_install_it.tar.Z from download.sun.com, then run the following on both sys-1 and sys-2:
      # compress -dc SAN_4.4.9_install_it.tar.Z | tar xvf -
      # ./install_it
Logfile /var/tmp/install_it_Sun_StorEdge_SAN.log : created on Thu Apr 20 11:12:53 CST 2006


This routine installs the packages and patches that
make up Sun StorEdge SAN.

Would you like to continue with the installation?
[y,n,?] y


Verifying system...


Checking for incompatiable SAN patches


Begin installation of SAN software

Installing StorEdge SAN packages -

         Package SUNWsan        : Installed Successfully.
         Package SUNWcfpl       : Installed Successfully.
         Package SUNWcfplx      : Installed Successfully.
         Package SUNWcfclr      : Installed Successfully.
         Package SUNWcfcl       : Installed Successfully.
         Package SUNWcfclx      : Installed Successfully.
         Package SUNWfchbr      : Installed Successfully.
         Package SUNWfchba      : Installed Successfully.
         Package SUNWfchbx      : Installed Successfully.
         Package SUNWfcsm       : Installed Successfully.
         Package SUNWfcsmx      : Installed Successfully.
         Package SUNWmdiu       : Installed Successfully.
         Package SUNWjfca       : Installed Successfully.
         Package SUNWjfcax      : Installed Successfully.
         Package SUNWjfcau      : Installed Successfully.
         Package SUNWjfcaux     : Installed Successfully.
         Package SUNWemlxs      : Installed Successfully.
         Package SUNWemlxsx     : Installed Successfully.
         Package SUNWemlxu      : Installed Successfully.
         Package SUNWemlxux     : Installed Successfully.

StorEdge SAN packages installation completed.

Begin patch installation
        Patch 111847-08         : Installed Successfully.
        Patch 113046-01         : Installed Previously.
        Patch 113049-01         : Installed Previously.
        Patch 113039-13         : Installed Successfully.
        Patch 113040-18         : Installed Successfully.
        Patch 113041-11         : Installed Successfully.
        Patch 113042-14         : Installed Successfully.
        Patch 113043-12         : Installed Successfully.
        Patch 113044-05         : Installed Successfully.
        Patch 114476-07         : Installed Successfully.
        Patch 114477-03         : Installed Successfully.
        Patch 114478-07         : Installed Successfully.
        Patch 114878-10         : Installed Successfully.
        Patch 119914-08         : Installed Successfully.


Installation of Sun StorEdge SAN completed Successfully

-------------------------------------------
-------------------------------------------
        Please reboot your system.
-------------------------------------------
-------------------------------------------
⑥ Set local-mac-address? to true at the OK prompt
  Run the following on both sys-1 and sys-2:
  ok setenv local-mac-address? true
  ok reset-all
⑦ Edit /etc/hosts
  On both sys-1 and sys-2, make /etc/hosts read:
  127.0.0.1       localhost
  192.168.22.14   sys-1    loghost
  192.168.22.15   sys-2
  192.168.22.17   oracle
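Mistakes in /etc/hosts tend to resurface later as confusing scinstall or IPMP errors, so it is worth confirming each name appears exactly once. A self-contained sketch; `hosts.sample` stands in for the real /etc/hosts:

```shell
# Sketch: count how many hosts-file entries each cluster name has.
# hosts.sample is a stand-in for /etc/hosts on the nodes.
cat > hosts.sample <<'EOF'
127.0.0.1       localhost
192.168.22.14   sys-1    loghost
192.168.22.15   sys-2
192.168.22.17   oracle
EOF
for h in sys-1 sys-2 oracle; do
    n=$(awk -v h="$h" '{for (i = 2; i <= NF; i++) if ($i == h) c++} END {print c + 0}' hosts.sample)
    echo "$h: $n"
done
```

Each name should report exactly 1; a 0 means a missing entry, 2 or more means a duplicate.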
4. Installing SSH
Note: perform the following on both sys-1 and sys-2.
① Download the software
gcc-3.3.2-sol9-sparc-local
make-3.80-sol9-sparc-local
ssh-3.2.5.tar
② Install the gcc and make packages
# pkgadd -d gcc*
# pkgadd -d make*
③ Edit /.profile
# cp /etc/skel/local.profile /.profile
# vi /.profile
Add:
PATH=/usr/bin:/sbin:/usr/local/bin:/usr/local/sbin:/usr/ccs/bin:/usr/sbin:/usr/openwin/bin:/usr/ucb:/etc:.
export PATH
# . /.profile    (source it so the new PATH takes effect)
④ Compile and install SSH
# tar xvf ssh-3.2.5.tar
# cd ssh*
# ./configure
# make
# make install
⑤ Generate the host key
# ssh-keygen2 -b 1024    (supply the root user name and passphrase when prompted)
⑥ Start sshd2
# /usr/local/sbin/sshd2
⑦ Start sshd2 automatically at boot
# vi /etc/rc2.d/S99local and add the line: /usr/local/sbin/sshd2
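The one-line S99local entry works, but a slightly more defensive version of that rc script avoids boot-time errors if SSH is ever removed. A minimal sketch, assuming the install path used above:

```shell
#!/sbin/sh
# Sketch of /etc/rc2.d/S99local: start sshd2 at run level 2, but only if
# the binary is actually present, so boot stays clean without it.
SSHD=/usr/local/sbin/sshd2
if [ -x "$SSHD" ]; then
    "$SSHD"
fi
```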
   
5. Configuring IPMP
Note: perform the following on both sys-1 and sys-2 (the addresses shown are sys-1's; substitute sys-2's own addresses there).
Interfaces eri0 and ce3 make up the IPMP group.
① Edit /etc/hostname.eri0
# vi /etc/hostname.eri0 and add:
192.168.22.14 netmask + broadcast + group xxml up addif 192.168.22.16 deprecated -failover netmask + broadcast + up
② Edit /etc/hostname.ce3
# vi /etc/hostname.ce3 and add:
192.168.22.18 netmask + broadcast + group xxml deprecated -failover standby up
③ Add the default gateway
# vi /etc/defaultrouter and add:
192.168.22.10
# ping 192.168.22.10
192.168.22.10 is alive
Note: a default gateway must be configured, and the hosts must be able to ping it, before IPMP will function.
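Once the interface files are in place and the node rebooted (or the interfaces plumbed by hand), `ifconfig -a` should show `groupname xxml` on both eri0 and ce3. That check can be scripted; the inlined SAMPLE below mimics real output so the sketch runs anywhere:

```shell
# Sketch: count interfaces reporting IPMP group "xxml". SAMPLE mimics two
# interface stanzas of `ifconfig -a`; on a real node, use instead:
#   n=$(ifconfig -a | grep -c 'groupname xxml')
SAMPLE='eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500
        groupname xxml
ce3: flags=69040843<UP,BROADCAST,DEPRECATED,IPv4,NOFAILOVER,STANDBY> mtu 1500
        groupname xxml'
n=$(printf '%s\n' "$SAMPLE" | grep -c 'groupname xxml')
echo "interfaces in group xxml: $n"
```

Both interfaces must be in the group, so the count should be 2.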
6. Installing the Sun Cluster 3.1 Software
Note: perform the following on both sys-1 and sys-2.
① Install the Sun Web Console
# cd /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_web_console/2.1
# ./setup
(output omitted)

Installation complete.

Server not started! No management applications registered.
② Install the Sun Cluster 3.1 software
# cd /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster
# ./installer

In the installer GUI, click Next through each configuration screen, then click Install Now; when the installation completes, click Exit.

7. Establishing the Cluster Nodes
① Establish the cluster with sys-1 as the first node
# cd /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools
# ./scinstall
  *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install a cluster or cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Add support for new data services to this cluster node
      * 4) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1

  *** Install Menu ***

    Please select from any one of the following options:

        1) Install all nodes of a new cluster
        2) Install just this machine as the first node of a new cluster
        3) Add this machine as a node in an existing cluster

        ?) Help with menu options
        q) Return to the Main Menu

    Option:  2

  *** Installing just the First Node of a New Cluster ***


    This option is used to establish a new cluster using this machine as
    the first node in that cluster.

    Once the cluster framework software is installed, you will be asked
    for the name of the cluster. Then, you will have the opportunity to
    run sccheck(1M) to test this machine for basic Sun Cluster
    pre-configuration requirements.

    After sccheck(1M) passes, you will be asked for the names of the
    other nodes which will initially be joining that cluster. In
    addition, you will be asked to provide certain cluster transport
    configuration information.

    Press Control-d at any time to return to the Main Menu.


    Do you want to continue (yes/no) [yes]?  

  >>> Software Patch Installation <<<

    If there are any Solaris or Sun Cluster patches that need to be added
    as part of this Sun Cluster installation, scinstall can add them for
    you. All patches that need to be added must first be downloaded into
    a common patch directory. Patches can be downloaded into the patch
    directory either as individual patches or as patches grouped together
    into one or more tar, jar, or zip files.

    If a patch list file is provided in the patch directory, only those
    patches listed in the patch list file are installed. Otherwise, all
    patches found in the directory will be installed. Refer to the
    patchadd(1M) man page for more information regarding patch list files.

    Do you want scinstall to install patches for you (yes/no) [yes]?  

    What is the name of the patch directory?  /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages

    If a patch list file is provided in the patch directory, only those
    patches listed in the patch list file are installed. Otherwise, all
    patches found in the directory will be installed. Refer to the
    patchadd(1M) man page for more information regarding patch list files.

    Do you want scinstall to use a patch list file (yes/no) [no]?  

  >>> Cluster Name <<<

    Each cluster has a name assigned to it. The name can be made up of
    any characters other than whitespace. Each cluster name should be
    unique within the namespace of your enterprise.

    What is the name of the cluster you want to establish?  test

  >>> Check <<<

    This step allows you to run sccheck(1M) to verify that certain basic
    hardware and software pre-configuration requirements have been met.
    If sccheck(1M) detects potential problems with configuring this
    machine as a cluster node, a report of failed checks is prepared and
    available for display on the screen. Data gathering and report
    generation can take several minutes, depending on system
    configuration.

    Do you want to run sccheck (yes/no) [yes]?  no
  >>> Cluster Nodes <<<

    This Sun Cluster release supports a total of up to 16 nodes.

    Please list the names of the other nodes planned for the initial
    cluster configuration. List one node name per line. When finished,
    type Control-D:

    Node name:  sys-1
    Node name:  sys-2
    Node name (Control-D to finish):  ^D


    This is the complete list of nodes:

        sys-1
        sys-2

    Is it correct (yes/no) [yes]?  

  >>> Authenticating Requests to Add Nodes <<<

    Once the first node establishes itself as a single node cluster,
    other nodes attempting to add themselves to the cluster configuration
    must be found on the list of nodes you just provided. You can modify
    this list using scconf(1M) or other tools once the cluster has been
    established.

    By default, nodes are not securely authenticated as they attempt to
    add themselves to the cluster configuration. This is generally
    considered adequate, since nodes which are not physically connected
    to the private cluster interconnect will never be able to actually
    join the cluster. However, DES authentication is available. If DES
    authentication is selected, you must configure all necessary
    encryption keys before any node will be allowed to join the cluster
    (see keyserv(1M), publickey(4)).

    Do you need to use DES authentication (yes/no) [no]?  

  >>> Network Address for the Cluster Transport <<<

    The private cluster transport uses a default network address of
    172.16.0.0. But, if this network address is already in use elsewhere
    within your enterprise, you may need to select another address from
    the range of recommended private addresses (see RFC 1597 for details).

    If you do select another network address, bear in mind that the Sun
    Cluster software requires that the rightmost two octets always be
    zero.

    The default netmask is 255.255.0.0. You can select another netmask,
    as long as it minimally masks all bits given in the network address.

    Is it okay to accept the default network address (yes/no) [yes]?  

    Is it okay to accept the default netmask (yes/no) [yes]?  

  >>> Point-to-Point Cables <<<

    The two nodes of a two-node cluster may use a directly-connected
    interconnect. That is, no cluster transport junctions are configured.
    However, when there are greater than two nodes, this interactive form
    of scinstall assumes that there will be exactly two cluster transport
    junctions.

    Does this two-node cluster use transport junctions (yes/no) [yes]?  

  >>> Cluster Transport Junctions <<<

    All cluster transport adapters in this cluster must be cabled to a
    transport junction, or "switch". And, each adapter on a given node
    must be cabled to a different junction. Interactive scinstall
    requires that you identify two switches for use in the cluster and
    the two transport adapters on each node to which they are cabled.

    What is the name of the first junction in the cluster [switch1]?  

    What is the name of the second junction in the cluster [switch2]?  

  >>> Cluster Transport Adapters and Cables <<<

    You must configure at least two cluster transport adapters for each
    node in the cluster. These are the adapters which attach to the
    private cluster interconnect.

    Select the first cluster transport adapter:

        1) ce0
        2) ce1
        3) ce2
        4) ce3
        5) ge0
        6) Other

    Option:  1

    Adapter "ce0" is an Ethernet adapter.

    Searching for any unexpected network traffic on "ce0" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.

    The "dlpi" transport type will be set for this cluster.

    Name of the junction to which "ce0" is connected [switch1]?  

    Each adapter is cabled to a particular port on a transport junction.
    And, each port is assigned a name. You can explicitly assign a name
    to each port. Or, for Ethernet switches, you can choose to allow
    scinstall to assign a default name for you. The default port name
    assignment sets the name to the node number of the node hosting the
    transport adapter at the other end of the cable.

    For more information regarding port naming requirements, refer to the
    scconf_transp_jct family of man pages (e.g.,
    scconf_transp_jct_dolphinswitch(1M)).

    Use the default port name for the "ce0" connection (yes/no) [yes]?  

    Select the second cluster transport adapter:

        1) ce0
        2) ce1
        3) ce2
        4) ce3
        5) ge0
        6) Other

    Option:  2

    Adapter "ce1" is an Ethernet adapter.

    Searching for any unexpected network traffic on "ce1" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.

    Name of the junction to which "ce1" is connected [switch2]?  

    Use the default port name for the "ce1" connection (yes/no) [yes]?  

  >>> Global Devices File System <<<

    Each node in the cluster must have a local file system mounted on
    /global/.devices/node@ before it can successfully participate
    as a cluster member. Since the "nodeID" is not assigned until
    scinstall is run, scinstall will set this up for you.

    You must supply the name of either an already-mounted file system or
    raw disk partition which scinstall can use to create the global
    devices file system. This file system or partition should be at least
    512 MB in size.

    If an already-mounted file system is used, the file system must be
    empty. If a raw disk partition is used, a new file system will be
    created for you.

    The default is to use /globaldevices.

    Is it okay to use this default (yes/no) [yes]?  

  >>> Automatic Reboot <<<

    Once scinstall has successfully installed and initialized the Sun
    Cluster software for this machine, it will be necessary to reboot.
    After the reboot, this machine will be established as the first node
    in the new cluster.

    Do you want scinstall to reboot for you (yes/no) [yes]?  

  >>> Confirmation <<<

    Your responses indicate the following options to scinstall:

      scinstall -ik \
           -C test \
           -F \
           -M patchdir=/cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages \
           -T node=sys-1,node=sys-2,authtype=sys \
           -A trtype=dlpi,name=ce0 -A trtype=dlpi,name=ce1 \
           -B type=switch,name=switch1 -B type=switch,name=switch2 \
           -m endpoint=:ce0,endpoint=switch1 \
           -m endpoint=:ce1,endpoint=switch2

    Are these the options you want to use (yes/no) [yes]?  

    Do you want to continue with the install (yes/no) [yes]?  


Checking device to use for global devices file system ... done
Installing patches ... failed

scinstall:  Problems detected during extraction or installation of patches.


Initializing cluster name to "test" ... done
Initializing authentication options ... done
Initializing configuration for adapter "ce0" ... done
Initializing configuration for adapter "ce1" ... done
Initializing configuration for junction "switch1" ... done
Initializing configuration for junction "switch2" ... done
Initializing configuration for cable ... done
Initializing configuration for cable ... done


Setting the node ID for "sys-1" ... done (id=1)

Setting the major number for the "did" driver ... done
"did" driver major number set to 300

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Installing a default NTP configuration ... done
Please complete the NTP configuration after scinstall has finished.

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done

Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done

Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.042006154016
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done
Network routing has been disabled on this node by creating /etc/notrouter.
Having a cluster node act as a router is not supported by Sun Cluster.
Please do not re-enable network routing.

Log file - /var/cluster/logs/install/scinstall.log.2140
② Add sys-2 to the cluster as the second node
# cd /cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_9/Tools
# ./scinstall
*** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install a cluster or cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Add support for new data services to this cluster node
      * 4) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

Option:  1
*** Install Menu ***

    Please select from any one of the following options:

        1) Install all nodes of a new cluster
        2) Install just this machine as the first node of a new cluster
        3) Add this machine as a node in an existing cluster

        ?) Help with menu options
        q) Return to the Main Menu

    Option:  3
*** Adding a Node to an Existing Cluster ***


    This option is used to add this machine as a node in an already
    established cluster. If this is an initial cluster install, there may
    only be a single node which has established itself in the new cluster.

    Once the cluster framework software is installed, you will be asked
    to provide both the name of the cluster and the name of one of the
    nodes already in the cluster. Then, sccheck(1M) is run to test this
    machine for basic Sun Cluster pre-configuration requirements.

    After sccheck(1M) passes, you may be asked to provide certain cluster
    transport configuration information.

    Press Control-d at any time to return to the Main Menu.


    Do you want to continue (yes/no) [yes]?  
  >>> Software Patch Installation <<<

    If there are any Solaris or Sun Cluster patches that need to be added
    as part of this Sun Cluster installation, scinstall can add them for
    you. All patches that need to be added must first be downloaded into
    a common patch directory. Patches can be downloaded into the patch
    directory either as individual patches or as patches grouped together
    into one or more tar, jar, or zip files.

    If a patch list file is provided in the patch directory, only those
    patches listed in the patch list file are installed. Otherwise, all
    patches found in the directory will be installed. Refer to the
    patchadd(1M) man page for more information regarding patch list files.

    Do you want scinstall to install patches for you (yes/no) [yes]?  

    What is the name of the patch directory [/cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages]?  

    If a patch list file is provided in the patch directory, only those
    patches listed in the patch list file are installed. Otherwise, all
    patches found in the directory will be installed. Refer to the
    patchadd(1M) man page for more information regarding patch list files.

    Do you want scinstall to use a patch list file (yes/no) [no]?
  >>> Sponsoring Node <<<

    For any machine to join a cluster, it must identify a node in that
    cluster willing to "sponsor" its membership in the cluster. When
    configuring a new cluster, this "sponsor" node is typically the first
    node used to build the new cluster. However, if the cluster is
    already established, the "sponsoring" node can be any node in that
    cluster.

    Already established clusters can keep a list of hosts which are able
    to configure themselves as new cluster members. This machine should
    be in the join list of any cluster which it tries to join. If the
    list does not include this machine, you may need to add it using
    scconf(1M) or other tools.

    And, if the target cluster uses DES to authenticate new machines
    attempting to configure themselves as new cluster members, the
    necessary encryption keys must be configured before any attempt to
    join.

    What is the name of the sponsoring node [sys-1]?
>>> Cluster Name <<<

    Each cluster has a name assigned to it. When adding a node to the
    cluster, you must identify the name of the cluster you are attempting
    to join. A sanity check is performed to verify that the "sponsoring"
    node is a member of that cluster.

    What is the name of the cluster you want to join [test]?  

    Attempting to contact "sys-1" ... done

    Cluster name "xxml" is correct.
   
Press Enter to continue:
>>> Check <<<

    This step allows you to run sccheck(1M) to verify that certain basic
    hardware and software pre-configuration requirements have been met.
    If sccheck(1M) detects potential problems with configuring this
    machine as a cluster node, a report of failed checks is prepared and
    available for display on the screen. Data gathering and report
    generation can take several minutes, depending on system
    configuration.

    Do you want to run sccheck (yes/no) [yes]?  No
  >>> Autodiscovery of Cluster Transport <<<

    If you are using Ethernet adapters as your cluster transport
    adapters, autodiscovery is the best method for configuring the
    cluster transport.

    However, it appears that scinstall has already been run at least once
    before on this machine. You can either attempt to autodiscover or
    continue with the answers that you gave the last time you ran
    scinstall.

    Do you want to use autodiscovery anyway (yes/no) [no]?  yes
    Probing .....................

    The following connection was discovered:

        sys-1:ce1  switch2  sys-2:ce1

    Probes were sent out from all transport adapters configured for
    cluster node "sys-1". But, they were only received on one of the
    network adapters on this machine ("sys-2"). This may be due to
    any number of reasons, including improper cabling, an improper
    configuration for "sys-1", or a switch which was confused by the
    probes.

    You can either attempt to correct the problem and try the probes
    again or try to manually configure the transport. Correcting the
    problem may involve re-cabling, changing the configuration for
    "sys-1", or fixing hardware.

    Do you want to try again (yes/no) [yes]?  no

  >>> Point-to-Point Cables <<<

    The two nodes of a two-node cluster may use a directly-connected
    interconnect. That is, no cluster transport junctions are configured.
    However, when there are greater than two nodes, this interactive form
    of scinstall assumes that there will be exactly two cluster transport
    junctions.

    Is this a two-node cluster (yes/no) [yes]?  

    Does this two-node cluster use transport junctions (yes/no) [yes]?  

  >>> Cluster Transport Junctions <<<

    All cluster transport adapters in this cluster must be cabled to a
    transport junction, or "switch". And, each adapter on a given node
    must be cabled to a different junction. Interactive scinstall
    requires that you identify two switches for use in the cluster and
    the two transport adapters on each node to which they are cabled.

    What is the name of the first junction in the cluster [switch1]?  

    What is the name of the second junction in the cluster [switch2]?  

  >>> Cluster Transport Adapters and Cables <<<

    You must configure at least two cluster transport adapters for each
    node in the cluster. These are the adapters which attach to the
    private cluster interconnect.

    What is the name of the first cluster transport adapter (help) [ce0]?  

    Adapter "ce0" is an Ethernet adapter.

    The "dlpi" transport type will be set for this cluster.

    Name of the junction to which "ce0" is connected [switch1]?  

    Each adapter is cabled to a particular port on a transport junction.
    And, each port is assigned a name. You can explicitly assign a name
    to each port. Or, for Ethernet switches, you can choose to allow
    scinstall to assign a default name for you. The default port name
    assignment sets the name to the node number of the node hosting the
    transport adapter at the other end of the cable.

    For more information regarding port naming requirements, refer to the
    scconf_transp_jct family of man pages (e.g.,
    scconf_transp_jct_dolphinswitch(1M)).

    Use the default port name for the "ce0" connection (yes/no) [yes]?  

    What is the name of the second cluster transport adapter (help) [ce1]?  

    Adapter "ce1" is an Ethernet adapter.

    Name of the junction to which "ce1" is connected [switch2]?  

    Use the default port name for the "ce1" connection (yes/no) [yes]?  

  >>> Global Devices File System <<<

    Each node in the cluster must have a local file system mounted on
    /global/.devices/node@ before it can successfully participate
    as a cluster member. Since the "nodeID" is not assigned until
    scinstall is run, scinstall will set this up for you.

    You must supply the name of either an already-mounted file system or
    raw disk partition which scinstall can use to create the global
    devices file system. This file system or partition should be at least
    512 MB in size.

    If an already-mounted file system is used, the file system must be
    empty. If a raw disk partition is used, a new file system will be
    created for you.

    The default is to use /globaldevices.

    Is it okay to use this default (yes/no) [yes]?  

  >>> Automatic Reboot <<<

    Once scinstall has successfully installed and initialized the Sun
    Cluster software for this machine, it will be necessary to reboot.
    The reboot will cause this machine to join the cluster for the first
    time.

    Do you want scinstall to reboot for you (yes/no) [yes]?  

  >>> Confirmation <<<

    Your responses indicate the following options to scinstall:

      scinstall -ik \
           -C test \
           -N sys-1 \
           -M patchdir=/cdrom/suncluster_31_u3_sol_sparc/Solaris_sparc/Product/sun_cluster/Solaris_8/Packages \
           -A trtype=dlpi,name=ce0 -A trtype=dlpi,name=ce1 \
           -m endpoint=:ce0,endpoint=switch1 \
           -m endpoint=:ce1,endpoint=switch2

    Are these the options you want to use (yes/no) [yes]?  

    Do you want to continue with the install (yes/no) [yes]?  


Checking device to use for global devices file system ... done
Installing patches ... failed

scinstall:  Problems detected during extraction or installation of patches.


Adding node "sys-2" to the cluster configuration ... done
Adding adapter "ce0" to the cluster configuration ... done
Adding adapter "ce1" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "sys-1" ... done
Copying the cacao keys from "sys-1" ... done


Setting the node ID for "sys-2" ... done (id=2)

Setting the major number for the "did" driver ...
Obtaining the major number for the "did" driver from "sys-1" ... done
"did" driver major number set to 300

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done
Installing a default NTP configuration ... done
Please complete the NTP configuration after scinstall has finished.

Verifying that "cluster" is set for "hosts" in nsswitch.conf ... done
Adding the "cluster" switch to "hosts" in nsswitch.conf ... done

Verifying that "cluster" is set for "netmasks" in nsswitch.conf ... done
Adding the "cluster" switch to "netmasks" in nsswitch.conf ... done

Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.042206104133
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ...
8. Creating the Shared Diskset
Note: the following steps are performed on sys-1 only.
① List the DID devices
# scdidadm -L
1        sys-1:/dev/rdsk/c0t0d0    /dev/did/rdsk/d1     
2        sys-1:/dev/rdsk/c1t2d0    /dev/did/rdsk/d2     
3        sys-1:/dev/rdsk/c1t0d0    /dev/did/rdsk/d3     
4        sys-1:/dev/rdsk/c1t5d0    /dev/did/rdsk/d4     
5        sys-1:/dev/rdsk/c1t1d0    /dev/did/rdsk/d5     
6        sys-1:/dev/rdsk/c1t4d0    /dev/did/rdsk/d6     
7        sys-1:/dev/rdsk/c1t3d0    /dev/did/rdsk/d7     
8        sys-1:/dev/rdsk/c3t5006016930226EF3d0 /dev/did/rdsk/d8     
8        sys-1:/dev/rdsk/c3t5006016030226EF3d0 /dev/did/rdsk/d8     
8        sys-1:/dev/rdsk/c2t5006016830226EF3d0 /dev/did/rdsk/d8     
8        sys-1:/dev/rdsk/c2t5006016130226EF3d0 /dev/did/rdsk/d8     
9        sys-1:/dev/rdsk/c2t5006016830226EF3d53 /dev/did/rdsk/d9     
9        sys-1:/dev/rdsk/c2t5006016130226EF3d53 /dev/did/rdsk/d9     
9        sys-1:/dev/rdsk/c3t5006016930226EF3d53 /dev/did/rdsk/d9     
9        sys-1:/dev/rdsk/c3t5006016030226EF3d53 /dev/did/rdsk/d9     
9        sys-1:/dev/rdsk/c4t60060160478D1900F658B0B052D0DA11d0 /dev/did/rdsk/d9     
9        sys-2:/dev/rdsk/c2t5006016830226EF3d53 /dev/did/rdsk/d9     
9        sys-2:/dev/rdsk/c2t5006016130226EF3d53 /dev/did/rdsk/d9     
9        sys-2:/dev/rdsk/c3t5006016030226EF3d53 /dev/did/rdsk/d9     
9        sys-2:/dev/rdsk/c3t5006016930226EF3d53 /dev/did/rdsk/d9     
10       sys-1:/dev/rdsk/c2t5006016830226EF3d52 /dev/did/rdsk/d10   
10       sys-1:/dev/rdsk/c2t5006016130226EF3d52 /dev/did/rdsk/d10   
10       sys-1:/dev/rdsk/c3t5006016930226EF3d52 /dev/did/rdsk/d10   
10       sys-1:/dev/rdsk/c3t5006016030226EF3d52 /dev/did/rdsk/d10   
10       sys-1:/dev/rdsk/c4t60060160478D19001269AABC52D0DA11d0 /dev/did/rdsk/d10   
10       sys-2:/dev/rdsk/c2t5006016830226EF3d52 /dev/did/rdsk/d10   
10       sys-2:/dev/rdsk/c2t5006016130226EF3d52 /dev/did/rdsk/d10   
10       sys-2:/dev/rdsk/c3t5006016030226EF3d52 /dev/did/rdsk/d10   
10       sys-2:/dev/rdsk/c3t5006016930226EF3d52 /dev/did/rdsk/d10   
11       sys-1:/dev/rdsk/c2t5006016830226EF3d51 /dev/did/rdsk/d11   
11      sys-1:/dev/rdsk/c2t5006016130226EF3d51 /dev/did/rdsk/d11   
11       sys-1:/dev/rdsk/c3t5006016930226EF3d51 /dev/did/rdsk/d11   
11       sys-1:/dev/rdsk/c3t5006016030226EF3d51 /dev/did/rdsk/d11   
11       sys-1:/dev/rdsk/c4t60060160478D1900F0377FC752D0DA11d0 /dev/did/rdsk/d11   
11       sys-2:/dev/rdsk/c2t5006016830226EF3d51 /dev/did/rdsk/d11   
11       sys-2:/dev/rdsk/c2t5006016130226EF3d51 /dev/did/rdsk/d11   
11       sys-2:/dev/rdsk/c3t5006016030226EF3d51 /dev/did/rdsk/d11   
11       sys-2:/dev/rdsk/c3t5006016930226EF3d51 /dev/did/rdsk/d11   
12       sys-1:/dev/rdsk/c2t5006016830226EF3d50 /dev/did/rdsk/d12   
12       sys-1:/dev/rdsk/c2t5006016130226EF3d50 /dev/did/rdsk/d12   
12       sys-1:/dev/rdsk/c3t5006016930226EF3d50 /dev/did/rdsk/d12   
12       sys-1:/dev/rdsk/c3t5006016030226EF3d50 /dev/did/rdsk/d12   
12       sys-1:/dev/rdsk/c4t60060160478D190082D97FD952D0DA11d0 /dev/did/rdsk/d12   
12       sys-2:/dev/rdsk/c2t5006016830226EF3d50 /dev/did/rdsk/d12   
12       sys-2:/dev/rdsk/c2t5006016130226EF3d50 /dev/did/rdsk/d12   
12       sys-2:/dev/rdsk/c3t5006016030226EF3d50 /dev/did/rdsk/d12   
12       sys-2:/dev/rdsk/c3t5006016930226EF3d50 /dev/did/rdsk/d12
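Each shared LUN appears once per fibre-channel path in the `scdidadm -L` listing, but all paths resolve to a single DID device. As a quick sketch (Python, with a few sample lines copied from the listing above), the paths can be grouped by DID device to confirm the multipathing:

```python
# Group `scdidadm -L` output lines by DID device to see each
# device's redundant local paths (sample lines from the listing above).
from collections import defaultdict

def group_did_paths(listing: str) -> dict:
    """Map DID device path -> list of host-local device paths."""
    paths = defaultdict(list)
    for line in listing.strip().splitlines():
        instance, local_path, did_path = line.split()
        paths[did_path].append(local_path)
    return dict(paths)

sample = """\
8 sys-1:/dev/rdsk/c3t5006016930226EF3d0 /dev/did/rdsk/d8
8 sys-1:/dev/rdsk/c3t5006016030226EF3d0 /dev/did/rdsk/d8
9 sys-1:/dev/rdsk/c2t5006016830226EF3d53 /dev/did/rdsk/d9
9 sys-2:/dev/rdsk/c2t5006016830226EF3d53 /dev/did/rdsk/d9
"""

grouped = group_did_paths(sample)
# d8 is reached over two HBA paths on sys-1; d9 is visible from both nodes.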
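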
② Create the metaset disk group
# metadb -a -f -c 3 c1t0d0s7 (run on sys-2 as well)
# metadb
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c1t0d0s7
     a        u         8208            8192            /dev/dsk/c1t0d0s7
     a        u         16400           8192            /dev/dsk/c1t0d0s7
# metaset -s oraset -a -h sys-1 sys-2
# metaset -s oraset -t
# metaset

Set name = oraset, Set number = 1

Host                Owner
        sys-1         yes
        sys-2         
#
③ Add the shared DID devices to the oraset set
# metaset -s oraset -a /dev/did/rdsk/d9 /dev/did/rdsk/d10 \
/dev/did/rdsk/d11 /dev/did/rdsk/d12 /dev/did/rdsk/d13 \
/dev/did/rdsk/d14 /dev/did/rdsk/d15 /dev/did/rdsk/d16 \
/dev/did/rdsk/d17 /dev/did/rdsk/d18 /dev/did/rdsk/d19 \
/dev/did/rdsk/d20 /dev/did/rdsk/d21 /dev/did/rdsk/d22 \
/dev/did/rdsk/d23 /dev/did/rdsk/d24 /dev/did/rdsk/d25 \
/dev/did/rdsk/d26 /dev/did/rdsk/d27 /dev/did/rdsk/d28 \
/dev/did/rdsk/d29 /dev/did/rdsk/d30 /dev/did/rdsk/d31 \
/dev/did/rdsk/d32 /dev/did/rdsk/d33 /dev/did/rdsk/d34 \
/dev/did/rdsk/d35 /dev/did/rdsk/d36 /dev/did/rdsk/d37 \
/dev/did/rdsk/d38 /dev/did/rdsk/d39 /dev/did/rdsk/d40 \
/dev/did/rdsk/d41 /dev/did/rdsk/d42 /dev/did/rdsk/d43 \
/dev/did/rdsk/d44 /dev/did/rdsk/d45 /dev/did/rdsk/d46 \
/dev/did/rdsk/d47 /dev/did/rdsk/d48 /dev/did/rdsk/d49 \
/dev/did/rdsk/d50 /dev/did/rdsk/d51 /dev/did/rdsk/d52 \
/dev/did/rdsk/d53 /dev/did/rdsk/d54 /dev/did/rdsk/d55 \
/dev/did/rdsk/d56 /dev/did/rdsk/d57 /dev/did/rdsk/d58 \
/dev/did/rdsk/d59 /dev/did/rdsk/d60 /dev/did/rdsk/d61 \
/dev/did/rdsk/d62
#
④ Build a RAID 0 concatenation across 13 disks
# metainit oraset/d110 13 1 /dev/did/rdsk/d9s0 1 \
/dev/did/rdsk/d10s0 1 /dev/did/rdsk/d11s0 1 \
/dev/did/rdsk/d12s0 1 /dev/did/rdsk/d13s0 1 \
/dev/did/rdsk/d14s0 1 /dev/did/rdsk/d15s0 1 \
/dev/did/rdsk/d16s0 1 /dev/did/rdsk/d17s0 1 \
/dev/did/rdsk/d18s0 1 /dev/did/rdsk/d19s0 1 \
/dev/did/rdsk/d20s0 1 /dev/did/rdsk/d21s0
oraset/d110: Concat/Stripe is setup
Note: the difference between concatenation and stripe
     RAID 0 combines the space of several disks into one large logical volume. A concatenation appends the disks' space end to end, one after another, while a stripe divides each disk into fixed-size chunks and interleaves those chunks (regardless of which disk each chunk lives on) into the logical volume.
     In practice, a concatenation fills one physical disk completely before moving on to the next, whereas a stripe can read and write several physical disks at once, so striping gives better I/O performance.
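As an illustrative sketch of that difference (simplified, not the actual SVM layout code), the two address mappings for equally sized disks can be written as:

```python
# Hypothetical sketch: map a logical block number to (disk index, block
# offset) under concatenation vs. striping. Disk sizes and the interlace
# value here are illustrative, not taken from the configuration above.
def concat_map(block: int, disk_blocks: int) -> tuple:
    # Concatenation: fill one disk completely before using the next.
    return block // disk_blocks, block % disk_blocks

def stripe_map(block: int, ndisks: int, interlace: int) -> tuple:
    # Stripe: interleave fixed-size chunks (the interlace) across all disks.
    stripe_unit = block // interlace              # which chunk overall
    disk = stripe_unit % ndisks                   # chunks rotate across disks
    offset = (stripe_unit // ndisks) * interlace + block % interlace
    return disk, offset

# With 4 disks of 1000 blocks and a 32-block interlace, consecutive logical
# blocks stay on disk 0 in a concat but rotate across all disks in a stripe.
```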
⑤ Create soft partitions on oraset/d110 to hold the Oracle database files
    # metainit oraset/d111 -p d110 50m
oraset/d111: Soft Partition is setup
# metainit oraset/d112 -p d110 50m
oraset/d112: Soft Partition is setup
# metainit oraset/d113 -p d110 50m
oraset/d113: Soft Partition is setup
# metainit oraset/d114 -p d110 1024m
oraset/d114: Soft Partition is setup
# metainit oraset/d115 -p d110 1024m
oraset/d115: Soft Partition is setup
# metainit oraset/d116 -p d110 1024m
oraset/d116: Soft Partition is setup
# metainit oraset/d117 -p d110 1024m
oraset/d117: Soft Partition is setup
# metainit oraset/d118 -p d110 1024m
oraset/d118: Soft Partition is setup
# metainit oraset/d119 -p d110 2048m
oraset/d119: Soft Partition is setup
# metainit oraset/d120 -p d110 2048m
oraset/d120: Soft Partition is setup
# metainit oraset/d121 -p d110 2048m
oraset/d121: Soft Partition is setup
# metainit oraset/d122 -p d110 8192m
oraset/d122: Soft Partition is setup
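Summing the soft-partition sizes above shows how much of oraset/d110 has been handed out (a simple Python tally; the per-device usage comments reflect the raw-device links created for the database files later in this document):

```python
# Soft partitions carved out of oraset/d110 above, in MB.
soft_parts = {
    "d111": 50,   "d112": 50,   "d113": 50,    # control files
    "d114": 1024, "d115": 1024, "d116": 1024,  # sysaux / system / temp
    "d117": 1024, "d118": 1024,                # undo / users
    "d119": 2048, "d120": 2048, "d121": 2048,  # redo logs
    "d122": 8192,                              # flash recovery area
}
total_mb = sum(soft_parts.values())
# 19606 MB requested in total; d110 must be at least this large.
```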
⑥ Change the ownership of the newly created raw devices
    # chown oracle /dev/md/oraset/rdsk/d*
# chgrp dba /dev/md/oraset/rdsk/d*
# chmod 600 /dev/md/oraset/rdsk/d*
# ls -lL /dev/md/oraset/rdsk/d1*
Note: the above must be run on both hosts.
  
9. Install the Oracle 10g Software
① Obtain the Oracle 10g (10.2.0.1.0) media
② Set up the Oracle installation environment
Note: the following must be run on both hosts.
● Create the required groups and user
# groupadd oinstall
# groupadd dba
# useradd -d /export/home/oracle -g oinstall -G dba -m oracle
# passwd oracle
● Create the installation directories
# mkdir /oracle
# mkdir /oracle/oradata
# chown -R oracle:oinstall /oracle/oradata
# chmod 755 /oracle/oradata
● Set the oracle user's environment variables
# su - oracle
$ vi .profile
Add the following:
#        This is the default standard profile provided to a user.
#        They are expected to edit it to meet their own needs.

MAIL=/usr/mail/${LOGNAME}

umask 022
ORACLE_BASE=/oracle;export ORACLE_BASE
ORACLE_HOME=/oracle/product/10.2.0.1.0/db_1
export ORACLE_HOME
ORACLE_SID=orcl;export ORACLE_SID
PATH=$ORACLE_HOME/bin:/usr/bin:/usr/ucb:/etc:/usr/openwin/bin:/usr/ccs/bin
export PATH
③ Install the Oracle software
Note: run on sys-1 only; do not create a database during the installation.
# cd /cdrom/cdrom0
# ./runInstaller
Installation steps omitted...
10. Link the Oracle Database Files to the Raw Devices
Note: run on sys-1 only.
# su - oracle
$ mkdir -p /oracle/oradata/orcl
$ cd /oracle/oradata/orcl
$ ln -s /dev/md/oraset/rdsk/d111 control01.ctl
$ ln -s /dev/md/oraset/rdsk/d112 control02.ctl
$ ln -s /dev/md/oraset/rdsk/d113 control03.ctl
$ ln -s /dev/md/oraset/rdsk/d114 sysaux01.dbf
$ ln -s /dev/md/oraset/rdsk/d115 system01.dbf
$ ln -s /dev/md/oraset/rdsk/d116 temp01.dbf
$ ln -s /dev/md/oraset/rdsk/d117 undotbs01.dbf
$ ln -s /dev/md/oraset/rdsk/d118 users01.dbf
$ ln -s /dev/md/oraset/rdsk/d119 redo01.log
$ ln -s /dev/md/oraset/rdsk/d120 redo02.log
$ ln -s /dev/md/oraset/rdsk/d121 redo03.log
$ mkdir flash_recovery_area
$ ln -s /dev/md/oraset/rdsk/d122 flash_recovery_area
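The same link layout can be sketched and sanity-checked with a few lines of Python (using a temporary directory in place of /oracle/oradata/orcl; the link targets are the metadevices above):

```python
# Sketch: recreate the raw-device link layout in a temporary directory
# and verify each database file name points at its metadevice.
import os
import tempfile

links = {
    "control01.ctl": "d111", "control02.ctl": "d112", "control03.ctl": "d113",
    "sysaux01.dbf": "d114", "system01.dbf": "d115", "temp01.dbf": "d116",
    "undotbs01.dbf": "d117", "users01.dbf": "d118",
    "redo01.log": "d119", "redo02.log": "d120", "redo03.log": "d121",
}

datadir = tempfile.mkdtemp()  # stand-in for /oracle/oradata/orcl
for name, dev in links.items():
    os.symlink(f"/dev/md/oraset/rdsk/{dev}", os.path.join(datadir, name))
```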
11. Create the Oracle Database
Note: run on sys-1 only.
① Log in to sys-1 graphically and run dbca to create the database
# cd /oracle/p*/*/*/bin
# ./dbca


Click Next through the opening screens, then enter:
Database Name: orcl
SID:           orcl
Click Next.



Click Next.

Enter the same password for all accounts (oracle), then click Next.

For Storage Options, select Raw Devices, then click Next.

Flash Recovery Area: {ORACLE_BASE}/oradata/flash_recovery_area
Flash Recovery Size: 4096MB
Click Next.

Accept the selections shown, then click Next.

File name                          File Directory
control01.ctl              {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/
control02.ctl              {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/
control03.ctl              {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/
system01.dbf               {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/
undotbs01.dbf              {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/
sysaux01.dbf               {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/
users01.dbf                {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/
redo01.log                 {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/
redo02.log                 {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/
redo03.log                 {ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/
Click Next.

Click Finish, then click OK.

When creation completes, exit dbca.


12. Create the Listener
Note: run on sys-1 only.
# cd /oracle/p*/*/*/bin
# ./netca
Click Next through the wizard to finish creating the listener.
13. Start the Database, then Tar the Oracle Software Directory on sys-1, FTP It to sys-2, and Extract It

① Start the database on sys-1 as a test
# su - oracle
$ sqlplus "/ as sysdba"
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Apr 23 14:14:12 2006

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 4294967296 bytes
Fixed Size                  1984144 bytes
Variable Size             805312880 bytes
Database Buffers         3472883712 bytes
Redo Buffers               14786560 bytes
Database mounted.
Database opened.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>exit
② Create a database connection user and grant privileges
    SQL> create user oracle identified by oracle;

User created.

SQL> grant connect, resource to oracle;

Grant succeeded.

SQL>
③ Tar the Oracle software directory on sys-1, FTP it to sys-2, and extract it there
Start the database on sys-2 as a test
# su - oracle
$ sqlplus "/ as sysdba"
SQL*Plus: Release 10.2.0.1.0 - Production on Sun Apr 23 14:14:12 2006

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 4294967296 bytes
Fixed Size                  1984144 bytes
Variable Size             805312880 bytes
Database Buffers         3472883712 bytes
Redo Buffers               14786560 bytes
Database mounted.
Database opened.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>exit
④ Edit /oracle/product/10.2.0.1.0/db_1/network/admin/listener.ora
   and /oracle/product/10.2.0.1.0/db_1/network/admin/tnsnames.ora
Note: perform on both hosts.
Replace sys-1 with 192.168.22.17.
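After the substitution, the listener binds to the cluster's logical host address rather than a physical node name, so clients follow the service across failovers. A hypothetical fragment of the resulting files (the port 1521 and the exact layout are assumptions, not copied from the original configuration):

```
# listener.ora (hypothetical fragment; default port 1521 assumed)
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.22.17)(PORT = 1521))
    )
  )

# tnsnames.ora (hypothetical fragment)
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.22.17)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
```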
  
14. Add the Oracle Agent
     Note: add on each of the two hosts separately.
     # ./scinstall


  *** Main Menu ***

    Please select from one of the following (*) options:

        1) Install a cluster or cluster node
        2) Configure a cluster to be JumpStarted from this install server
      * 3) Add support for new data services to this cluster node
      * 4) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  

*** Adding Data Service Software ***


    This option is used to install data services software.

    Where is the data services CD [/cdrom/cdrom0]?  /export/soft/sc-agents-3_1_904-sparc

    Select the data services you want to install:

           Identifier     Description                                       

        1) pax            Sun Cluster HA for AGFA IMPAX
        2) tomcat         Sun Cluster HA for Apache Tomcat
        3) apache         Sun Cluster HA for Apache
        4) wls            Sun Cluster HA for BEA WebLogic Server
        5) dhcp           Sun Cluster HA for DHCP
        6) dns            Sun Cluster HA for DNS
        7) ebs            Sun Cluster HA for Oracle E-Business Suite
        8) mqi            Sun Cluster HA for WebSphere MQ Integrator
        9) mqs            Sun Cluster HA for WebSphere MQ
       10) mys            Sun Cluster HA for MySQL

        n) Next >
        q) Done

    Option(s):  n

    Select the data services you want to install:

           Identifier     Description                                       

       11) sps            Sun Cluster HA for N1 Grid Service Provisioning
       12) nfs            Sun Cluster HA for NFS
       13) netbackup      Sun Cluster HA for NetBackup
       14) 9ias           Sun Cluster HA for Oracle9i Application Server
       15) oracle         Sun Cluster HA for Oracle
       16) sapdb          Sun Cluster HA for SAPDB
       17) sapwebas       Sun Cluster HA for SAP Web Application Server
       18) sap            Sun Cluster HA for SAP
       19) livecache      Sun Cluster HA for SAP liveCache
       20) sge            Sun Cluster HA for Sun Grid Engine

        p) < Previous
        n) Next >
        q) Done

    Option(s):  15
     Selected:  15

    Option(s):  q


    This is the complete list of data services you selected:

        oracle

    Is it correct (yes/no) [yes]?  

    Is it okay to add the software for this data service [yes]  


scinstall -ik -s oracle -d /export/soft/sc-agents-3_1_904-sparc


** Installing Sun Cluster HA for Oracle **
        SUNWscor....done
        SUNWcscor...done
        SUNWjscor...done

   
Press Enter to continue:  s


15. Create the Quorum Devices
      Note: run on sys-1 only.
     # scconf -a -q globaldev=d9
     # scconf -a -q globaldev=d10
     # scconf -c -q reset
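Two node votes plus one vote from each dual-hosted quorum device give four possible votes, and a surviving partition needs a strict majority to keep the cluster up. A one-line calculation (matching the Quorum Summary in the scstat output below):

```python
# Quorum majority: a cluster partition must hold more than half of all
# configured votes to survive. Here: 2 nodes + 2 quorum devices (d9, d10),
# each contributing 1 vote.
node_votes = 2
quorum_device_votes = 2
possible = node_votes + quorum_device_votes
needed = possible // 2 + 1   # strict majority: 3 of 4
```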
16. Create the Resource Group and Add Resources
     Note: run on sys-1 only.
① Register the resource types
         # scrgadm -a -t SUNW.oracle_server
# scrgadm -a -t SUNW.oracle_listener
② Create an empty resource group
# scrgadm -a -g orareg
③ Add the IP resource (create a LogicalHostname resource)
   # scrgadm -a -L -g orareg -l oracle
④ Add the storage resource (create an HAStoragePlus resource)
Note: GlobalDevicePaths takes a single comma-separated list (repeating -x would keep only the last value), and d111, which backs control01.ctl, must be included.
# scrgadm -a -j oradata -g orareg -t SUNW.HAStoragePlus \
-x GlobalDevicePaths=/dev/md/oraset/rdsk/d111,/dev/md/oraset/rdsk/d112,\
/dev/md/oraset/rdsk/d113,/dev/md/oraset/rdsk/d114,\
/dev/md/oraset/rdsk/d115,/dev/md/oraset/rdsk/d116,\
/dev/md/oraset/rdsk/d117,/dev/md/oraset/rdsk/d118,\
/dev/md/oraset/rdsk/d119,/dev/md/oraset/rdsk/d120,\
/dev/md/oraset/rdsk/d121,/dev/md/oraset/rdsk/d122
⑤ Add the application resource (create an oracle_server resource)
# scrgadm -a -j oraser -g orareg \
-t SUNW.oracle_server \
-x ORACLE_SID=orcl \
-x ORACLE_HOME=/oracle/product/10.2.0.1.0/db_1 \
-x Alert_log_file=/oracle/admin/orcl/bdump/alert_orcl.log \
-x Parameter_file=/oracle/admin/orcl/pfile/init.ora \
-x Connect_string=oracle/oracle

⑥ Add the listener resource (create an oracle_listener resource)
# scrgadm -a -j oralistener -g orareg -t SUNW.oracle_listener \
-x ORACLE_HOME=/oracle/product/10.2.0.1.0/db_1 \
-x LISTENER_NAME=LISTENER
⑦ Bring the resource group online
# scswitch -Z -g orareg
⑧ Check the cluster status
    # scstat
------------------------------------------------------------------

-- Cluster Nodes --

                    Node name           Status
                    ---------           ------
  Cluster node:     sys-1          Online
  Cluster node:     sys-2          Online

------------------------------------------------------------------

-- Cluster Transport Paths --

                    Endpoint            Endpoint            Status
                    --------            --------            ------
  Transport path:   sys-1:ce1      sys-2:ce1      Path online
  Transport path:   sys-1:ce0      sys-2:ce0      Path online

------------------------------------------------------------------

-- Quorum Summary --

  Quorum votes possible:      4
  Quorum votes needed:        3
  Quorum votes present:       4


-- Quorum Votes by Node --

                    Node Name           Present Possible Status
                    ---------           ------- -------- ------
  Node votes:       sys-1          1        1       Online
  Node votes:       sys-2          1        1       Online


-- Quorum Votes by Device --

                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/d9s2  1        1       Online
  Device votes:     /dev/did/rdsk/d10s2 1        1       Online

------------------------------------------------------------------

-- Device Group Servers --

                         Device Group        Primary             Secondary
                         ------------        -------             ---------
  Device group servers:  oraset              sys-1          sys-2

-- Device Group Status --

                              Device Group        Status              
                              ------------        ------              
  Device group status:        oraset              Online


-- Multi-owner Device Groups --

                              Device Group        Online Status
                              ------------        -------------

------------------------------------------------------------------

-- Resource Groups and Resources --

            Group Name          Resources
            ----------          ---------
Resources: orareg              oracle oradata oraser oralistener


-- Resource Groups --

            Group Name          Node Name           State
            ----------          ---------           -----
     Group: orareg              sys-1          Online
     Group: orareg              sys-2          Offline


-- Resources --

            Resource Name       Node Name           State     Status Message
            -------------       ---------           -----     --------------
  Resource: oracle              sys-1          Online    Online - LogicalHostname online.
  Resource: oracle              sys-2          Offline   Offline - LogicalHostname offline.

  Resource: oradata             sys-1          Online    Online
  Resource: oradata             sys-2          Offline   Offline

  Resource: oraser              sys-1          Online    Online
  Resource: oraser              sys-2          Offline   Offline

  Resource: oralistener         sys-1          Online    Online
  Resource: oralistener         sys-2          Offline   Offline

------------------------------------------------------------------

-- IPMP Groups --

              Node Name           Group   Status         Adapter   Status
              ---------           -----   ------         -------   ------
  IPMP Group: sys-1          xxml    Online         eri0      Online
  IPMP Group: sys-1          xxml    Online         ce3       Standby

  IPMP Group: sys-2          xxml    Online         eri0      Online
  IPMP Group: sys-2          xxml    Online         ce3       Standby

