Category: Oracle

2013-05-22 12:04:39

 

Installing Grid Infrastructure

[oracle@node1 ~]$ grid_env
+ASM1
[oracle@node1 ~]$ cd  /nfs/oracle11G/i386/grid/
[oracle@node1 grid]$ ls
doc/  install/  readme.html*  response/  rpm/  runcluvfy.sh*  runInstaller*  sshsetup/  stage/  welcome.html*
[oracle@node1 grid]$ ./run
runcluvfy.sh  runInstaller 
[oracle@node1 grid]$ ./runInstaller -ignoreSysPrereqs
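The grid_env call at the top of this session is not an Oracle-supplied command; it is a local helper in the oracle user's profile that switches the environment to the Grid home and echoes the SID. A minimal sketch, assuming the paths used in this install (the actual helper is not shown in this post):

# Hypothetical grid_env helper in ~oracle/.bash_profile (assumed, not part of the Oracle software)
grid_env () {
    export ORACLE_SID=+ASM1
    export ORACLE_HOME=/u01/app/grid/11.2.3
    export PATH=$ORACLE_HOME/bin:/usr/local/bin:/bin:/usr/bin
    echo $ORACLE_SID        # prints "+ASM1", matching the output above
}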
(Screenshots of the OUI installation wizard steps omitted.)
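Before working through the wizard screens, the pre-install checks can also be run with the runcluvfy.sh script listed in the same directory; it typically flags missing required packages and NTP configuration problems like the ones hit later in this install. For example:

[oracle@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose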
Open two terminals and log in as root on each node:

login as: root
Access denied
root@10.101.5.70's password:
Last login: Tue May 21 12:17:46 2013 from 10.101.5.66
[root@node1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@node1 ~]# /u01/app/grid/11.2.3/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/grid/11.2.3
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.3/crs/install/crsconfig_params
Creating trace directory
Failed to create keys in the OLR, rc = 127, Message:
  /u01/app/grid/11.2.3/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory
Failed to create keys in the OLR at /u01/app/grid/11.2.3/crs/install/crsconfig_lib.pm line 7497.
/u01/app/grid/11.2.3/perl/bin/perl -I/u01/app/grid/11.2.3/perl/lib -I/u01/app/grid/11.2.3/crs/install /u01/app/grid/11.2.3/crs/install/rootcrs.pl execution failed
[root@node1 ~]#

Run the same scripts on node2:

login as: root
Access denied
root@10.101.5.71's password:
Last login: Tue May 21 13:04:22 2013 from 10.101.5.66
[root@node2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@node2 ~]# /u01/app/grid/11.2.3/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/grid/11.2.3
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.3/crs/install/crsconfig_params
Creating trace directory
Failed to create keys in the OLR, rc = 127, Message:
  /u01/app/grid/11.2.3/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory
Failed to create keys in the OLR at /u01/app/grid/11.2.3/crs/install/crsconfig_lib.pm line 7497.
/u01/app/grid/11.2.3/perl/bin/perl -I/u01/app/grid/11.2.3/perl/lib -I/u01/app/grid/11.2.3/crs/install /u01/app/grid/11.2.3/crs/install/rootcrs.pl execution failed
[root@node2 ~]#

After the scripts finish, click OK to continue; an error appears.


[INS-41807] The installer has detected that Oracle Clusterware is not running on the following nodes:  node1 node2.

Are you sure you want to continue ?

Cause - Either there was an error in starting the Oracle Clusterware stack on the specified nodes, or the root scripts on the specified nodes were not run.
Action - Run the root scripts on the nodes listed. If root scripts have already been run on these nodes, then examine the log file /u01/app/grid/11.2.3/cfgtoollogs/crsconfig/rootcrs_.log on each failed node to determine the reason for the Oracle Clusterware stack not starting.
Exception Details:
PRCI-1108 : Failed to check CRS running state for CRS home /u01/app/grid/11.2.3 on node node1
PRCT-1003 : Failed to run "crsctl" on node "node1"
PRCI-1108 : Failed to check CRS running state for CRS home /u01/app/grid/11.2.3 on node node2
PRCT-1003 : Failed to run "crsctl" on node "node2"

Check the log:

more /u01/app/grid/11.2.3/cfgtoollogs/crsconfig/rootcrs_node1.log

All the logs show the same output:

2013-05-21 13:51:42: Executing cmd: /u01/app/grid/11.2.3/bin/clscfg -localadd
2013-05-21 13:51:42: Command output:
>  /u01/app/grid/11.2.3/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory
>End Command output
2013-05-21 13:51:42: '/u01/app/grid/11.2.3/bin/clscfg -localadd' - successful
2013-05-21 13:51:42: Failed to create keys in the OLR, rc = 127, Message:
  /u01/app/grid/11.2.3/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory
2013-05-21 13:51:42: Running as user oracle: /u01/app/grid/11.2.3/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_OLR -state FAIL
2013-05-21 13:51:42: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/11.2.3/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_OLR -state FAIL '
2013-05-21 13:51:42: Removing file /tmp/filejzyWKa
2013-05-21 13:51:42: Successfully removed file: /tmp/filejzyWKa
2013-05-21 13:51:42: /bin/su successfully executed
2013-05-21 13:51:42: Succeeded in writing the checkpoint:'ROOTCRS_OLR' with status:FAIL
2013-05-21 13:51:42: CkptFile: /u01/app/oracle/Clusterware/ckptGridHA_node1.xml
2013-05-21 13:51:42: Sync the checkpoint file '/u01/app/oracle/Clusterware/ckptGridHA_node1.xml'
2013-05-21 13:51:42: Sync '/u01/app/oracle/Clusterware/ckptGridHA_node1.xml' to the physical disk
2013-05-21 13:51:42: ###### Begin DIE Stack Trace ######
2013-05-21 13:51:42:     Package         File                 Line Calling
2013-05-21 13:51:42:     --------------- -------------------- ---- ----------
2013-05-21 13:51:42:  1: main            rootcrs.pl            375 crsconfig_lib::dietrap
2013-05-21 13:51:42:  2: crsconfig_lib   crsconfig_lib.pm     7497 main::__ANON__
2013-05-21 13:51:42:  3: crsconfig_lib   crsconfig_lib.pm     7397 crsconfig_lib::olr_initial_config
2013-05-21 13:51:42:  4: main            rootcrs.pl            674 crsconfig_lib::perform_olr_config
2013-05-21 13:51:42: ####### End DIE Stack Trace #######
2013-05-21 13:51:42: 'ROOTCRS_OLR' checkpoint has failed
2013-05-21 13:51:42: Running as user oracle: /u01/app/grid/11.2.3/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_OLR -state FAIL
2013-05-21 13:51:42: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/grid/11.2.3/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_OLR -state FAIL '
2013-05-21 13:51:43: Removing file /tmp/fileqQjNUI
2013-05-21 13:51:43: Successfully removed file: /tmp/fileqQjNUI
2013-05-21 13:51:43: /bin/su successfully executed
2013-05-21 13:51:43: Succeeded in writing the checkpoint:'ROOTCRS_OLR' with status:FAIL
2013-05-21 13:51:43: CkptFile: /u01/app/oracle/Clusterware/ckptGridHA_node1.xml
2013-05-21 13:51:43: Sync the checkpoint file '/u01/app/oracle/Clusterware/ckptGridHA_node1.xml'
2013-05-21 13:51:43: Sync '/u01/app/oracle/Clusterware/ckptGridHA_node1.xml' to the physical disk
[root@node1 ~]#

[root@node1 ~]# rpm -qa|grep libcap
libcap-ng-0.6.4-3.el6_0.1.i686
libcap-2.16-5.5.el6.i686
[root@node1 ~]# rpm -ql libcap|grep libcap.so.1
[root@node1 ~]#

[root@node1 ~]# yum provides libcap.so.1
Loaded plugins: security
compat-libcap1-1.10-1.i686 : Library for getting and setting POSIX.1e capabilities
Repo        : ol6_latest
Matched from:
Other       : libcap.so.1
[root@node1 ~]# rpm -qa|grep compat-libcap

[root@node1 ~]#

A package is missing; install it on all nodes:

[root@node1 ~]# yum install compat-libcap1
Loaded plugins: security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package compat-libcap1.i686 0:1.10-1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================================================================================================
Package                                         Arch                                  Version                                 Repository                                   Size
=================================================================================================================================================================================
Installing:
compat-libcap1                                  i686                                  1.10-1                                  ol6_latest                                   16 k
Transaction Summary
=================================================================================================================================================================================
Install       1 Package(s)
Total download size: 16 k
Installed size: 24 k
Is this ok [y/N]: y
Downloading Packages:
compat-libcap1-1.10-1.i686.rpm                                                                                                                            |  16 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Installing : compat-libcap1-1.10-1.i686                                                                                                                                    1/1
  Verifying  : compat-libcap1-1.10-1.i686                                                                                                                                    1/1
Installed:
  compat-libcap1.i686 0:1.10-1
Complete!
[root@node1 ~]#
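Since root.sh has to succeed on every node, it is worth confirming the library is now present on both of them. A quick check, assuming root ssh between the nodes (the exact library path may differ slightly by release):

[root@node1 ~]# for n in node1 node2; do echo "== $n =="; ssh $n 'rpm -q compat-libcap1; ls -l /lib/libcap.so.1'; done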

Click No first, then click Retry and re-run the root scripts.


(Screenshots of the retry dialogs omitted.)
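In this case root.sh simply resumed from the failed ROOTCRS_OLR checkpoint once the package was installed. If it did not resume cleanly, 11.2 lets the partial configuration be rolled back before re-running root.sh; a commonly used sequence, shown here only as a sketch and run as root on the affected node, is:

[root@node1 ~]# /u01/app/grid/11.2.3/perl/bin/perl -I/u01/app/grid/11.2.3/perl/lib -I/u01/app/grid/11.2.3/crs/install /u01/app/grid/11.2.3/crs/install/rootcrs.pl -deconfig -force
[root@node1 ~]# /u01/app/grid/11.2.3/root.sh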




[root@node1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@node1 ~]# /u01/app/grid/11.2.3/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/grid/11.2.3
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.3/crs/install/crsconfig_params
OLR initialization - successful

  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'
CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'
CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2672: Attempting to start 'ora.gipcd' on 'node1'
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
ASM created and started successfully.
Disk Group OCR created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 4433c865f4da4f25bf172a60fcfac49c.
Successful addition of voting disk 88ac6a4f38124fe2bf7f7bfc31b43e5c.
Successful addition of voting disk 14536de7c7a84f7fbf39508ef52e5e6f.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   4433c865f4da4f25bf172a60fcfac49c (/dev/asm-diskf) [OCR]
2. ONLINE   88ac6a4f38124fe2bf7f7bfc31b43e5c (/dev/asm-diski) [OCR]
3. ONLINE   14536de7c7a84f7fbf39508ef52e5e6f (/dev/asm-diskj) [OCR]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'node1'
CRS-2676: Start of 'ora.asm' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.OCR.dg' on 'node1'
CRS-2676: Start of 'ora.OCR.dg' on 'node1' succeeded

Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@node1 ~]#

[root@node2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@node2 ~]#

[root@node2 ~]# /u01/app/grid/11.2.3/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/grid/11.2.3
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.3/crs/install/crsconfig_params
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node node1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@node2 ~]#
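With root.sh finished on both nodes, the stack can be sanity-checked from either node before returning to the installer, for example:

[root@node1 ~]# /u01/app/grid/11.2.3/bin/crsctl check cluster -all     # CRS/CSS/EVM status on every node
[root@node1 ~]# /u01/app/grid/11.2.3/bin/crsctl stat res -t            # cluster resources in table form
[root@node1 ~]# /u01/app/grid/11.2.3/bin/ocrcheck                      # OCR integrity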

At around 100% another error appears.


(Screenshots of the error dialog omitted.)

Click Details to see more:

Cause - The plug-in failed in its perform method 

Action - Refer to the logs or contact Oracle Support Services. 

Log File Location
/u01/app/oraInventory/logs/installActions2013-05-21_12-56-33PM.log

[root@node1 ~]# more /u01/app/oraInventory/logs/installActions2013-05-21_12-56-33PM.log 

The last few lines:

INFO: Query of CTSS for time offset passed
INFO: Check CTSS state started...
INFO: CTSS is in Observer state. Switching over to clock synchronization checks using NTP
INFO: Starting Clock synchronization checks using Network Time Protocol(NTP)...
INFO: NTP Configuration file check started...
INFO: NTP Configuration file check passed
INFO: Checking daemon liveness...
INFO: Liveness check failed for "ntpd"
INFO: Check failed on nodes:
INFO:   node2,node1
INFO: PRVF-5494 : The NTP Daemon or Service was not alive on all nodes
INFO: PRVF-5415 : Check to see if NTP daemon or service is running failed
INFO: Clock synchronization check using Network Time Protocol(NTP) failed
INFO: PRVF-9652 : Cluster Time Synchronization Services check failed
INFO: Checking VIP configuration.
INFO: Checking VIP Subnet configuration.
INFO: Check for VIP Subnet configuration passed.
INFO: Checking VIP reachability
INFO: Check for VIP reachability passed.
INFO: Post-check for cluster services setup was unsuccessful on all the nodes.
INFO:
WARNING:
INFO: Completed Plugin named: Oracle Cluster Verification Utility
[root@node1 ~]#

I forgot to disable the OS NTP service.

[root@node1 ~]# chkconfig --list|grep ntp
ntpd            0:off   1:off   2:off   3:off   4:off   5:off   6:off
ntpdate         0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@node1 ~]# ll /etc/|grep ntp
drwxr-xr-x.  3 root   root       4096 May 17 16:03 ntp
-rw-r--r--.  1 root   root       1917 Feb 24 06:32 ntp.conf
[root@node1 ~]#
[root@node1 ~]# mv /etc/ntp.conf /etc/ntp.conf.orig
[root@node1 ~]#
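Because ntpd was already switched off at every runlevel, moving ntp.conf aside is enough for CTSS to take over in active mode; the same needs to be done on node2. The alternative, if NTP is to be kept, is to run ntpd with the -x (slew) option as the 11.2 install guide requires. A sketch of both options:

# Option A (used here) - let CTSS handle time sync; repeat on every node
service ntpd stop                      # harmless if it is already stopped
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.orig

# Option B (alternative) - keep NTP but force slewing in /etc/sysconfig/ntpd, e.g.
#   OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# then restart the daemon: service ntpd restart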

Another error appears.


Check the log:

INFO: Checking existence of ONS node application (optional)
INFO: ONS node application check passed
INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking TCP connectivity to SCAN Listeners...
INFO: TCP connectivity to SCAN Listeners exists on all cluster nodes
INFO: Checking name resolution setup for "rac-scan1.momo.org"...
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan1.momo.org" (IP address: 208.87.35.103) failed
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan1.momo.org" (IP address: 10.101.5.77) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan1.momo.org"
INFO: Verification of SCAN VIP and Listener setup failed
INFO: Checking OLR integrity...
INFO: Checking OLR config file...
INFO: OLR config file check successful
INFO: Checking OLR file attributes...
INFO: OLR file check successful
INFO: WARNING:
INFO: This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
INFO: OLR integrity check passed
INFO: User "oracle" is not part of "root" group. Check passed
INFO: Checking if Clusterware is installed on all nodes...
INFO: Check of Clusterware install passed
INFO: Checking if CTSS Resource is running on all nodes...
INFO: CTSS resource check passed
INFO: Querying CTSS for time offset on all nodes...
INFO: Query of CTSS for time offset passed
INFO: Check CTSS state started...
INFO: CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
INFO: PRVF-9661 :  Time offset is greater than acceptable limit on node "node2" [actual = "-152000.0", acceptable = "1000.0" ]
INFO: PRVF-9652 : Cluster Time Synchronization Services check failed
INFO: Checking VIP configuration.
INFO: Checking VIP Subnet configuration.
INFO: Check for VIP Subnet configuration passed.
INFO: Checking VIP reachability
INFO: Check for VIP reachability passed.
INFO: Post-check for cluster services setup was unsuccessful on all the nodes.
INFO:
WARNING:
INFO: Completed Plugin named: Oracle Cluster Verification Utility
[root@node1 ~]#
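The log also flags a clock offset of roughly 152 seconds on node2 (assuming the reported values are in milliseconds), well beyond the 1000 ms limit of the check. A simple way to clear this is to set node2's clock from node1 and re-run the clock synchronization check; a sketch, assuming root ssh from node2 to node1:

[root@node2 ~]# date -s "$(ssh node1 date)"
[root@node2 ~]# su - oracle -c '/u01/app/grid/11.2.3/bin/cluvfy comp clocksync -n node1,node2 -verbose'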

Temporarily disable DNS:

[root@node1 ~]# nslookup
> rac-scan1.momo.org
Server:         10.101.0.16
Address:        10.101.0.16#53
Non-authoritative answer:
Name:   rac-scan1.momo.org
Address: 208.87.35.103
> ^C[root@node1 ~]#
[root@node1 ~]#
[root@node1 ~]# vi /etc/resolv.conf
[root@node1 ~]#
[root@node1 ~]# cd /etc/
[root@node1 etc]#
[root@node1 etc]# mv resolv.conf resolv.conf.orig
[root@node1 etc]#
[root@node1 etc]#
[root@node1 etc]# nslookup
> rac-scan1.momo.org
^C
[root@node1 etc]#

Do the same on node2.

Retry still fails:

INFO: GSD node application is offline on nodes "node2,node1"
INFO: Checking existence of ONS node application (optional)
INFO: ONS node application check passed
INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking TCP connectivity to SCAN Listeners...
INFO: TCP connectivity to SCAN Listeners exists on all cluster nodes
INFO: Checking name resolution setup for "rac-scan1.momo.org"...
INFO: ERROR:
INFO: PRVG-1101 : SCAN name "rac-scan1.momo.org" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "rac-scan1.momo.org" (IP address: 10.101.5.77) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan1.momo.org"
INFO: Verification of SCAN VIP and Listener setup failed
INFO: Checking OLR integrity...
INFO: Checking OLR config file...
INFO: OLR config file check successful

(Screenshots of the remaining verification dialogs omitted.)

It seems that passing this check requires the SCAN name to be properly resolved through DNS, so skip it for now.
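To pass the check later, the DNS record for rac-scan1.momo.org needs to point at the cluster SCAN address (10.101.5.77 here, ideally three addresses) instead of the outside address 208.87.35.103 seen earlier, and the temporary resolv.conf rename should then be undone on both nodes. The result can be verified without re-running the installer, for example:

[root@node1 ~]# mv /etc/resolv.conf.orig /etc/resolv.conf      # undo the temporary change, on both nodes
[root@node1 ~]# nslookup rac-scan1.momo.org                    # should now return only the cluster address(es)
[root@node1 ~]# su - oracle -c '/u01/app/grid/11.2.3/bin/cluvfy comp scan -verbose'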

(Screenshots of the final installer screens omitted.)
Grid installation is complete.