http://blog.chinaunix.net/uid/16979052.html
Category: Oracle
2013-08-05 08:56:58
Original post: Common OCFS2 filesystem problems and solutions (1), by zhshujun
Symptom 1:
mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /webdata
mount.ocfs2: Transport endpoint is not connected while mounting /dev/sdb1 on /webdata. Check 'dmesg' for more information on this error.
Possible causes:
1: The firewall is running and blocking the heartbeat port.
2: The nodes were given different answers in /etc/init.d/o2cb configure.
3: One node already has the volume mounted while the other was just configured and has restarted its ocfs2 service; simply restart the service on both nodes and the mount will complete.
4: SELinux has not been disabled. (Commands for checking causes 1 and 4 are sketched after this list.)
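A minimal checklist for causes 1 and 4, assuming a RHEL/CentOS 5-era system like the ones in this post (adjust the service names to your distribution):
# open the heartbeat path by stopping the firewall (o2cb uses port 7777 by default)
service iptables stop
chkconfig iptables off
# put SELinux in permissive mode now, and disable it permanently for the next boot
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# then re-run the interactive setup on every node with identical answers
/etc/init.d/o2cb configure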
Here is a case:
[root@test02 ~]# mount -t ocfs2 /dev/vg_ocfs/lv_u02 /u02
mount.ocfs2: Transport endpoint is not connected while mounting /dev/vg_ocfs/lv_u02 on /u02. Check 'dmesg' for more information on this error.
This error was caused by the nodes having different O2CB_HEARTBEAT_THRESHOLD values when OCFS was configured. By the time I ran /etc/init.d/o2cb configure the values were actually identical on every node, but I had forgotten to restart o2cb on the first node, and it took a long time to find that. The next step was of course to unmount the already mounted OCFS directory, which failed as well:
[root@test01 u02]# umount -f /u02
umount2: Device or resource busy
umount: /u02: device is busy
umount2: Device or resource busy
umount: /u02: device is busy
At this point you have to stop OCFS2 and O2CB with /etc/init.d/ocfs2 stop and /etc/init.d/o2cb stop before the umount will succeed; after starting OCFS2 and O2CB again, the nodes can mount the OCFS volume normally. The full sequence is sketched below.
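On the stuck node the sequence looks like this (device and mount point are from the case above; the order follows the author's note):
/etc/init.d/ocfs2 stop                 # stop the ocfs2 service first
/etc/init.d/o2cb stop                  # then stop the cluster stack
umount /u02                            # the unmount now succeeds without -f
/etc/init.d/o2cb start
/etc/init.d/ocfs2 start
mount -t ocfs2 /dev/vg_ocfs/lv_u02 /u02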
Symptom 2:
# /etc/init.d/o2cb online ocfs2
Starting cluster ocfs2: Failed
Cluster ocfs2 created
o2cb_ctl: Configuration error discovered while populating cluster ocfs2. None of its nodes were considered local. A node is considered local when its node name in the configuration matches this machine's host name.
Stopping cluster ocfs2: OK
This is a hostname problem. Check /etc/ocfs2/cluster.conf and /etc/hosts, and correct the hostname accordingly.
Note: for the ocfs2 filesystem to mount automatically at boot, besides adding the automount entry to /etc/fstab you must add both nodes' hostname-to-IP mappings to /etc/hosts, and those hostnames must exactly match the ones configured in /etc/ocfs2/cluster.conf. A hypothetical two-node example follows.
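A hypothetical two-node configuration (the node names and addresses here are made up; the file must be identical on every node, and in the real file each key line must be indented with a tab):
# /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 0
        name = test01
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = test02
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2

# matching /etc/hosts entries -- hostnames must equal the name = values above
192.168.1.101   test01
192.168.1.102   test02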
Symptom 3:
Starting O2CB cluster ocfs2: Failed
After installing ocfs2, configuring o2cb reports an error:
[root@rac1 ocfs2]# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [y]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [7]:
Writing O2CB configuration: OK
Starting O2CB cluster ocfs2: Failed
Cluster ocfs2 created
o2cb_ctl: Configuration error discovered while populating cluster ocfs2. None of its nodes were considered local. A node is considered local when its node name in the configuration matches this machine's host name.
Stopping O2CB cluster ocfs2: OK
When this happens, OCFS has most likely not been configured yet. There is a graphical ocfs configuration tool; configure the cluster with it first, and preferably use IP addresses rather than hostnames!
In other words, the ocfs node configuration file must be set up correctly before ocfs2 is started, otherwise it fails with this error. Also, when configuring through the graphical tool, /etc/ocfs2/cluster.conf should preferably start out as an empty file, or the tool will report errors too! A sketch of the cleanup and reconfiguration follows.
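A sketch of that cleanup, assuming the default config path:
# back up and remove the stale file so the GUI starts from a clean slate
mv /etc/ocfs2/cluster.conf /etc/ocfs2/cluster.conf.bak
# configure the nodes in the GUI (Cluster -> Configure Nodes...) and push the
# result to the other nodes (Cluster -> Propagate Configuration...)
ocfs2console
# finally re-run the interactive setup so o2cb picks up the new cluster
/etc/init.d/o2cb configure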
Symptom 4:
When mounting an ocfs2 filesystem, the following errors appear:
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
mount -t ocfs2 -o datavolume /dev/sdb1 /u02/oradata/orcl
ocfs2_hb_ctl: Bad magic number in superblock while reading uuid
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
This error is caused by the partition for the ocfs2 filesystem not having been formatted. Before an ocfs2 filesystem can be mounted, the partition backing it must first be formatted (sketched below).
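A minimal formatting example using this post's device; the block size, cluster size, slot count and label are common choices rather than values from the original, so treat them as a sketch:
# format on ONE node only -- this destroys any existing data on /dev/sdb1
mkfs.ocfs2 -b 4K -C 32K -N 2 -L webdata /dev/sdb1
# the mount then succeeds
mount -t ocfs2 -o datavolume /dev/sdb1 /u02/oradata/orcl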
Symptom 5:
Configuration assistant "Cluster Verification Utility" failed
A question about a 10g RAC install: Oracle 10.2.0.1 on Solaris 5.9, two nodes. The final step of the CRS installation reports an error; how can it be resolved?
Log output:
INFO: Configuration assistant "Oracle Cluster Verification Utility" failed
-----------------------------------------------------------------------------
*** Starting OUICA ***
Oracle Home set to /orabase/product/10.2
Configuration directory is set to /orabase/product/10.2/cfgtoollogs. All xml files under the directory will be processed
INFO: The "/orabase/product/10.2/cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
-----------------------------------------------------------------------------
SEVERE: OUI-25031:Some of the configuration assistants failed. It is strongly recommended that you retry the configuration assistants at this time. Not successfully running any "Recommended" assistants means your system will not be correctly configured.
1. Check the Details panel on the Configuration Assistant Screen to see the errors resulting in the failures.
2. Fix the errors causing these failures.
3. Select the failed assistants and click the 'Retry' button to retry them.
INFO: User Selected: Yes/OK
This is caused by the VIP addresses not being started. After running the orainstRoot.sh and root.sh commands, open a new window and run vipca; once all the CRS services are up, run the final verify step again. It is worth a try.
Go to the CRS bin directory and run crs_stat -t to see whether all the services are up; in this situation it is usually the VIP that has not started. The steps are sketched below.
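A sketch of that sequence, using the Oracle home from the log above as the CRS home (substitute your own path):
# in a fresh root session after orainstRoot.sh and root.sh have completed
cd /orabase/product/10.2/bin
./vipca          # GUI assistant that creates the VIP, GSD and ONS resources
./crs_stat -t    # every resource, including ora.*.vip, should show ONLINE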
Symptom 6:
Failed to upgrade Oracle Cluster Registry configuration
While installing CRS, running ./root.sh on the second node produced the messages below, although it ran fine on the first node. Any pointers would be greatly appreciated, thanks!
[root@RACtest2 crs]# ./root.sh
WARNING: directory '/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/app/oracle/product' is not owned by root
WARNING: directory '/app/oracle' is not owned by root
WARNING: directory '/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
PROT-1: Failed to initialize ocrconfig
Failed to upgrade Oracle Cluster Registry configuration
Cause of the error:
The permissions on the devices used to install CRS are wrong. For example, my setup places the OCR and voting disk on raw devices, so the permissions on those devices and on the files linked to them must be set correctly. Here is my environment:
[root@rac2 oracrs]# ls -l
lrwxrwxrwx 1 root root 13 Jan 27 12:49 ocr.crs -> /dev/raw/raw1
lrwxrwxrwx 1 root root 13 Jan 26 13:31 vote.crs -> /dev/raw/raw2
chown root:oinstall /dev/raw/raw1
chown root:oinstall /dev/raw/raw2
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
Here /dev/sdb1 holds the OCR and /dev/sdb2 holds the voting disk.
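On RHEL 4-era systems the raw bindings come from /etc/sysconfig/rawdevices (a sketch assuming that layout). Note that the chown/chmod above must be reapplied after every boot, for example from /etc/rc.local, because raw device permissions do not persist across reboots:
# /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2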
[root@rac2 oracrs]# service rawdevices reload
Assigning devices:
/dev/raw/raw1 --> /dev/sdb1
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2 --> /dev/sdb2
/dev/raw/raw2: bound to major 8, minor 18
Done
Then run it again and everything is OK:
[root@rac2 oracrs]# /oracle/app/oracle/product/crs/root.sh
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 priv1 rac1
node 2: rac2 priv2 rac2
clscfg: Arguments check out successfully