
Category: System Operations

2011-01-06 10:30:40

This infodoc covers the scenario in which a Sun[TM] Cluster 3.x node reboots or panics and comes up during boot with a corrupt /global file system requiring fsck. The procedure assumes that the other cluster node(s) are up and running and their global filesystems are healthy.
 

During boot you see messages like this:

WARNING - Unable to repair the /global/.devices/node@2 filesystem.
Run fsck manually (fsck -F ufs /dev/rdsk/c0t0d8s6).     


If fsck cannot repair the filesystem, you can follow this procedure to recreate the global filesystem:

The mount point entry in /etc/vfstab looks similar to this:

/dev/did/dsk/d8s6 /dev/did/rdsk/d8s6 /global/.devices/node@2 ufs 2 no global
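The second field of that vfstab entry is the raw DID device that the newfs and fsck steps below operate on. A minimal sketch of pulling it out with awk; the file /tmp/vfstab.sample here is an illustrative stand-in for the node's real /etc/vfstab:

```shell
# Create a stand-in copy of the vfstab entry (on a real node you would
# read /etc/vfstab directly).
cat <<'EOF' > /tmp/vfstab.sample
/dev/did/dsk/d8s6 /dev/did/rdsk/d8s6 /global/.devices/node@2 ufs 2 no global
EOF

# Print field 2 (the raw device) of the global-devices entry.
awk '/\/global\/\.devices\/node@/ { print $2 }' /tmp/vfstab.sample
```

On the node in the example this prints /dev/did/rdsk/d8s6, which is exactly the device used in steps 2 and 3 below.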

If your global-devices file system resides on a Veritas Volume Manager volume, follow doc ID 79281 instead, which explains how to rebuild /global/.devices/node@X when the boot disk is encapsulated.

1. Boot to single user mode outside of the cluster.

ok> boot -sx     


2. Run newfs as follows.

# newfs /dev/did/rdsk/d8s6
newfs: /dev/did/rdsk/d8s6 last mounted as /global/.devices/node@2
newfs: construct a new file system /dev/did/rdsk/d8s6: (y/n)? y
/dev/did/rdsk/d8s6: 205200 sectors in 135 cylinders of 19 tracks, 80 sectors
 100.2MB in 9 cyl groups (16 c/g, 11.88MB/g, 5696 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 24432, 48832, 73232, 97632, 122032, 146432, 170832, 195232,     
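The superblock backup list printed by newfs is worth noting: if fsck later reports a damaged primary superblock, you can point it at one of these backups with the standard UFS `-o b=#` option. A small sketch that assembles such a command (the device path and the choice of backup 32, the first copy, are taken from the example above):

```shell
# Raw DID device from the vfstab entry, and the first alternate
# superblock from the newfs output above.
DEVICE=/dev/did/rdsk/d8s6
ALT_SB=32

# Build the fsck invocation that uses the alternate superblock.
CMD="fsck -F ufs -o b=${ALT_SB} ${DEVICE}"
echo "$CMD"
```

Printing rather than executing the command keeps the sketch safe to run anywhere; on the affected node you would run the resulting command directly.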


3. Perform the fsck.

# fsck /dev/did/rdsk/d8s6
** /dev/did/rdsk/d8s6
** Last Mounted on
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
2 files, 9 used, 96022 free (14 frags, 12001 blocks, 0.0% fragmentation)     


4. Reboot the node into the cluster.

# reboot     


When the node rejoins the cluster, it rebuilds its global-devices namespace. During the reboot you will see messages like this:

obtaining access to all attached disks
Configuring the /dev/global directory (global devices)     


Once the system is back in the cluster, you will see that the global devices are now identical on both nodes. This procedure is usually faster than recovering from tape and safer than trying to manually repair a corrupt global-devices filesystem. Again, this applies when one node's global-devices filesystem is corrupt as it tries to join the cluster while the other nodes are healthy.

If your Sun Cluster is experiencing problems with a global devices filesystem while all nodes are in the cluster, use the scgdevs utility to clean it up.
