                         Installing the GFS Cluster File System on RHEL5
GFS is short for Red Hat's Global File System. It is a cluster file system that provides concurrent read and write access, at the file-system level, to an underlying shared block device. Unlike a traditional NAS design, the GFS file-system layer runs over a high-bandwidth FC network (a NAS file-system layer rides on an ordinary IP network), so GFS can take much better advantage of the performance of a SAN storage architecture. GFS offers three deployment architectures:
Superior performance and scalability

                   GFS with a SAN
As shown in Figure 1, multiple GFS nodes attach directly to the SAN storage through FC switches, and the applications run on the GFS nodes themselves, avoiding the file-server bottlenecks and IP-network latency of a traditional NAS design. This architecture can support up to 300 GFS nodes.

Balanced performance, scalability, and price

               GFS and GNBD with a SAN
As shown in Figure 2, this approach spans both the IP and FC protocols: the SAN storage is exported onto the ordinary IP network through GNBD (Global Network Block Device). That may look similar to NAS, but the differences are substantial. First, a NAS has a single file server (an HA pair is possible), whereas GFS can have many GNBD servers in a load-balanced arrangement (load balancing implies HA). Second, what a NAS sends over the IP network are file-system-level operations, which are expensive in system resources, while GFS sends a lower-level, iSCSI-like protocol over IP: data blocks are simply encapsulated in IP packets, which is more efficient than NAS. Each GNBD server keeps a thread pool in memory whose threads map requests to blocks on the SAN storage, so every SAN storage unit can be reached through any GNBD server, giving both load balancing and HA (a brief gnbd command sketch follows the three architectures below).
Economical, with reasonable performance

           GFS and GNBD with Directly Connected Storage
As shown in Figure 3, the big difference from Figure 2 is that there is no SAN: storage is direct-attached (DAS). This is easy to build in an ordinary environment, but you have to plan redundancy for each GNBD server and its DAS yourself.
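
For reference, since both GNBD-based layouts use the same tools, a minimal export/import sketch looks roughly like this. The export name gfsdisk, the device /dev/sdb1 and the server name gnbdserver are placeholders; GNBD is not used in the rest of this article.

# On the GNBD server (requires the gnbd and kmod-gnbd packages):
gnbd_export -e gfsdisk -d /dev/sdb1

# On each GFS node:
modprobe gnbd
gnbd_import -i gnbdserver      # the device then appears as /dev/gnbd/gfsdisk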

With only a few nodes, the second approach has no real advantage and introduces a GNBD single point of failure (unless you run multiple GNBD servers or put them in an HA pair). Given my requirements I chose the first approach, which avoids the GNBD single point of failure altogether.
1. Hardware environment
   3 x HP DL380 G3
   Configuration: 2 x Xeon(TM) 2.80 GHz / 2 GB RAM / 3 x 36 GB (RAID 5)
   OS: Red Hat Enterprise Linux Server release 5.3 (Tikanga), 32-bit
   Kernel: 2.6.18-128.el5xen
   Hosts:
   1. Hostname      IP address
      gfs1       192.168.0.21
      gfs2       192.168.0.22
      gfs3       192.168.0.23
   2. fence_ilo resources
      Fence name   IP address      Login user      Password
      gfs1ilo     192.168.0.11    Administrator   123456
      gfs2ilo     192.168.0.12    Administrator   123456
      gfs3ilo     192.168.0.13    Administrator   123456
   Software versions:
cman-2.0.98-1.el5.i386.rpm
gfs2-utils-0.1.53-1.el5.i386.rpm
gfs-utils-0.1.18-1.el5.i386.rpm
ipvsadm-1.24-8.1.i386.rpm
kmod-gfs-0.1.31-3.el5.i686.rpm (not needed here, since these machines run the Xen kernel; kmod-gfs-xen below is used instead)
kmod-gfs-xen-0.1.31-3.el5.i686.rpm
openais-0.80.3-22.el5.i386.rpm
perl-Net-Telnet-3.03-5.noarch.rpm
perl-XML-LibXML-1.58-5.i386.rpm
perl-XML-LibXML-Common-0.13-8.2.2.i386.rpm
perl-XML-NamespaceSupport-1.09-1.2.1.noarch.rpm
perl-XML-SAX-0.14-5.noarch.rpm
pexpect-2.3-1.el5.noarch.rpm
piranha-0.8.4-11.el5.i386.rpm
rgmanager-2.0.46-1.el5.centos.i386.rpm
system-config-cluster-1.0.55-1.0.noarch.rpm

2. Red Hat Cluster and GFS installation steps
The packages depend on each other; install them in the following order:

rpm -ivh perl-Net-Telnet-3.03-5.noarch.rpm
rpm -ivh perl-XML-SAX-0.14-5.noarch.rpm
rpm -ivh perl-XML-NamespaceSupport-1.09-1.2.1.noarch.rpm
rpm -ivh perl-XML-LibXML-Common-0.13-8.2.2.i386.rpm
rpm -ivh perl-XML-LibXML-1.58-5.i386.rpm
rpm -ivh pexpect-2.3-1.el5.noarch.rpm
rpm -ivh openais-0.80.3-22.el5.i386.rpm
rpm -ivh ipvsadm-1.24-8.1.i386.rpm
rpm -ivh piranha-0.8.4-11.el5.i386.rpm
rpm -ivh gfs2-utils-0.1.53-1.el5.i386.rpm
rpm -ivh gfs-utils-0.1.18-1.el5.i386.rpm
rpm -ivh kmod-gfs-xen-0.1.31-3.el5.i686.rpm
rpm -ivh cman-2.0.98-1.el5.i386.rpm
rpm -ivh rgmanager-2.0.46-1.el5.centos.i386.rpm

rpm -ivh system-config-cluster-1.0.55-1.0.noarch.rpm
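
Before moving on, it is worth confirming that the key packages are in place:

rpm -q cman gfs-utils gfs2-utils kmod-gfs-xen openais rgmanager system-config-cluster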


Set up /etc/hosts

vi /etc/hosts and add:

192.168.0.23            gfs3
192.168.0.22            gfs2
192.168.0.21            gfs1
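
These entries must be identical on all three nodes, and each node's own hostname must match its entry, which is easy to verify:

hostname                                  # should print gfs1, gfs2 or gfs3
for h in gfs1 gfs2 gfs3; do ping -c 1 $h; done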


Set up the cluster configuration file

vi /etc/cluster/cluster.conf and add:
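
A cluster.conf matching the hosts and fence_ilo devices listed in section 1 looks roughly like this; adjust node names, addresses and passwords to your own environment:

<?xml version="1.0"?>
<cluster name="alpha_cluster" config_version="1">
        <cman/>
        <clusternodes>
                <clusternode name="gfs1" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="gfs1ilo"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="gfs2" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="gfs2ilo"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="gfs3" nodeid="3" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="gfs3ilo"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_ilo" name="gfs1ilo" hostname="192.168.0.11" login="Administrator" passwd="123456"/>
                <fencedevice agent="fence_ilo" name="gfs2ilo" hostname="192.168.0.12" login="Administrator" passwd="123456"/>
                <fencedevice agent="fence_ilo" name="gfs3ilo" hostname="192.168.0.13" login="Administrator" passwd="123456"/>
        </fencedevices>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>

The same file must exist on every node before cman is started; with root SSH access between the nodes, a simple copy is enough:

scp /etc/cluster/cluster.conf gfs2:/etc/cluster/
scp /etc/cluster/cluster.conf gfs3:/etc/cluster/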


Test the fence devices

fence_ilo -a 192.168.0.11 -l Administrator -p 123456 -o status

Status: ON

fence_ilo -a 192.168.0.12 -l Administrator -p 123456 -o status

Status: ON

fence_ilo -a 192.168.0.13 -l Administrator -p 123456 -o status

Status: ON

This shows that the fence devices on all three servers are working.
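
If you want to exercise a full fence action before relying on it, the same agent can also power-cycle a node; only run this against a node you are willing to have rebooted:

fence_ilo -a 192.168.0.13 -l Administrator -p 123456 -o reboot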


Start the cluster services

[root@gfs1 ~]# service cman start
Starting cluster:
   Enabling workaround for Xend bridged networking... done
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]

[root@gfs1 ~]# service rgmanager start     

    Start both cman and rgmanager on each of the three servers.

Show the cluster status

[root@gfs1 ~]# clustat
Cluster Status for alpha_cluster @ Fri Sep 11 16:06:05 2009
Member Status: Quorate

 Member Name                                          ID   Status
 ------ ----                                          ---- ------
 gfs1                                                     1 Online, Local
 gfs2                                                     2 Online
 gfs3                                                     3 Online
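
cman_tool, installed with the cman package, gives a lower-level view of membership and quorum if you want to cross-check:

cman_tool status
cman_tool nodes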


At this point the cluster itself has been configured successfully; all that remains is the GFS part.


Since this environment has no NAS or SAN array, I simulate the shared storage in software with the iscsi-initiator-utils / scsi-target-utils combination.

See another post on this blog for those steps.
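
In outline, that simulation looks like this; the target IP 192.168.0.100, the backing device /dev/sdb and the IQN are placeholders, not necessarily the values used in that post:

# On the machine acting as the iSCSI target (scsi-target-utils):
service tgtd start
tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2011-12.local:gfsdisk
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/sdb
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL

# On each GFS node (iscsi-initiator-utils):
service iscsi start
iscsiadm -m discovery -t sendtargets -p 192.168.0.100
iscsiadm -m node -T iqn.2011-12.local:gfsdisk -p 192.168.0.100 --login
# The LUN then appears on the node as a local SCSI disk (here /dev/sda).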


Create the GFS file system

gfs_mkfs -p lock_dlm -t alpha_cluster:gfs -j 3 /dev/sda1

It appears to contain a GFS filesystem.

        Are you sure you want to proceed? [y/n] y

        Device:                    /dev/sda1
        Blocksize:                 4096
        Filesystem Size:           669344
        Journals:                  2
        Resource Groups:           12
        Locking Protocol:          lock_dlm
        Lock Table:                alpha_cluster:gfs
        Syncing...
        All Done
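
The -j 3 above creates one journal for each node that will mount the file system. If another node is added later, an extra journal can be added to the mounted file system with gfs_jadd from gfs-utils, roughly:

gfs_jadd -j 1 /mnt/gfs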

Mount the file system on all three nodes (create the mount point first with mkdir -p /mnt/gfs)

[root@gfs1 cluster] # mount -t gfs /dev/sda1 /mnt/gfs
[root@gfs2 cluster] # mount -t gfs /dev/sda1 /mnt/gfs
[root@gfs3 cluster] # mount -t gfs /dev/sda1 /mnt/gfs
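
A quick way to confirm that the nodes really share the same data:

[root@gfs1 ~]# echo "hello from gfs1" > /mnt/gfs/test.txt
[root@gfs2 ~]# cat /mnt/gfs/test.txt     # should print the line just written on gfs1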


Final steps:

vi /etc/fstab and add:

/dev/sda1               /mnt/gfs                gfs     defaults        0 0


Enable the services at boot (note the start order):

chkconfig --level 2345 rgmanager on
chkconfig --level 2345 gfs on
chkconfig --level 2345 cman on 
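
To confirm that all three services are registered for the right runlevels:

chkconfig --list | grep -E 'cman|gfs|rgmanager'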

 


Failure test:

Unplug the network cable from host gfs3.

The log on gfs1 then shows the following:

Sep 11 16:38:01 gfs1 openais[3408]: [TOTEM] The token was lost in the OPERATIONAL state.
Sep 11 16:38:01 gfs1 openais[3408]: [TOTEM] Receive multicast socket recv buffer size (288000 bytes).
Sep 11 16:38:01 gfs1 openais[3408]: [TOTEM] Transmit multicast socket send buffer size (262142 bytes).
Sep 11 16:38:01 gfs1 openais[3408]: [TOTEM] entering GATHER state from 2.
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] entering GATHER state from 0.
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] Creating commit token because I am the rep.
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] Saving state aru 50 high seq received 50
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] Storing new sequence id for ring 153b0
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] entering COMMIT state.
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] entering RECOVERY state.
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] position [0] member 192.168.0.21:
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] previous ring seq 86956 rep 192.168.0.21
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] aru 50 high delivered 50 received flag 1
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] position [1] member 192.168.0.22:
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] previous ring seq 86956 rep 192.168.0.21
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] aru 50 high delivered 50 received flag 1
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] Did not need to originate any messages in recovery.
Sep 11 16:38:06 gfs1 kernel: dlm: closing connection to node 3
Sep 11 16:38:06 gfs1 fenced[3428]: gfs3 not a cluster member after 0 sec post_fail_delay
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] Sending initial ORF token
Sep 11 16:38:06 gfs1 fenced[3428]: fencing node "gfs3"
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ] CLM CONFIGURATION CHANGE
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ] New Configuration:
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ]     r(0) ip(192.168.0.21) 
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ]     r(0) ip(192.168.0.22) 
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ] Members Left:
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ]     r(0) ip(192.168.0.23) 
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ] Members Joined:
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ] CLM CONFIGURATION CHANGE
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ] New Configuration:
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ]     r(0) ip(192.168.0.21) 
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ]     r(0) ip(192.168.0.22) 
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ] Members Left:
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ] Members Joined:
Sep 11 16:38:06 gfs1 openais[3408]: [SYNC ] This node is within the primary component and will provide service.
Sep 11 16:38:06 gfs1 openais[3408]: [TOTEM] entering OPERATIONAL state.
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ] got nodejoin message 192.168.0.21
Sep 11 16:38:06 gfs1 openais[3408]: [CLM  ] got nodejoin message 192.168.0.22
Sep 11 16:38:06 gfs1 openais[3408]: [CPG  ] got joinlist message from node 2
Sep 11 16:38:06 gfs1 openais[3408]: [CPG  ] got joinlist message from node 1
Sep 11 16:38:19 gfs1 fenced[3428]: fence "gfs3" success
Sep 11 16:38:19 gfs1 kernel: GFS: fsid=alpha_cluster:gfs.2: jid=0: Trying to acquire journal lock...
Sep 11 16:38:19 gfs1 kernel: GFS: fsid=alpha_cluster:gfs.2: jid=0: Looking at journal...
Sep 11 16:38:20 gfs1 kernel: GFS: fsid=alpha_cluster:gfs.2: jid=0: Acquiring the transaction lock...
Sep 11 16:38:20 gfs1 kernel: GFS: fsid=alpha_cluster:gfs.2: jid=0: Replaying journal...
Sep 11 16:38:22 gfs1 kernel: GFS: fsid=alpha_cluster:gfs.2: jid=0: Replayed 0 of 1 blocks
Sep 11 16:38:22 gfs1 kernel: GFS: fsid=alpha_cluster:gfs.2: jid=0: replays = 0, skips = 0, sames = 1
Sep 11 16:38:22 gfs1 kernel: GFS: fsid=alpha_cluster:gfs.2: jid=0: Journal replayed in 3s
Sep 11 16:38:22 gfs1 kernel: GFS: fsid=alpha_cluster:gfs.2: jid=0: Done


The gfs3 host whose network cable was pulled was fenced successfully, i.e. it was rebooted by fence_ilo:

[root@gfs3 ~]# last
root     pts/1        192.168.13.120   Sat Sep 12 00:45   still logged in   
reboot   system boot  2.6.18-128.el5xe Sat Sep 12 00:42          (00:03) 









