2012-11-09 12:50:21
I. Introduction
GlusterFS is a clustered file system that can scale to several petabytes. It aggregates multiple storage bricks of different types over InfiniBand RDMA or TCP/IP into one large parallel network file system.
Note: InfiniBand is a switched-fabric interconnect technology that supports many concurrent links. It is not intended for general-purpose networking; it was designed primarily to solve server-side connectivity problems. As such, InfiniBand is used for server-to-server communication (e.g. replication and distributed workloads), server-to-storage connections (e.g. SANs and direct-attached storage), and server-to-network connections (LANs, WANs, and the Internet).
InfiniBand signaling rates by link width and data rate:

        Single (SDR)    Double (DDR)    Quad (QDR)
 1X     2 Gbit/s        4 Gbit/s        8 Gbit/s
 4X     8 Gbit/s        16 Gbit/s       32 Gbit/s
12X     24 Gbit/s       48 Gbit/s       96 Gbit/s
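The figures in the table follow a simple pattern: each lane carries 2 Gbit/s at the base (single) data rate, and the double and quad rates multiply that by 2 and 4. A small shell sketch of the arithmetic (the loop structure is just for illustration):

```shell
# Signaling rate = lanes x 2 Gbit/s (single data rate) x rate multiplier.
for lanes in 1 4 12; do
  for mult in 1 2 4; do
    echo "${lanes}X @ x${mult}: $((lanes * 2 * mult)) Gbit/s"
  done
done
```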
II. Test Environment
Two test servers: GLUSTERFS1 and GLUSTERFS2, both running CentOS 5.3. GLUSTERFS1 plays both the Client and Server roles; GLUSTERFS2 is a Server only. (Note: in GlusterFS, the Server role exports directories on a server as storage units, while the Client stacks the appropriate translators on top of them to implement replication and distribution. The Client can also be a separate machine.)
Storage units: two bricks, BRICK-A and BRICK-B, are declared on each of GLUSTERFS1 and GLUSTERFS2.
Expected result: something like RAID 1+0 on disks — first mirror each brick across the two servers (GLUSTERFS1's BRICK-A with GLUSTERFS2's BRICK-A, and likewise for BRICK-B), then layer distributed storage across the two mirrors, and mount the result at a directory called glusterfs for users.
III. Installation and Deployment
1. Download and build from source
• FUSE (Filesystem in Userspace):
– fuse-2.7.4.tar.gz
• GlusterFS:
– glusterfs-2.0.8.tar.gz
After unpacking, run the following in each source directory to configure, build, and install FUSE first, then GlusterFS:
./configure
make
make install
2. Create the directories
On GLUSTERFS1, create export-a, export-b, and glusterfs:
#mkdir /data/export-a
#mkdir /data/export-b
#mkdir /mnt/glusterfs
On GLUSTERFS2, create export-a and export-b:
#mkdir /data/export-a
#mkdir /data/export-b
3. On both GLUSTERFS1 and GLUSTERFS2, write the GlusterFS server-side configuration file
#vim /etc/glusterfs/glusterfsd.vol
volume posix-a
type storage/posix
option directory /data/export-a
end-volume
volume locks-a
type features/locks
subvolumes posix-a
end-volume
volume brick-a
type performance/io-threads
option thread-count 8
subvolumes locks-a
end-volume
volume posix-b
type storage/posix
option directory /data/export-b
end-volume
volume locks-b
type features/locks
subvolumes posix-b
end-volume
volume brick-b
type performance/io-threads
option thread-count 8
subvolumes locks-b
end-volume
volume server
type protocol/server
option transport-type tcp
option auth.addr.brick-a.allow *
option auth.addr.brick-b.allow *
subvolumes brick-a brick-b
end-volume
4. On GLUSTERFS1, write the GlusterFS client-side configuration file
#vim /etc/glusterfs/glusterfs.vol
### Add client feature and attach to remote brick-a subvolume of server1
volume brick-1a
type protocol/client
option transport-type tcp/client
option remote-host 10.167.10.118 # IP address of the remote brick
option remote-subvolume brick-a # name of the remote volume
end-volume
### Add client feature and attach to remote brick-a subvolume of server2
volume brick-2a
type protocol/client
option transport-type tcp/client
option remote-host 10.167.10.119 # IP address of the remote brick
option remote-subvolume brick-a # name of the remote volume
end-volume
### Add client feature and attach to remote brick-b subvolume of server1
volume brick-1b
type protocol/client
option transport-type tcp/client
option remote-host 10.167.10.118 # IP address of the remote brick
option remote-subvolume brick-b # name of the remote volume
end-volume
### Add client feature and attach to remote brick-b subvolume of server2
volume brick-2b
type protocol/client
option transport-type tcp/client
option remote-host 10.167.10.119 # IP address of the remote brick
option remote-subvolume brick-b # name of the remote volume
end-volume
#The replicated volume with brick-a
volume replication1
type cluster/replicate
subvolumes brick-1a brick-2a
end-volume
#The replicated volume with brick-b
volume replication2
type cluster/replicate
subvolumes brick-1b brick-2b
end-volume
#The distribution of all replication volumes (used for > 2 servers)
volume distribution
type cluster/distribute
option lookup-unhashed yes
subvolumes replication1 replication2
end-volume
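The cluster/distribute translator decides which subvolume a file lives on from a hash of its file name (GlusterFS's real implementation maps a 32-bit hash onto per-directory ranges; the function below is only a toy stand-in to show the idea — `place` and its character-sum "hash" are invented for illustration and are NOT GlusterFS's actual algorithm):

```shell
# Toy name-based placement: sum the byte values of the file name and
# take the result modulo the number of subvolumes (2 here:
# replication1 and replication2). Not GlusterFS's real hash.
place() {
  name="$1"; sum=0
  for byte in $(printf '%s' "$name" | od -An -tu1); do
    sum=$((sum + byte))
  done
  echo "$name -> replication$(( sum % 2 + 1 ))"
}
place fileA
place fileB
```

Whatever the hash, the key property is that the same name always maps to the same replicated pair, so lookups need no central index.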
5. Start the GlusterFS server on both GLUSTERFS1 and GLUSTERFS2
[root@glusterfs1]#glusterfs -f /etc/glusterfs/glusterfsd.vol -l /tmp/glusterfsd.log
[root@glusterfs1]#tail /tmp/glusterfsd.log
[2009-11-16 17:11:04] N [glusterfsd.c:1306:main] glusterfs: Successfully started
[root@glusterfs1]#netstat -tl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:ideafarm-catch *:* LISTEN
tcp 0 0 *:netbios-ssn *:* LISTEN
tcp 0 0 *:sunrpc *:* LISTEN
tcp 0 0 *:6996 *:* LISTEN
tcp 0 0 localhost.localdomain:ipp *:* LISTEN
tcp 0 0 localhost.localdomain:smtp *:* LISTEN
tcp 0 0 *:microsoft-ds *:* LISTEN
tcp 0 0 *:http *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 *:https *:* LISTEN
After a successful start, port 6996 appears among the listening ports.
6. Start the GlusterFS client on GLUSTERFS1
#glusterfs -f /etc/glusterfs/glusterfs.vol -l /tmp/glusterfs.log /mnt/glusterfs
#df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.5G 3.8G 5.3G 42% /
/dev/sda5 20G 522M 18G 3% /home
/dev/sda1 99M 12M 83M 12% /boot
tmpfs 506M 0 506M 0% /dev/shm
glusterfs#/etc/glusterfs/glusterfs.vol
19G 7.5G 11G 42% /mnt/glusterfs
Once the client is running, the storage can be accessed through /mnt/glusterfs.
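A quick sanity check is handy at this point. The sketch below is a hypothetical helper (the `smoke_test` name and the test file names are made up): pointed at the mount, it writes a few files and reads them back. With the distribute-over-replicate graph above, each file should then also appear under /data/export-a or /data/export-b on BOTH servers.

```shell
# Hypothetical smoke test for the new mount; on GLUSTERFS1 run:
#   smoke_test /mnt/glusterfs
# Each file written through the mount should land in /data/export-a or
# /data/export-b on both servers (replicate), spread across the two
# bricks (distribute).
smoke_test() {
  mnt="$1"
  for i in 1 2 3 4; do
    echo "hello $i" > "$mnt/testfile-$i" || return 1
  done
  cat "$mnt"/testfile-*
}
```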
Coming next: simple functional testing of GlusterFS.