My last session gave me a first look at this system, and this time I continue. Last time I only built something like a simple NFS setup, but that is neither the main point of GlusterFS nor the main reason we want to use it. What we actually want is something else, and the simplest way to put it is this: how do we pool the spare disk space currently sitting idle on multiple servers? That is exactly one of GlusterFS's core features: aggregating multiple storage spaces.
Goal: 3 storage nodes and 1 client, with files distributed across the storage nodes in round-robin fashion.
Test environment: VMware 6.0 with two Linux VMs; server: 192.168.211.128, client: 192.168.211.129
Software installation is not covered again here.
First, the server-side configuration.
Start by creating four directories under /home to export; one of them will serve as the namespace:
mkdir -p /home/{dir1,dir2,dir3,dir4}
chmod 1777 /home/dir[1-4]
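Mode 1777 gives these directories the same permissions as /tmp: world-writable with the sticky bit set, so only a file's owner can remove it. A quick way to confirm the directories look right:

ls -ld /home/dir[1-4]
# Each line should start with drwxrwxrwt; the trailing "t" marks the sticky bit.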
Now look at the four glusterfs server configuration files (three storage bricks plus one for the namespace):
cat /etc/glusterfs/server1.vol
### Export volume "brick" with the contents of the "/home/dir1" directory.
volume brick
  type storage/posix              # POSIX FS translator
  option directory /home/dir1     # Export this directory
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp/server
  option bind-address 192.168.211.128   # Default is to listen on all interfaces
  option listen-port 6996               # Default is 6996
  subvolumes brick
  option auth.addr.brick.allow *        # Allow access to "brick" volume
end-volume
cat /etc/glusterfs/server2.vol
### Export volume "brick" with the contents of the "/home/dir2" directory.
volume brick
  type storage/posix              # POSIX FS translator
  option directory /home/dir2     # Export this directory
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp/server
  option bind-address 192.168.211.128   # Default is to listen on all interfaces
  option listen-port 6997               # Default is 6996
  subvolumes brick
  option auth.addr.brick.allow *        # Allow access to "brick" volume
end-volume
cat /etc/glusterfs/server3.vol
### Export volume "brick" with the contents of the "/home/dir3" directory.
volume brick
  type storage/posix              # POSIX FS translator
  option directory /home/dir3     # Export this directory
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp/server
  option bind-address 192.168.211.128   # Default is to listen on all interfaces
  option listen-port 6998               # Default is 6996
  subvolumes brick
  option auth.addr.brick.allow *        # Allow access to "brick" volume
end-volume
cat /etc/glusterfs/server4.vol
### Export volume "brick" with the contents of the "/home/dir4" directory.
volume brick
  type storage/posix              # POSIX FS translator
  option directory /home/dir4     # Export this directory
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp/server
  option bind-address 192.168.211.128   # Default is to listen on all interfaces
  option listen-port 6999               # Default is 6996
  subvolumes brick
  option auth.addr.brick.allow *        # Allow access to "brick" volume
end-volume
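The four server files differ only in the exported directory and the listen port, so as a convenience (my own sketch, not part of the original setup) a shell loop can generate all four, minus the comments, in one go:

# Generate server1.vol .. server4.vol: dirs dir1-dir4, ports 6996-6999.
for i in 1 2 3 4; do
  port=$((6995 + i))
  cat > /etc/glusterfs/server$i.vol <<EOF
volume brick
  type storage/posix
  option directory /home/dir$i
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option bind-address 192.168.211.128
  option listen-port $port
  subvolumes brick
  option auth.addr.brick.allow *
end-volume
EOF
done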
Next, the client configuration:
cat /etc/glusterfs/client.vol
### Add client feature and attach to remote subvolume
volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.211.128   # IP address of the remote brick
  option remote-port 6996              # default server port is 6996
  option remote-subvolume brick        # name of the remote volume
end-volume

volume client2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.211.128
  option remote-port 6997
  option remote-subvolume brick
end-volume

volume client3
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.211.128
  option remote-port 6998
  option remote-subvolume brick
end-volume
volume namespacenode
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.211.128
  option remote-port 6999
  option remote-subvolume brick
end-volume
volume bricks
  type cluster/unify
  subvolumes client1 client2 client3
  option scheduler rr                  # round-robin scheduler
  option namespace namespacenode
end-volume
### Add writeback feature
volume writeback
  type performance/write-behind
  option block-size 1MB
  option cache-size 2MB
  option flush-behind off
  subvolumes bricks
end-volume

### Add readahead feature
volume readahead
  type performance/read-ahead
  option page-size 1MB    # unit in bytes
  option page-count 2     # cache per file = (page-count x page-size)
  subvolumes writeback
end-volume
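Reading the whole client file, the translator stack is built bottom-up: the three remote bricks are unified first, then wrapped by the two performance translators. Purely as an illustration of the configuration above (not part of the original post):

readahead (performance/read-ahead)
  └── writeback (performance/write-behind)
        └── bricks (cluster/unify, scheduler = rr, namespace = namespacenode)
              ├── client1 → 192.168.211.128:6996
              ├── client2 → 192.168.211.128:6997
              ├── client3 → 192.168.211.128:6998
              └── namespacenode → 192.168.211.128:6999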
That completes the preparation; next, start the services.
On the server:
glusterfsd -f /etc/glusterfs/server1.vol
glusterfsd -f /etc/glusterfs/server2.vol
glusterfsd -f /etc/glusterfs/server3.vol
glusterfsd -f /etc/glusterfs/server4.vol

If they start without errors, you can check the processes with ps fax | grep glusterfs:

 1762 tty6     Ss+    0:00 /sbin/mingetty tty6
 1858 ?        Ssl    0:00 glusterfsd -f ./server1.vol
 1861 ?        Ssl    0:00 glusterfsd -f ./server2.vol
 1864 ?        Ssl    0:00 glusterfsd -f ./server3.vol
 1867 ?        Ssl    0:00 glusterfsd -f ./server4.vol

You can also verify through the listening ports with netstat -ln:

Proto Recv-Q Send-Q Local Address            Foreign Address  State
tcp        0      0 192.168.211.128:6996     0.0.0.0:*        LISTEN
tcp        0      0 192.168.211.128:6997     0.0.0.0:*        LISTEN
tcp        0      0 192.168.211.128:6998     0.0.0.0:*        LISTEN
tcp        0      0 192.168.211.128:6999     0.0.0.0:*        LISTEN
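Equivalently (my own shorthand, not from the original), the four daemons can be started, and later stopped, with a loop:

# Start all four glusterfsd instances
for i in 1 2 3 4; do
  glusterfsd -f /etc/glusterfs/server$i.vol
done

# Stop them all again if needed
killall glusterfsd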
At this point the server side is up.
Then start the client.
First load the fuse module:

modprobe fuse

Then mount the volume:

glusterfs -l /tmp/glusterfs.log -f /etc/glusterfs/client.vol /mnt

When that returns, run df -h to check whether the mount succeeded. On success it looks like this:

[root@contos5-1-4 glusterfs]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             7.1G  2.5G  4.3G  37% /
/dev/sda1             190M   11M  170M   7% /boot
tmpfs                 125M     0  125M   0% /dev/shm
glusterfs              22G  8.7G   12G  43% /mnt

If the glusterfs line is missing, the mount failed; check the log created under /tmp to track down the error.
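If the mount does fail, a minimal troubleshooting sequence (assuming the paths used above) is:

tail -n 20 /tmp/glusterfs.log    # look for failed connections to 192.168.211.128 ports 6996-6999
umount /mnt                      # clear any half-mounted state before retrying
glusterfs -l /tmp/glusterfs.log -f /etc/glusterfs/client.vol /mnt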
Next comes the testing.
The test goal is simple: verify the round-robin scheduling, mainly using the touch command.
In /mnt on the client:

touch {1,2,3,4,5,6,7,8,9}

Then ls shows:

[root@contos5-1-4 mnt]# ls
1 2 3 4 5 6 7 8 9

Now go to the server, cd to /home, and run ls * to check; the result:

[root@contos5-1-1 home]# ls *
dir1:
1 4 7

dir2:
2 5 8

dir3:
3 6 9

dir4:
1 2 3 4 5 6 7 8 9
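As a quick sanity check on the round-robin distribution (a sketch using the layout above), count the files per brick on the server:

for d in /home/dir1 /home/dir2 /home/dir3; do
  echo "$d: $(ls "$d" | wc -l) files"
done
# With the nine files above, each brick should report 3 files.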
As seen above, the nine new files were created across dir1, dir2, and dir3 in turn. dir4 is the namespace volume we configured; it keeps a record of the unified namespace, which is why every file name shows up there as well.
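If my understanding of cluster/unify is correct, the namespace brick stores only the directory tree as zero-length placeholder files, not a second copy of the data; that is easy to check on the server:

ls -l /home/dir4
# If the namespace holds only placeholders, every entry should show size 0,
# while the actual contents live in dir1-dir3.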
With that, my experiment is complete and its goal achieved.
But along the way a few questions came up:
1. How big should the namespace volume be: as large as the sum of all the storage volumes, or just the same size as a single one?
2. If one of the nodes goes down, what happens to the data on it?
I will look into these questions in my later study, try out GlusterFS's other features, and save the discussion of the individual parameters for next time.