GFS Testing
Hardware Environment
HP blade servers, 6 blades in total:
192.168.104.23(cman 192.168.105.23, ILO 192.168.105.12 )
192.168.104.24(cman 192.168.105.24, ILO 192.168.105.14 )
192.168.104.25(cman 192.168.105.25, ILO 192.168.105.18 )
192.168.104.26(cman 192.168.105.26, ILO 192.168.105.19 )
192.168.104.27(cman 192.168.105.27, ILO 192.168.105.20 )
192.168.104.28(cman 192.168.105.28, ILO 192.168.105.21 )
For the software and hardware configuration, see the sysreport.
/etc/hosts must be configured in advance; see the sysreport.
For the network configuration, see the sysreport.
1. Operating system packages
[Run on all nodes] rpm -Uvh fonts-xorg-base-6.8.2-1.EL.noarch.rpm gnome-python2-2.6.0-3.i386.rpm gnome-python2-canvas-2.6.0-3.i386.rpm pygtk2-2.4.0-2.el4.i386.rpm pygtk2-libglade-2.4.0-2.el4.i386.rpm seamonkey-nss-1.0.9-2.el4.i386.rpm tog-pegasus-2.5.1-5.EL4.i386.rpm GConf2-2.8.1-1.el4.i386.rpm ORBit2-2.12.0-3.i386.rpm atk-1.8.0-2.i386.rpm gnome-vfs2-2.8.2-8.6.EL4.i386.rpm gtk2-2.4.13-22.i386.rpm chkfontpath-1.10.0-2.i386.rpm gamin-0.1.7-1.4.EL4.i386.rpm gnome-mime-data-2.4.1-5.i386.rpm gnome-python2-bonobo-2.6.0-3.i386.rpm libIDL-0.8.4-1.i386.rpm libart_lgpl-2.3.16-3.i386.rpm libbonobo-2.8.0-2.i386.rpm libbonoboui-2.8.0.99cvs20040929-2.i386.rpm libglade2-2.4.0-5.i386.rpm libgnome-2.8.0-2.i386.rpm libgnomecanvas-2.8.0-1.i386.rpm libgnomeui-2.8.0-1.i386.rpm pango-1.6.0-9.i386.rpm audiofile-0.2.6-1.el4.1.i386.rpm esound-0.2.35-2.i386.rpm gnome-keyring-0.4.0-1.2.EL4.i386.rpm pyorbit-2.0.1-1.i386.rpm seamonkey-nspr-1.0.9-2.el4.i386.rpm shared-mime-info-0.15-10.1.el4.i386.rpm xorg-x11-font-utils-6.8.2-1.EL.33.i386.rpm xorg-x11-xfs-6.8.2-1.EL.33.i386.rpm alsa-lib-1.0.6-5.RHEL4.i386.rpm ttmkfdir-3.0.9-20.el4.i386.rpm oddjob-0.26-1.1.i386.rpm oddjob-libs-0.26-1.1.i386.rpm perl-Crypt-SSLeay-0.51-5.i386.rpm
2. Install the Cluster Suite and GFS (make sure the running kernel is kernel-smp-2.6.9-67)
[Run on all nodes] rpm -ivh ccs-1.0.11-1.i686.rpm cluster-cim-0.11.0-3.i386.rpm cman-1.0.17-0.i686.rpm cman-kernel-2.6.9-53.5.i686.rpm cman-kernheaders-2.6.9-53.5.i686.rpm dlm-1.0.7-1.i686.rpm dlm-kernel-2.6.9-52.2.i686.rpm dlm-kernheaders-2.6.9-52.2.i686.rpm fence-1.32.50-2.i686.rpm gulm-1.0.10-0.i686.rpm iddev-2.0.0-4.i686.rpm magma-1.0.8-1.i686.rpm magma-plugins-1.0.12-0.i386.rpm modcluster-0.11.0-3.i386.rpm perl-Net-Telnet-3.03-3.noarch.rpm rgmanager-1.9.72-1.i386.rpm system-config-cluster-1.0.51-2.0.noarch.rpm cman-kernel-smp-2.6.9-53.5.i686.rpm dlm-kernel-smp-2.6.9-52.2.i686.rpm
[Run on all nodes] rpm -ivh cmirror-1.0.1-1.i386.rpm cmirror-kernel-2.6.9-38.5.i686.rpm GFS-6.1.15-1.i386.rpm GFS-kernel-2.6.9-75.9.i686.rpm GFS-kernheaders-2.6.9-75.9.i686.rpm gnbd-1.0.9-1.i686.rpm gnbd-kernel-2.6.9-10.29.i686.rpm gnbd-kernheaders-2.6.9-10.29.i686.rpm lvm2-cluster-2.02.27-2.el4.i386.rpm cmirror-kernel-smp-2.6.9-38.5.i686.rpm GFS-kernel-smp-2.6.9-75.9.i686.rpm gnbd-kernel-smp-2.6.9-10.29.i686.rpm
3. Upgrade to system-config-cluster-1.0.51-2.0.el4_6.1.noarch.rpm
[Run on all nodes] rpm -Uvh system-config-cluster-1.0.51-2.0.el4_6.1.noarch.rpm
4. Copy fence_ilo2 to /sbin
[Run on all nodes] cp fence_ilo2 /sbin
5. [Run on one node] Configure /etc/cluster/cluster.conf
Note: if you use fence_ilo2, the parameters in the fence sections must be changed accordingly. See the sample below.
Configuration file
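The actual configuration file is not reproduced here. As a rough sketch only: a cluster.conf for this setup might look like the following, where the cluster name vcom_cluster matches the `-t` argument passed to gfs_mkfs in step 9 and the node name vcom1 matches the scp target below. The iLO address is taken from the blade list above, LOGIN/PASSWD are placeholders, and it is an assumption that the custom fence_ilo2 script accepts the same hostname/login/passwd parameters as the stock fence_ilo agent.

```xml
<?xml version="1.0"?>
<cluster name="vcom_cluster" config_version="1">
  <clusternodes>
    <clusternode name="vcom1" votes="1">
      <fence>
        <method name="1">
          <device name="ilo1"/>
        </method>
      </fence>
    </clusternode>
    <!-- ...one <clusternode> entry per blade (vcom2 through vcom6)... -->
  </clusternodes>
  <fencedevices>
    <fencedevice name="ilo1" agent="fence_ilo2"
                 hostname="192.168.105.12" login="LOGIN" passwd="PASSWD"/>
    <!-- ...one <fencedevice> per blade's iLO address... -->
  </fencedevices>
</cluster>
```

Only one node and one fence device are shown; a real file needs an entry per blade, and config_version must be incremented on every change.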
Then copy the configuration file to all the other nodes:
scp /etc/cluster/cluster.conf vcom1:/etc/cluster/cluster.conf
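The scp above covers a single node; with six blades the copy is easier to script. A small sketch, assuming the peer nodes resolve as vcom2 through vcom6 (hostnames are assumptions; adjust to your /etc/hosts). It builds the commands as a dry run; pipe the output to sh to actually execute them:

```shell
# Build one scp command per peer node (dry run: commands are printed, not run).
cmds=""
for node in vcom2 vcom3 vcom4 vcom5 vcom6; do
  cmds="${cmds}scp /etc/cluster/cluster.conf ${node}:/etc/cluster/cluster.conf
"
done
printf '%s' "$cmds"        # review, then: printf '%s' "$cmds" | sh
```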
6. Start the ccsd and cman services on all nodes
[Run on all nodes]
service ccsd restart
service cman restart
7. Check the cluster status
clustat
8. If the node status looks healthy, start the fence daemon
[Run on all nodes]
service fenced start
9. Create the partitions and file systems
[Run on one node]
fdisk /dev/sda
fdisk /dev/sdb
partprobe
gfs_mkfs -p lock_dlm -t vcom_cluster:media1 -j 6 -J 1024 /dev/sda1
gfs_mkfs -p lock_dlm -t vcom_cluster:media2 -j 6 -J 1024 /dev/sdb1
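In the gfs_mkfs commands above, -j 6 creates one journal per cluster node (six blades mount each filesystem) and -J 1024 sets each journal to 1024 MB, so the journals alone reserve about 6 GB of space per filesystem. A quick sanity check of that arithmetic:

```shell
# -j 6: one journal per node that will mount the filesystem.
# -J 1024: size of each journal in MB.
nodes=6
journal_mb=1024
reserved_mb=$((nodes * journal_mb))
echo "${reserved_mb} MB reserved for journals per filesystem"   # 6144 MB
```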
The GFS configuration is as follows.
IO scheduler: CFQ
On a playback (read-only) node, configure as follows:
umount /dev/sda1
umount /dev/sdb1
mount -t gfs /dev/sda1 /home/apache/media/media1 -o noatime,ro,localcaching,localflocks,noquota
mount -t gfs /dev/sdb1 /home/apache/media/media2 -o noatime,ro,localcaching,localflocks,noquota
#echo 1000 > /sys/block/sda/queue/iosched/read_expire
#echo 1000 > /sys/block/sdb/queue/iosched/read_expire
echo 8192 > /sys/block/sda/queue/nr_requests
echo 512 > /sys/block/sda/queue/read_ahead_kb
echo 8192 > /sys/block/sdb/queue/nr_requests
echo 512 > /sys/block/sdb/queue/read_ahead_kb
echo 16 > /sys/block/sda/queue/iosched/queued
echo 16 > /sys/block/sdb/queue/iosched/queued
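Note that these /sys settings are lost on reboot. One common way to reapply them at boot (an assumption, not part of the source procedure) is to append the echo lines to /etc/rc.d/rc.local. A sketch that, for safety, writes to a demo file rather than the real rc.local:

```shell
# Sketch: persist the block-queue tuning across reboots.
# Demo writes to a temp file; on a real node append to /etc/rc.d/rc.local.
target=/tmp/rc.local.gfs-demo
cat > "$target" <<'EOF'
for dev in sda sdb; do
  echo 8192 > /sys/block/$dev/queue/nr_requests
  echo 512 > /sys/block/$dev/queue/read_ahead_kb
  echo 16 > /sys/block/$dev/queue/iosched/queued
done
EOF
```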
for i in media1 media2;
do
echo -----------------------------;
gfs_tool settune /home/apache/media/$i atime_quantum 86400
gfs_tool settune /home/apache/media/$i quota_enforce 0
gfs_tool settune /home/apache/media/$i jindex_refresh_secs 600
gfs_tool settune /home/apache/media/$i quota_account 0
gfs_tool settune /home/apache/media/$i seq_readahead 128
#gfs_tool setflag inherit_jdata /home/apache/media/$i
gfs_tool settune /home/apache/media/$i scand_secs 60
gfs_tool settune /home/apache/media/$i recoverd_secs 600
gfs_tool settune /home/apache/media/$i logd_secs 30
gfs_tool settune /home/apache/media/$i quotad_secs 600
gfs_tool settune /home/apache/media/$i inoded_secs 60
echo -----------------------------;
done
On a write (ingest) node:
umount /dev/sda1
umount /dev/sdb1
mount -t gfs /dev/sda1 /home/apache/media/media1 -o noatime,noquota
mount -t gfs /dev/sdb1 /home/apache/media/media2 -o noatime,noquota
echo 8192 > /sys/block/sda/queue/nr_requests
echo 512 > /sys/block/sda/queue/read_ahead_kb
echo 8192 > /sys/block/sdb/queue/nr_requests
echo 512 > /sys/block/sdb/queue/read_ahead_kb
echo 16 > /sys/block/sda/queue/iosched/queued
echo 16 > /sys/block/sdb/queue/iosched/queued
for i in media1 media2;
do
echo -----------------------------;
gfs_tool settune /home/apache/media/$i atime_quantum 86400
gfs_tool settune /home/apache/media/$i quota_enforce 0
gfs_tool settune /home/apache/media/$i jindex_refresh_secs 600
gfs_tool settune /home/apache/media/$i quota_account 0
gfs_tool settune /home/apache/media/$i seq_readahead 128
#gfs_tool setflag inherit_jdata /home/apache/media/$i
gfs_tool settune /home/apache/media/$i scand_secs 60
gfs_tool settune /home/apache/media/$i recoverd_secs 600
gfs_tool settune /home/apache/media/$i logd_secs 30
gfs_tool settune /home/apache/media/$i quotad_secs 600
gfs_tool settune /home/apache/media/$i inoded_secs 60
echo -----------------------------;
done