Category: LINUX

2013-10-10 14:03:26

GPFS for Red Hat Linux Installation

 

Part 1: Obtain the GPFS installation package (run on every node of the cluster)

#chmod 755 gpfs_install-3.1.0-0_i386

#./gpfs_install-3.1.0-0_i386

  Note: the second command must be run in a graphical environment, and the operating system's Java runtime must be version 1.5.0_05 or later.

   After the commands above complete, the directory /usr/lpp/mmfs/3.1/ is created:

     #cd /usr/lpp/mmfs/3.1

     #ls -al

drwxr-xr-x   3 root  root     4096 Aug 31 10:20 .

drwxr-xr-x  10 root  root     4096 Aug 31 10:22 ..

-rw-r--r--   1 54325 users 6119957 Mar 16  2006 gpfs.base-3.1.0-0.i386.rpm

-rw-r--r--   1 54325 users  120870 Mar 16  2006 gpfs.docs-3.1.0-0.noarch.rpm

-rw-r--r--   1 54325 users  338743 Mar 16  2006 gpfs.gpl-3.1.0-0.noarch.rpm

-rw-r--r--   1 54325 users   61812 Mar 16  2006 gpfs.msg.en_US-3.1.0-0.noarch.rpm

drwxr-xr-x   2 root  root     4096 Aug 31 10:20 license

-rw-r--r--   1 root  root       39 Aug 31 10:20 status.dat
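Before moving on to the install, it can help to confirm that all four RPMs from the listing above were extracted. A minimal sketch (the helper name `check_rpms` is ours, not part of GPFS; the demonstration uses a temporary directory with empty placeholder files):

```shell
# Hypothetical pre-install check: verify the four extracted GPFS 3.1
# packages exist in the given directory before running rpm -ivh.
check_rpms() {
    # $1: directory holding the extracted GPFS 3.1 packages
    missing=0
    for pkg in gpfs.base-3.1.0-0.i386.rpm \
               gpfs.docs-3.1.0-0.noarch.rpm \
               gpfs.gpl-3.1.0-0.noarch.rpm \
               gpfs.msg.en_US-3.1.0-0.noarch.rpm; do
        if [ ! -f "$1/$pkg" ]; then
            echo "missing: $pkg"
            missing=$((missing + 1))
        fi
    done
    return "$missing"
}

# Demonstration against a temporary directory populated with empty files;
# on a real node you would call: check_rpms /usr/lpp/mmfs/3.1
demo=$(mktemp -d)
touch "$demo"/gpfs.base-3.1.0-0.i386.rpm "$demo"/gpfs.docs-3.1.0-0.noarch.rpm \
      "$demo"/gpfs.gpl-3.1.0-0.noarch.rpm "$demo"/gpfs.msg.en_US-3.1.0-0.noarch.rpm
check_rpms "$demo" && echo "all four packages present"
```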

Part 2: Install the GPFS software (run on every node of the cluster)

# cd /usr/lpp/mmfs/3.1

#rpm -ivh *.rpm

#rpm -qa | grep gpfs    # confirm that the GPFS packages are installed

After installation, download the update patches from the IBM website:

  

#tar -zxvf gpfs-3.1.0-29.i386.update.tar.gz

#rpm -Uvh gpfs*.rpm

Preparing...                ########################################### [100%]

Unable to unload the 'mmfs' kernel extension.

You may need to reboot the node.

 1:gpfs.base              ########################################### [ 25%]

 2:gpfs.docs              ########################################### [ 50%]

 3:gpfs.gpl               ########################################### [ 75%]

 4:gpfs.msg.en_US         ########################################### [100%]

 After the upgrade completes, reboot all nodes in the cluster.

Part 3: Configure GPFS

   1. Edit the /etc/hosts file

#vi /etc/hosts

127.0.0.1             localhost.localdomain localhost

192.168.1.245       node1

192.168.1.246       node2

192.168.0.2         node1_priv

192.168.0.3         node2_priv
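The four entries above can also be added with a single here-doc instead of interactive editing. A sketch (written to a temporary copy here; on a real node the target would be /etc/hosts itself):

```shell
# Append the cluster's public and private host entries in one shot.
hostsfile=$(mktemp)   # stand-in for /etc/hosts in this sketch
cat >> "$hostsfile" <<'EOF'
192.168.1.245       node1
192.168.1.246       node2
192.168.0.2         node1_priv
192.168.0.3         node2_priv
EOF
grep -c 'node' "$hostsfile"   # counts the four entries just added
```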

2. Establish SSH equivalence between the nodes

       Run the following commands on both node1 and node2:

    #mkdir ~/.ssh

       #chmod 700 ~/.ssh

       #ssh-keygen -t rsa

       #ssh-keygen -t dsa

    Then run the following on node1:

    #cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

       #cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

       #ssh node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

       #ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

       #scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys

   Test passwordless SSH connectivity between the two nodes:

    #ssh node1 date

       #ssh node2 date

       #ssh node1_priv date

       #ssh node2_priv date
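The four checks above can be collapsed into one loop. A sketch in dry-run form, since the host names only resolve on the cluster; on a live node, drop the `echo` to execute the real `ssh <host> date` commands:

```shell
# Loop the equivalence test over every public and private host name.
# The `echo` makes this a dry run that just prints each command.
hosts="node1 node2 node1_priv node2_priv"
for h in $hosts; do
    echo ssh "$h" date
done
```

If any real invocation prompts for a password, the key exchange in the previous step needs to be repeated for that host.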

3. The following steps only need to be run on one node.

#cd /usr/lpp/mmfs/src/config/

   #./configure       # creates a site.mcr file in this directory

   #vi site.mcr

 LINUX_DISTRIBUTION = REDHAT_AS_LINUX

 LINUX_DISTRIBUTION_LEVEL 40

LINUX_KERNEL_VERSION 2060942

 KERNEL_HEADER_DIR = /lib/modules/2.6.9-42.ELsmp/build/include

KERNEL_BUILD_DIR = /lib/modules/2.6.9-42.ELsmp/build

  Confirm that the parameters above match the operating system (distribution level, kernel version, and kernel header/build directories).
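The LINUX_KERNEL_VERSION value is derived from the running kernel's release string. A sketch of that conversion; the packing rule (one digit plus three zero-padded two-digit fields) is inferred from the `2.6.9-42 -> 2060942` example above, so treat it as an assumption:

```shell
# Derive the site.mcr LINUX_KERNEL_VERSION value from a kernel release
# string. On a real node you would start from: kernel_release=$(uname -r)
kernel_release="2.6.9-42.ELsmp"
base=${kernel_release%%-*}                       # 2.6.9
build=${kernel_release#*-}; build=${build%%.*}   # 42
major=${base%%.*}                                # 2
minor=${base#*.}; minor=${minor%%.*}             # 6
patch=${base##*.}                                # 9
lkv=$(printf '%d%02d%02d%02d' "$major" "$minor" "$patch" "$build")
echo "LINUX_KERNEL_VERSION $lkv"                 # LINUX_KERNEL_VERSION 2060942
```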

#cd /usr/lpp/mmfs/src
#export SHARKCLONEROOT=/usr/lpp/mmfs/src
#make World
#make InstallImages
cd gpl-linux; /usr/bin/make InstallImages;
make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux'
mmfslinux
mmfs26
lxtrace
dumpconv
tracedev
/sbin/ldconfig
make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux' 
#cd /usr/lpp/mmfs/bin
#scp mmfslinux mmfs26 lxtrace dumpconv tracedev node2:/usr/lpp/mmfs/bin
4. Confirm that each host can see the shared storage; run on every node of the cluster:
   #fdisk -l
    Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        1958    15623212+  8e  Linux LVM
 
Disk /dev/sdb: 10.7 GB, 10736369664 bytes
64 heads, 32 sectors/track, 10239 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
 
Disk /dev/sdb doesn't contain a valid partition table
 
Disk /dev/sdc: 5367 MB, 5367660544 bytes
166 heads, 62 sectors/track, 1018 cylinders
Units = cylinders of 10292 * 512 = 5269504 bytes
 
Disk /dev/sdc doesn't contain a valid partition table
In this example, the shared storage devices are /dev/sdb and /dev/sdc.
 
5. Next, create the following two files in /tmp; this step only needs to be run on one node.
   #touch gpfs_node
#touch gpfs_disk
#vi gpfs_node
node1:quorum-manager
node2:quorum-manager
#vi gpfs_disk
/dev/sdb:node1:node2:dataAndMetadata::coc_data_metadata_only
/dev/sdc:node1:node2:dataOnly:: 
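The two input files can also be generated with here-docs instead of an editor. In the disk descriptor lines, the colon-separated fields are DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName, with empty fields taking defaults (field meanings as we understand the GPFS 3.1 format; check the mmcrnsd documentation for your release). A sketch, writing to a temporary directory rather than /tmp:

```shell
# Generate the node file and disk descriptor file from step 5.
tmpdir=$(mktemp -d)   # stand-in for /tmp in this sketch

cat > "$tmpdir/gpfs_node" <<'EOF'
node1:quorum-manager
node2:quorum-manager
EOF

# DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName
cat > "$tmpdir/gpfs_disk" <<'EOF'
/dev/sdb:node1:node2:dataAndMetadata::coc_data_metadata_only
/dev/sdc:node1:node2:dataOnly::
EOF

wc -l "$tmpdir/gpfs_node" "$tmpdir/gpfs_disk"
```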
6. Create the GPFS cluster
   #cd /usr/lpp/mmfs/
   # ./bin/mmcrcluster -n /tmp/gpfs_node -p node1 -s node2 -r /usr/bin/ssh -R /usr/bin/scp
Mon Aug 31 10:35:37 CST 2009: mmcrcluster: Processing node node1
Mon Aug 31 10:35:38 CST 2009: mmcrcluster: Processing node node2
mmcrcluster: Command successfully completed
mmcrcluster: Propagating the cluster configuration data to all
affected nodes.  This is an asynchronous process.
#./bin/mmlscluster
GPFS cluster information
========================
  GPFS cluster name:         node1
  GPFS cluster id:           13882348004399855353
  GPFS UID domain:           node1
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
 
GPFS cluster configuration servers:
-----------------------------------
  Primary server:    node1
  Secondary server:  node2
 
 Node  Daemon node name  IP address     Admin node name  Designation
    1  node1             192.168.1.245  node1            quorum-manager
    2  node2             192.168.1.246  node2            quorum-manager
#./bin/mmcrnsd -F /tmp/gpfs_disk -v yes
mmcrnsd: Processing disk sdb
mmcrnsd: Processing disk sdc
mmcrnsd: Propagating the cluster configuration data to all
affected nodes.  This is an asynchronous process.
#cat /tmp/gpfs_disk 
# /dev/sdb:node1:node2:dataAndMetadata::coc_data_metadata_only
coc_data_metadata_only:::dataAndMetadata:4001::
# /dev/sdc:node1:node2:dataOnly::
gpfs1nsd:::dataOnly:4001::
#scp /tmp/gpfs_disk node2:/tmp/gpfs_disk
#./bin/mmlsnsd -m
Disk name    NSD volume ID      Device         Node name             Remarks       
-------------------------------------------------------------------------------
 coc_data_metadata_only  C0A801F54A9B3730  /dev/sdb  node1  primary node
 coc_data_metadata_only  C0A801F54A9B3730  /dev/sdb  node2  backup node
 gpfs1nsd                C0A801F54A9B3732  /dev/sdc  node1  primary node
 gpfs1nsd                C0A801F54A9B3732  /dev/sdc  node2  backup node
#./bin/mmstartup -a
Mon Aug 31 10:37:48 CST 2009: mmstartup: Starting GPFS ...
#./bin/mmgetstate -a
 
 Node number  Node name        GPFS state 
------------------------------------------
       1      node1            active
       2      node2            active
#./bin/mmlsconfig 
Configuration data for cluster node1:
-------------------------------------
clusterName node1
clusterId 13882348004399855353
clusterType lc
autoload no
useDiskLease yes
maxFeatureLevelAllowed 930
[node1]
takeOverSdrServ yes
 
File systems in cluster node1:
------------------------------
(none)
7. Create the GPFS file system
#mkdir /gpfs    # create the mount point
#./bin/mmcrfs /gpfs gpfsdev -F /tmp/gpfs_disk -A yes -B 1024K -v yes
The following disks of gpfsdev will be formatted on node node1:
    coc_data_metadata_only: size 10484736 KB
    gpfs1nsd: size 5241856 KB
Formatting file system ...
Disks up to size 91 GB can be added to storage pool 'system'.
Creating Inode File
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
Completed creation of file system /dev/gpfsdev.
mmcrfs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
#./bin/mmlsconfig 
Configuration data for cluster node1:
-------------------------------------
clusterName node1
clusterId 13882348004399855353
clusterType lc
autoload yes
useDiskLease yes
maxFeatureLevelAllowed 930
[node1]
takeOverSdrServ yes
 
File systems in cluster node1:
------------------------------
/dev/gpfsdev
#cat /etc/fstab
………………………
/dev/gpfsdev     /gpfs    gpfs    rw,mtime,atime,dev=gpfsdev,autostart 0 0
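The fstab entry added by mmcrfs can be verified field by field. A small sketch (the line below is copied from the listing above):

```shell
# Split the generated fstab entry into its whitespace-separated fields:
# device, mount point, filesystem type, options, dump, pass.
entry="/dev/gpfsdev     /gpfs    gpfs    rw,mtime,atime,dev=gpfsdev,autostart 0 0"
set -- $entry
fstype=$3
echo "device=$1 mountpoint=$2 fstype=$fstype options=$4"
[ "$fstype" = "gpfs" ] && echo "filesystem type OK"
```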
 
This completes the GPFS for Red Hat Linux installation.