
Category: LINUX

2009-09-09 13:53:49

          Test environment:

Test platform: Red Hat EL 5

          Host information: test1  10.148.54.115

                            test2  10.148.54.120

          Host models: HP DL360
                       HP DL380

          Storage model: EVA4000

          Hardware requirements: two Fibre Channel HBAs
                                 two HP iLO fence devices
                                 (fencing may be omitted if the cluster has more than two nodes)


Ø          Environment setup

1.        Install the operating system

2.        Download and install the driver matching the FC HBA model

3.        Connect the hosts to the fibre fabric

4.        On the storage array, present the required space to the hosts

5.        #hp_rescan -a

scsi0  00 00 00 HP         MSL6000    0518       Medium

scsi0  00 00 01 HP         Ultrium    F68W       Sequential-Access

scsi0  00 00 02 HP         NS         571f        RAID

scsi0  00 01 00 HP         HSV200     6110       RAID

scsi0  00 01 01 HP         HSV200     6110       Direct-Access

scsi0  00 01 02 HP         HSV200     6110       Direct-Access

scsi0  00 01 03 HP         HSV200     6110       Direct-Access

scsi0  00 02 00 HP         HSV200     6110       RAID

scsi0  00 02 01 HP         HSV200     6110       Direct-Access

scsi0  00 02 02 HP         HSV200     6110       Direct-Access

scsi0  00 02 03 HP         HSV200     6110       Direct-Access
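hp_rescan ships with HP's fibreutils package. When it is unavailable, the generic RHEL 5 mechanism of writing to each SCSI host's sysfs scan file performs the same LUN rescan (a sketch; must be run as root, and the set of hostN entries depends on the HBA driver):

```shell
# Rescan all channels/targets/LUNs ("- - -") on every SCSI/FC host adapter
for scan in /sys/class/scsi_host/host*/scan; do
    [ -w "$scan" ] && echo "- - -" > "$scan"
done
```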

6.        #fdisk -l

(Besides the local disk, six additional device paths appear; they actually correspond to three usable LUNs, each seen through two FC paths.)

Disk /dev/cciss/c0d0: 73.3 GB, 73369497600 bytes

255 heads, 63 sectors/track, 8920 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

        Device Boot      Start         End      Blocks   Id  System

/dev/cciss/c0d0p1   *           1        8397    67448871   83  Linux

/dev/cciss/c0d0p2            8398        8919     4192965   82  Linux swap / Solaris

 

Disk /dev/sda: 107.3 GB, 107374182400 bytes

255 heads, 63 sectors/track, 13054 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Disk /dev/sda doesn't contain a valid partition table

 

Disk /dev/sdb: 53.6 GB, 53687091200 bytes

64 heads, 32 sectors/track, 51200 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

 

Disk /dev/sdb doesn't contain a valid partition table

 

Disk /dev/sdc: 53.6 GB, 53687091200 bytes

64 heads, 32 sectors/track, 51200 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

 

Disk /dev/sdc doesn't contain a valid partition table

 

Disk /dev/sdd: 107.3 GB, 107374182400 bytes

255 heads, 63 sectors/track, 13054 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

Disk /dev/sdd doesn't contain a valid partition table

 

Disk /dev/sde: 53.6 GB, 53687091200 bytes

64 heads, 32 sectors/track, 51200 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

 

Disk /dev/sde doesn't contain a valid partition table

 

Disk /dev/sdf: 53.6 GB, 53687091200 bytes

64 heads, 32 sectors/track, 51200 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

 

Disk /dev/sdf doesn't contain a valid partition table
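As a quick sanity check, the numbers fdisk prints are internally consistent: 255 heads × 63 sectors × 512 bytes per sector gives the 8225280-byte cylinder unit, and 13054 whole cylinders comes to just under the 107.3 GB device size (the trailing partial cylinder is not counted):

```shell
# Reproduce the fdisk arithmetic for /dev/sda using the figures shown above
heads=255; sectors=63; sector_bytes=512; cylinders=13054
cyl_bytes=$((heads * sectors * sector_bytes))
total_bytes=$((cylinders * cyl_bytes))
echo "cylinder unit: $cyl_bytes bytes"          # 8225280
echo "whole-cylinder capacity: $total_bytes"    # 107372805120
```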

 

Ø          Cluster configuration: see "Cluster configuration for gfs"

 

Ø          GFS configuration

1.        Check that the following packages are installed; if any are missing, install them first:

         [root@test1 ~]# rpm -qa|grep gfs

gfs2-utils-0.1.25-1.el5

kmod-gfs-0.1.16-5.2.6.18_8.el5

kmod-gfs-PAE-0.1.16-5.2.6.18_8.el5

gfs-utils-0.1.11-1.el5

kmod-gfs-xen-0.1.16-5.2.6.18_8.el5

 

[root@test2 ~]# rpm -qa|grep gnbd

kmod-gnbd-PAE-0.1.3-4.2.6.18_8.el5

kmod-gnbd-0.1.3-4.2.6.18_8.el5

kmod-gnbd-xen-0.1.3-4.2.6.18_8.el5

gnbd-1.1.5-1.el5
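If any of these packages are missing, they can be pulled in from the RHEL 5 Cluster Storage channel first (package names taken from the listings above; the kmod variant must match the running kernel, e.g. plain, PAE, or xen):

```shell
yum install gfs-utils gfs2-utils kmod-gfs gnbd kmod-gnbd
```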

 

2.        Create the PVs

[root@test1 ~]# pvcreate /dev/sda

Physical volume "/dev/sda" successfully created

 

[root@test1 ~]# pvcreate /dev/sdb

Physical volume "/dev/sdb" successfully created

 

[root@test1 ~]# pvscan

Found duplicate PV cL6CW2ghfIEZ0h84I8X2nTCHqOnAQ3j3: using /dev/sdd not /dev/sda

  Found duplicate PV DONj2CEVoVOvAPk1YUi1W32HwmZyOj4H: using /dev/sde not /dev/sdb

  PV /dev/sdd         lvm2 [100.00 GB]

  PV /dev/sde         lvm2 [50.00 GB]

            Total: 2 [150.00 GB] / in use: 0 [0   ] / in no VG: 2 [150.00 GB]
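The "Found duplicate PV" warnings are expected here: each LUN is visible through two FC paths, so LVM sees every PV twice. On a production system the usual cure is to run device-mapper-multipath and restrict LVM scanning to the multipath devices with a filter in /etc/lvm/lvm.conf (a sketch; the exact patterns depend on your device layout):

```shell
# /etc/lvm/lvm.conf, devices section: accept multipath and local cciss
# devices, reject everything else
filter = [ "a|/dev/mapper/mpath.*|", "a|/dev/cciss/.*|", "r|.*|" ]
```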

 

3.        Create the VG

[root@test1 /]# vgcreate vg01 /dev/sda /dev/sdb

[root@test1 /]# vgscan

Reading all physical volumes.  This may take a while...

Found duplicate PV cL6CW2ghfIEZ0h84I8X2nTCHqOnAQ3j3: using /dev/sdd not /dev/sda

Found duplicate PV DONj2CEVoVOvAPk1YUi1W32HwmZyOj4H: using /dev/sde not /dev/sdb

Found duplicate PV I0BvK3T1DT3DbKDGH8JSGj44HPxp001P: using /dev/sdf not /dev/sdc

Found volume group "vg01" using metadata type lvm2

 

4.        Create the LVs

[root@test1 /]# lvcreate -L 10G -n lvol1 vg01

           Found duplicate PV rNPHw2fg9wwIGSQgChC4XbHyTLmi7a8w: using /dev/sdd not /dev/sda

     Found duplicate PV K94B087qllWile3PM06K3Uk01z0pnv9s: using /dev/sde not /dev/sdb

Logical volume "lvol1" created

 

[root@test1 /]# lvcreate -L 10G -n lvol2 vg01

 

[root@test1 /]# lvscan

Found duplicate PV rNPHw2fg9wwIGSQgChC4XbHyTLmi7a8w: using /dev/sdd not /dev/sda

Found duplicate PV K94B087qllWile3PM06K3Uk01z0pnv9s: using /dev/sde not /dev/sdb

ACTIVE            '/dev/vg01/lvol1' [10.00 GB] inherit

ACTIVE            '/dev/vg01/lvol2' [10.00 GB] inherit

 

5.        Activate the LVs on test2

         [root@test2 dev]# lvscan

          Found duplicate PV rNPHw2fg9wwIGSQgChC4XbHyTLmi7a8w: using /dev/sdd not /dev/sda

    Found duplicate PV K94B087qllWile3PM06K3Uk01z0pnv9s: using /dev/sde not /dev/sdb

    inactive          '/dev/vg01/lvol1' [10.00 GB] inherit

    inactive          '/dev/vg01/lvol2' [10.00 GB] inherit

 

  [root@test2 dev]# lvchange -a y /dev/vg01/lvol1

  Found duplicate PV rNPHw2fg9wwIGSQgChC4XbHyTLmi7a8w: using /dev/sdd not /dev/sda

  Found duplicate PV K94B087qllWile3PM06K3Uk01z0pnv9s: using /dev/sde not /dev/sdb

 

  [root@test2 dev]# lvchange -a y /dev/vg01/lvol2

  [root@test2 dev]# lvscan

  Found duplicate PV rNPHw2fg9wwIGSQgChC4XbHyTLmi7a8w: using /dev/sdd not /dev/sda

  Found duplicate PV K94B087qllWile3PM06K3Uk01z0pnv9s: using /dev/sde not /dev/sdb

  ACTIVE            '/dev/vg01/lvol1' [10.00 GB] inherit

  ACTIVE            '/dev/vg01/lvol2' [10.00 GB] inherit
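Activating the LVs by hand on each node works for a test, but in a CMAN cluster the volume group would normally be made cluster-aware so both nodes coordinate metadata changes through clvmd. A sketch of the usual setup (assumes the lvm2-cluster package is installed):

```shell
lvmconf --enable-cluster    # sets locking_type = 3 in /etc/lvm/lvm.conf
service clvmd start         # start the cluster LVM daemon
vgchange -c y vg01          # mark the VG as clustered
vgchange -a y vg01          # activate its LVs on this node
```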

 

6.        Create the GFS filesystem

         [root@test1 sky]# gfs_mkfs -t clu_gfs:gfs -p lock_dlm -j 2 /dev/vg01/lvol1

          Note: clu_gfs is the cluster name and gfs is the filesystem name, which must be unique. The lock_dlm protocol requires a cluster environment. This command is only used when the filesystem is first created; use it with care if the device already holds data. See the man page for details.

This will destroy any data on /dev/vg01/lvol1.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/vg01/lvol1

Blocksize:                 4096

Filesystem Size:           2555704

Journals:                  2

Resource Groups:           40

Locking Protocol:          lock_dlm

Lock Table:                gfs:clu_gfs

Syncing...

All Done
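The walkthrough stops after mkfs; to actually use the filesystem, each node mounts the same LV once cman and clvmd are up (the mount point /gfs1 is an assumption):

```shell
mkdir -p /gfs1
mount -t gfs /dev/vg01/lvol1 /gfs1
# or via /etc/fstab, so the gfs init script mounts it at boot:
#   /dev/vg01/lvol1  /gfs1  gfs  defaults  0 0
```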

 

7.        Start the GNBD server and export the GNBDs

Note: /sky/script/gnbd_serv.sh is the script entry specified in the Cluster configuration; see item 2 under step 11 of the Cluster configuration.

[root@test1 script]# cat /sky/script/gnbd_serv.sh
/sbin/gnbd_serv -K
# kill gnbd_serv even if there are exported devices
/sbin/gnbd_serv -v
# start the gnbd server daemon
/sbin/gnbd_export -e lvol1 -d /dev/vg01/lvol1 -c
/sbin/gnbd_export -e lvol2 -d /dev/vg01/lvol2 -c
# export the specified GNBDs and enable caching

/sbin/gnbd_export -l -v
# list the exported GNBDs (default)
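On the client side (a node without direct FC access to the EVA), the exported GNBDs are imported from the server and then appear as ordinary block devices (hostname test1 from the setup above):

```shell
modprobe gnbd          # load the gnbd kernel module
gnbd_import -i test1   # import all GNBDs exported by test1
ls /dev/gnbd/          # lvol1 and lvol2 appear here
```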
