Combining ZFS and iSCSI between Linux and Solaris
Solaris-side configuration:
One Solaris 10 host, with a ZFS pool named zfspool, 136 GB in size, holding three ZFS file systems: u01, u02 and u03. u01 and u02 already exist; u03 is created below.
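The creation of the pool and of u01/u02 is not shown in the transcript; a minimal sketch of what it might have looked like (the disk device c0t1d0 is hypothetical, and the 30 GB quotas are inferred from the df output below):

# build the pool from one whole disk, then carve out the first two file systems
zpool create zfspool c0t1d0
zfs create zfspool/u01
zfs set mountpoint=/u01 zfspool/u01
zfs set quota=30GB zfspool/u01
zfs create zfspool/u02
zfs set mountpoint=/u02 zfspool/u02
zfs set quota=30GB zfspool/u02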
[root@node02 /]#mkdir /u03
[root@node02 /]#zfs create zfspool/u03
[root@node02 /]#zfs set mountpoint=/u03 zfspool/u03
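Incidentally, ZFS creates the mountpoint directory itself, so the mkdir is optional; the create and mountpoint steps can also be collapsed into one (a sketch, assuming the -o form of zfs create is available on this release):

zfs create -o mountpoint=/u03 zfspool/u03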
[root@node02 /]#df
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 29G 9.1G 20G 32% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 2.3G 1.4M 2.3G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
fd 0K 0K 0K 0% /dev/fd
swap 2.3G 8K 2.3G 1% /tmp
swap 2.3G 40K 2.3G 1% /var/run
/dev/dsk/c0t0d0s3 2.0G 2.2M 1.9G 1% /export/home
zfspool/u01 30G 3.1G 27G 11% /u01
zfspool/u02 30G 1.0G 29G 4% /u02
zfspool 134G 24K 130G 1% /zfspool
zfspool/u03 134G 24K 130G 1% /u03
[root@node02 /]#zfs set quota=30GB zfspool/u03
[root@node02 /]#df
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 29G 9.1G 20G 32% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 2.3G 1.4M 2.3G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
fd 0K 0K 0K 0% /dev/fd
swap 2.3G 8K 2.3G 1% /tmp
swap 2.3G 40K 2.3G 1% /var/run
/dev/dsk/c0t0d0s3 2.0G 2.2M 1.9G 1% /export/home
zfspool/u01 30G 3.1G 27G 11% /u01
zfspool/u02 30G 1.0G 29G 4% /u02
zfspool 134G 24K 130G 1% /zfspool
zfspool/u03 30G 24K 30G 1% /u03
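To read the quota back directly instead of eyeballing df:

zfs get quota zfspool/u03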
Next, create two volumes (zvols):
[root@node02 /]#zfs create -V 10GB zfspool/u03/vol01
[root@node02 /]#zfs create -V 10GB zfspool/u03/vol02
Check the properties of the newly created volumes:
[root@node02 /]#zfs get all |grep 'zfspool/u03'
zfspool/u03 type filesystem -
zfspool/u03 creation Wed Dec 17 9:50 2008 -
zfspool/u03 used 20.0G -
zfspool/u03 available 10.0G -
zfspool/u03 referenced 24.5K -
zfspool/u03 compressratio 1.00x -
zfspool/u03 mounted yes -
zfspool/u03 quota 30G local
zfspool/u03 reservation none default
zfspool/u03 recordsize 128K default
zfspool/u03 mountpoint /u03 local
zfspool/u03 sharenfs off default
zfspool/u03 checksum on default
zfspool/u03 compression off default
zfspool/u03 atime on default
zfspool/u03 devices on default
zfspool/u03 exec on default
zfspool/u03 setuid on default
zfspool/u03 readonly off default
zfspool/u03 zoned off default
zfspool/u03 snapdir hidden default
zfspool/u03 aclmode groupmask default
zfspool/u03 aclinherit secure default
zfspool/u03 canmount on default
zfspool/u03 shareiscsi off default
zfspool/u03 xattr on default
zfspool/u03/vol01 type volume -
zfspool/u03/vol01 creation Wed Dec 17 9:54 2008 -
zfspool/u03/vol01 used 22.5K -
zfspool/u03/vol01 available 20.0G -
zfspool/u03/vol01 referenced 22.5K -
zfspool/u03/vol01 compressratio 1.00x -
zfspool/u03/vol01 reservation 10G local
zfspool/u03/vol01 volsize 10G -
zfspool/u03/vol01 volblocksize 8K -
zfspool/u03/vol01 checksum on default
zfspool/u03/vol01 compression off default
zfspool/u03/vol01 readonly off default
zfspool/u03/vol01 shareiscsi off default
zfspool/u03/vol02 type volume -
zfspool/u03/vol02 creation Wed Dec 17 9:54 2008 -
zfspool/u03/vol02 used 22.5K -
zfspool/u03/vol02 available 20.0G -
zfspool/u03/vol02 referenced 22.5K -
zfspool/u03/vol02 compressratio 1.00x -
zfspool/u03/vol02 reservation 10G local
zfspool/u03/vol02 volsize 10G -
zfspool/u03/vol02 volblocksize 8K -
zfspool/u03/vol02 checksum on default
zfspool/u03/vol02 compression off default
zfspool/u03/vol02 readonly off default
zfspool/u03/vol02 shareiscsi off default
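For what it's worth, zfs list can filter by dataset type, which is a terser way to see just the volumes than grepping zfs get all:

zfs list -t volume -r zfspool/u03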
Now turn zfspool/u03/vol01 and zfspool/u03/vol02 into iSCSI target devices:
[root@node02 /]#zfs set shareiscsi=on zfspool/u03/vol01
[root@node02 /]#zfs set shareiscsi=on zfspool/u03/vol02
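Setting shareiscsi=on relies on the Solaris iSCSI target daemon; if the targets do not show up in the listing below, the SMF service may need to be enabled first (service name as on a stock Solaris 10 install):

svcs iscsitgt
svcadm enable svc:/system/iscsitgt:default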
[root@node02 /]#iscsitadm list target
Target: zfspool/u03/vol01
iSCSI Name: iqn.1986-03.com.sun:02:5b7939b1-73a9-ef2d-ab61-fd19d57b9c8b
Connections: 0
Target: zfspool/u03/vol02
iSCSI Name: iqn.1986-03.com.sun:02:bc1d6f76-21d9-4acb-e0c0-adf41eeca3b9
Connections: 0
[root@node02 /]#zfs get all zfspool/u03/vol01
NAME PROPERTY VALUE SOURCE
zfspool/u03/vol01 type volume -
zfspool/u03/vol01 creation Wed Dec 17 9:54 2008 -
zfspool/u03/vol01 used 30.5K -
zfspool/u03/vol01 available 20.0G -
zfspool/u03/vol01 referenced 30.5K -
zfspool/u03/vol01 compressratio 1.00x -
zfspool/u03/vol01 reservation 10G local
zfspool/u03/vol01 volsize 10G -
zfspool/u03/vol01 volblocksize 8K -
zfspool/u03/vol01 checksum on default
zfspool/u03/vol01 compression off default
zfspool/u03/vol01 readonly off default
zfspool/u03/vol01 shareiscsi on local
[root@node02 /]#zfs get all zfspool/u03/vol02
NAME PROPERTY VALUE SOURCE
zfspool/u03/vol02 type volume -
zfspool/u03/vol02 creation Wed Dec 17 9:54 2008 -
zfspool/u03/vol02 used 30.5K -
zfspool/u03/vol02 available 20.0G -
zfspool/u03/vol02 referenced 30.5K -
zfspool/u03/vol02 compressratio 1.00x -
zfspool/u03/vol02 reservation 10G local
zfspool/u03/vol02 volsize 10G -
zfspool/u03/vol02 volblocksize 8K -
zfspool/u03/vol02 checksum on default
zfspool/u03/vol02 compression off default
zfspool/u03/vol02 readonly off default
zfspool/u03/vol02 shareiscsi on local
OK, that completes the Solaris-side configuration.
Linux-side configuration:
yum -y install kexec-tools bridge-utils libsane-hpaio device-mapper-multipath iscsi-initiator-utils scsi-target-utils
(Of these packages, only iscsi-initiator-utils is strictly required for the initiator role.)
chkconfig iscsi on
service iscsi restart
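Optionally, check that the initiator service is up and note this host's own initiator IQN (file path as shipped by iscsi-initiator-utils on RHEL 5):

service iscsi status
cat /etc/iscsi/initiatorname.iscsi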
First, run the following command to discover the volumes exported by the Solaris host:
[root@server2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.121
192.168.1.121:3260,1 iqn.1986-03.com.sun:02:5b7939b1-73a9-ef2d-ab61-fd19d57b9c8b
192.168.1.121:3260,1 iqn.1986-03.com.sun:02:bc1d6f76-21d9-4acb-e0c0-adf41eeca3b9
OK, Linux can now see the volumes shared from Solaris.
Use iscsiadm to log in to the Solaris iSCSI targets:
[root@server2 ~]# iscsiadm -m node -T iqn.1986-03.com.sun:02:5b7939b1-73a9-ef2d-ab61-fd19d57b9c8b -p 192.168.1.121 -l
Logging in to [iface: default, target: iqn.1986-03.com.sun:02:5b7939b1-73a9-ef2d-ab61-fd19d57b9c8b, portal: 192.168.1.121,3260]
Login to [iface: default, target: iqn.1986-03.com.sun:02:5b7939b1-73a9-ef2d-ab61-fd19d57b9c8b, portal: 192.168.1.121,3260]: successful
[root@server2 ~]# iscsiadm -m node -T iqn.1986-03.com.sun:02:bc1d6f76-21d9-4acb-e0c0-adf41eeca3b9 -p 192.168.1.121 -l
Logging in to [iface: default, target: iqn.1986-03.com.sun:02:bc1d6f76-21d9-4acb-e0c0-adf41eeca3b9, portal: 192.168.1.121,3260]
Login to [iface: default, target: iqn.1986-03.com.sun:02:bc1d6f76-21d9-4acb-e0c0-adf41eeca3b9, portal: 192.168.1.121,3260]: successful
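A side note before moving on: whether these sessions come back after a reboot depends on open-iscsi's node.startup setting; to pin a target to automatic login (standard iscsiadm syntax, shown for the first target only):

iscsiadm -m node -T iqn.1986-03.com.sun:02:5b7939b1-73a9-ef2d-ab61-fd19d57b9c8b \
  -p 192.168.1.121 --op update -n node.startup -v automatic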
OK, both logins succeeded; take a look with fdisk:
[root@server2 ~]# fdisk -l
Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1175 9333765 83 Linux
/dev/sda3 1176 1305 1044225 82 Linux swap / Solaris
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdc doesn't contain a valid partition table
The two volumes have been recognized on Linux as sdb and sdc.
Now you can partition the two devices and create file systems on them.
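The partitioning dialogue itself is not in the transcript; for each disk it would look roughly like this (keystrokes assumed, accepting the default cylinder range):

fdisk /dev/sdb    # then: n (new), p (primary), 1, <Enter>, <Enter>, w (write)
fdisk /dev/sdc    # same key sequence

Afterwards the partition tables look like this: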
[root@server2 ~]# fdisk -l
Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1175 9333765 83 Linux
/dev/sda3 1176 1305 1044225 82 Linux swap / Solaris
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 10240 10485744 83 Linux
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 10240 10485744 83 Linux
[root@server2 ~]# mkfs.ext3 /dev/sdb1
[root@server2 ~]# mount /dev/sdb1 /mnt
[root@server2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 8.7G 2.4G 5.9G 30% /
/dev/sda1 99M 11M 83M 12% /boot
tmpfs 76M 0 76M 0% /dev/shm
/dev/sdb1 9.9G 151M 9.2G 2% /mnt
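A test file was evidently copied onto the new file system before being read back below; that step is missing from the original transcript (command assumed):

cp /etc/hosts /mnt/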
[root@server2 ~]# cat /mnt/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
192.168.1.11 server2.rhel5.com server2
192.168.1.10 server1.rhel5.com server1
192.168.1.156 xzxj
192.168.1.12 server3.rhel5.com server3
[root@server2 ~]# cd /mnt/
[root@server2 mnt]# ls
hosts lost+found
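One last note: to have this file system mounted automatically at boot, the fstab entry needs the _netdev option so the mount waits for networking and the iSCSI service (mount point kept as /mnt purely for illustration):

/dev/sdb1   /mnt   ext3   _netdev   0 0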
And that's it.