
Category: Servers & Storage

2015-09-17 11:35:18


Block Device Quick Start

This section builds on the Storage Cluster Quick Start; you must have completed it before continuing. Also make sure the deployed cluster has reached the active + clean state.

The Ceph Block Device is also known as RBD, or the RADOS Block Device.


ceph-client must be a separate machine. Do not run it as a VM on the same physical host that carries the Ceph Storage Cluster nodes, unless you have no other machine available. A quick health check is shown below.
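Before touching the client, it is worth confirming the active + clean requirement from a node that already holds the admin keyring; a minimal check with the standard Ceph CLI (the clock-skew warning seen later in this post does not block the exercise as long as all PGs are active+clean):

    ceph -s | grep active+clean
    # expect all placement groups reported active+clean, e.g.:
    #      192 active+clean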

Install Ceph



On the admin node, add the client to ~/.ssh/config (the ceph_client entry below is the new addition):

    [talen@ceph_admin mycluster]$ cat ~/.ssh/config
    Host ceph_node1
    Hostname ceph_node1
    User talen
    Host ceph_node2
    Hostname ceph_node2
    User talen
    Host ceph_monitor
    Hostname ceph_monitor
    User talen
    Host ceph_client
    Hostname ceph_client
    User talen
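A quick sanity check that the new entry works with passwordless SSH (the output assumes the client's hostname resolves as configured):

    [talen@ceph_admin mycluster]$ ssh ceph_client hostname
    ceph_client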


  1. Make sure the client has a suitable kernel version; see the OS Recommendations for details:

     lsb_release -a
     uname -r

  2. On the admin node, use ceph-deploy to install Ceph on the client node, whose hostname here is ceph_client:

     ceph-deploy install ceph_client

  3. On the admin node, use ceph-deploy to push the Ceph configuration file and the authentication file ceph.client.admin.keyring to the client:

     ceph-deploy admin ceph_client

     The keyring lands in /etc/ceph on the client; make sure the talen user can read it:

     sudo chmod +r /etc/ceph/ceph.client.admin.keyring

  4. Verify (the clock-skew warning in the output is addressed after the transcript):



    [root@ceph_client ceph]# su - talen
    Last login: Thu Sep 17 11:30:51 CST 2015 from 192.168.100.199 on pts/1
    [talen@ceph_client ~]$ cd /etc/ceph/
    [talen@ceph_client ceph]$ ll
    total 8
    -rw-------. 1 root root 63 Sep 17 11:33 ceph.client.admin.keyring
    -rw-r--r--. 1 root root 265 Sep 17 11:33 ceph.conf
    -rw-------. 1 root root 0 Sep 14 18:42 tmp5AWVt0
    -rw-------. 1 root root 0 Sep 17 11:33 tmpY_56Be
    [talen@ceph_client ceph]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
    [talen@ceph_client ceph]$ ll
    total 8
    -rw-r--r--. 1 root root 63 Sep 17 11:33 ceph.client.admin.keyring
    -rw-r--r--. 1 root root 265 Sep 17 11:33 ceph.conf
    -rw-------. 1 root root 0 Sep 14 18:42 tmp5AWVt0
    -rw-------. 1 root root 0 Sep 17 11:33 tmpY_56Be
    [talen@ceph_client ceph]$ ceph status
        cluster 08416be1-f6e7-4c5a-b7b3-7eb148b0c467
         health HEALTH_WARN
                clock skew detected on mon.ceph_node2, mon.ceph_monitor
                Monitor clock skew detected
         monmap e3: 3 mons at {ceph_monitor=10.0.2.33:6789/0,ceph_node1=10.0.2.31:6789/0,ceph_node2=10.0.2.32:6789/0}
                election epoch 6, quorum 0,1,2 ceph_node1,ceph_node2,ceph_monitor
         osdmap e15: 3 osds: 3 up, 3 in
          pgmap v192: 192 pgs, 2 pools, 0 bytes data, 0 objects
                15463 MB used, 9079 MB / 24543 MB avail
                     192 active+clean
    [talen@ceph_client ceph]$
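The HEALTH_WARN above comes from the monitor clocks drifting apart by more than the default tolerance (mon_clock_drift_allowed, 0.05 s). Synchronizing time on the monitor nodes clears it; a minimal fix on CentOS 7, assuming chrony is available in your repositories (run on each monitor node):

    sudo yum -y install chrony
    sudo systemctl enable chronyd
    sudo systemctl start chronyd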




Full transcript of the ceph-deploy install and admin push from the admin node (note the config-overwrite error and its fix with --overwrite-conf):

    [talen@ceph_admin mycluster]$ ceph-deploy install ceph_client
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
    [ceph_deploy.cli][INFO ] Invoked (1.5.28): /bin/ceph-deploy install ceph_client
    [ceph_deploy.cli][INFO ] ceph-deploy options:
    [ceph_deploy.cli][INFO ] verbose : False
    [ceph_deploy.cli][INFO ] testing : None
    [ceph_deploy.cli][INFO ] cd_conf :
    [ceph_deploy.cli][INFO ] cluster : ceph
    [ceph_deploy.cli][INFO ] install_mds : False
    [ceph_deploy.cli][INFO ] stable : None
    [ceph_deploy.cli][INFO ] default_release : False
    [ceph_deploy.cli][INFO ] username : None
    [ceph_deploy.cli][INFO ] adjust_repos : True
    [ceph_deploy.cli][INFO ] func :
    [ceph_deploy.cli][INFO ] install_all : False
    [ceph_deploy.cli][INFO ] repo : False
    [ceph_deploy.cli][INFO ] host : ['ceph_client']
    [ceph_deploy.cli][INFO ] install_rgw : False
    [ceph_deploy.cli][INFO ] repo_url : None
    [ceph_deploy.cli][INFO ] ceph_conf : None
    [ceph_deploy.cli][INFO ] install_osd : False
    [ceph_deploy.cli][INFO ] version_kind : stable
    [ceph_deploy.cli][INFO ] install_common : False
    [ceph_deploy.cli][INFO ] overwrite_conf : False
    [ceph_deploy.cli][INFO ] quiet : False
    [ceph_deploy.cli][INFO ] dev : master
    [ceph_deploy.cli][INFO ] local_mirror : None
    [ceph_deploy.cli][INFO ] release : None
    [ceph_deploy.cli][INFO ] install_mon : False
    [ceph_deploy.cli][INFO ] gpg_url : None
    [ceph_deploy.install][DEBUG ] Installing stable version hammer on cluster ceph hosts ceph_client
    [ceph_deploy.install][DEBUG ] Detecting platform for host ceph_client ...
    Warning: Permanently added the ECDSA host key for IP address '10.0.2.34' to the list of known hosts.
    [ceph_client][DEBUG ] connection detected need for sudo
    [ceph_client][DEBUG ] connected to host: ceph_client
    [ceph_client][DEBUG ] detect platform information from remote host
    [ceph_client][DEBUG ] detect machine type
    [ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.1.1503 Core
    [ceph_client][INFO ] installing Ceph on ceph_client
    [ceph_client][INFO ] Running command: sudo yum clean all
    [ceph_client][DEBUG ] Loaded plugins: fastestmirror, priorities
    [ceph_client][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-extras ceph-source epel extras
    [ceph_client][DEBUG ] : updates
    [ceph_client][DEBUG ] Cleaning up everything
    [ceph_client][DEBUG ] Cleaning up list of fastest mirrors
    [ceph_client][INFO ] Running command: sudo yum -y install epel-release
    [ceph_client][DEBUG ] Loaded plugins: fastestmirror, priorities
    [ceph_client][WARNIN] [Errno 14] HTTP Error 404 - Not Found
    [ceph_client][WARNIN] Trying other mirror.
    [ceph_client][DEBUG ] Determining fastest mirrors
    [ceph_client][DEBUG ] * base: mirrors.skyshe.cn
    [ceph_client][DEBUG ] * epel: mirror01.idc.hinet.net
    [ceph_client][DEBUG ] * extras: mirrors.aliyun.com
    [ceph_client][DEBUG ] * updates: mirrors.aliyun.com
    [ceph_client][DEBUG ] 59 packages excluded due to repository priority protections
    [ceph_client][DEBUG ] Package epel-release-7-5.noarch already installed and latest version
    [ceph_client][DEBUG ] Nothing to do
    [ceph_client][INFO ] Running command: sudo yum -y install yum-plugin-priorities
    [ceph_client][DEBUG ] Loaded plugins: fastestmirror, priorities
    [ceph_client][DEBUG ] Loading mirror speeds from cached hostfile
    [ceph_client][DEBUG ] * base: mirrors.skyshe.cn
    [ceph_client][DEBUG ] * epel: mirror01.idc.hinet.net
    [ceph_client][DEBUG ] * extras: mirrors.aliyun.com
    [ceph_client][DEBUG ] * updates: mirrors.aliyun.com
    [ceph_client][DEBUG ] 59 packages excluded due to repository priority protections
    [ceph_client][DEBUG ] Package yum-plugin-priorities-1.1.31-29.el7.noarch already installed and latest version
    [ceph_client][DEBUG ] Nothing to do
    [ceph_client][DEBUG ] Configure Yum priorities to include obsoletes
    [ceph_client][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
    [ceph_client][INFO ] Running command: sudo rpm --import
    [ceph_client][INFO ] Running command: sudo rpm -Uvh --replacepkgs
    [ceph_client][DEBUG ] Retrieving
    [ceph_client][DEBUG ] Preparing... ########################################
    [ceph_client][DEBUG ] Updating / installing...
    [ceph_client][DEBUG ] ceph-release-1-1.el7 ########################################
    [ceph_client][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
    [ceph_client][WARNIN] altered ceph.repo priorities to contain: priority=1
    [ceph_client][INFO ] Running command: sudo yum -y install ceph ceph-radosgw
    [ceph_client][DEBUG ] Loaded plugins: fastestmirror, priorities
    [ceph_client][DEBUG ] Loading mirror speeds from cached hostfile
    [ceph_client][DEBUG ] * base: mirrors.skyshe.cn
    [ceph_client][DEBUG ] * epel: mirror01.idc.hinet.net
    [ceph_client][DEBUG ] * extras: mirrors.aliyun.com
    [ceph_client][DEBUG ] * updates: mirrors.aliyun.com
    [ceph_client][DEBUG ] 59 packages excluded due to repository priority protections
    [ceph_client][DEBUG ] Package 1:ceph-0.94.3-0.el7.x86_64 already installed and latest version
    [ceph_client][DEBUG ] Package 1:ceph-radosgw-0.94.3-0.el7.x86_64 already installed and latest version
    [ceph_client][DEBUG ] Nothing to do
    [ceph_client][INFO ] Running command: sudo ceph --version
    [ceph_client][DEBUG ] ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
    [talen@ceph_admin mycluster]$ ceph-deploy admin ceph_client
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
    [ceph_deploy.cli][INFO ] Invoked (1.5.28): /bin/ceph-deploy admin ceph_client
    [ceph_deploy.cli][INFO ] ceph-deploy options:
    [ceph_deploy.cli][INFO ] username : None
    [ceph_deploy.cli][INFO ] verbose : False
    [ceph_deploy.cli][INFO ] overwrite_conf : False
    [ceph_deploy.cli][INFO ] quiet : False
    [ceph_deploy.cli][INFO ] cd_conf :
    [ceph_deploy.cli][INFO ] cluster : ceph
    [ceph_deploy.cli][INFO ] client : ['ceph_client']
    [ceph_deploy.cli][INFO ] func :
    [ceph_deploy.cli][INFO ] ceph_conf : None
    [ceph_deploy.cli][INFO ] default_release : False
    [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph_client
    [ceph_client][DEBUG ] connection detected need for sudo
    [ceph_client][DEBUG ] connected to host: ceph_client
    [ceph_client][DEBUG ] detect platform information from remote host
    [ceph_client][DEBUG ] detect machine type
    [ceph_client][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [ceph_deploy.admin][ERROR ] RuntimeError: config file /etc/ceph/ceph.conf exists with different content; use --overwrite-conf to overwrite
    [ceph_deploy][ERROR ] GenericError: Failed to configure 1 admin hosts

    [talen@ceph_admin mycluster]$ ceph-deploy --overwrite-conf admin ceph_client
    [ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
    [ceph_deploy.cli][INFO ] Invoked (1.5.28): /bin/ceph-deploy --overwrite-conf admin ceph_client
    [ceph_deploy.cli][INFO ] ceph-deploy options:
    [ceph_deploy.cli][INFO ] username : None
    [ceph_deploy.cli][INFO ] verbose : False
    [ceph_deploy.cli][INFO ] overwrite_conf : True
    [ceph_deploy.cli][INFO ] quiet : False
    [ceph_deploy.cli][INFO ] cd_conf :
    [ceph_deploy.cli][INFO ] cluster : ceph
    [ceph_deploy.cli][INFO ] client : ['ceph_client']
    [ceph_deploy.cli][INFO ] func :
    [ceph_deploy.cli][INFO ] ceph_conf : None
    [ceph_deploy.cli][INFO ] default_release : False
    [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph_client
    [ceph_client][DEBUG ] connection detected need for sudo
    [ceph_client][DEBUG ] connected to host: ceph_client
    [ceph_client][DEBUG ] detect platform information from remote host
    [ceph_client][DEBUG ] detect machine type
    [ceph_client][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
    [talen@ceph_admin mycluster]$
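Before mapping RBD images on the client in the next section, it can help to confirm the kernel provides the rbd module; a quick check, assuming the stock CentOS 7 kernel (module sizes below are illustrative):

    [talen@ceph_client ~]$ sudo modprobe rbd
    [talen@ceph_client ~]$ lsmod | grep rbd
    rbd                    83889  0
    libceph               282661  1 rbd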

Using the Block Device on the Client

  1. First, on the client node, create a block device image, 4 GB in size, tentatively named foo (inspection commands for the image and its mapping are sketched after this list):

    [talen@ceph_client ceph]$ rbd create foo --size 4096 -m 10.0.2.33 -k /etc/ceph/ceph.client.admin.keyring

  2. Then, on the client node, map the image to a block device, /dev/rbd0:

    [talen@ceph_client ceph]$ sudo rbd map foo --name client.admin -m 10.0.2.33 -k /etc/ceph/ceph.client.admin.keyring

    /dev/rbd0

  3. On the client, create a filesystem on the new block device; here it is formatted as ext4 (-m0 leaves 0% of blocks reserved for the superuser):

    [talen@ceph_client rbd]$ ll /dev/rbd/rbd/foo

    lrwxrwxrwx. 1 root root 10 Sep 17 15:09 /dev/rbd/rbd/foo -> ../../rbd0

    [talen@ceph_client rbd]$ sudo mkfs.ext4 -m0 /dev/rbd0

    mke2fs 1.42.9 (28-Dec-2013)

    Discarding device blocks: done

    Filesystem label=

    OS type: Linux

    Block size=4096 (log=2)

    Fragment size=4096 (log=2)

    Stride=1024 blocks, Stripe width=1024 blocks

    262144 inodes, 1048576 blocks

    0 blocks (0.00%) reserved for the super user

    First data block=0

    Maximum filesystem blocks=1073741824

    32 block groups

    32768 blocks per group, 32768 fragments per group

    8192 inodes per group

    Superblock backups stored on blocks:

    32768, 98304, 163840, 229376, 294912, 819200, 884736

    Allocating group tables: done

    Writing inode tables: done

    Creating journal (32768 blocks): done

    Writing superblocks and filesystem accounting information: done

    [talen@ceph_client rbd]$

  4. Mount the newly formatted filesystem:
    [talen@ceph_client rbd]$ mkdir /mnt/ceph-block-device
    mkdir: cannot create directory ‘/mnt/ceph-block-device’: Permission denied
    [talen@ceph_client rbd]$ sudo mkdir /mnt/ceph-block-device
    [talen@ceph_client rbd]$ sudo mount /dev/rbd0 /mnt/ceph-block-device/
    [talen@ceph_client rbd]$ cd /mnt/ceph-block-device/
    [talen@ceph_client ceph-block-device]$ touch testfile
    touch: cannot touch ‘testfile’: Permission denied
    [talen@ceph_client ceph-block-device]$ sudo touch testfile
    [talen@ceph_client ceph-block-device]$ df
    Filesystem              1K-blocks    Used Available Use% Mounted on
    /dev/mapper/centos-root   7022592 1410872   5611720  21% /
    devtmpfs                   933432       0    933432   0% /dev
    tmpfs                      942208       0    942208   0% /dev/shm
    tmpfs                      942208    8560    933648   1% /run
    tmpfs                      942208       0    942208   0% /sys/fs/cgroup
    /dev/vda1                  508588  139920    368668  28% /boot
    /dev/rbd0                 3997376   16376   3964616   1% /mnt/ceph-block-device
    [talen@ceph_client ceph-block-device]$ df -h
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root  6.7G  1.4G  5.4G  21% /
    devtmpfs                 912M     0  912M   0% /dev
    tmpfs                    921M     0  921M   0% /dev/shm
    tmpfs                    921M  8.4M  912M   1% /run
    tmpfs                    921M     0  921M   0% /sys/fs/cgroup
    /dev/vda1                497M  137M  361M  28% /boot
    /dev/rbd0                3.9G   16M  3.8G   1% /mnt/ceph-block-device
    [talen@ceph_client ceph-block-device]$
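As referenced in step 1, the image and its mapping can be inspected with standard rbd subcommands; the output below is an illustrative sketch for a 4 GB image in the default rbd pool:

    [talen@ceph_client ~]$ rbd ls
    foo
    [talen@ceph_client ~]$ rbd info foo
    rbd image 'foo':
            size 4096 MB in 1024 objects
            order 22 (4096 kB objects)
    [talen@ceph_client ~]$ sudo rbd showmapped
    id pool image snap device
    0  rbd  foo   -    /dev/rbd0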

Now run a quick test: copy some files onto the mounted device from the client, while watching the writes land on the cluster from the admin node with ceph -w.
[talen@ceph_client ceph-block-device]$ sudo cp /boot/* /mnt/ceph-block-device/
cp: omitting directory ‘/boot/grub’
cp: omitting directory ‘/boot/grub2’
[talen@ceph_client ceph-block-device]$
[talen@ceph_admin mycluster]$ ceph -w
    cluster 08416be1-f6e7-4c5a-b7b3-7eb148b0c467
     health HEALTH_WARN
            clock skew detected on mon.ceph_node2, mon.ceph_monitor
            Monitor clock skew detected
     monmap e3: 3 mons at {ceph_monitor=10.0.2.33:6789/0,ceph_node1=10.0.2.31:6789/0,ceph_node2=10.0.2.32:6789/0}
            election epoch 6, quorum 0,1,2 ceph_node1,ceph_node2,ceph_monitor
     osdmap e15: 3 osds: 3 up, 3 in
      pgmap v218: 192 pgs, 2 pools, 136 MB data, 45 objects
            15888 MB used, 8654 MB / 24543 MB avail
                 192 active+clean

2015-09-17 15:37:20.759666 mon.0 [WRN] mon.1 10.0.2.32:6789/0 clock skew 0.916739s > max 0.05s
2015-09-17 15:39:38.416394 mon.0 [INF] pgmap v219: 192 pgs: 192 active+clean; 136 MB data, 15888 MB used, 8654 MB / 24543 MB avail; 36 B/s wr, 0 op/s
2015-09-17 15:39:49.853666 mon.0 [INF] pgmap v220: 192 pgs: 192 active+clean; 143 MB data, 15901 MB used, 8641 MB / 24543 MB avail; 20610 B/s wr, 0 op/s
2015-09-17 15:39:52.633944 mon.0 [INF] pgmap v221: 192 pgs: 192 active+clean; 165 MB data, 15934 MB used, 8608 MB / 24543 MB avail; 613 B/s rd, 2076 kB/s wr, 8 op/s
2015-09-17 15:39:53.739112 mon.0 [INF] pgmap v222: 192 pgs: 192 active+clean; 176 MB data, 15981 MB used, 8561 MB / 24543 MB avail; 1360 B/s rd, 5313 kB/s wr, 22 op/s
2015-09-17 15:39:56.360405 mon.0 [INF] pgmap v223: 192 pgs: 192 active+clean; 188 MB data, 16009 MB used, 8533 MB / 24543 MB avail; 5369 kB/s wr, 22 op/s
2015-09-17 15:40:18.457344 mon.0 [INF] pgmap v224: 192 pgs: 192 active+clean; 192 MB data, 16039 MB used, 8503 MB / 24543 MB avail; 647 kB/s wr, 2 op/s
2015-09-17 15:40:21.161880 mon.0 [INF] pgmap v225: 192 pgs: 192 active+clean; 203 MB data, 16055 MB used, 8487 MB / 24543 MB avail; 649 kB/s wr, 2 op/s
2015-09-17 15:40:25.202123 mon.0 [INF] pgmap v226: 192 pgs: 192 active+clean; 216 MB data, 16102 MB used, 8440 MB / 24543 MB avail; 4418 kB/s wr, 18 op/s
2015-09-17 15:40:27.279849 mon.0 [INF] pgmap v227: 192 pgs: 192 active+clean; 226 MB data, 16129 MB used, 8413 MB / 24543 MB avail; 4107 kB/s wr, 17 op/s
2015-09-17 15:40:28.366190 mon.0 [INF] pgmap v228: 192 pgs: 192 active+clean; 233 MB data, 16160 MB used, 8382 MB / 24543 MB avail; 4050 kB/s wr, 17 op/s
2015-09-17 15:40:29.458292 mon.0 [INF] pgmap v229: 192 pgs: 192 active+clean; 233 MB data, 16185 MB used, 8357 MB / 24543 MB avail; 2276 kB/s wr, 11 op/s
2015-09-17 15:40:30.713275 mon.0 [INF] pgmap v230: 192 pgs: 192 active+clean; 241 MB data, 16197 MB used, 8345 MB / 24543 MB avail; 3627 kB/s wr, 20 op/s
2015-09-17 15:41:00.788336 mon.0 [INF] pgmap v231: 192 pgs: 192 active+clean; 241 MB data, 16197 MB used, 8345 MB / 24543 MB avail; 408 B/s wr, 0 op/s
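When you are done testing, the device can be unmounted, unmapped, and the image deleted with standard rbd commands; a minimal cleanup sketch (rbd rm prints a progress line on success):

    [talen@ceph_client ~]$ sudo umount /mnt/ceph-block-device
    [talen@ceph_client ~]$ sudo rbd unmap /dev/rbd0
    [talen@ceph_client ~]$ rbd rm foo
    Removing image: 100% complete...done.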

