
Category: Servers & Storage

2020-07-08 11:17:15

Original post: Ceph Cluster Deployment, by gongping11

Introduction to Ceph

At its core, Ceph is a distributed object storage system. Block device, file system, and object storage services are all exposed through libraries built on that store, so the three kinds of service are unified on a single backend. The foundation is the RADOS cluster; librados is provided on top of RADOS, and librbd and librgw (the RADOS Gateway library) are in turn built on top of librados.
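
As a quick illustration of that layering, the same cluster can be reached through the raw object interface and through the block interface. This is only a sketch, assuming a healthy cluster and that the default rbd pool exists; these client commands are not part of the deployment steps below:

    # store and list raw objects directly through RADOS
    rados -p rbd put demo-object /etc/hosts
    rados -p rbd ls
    # create and list a block image through librbd on top of the same cluster
    rbd create demo-image --size 1024 --pool rbd
    rbd ls rbd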


Components of Ceph


     The foundation of Ceph is RADOS, also referred to as the Ceph cluster, on which all the other services are built. The most basic components of a RADOS cluster are the monitors and the OSDs: the monitors maintain the state of the cluster, while an OSD consists of a storage device plus the daemon that manages it. Deploying a RADOS cluster therefore mostly means deploying monitors and OSDs. For details about Ceph itself, see the official documentation; this post only records the deployment steps.
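
Once a cluster is running, those roles can be inspected directly; a minimal sketch of the standard status commands, for orientation only:

    ceph -s          # overall health, monitor quorum, OSD count
    ceph mon stat    # monitor membership and quorum
    ceph osd tree    # OSDs and their position in the CRUSH hierarchy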

Preparation

The servers used are listed below:
       IP address      hostname   OSD disk
       192.168.1.212   cell01     sdb1
       192.168.1.213   cell02     sdb1
       192.168.1.214   cell03     sdb1
     In this deployment, 192.168.1.212 serves both as a storage node and as the admin node, so most of the steps below are carried out as the ceph user on 192.168.1.212. Every node runs both an OSD and a monitor. The whole deployment is driven by ceph-deploy.

      Ceph here is managed with locally built rpm packages together with a provided yum repo file. The rpms could in principle be installed directly with rpm, but Ceph's dependency chain is complex, so the most convenient approach is to install through yum, which resolves the inter-package dependencies automatically.
First, install the createrepo tool from a software repository on the local network.


  1. [root@cell01 tmp]# yum install createrepo
     After installing createrepo, place the rpm packages in the path that the provided repo file points to, and copy the repo file into /etc/yum.repos.d/. After updating, run yum repolist all to check that the newly added repo shows up.
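
For reference, preparing such a local repo might look roughly like the following; the directory and repo name here are made up for illustration:

    [root@cell01 tmp]# mkdir -p /var/local/ceph-rpms
    [root@cell01 tmp]# cp /tmp/*.rpm /var/local/ceph-rpms/
    [root@cell01 tmp]# createrepo /var/local/ceph-rpms        # build the repodata index
    [root@cell01 tmp]# cat /etc/yum.repos.d/ceph-local.repo
    [ceph-local]
    name=Local Ceph packages
    baseurl=file:///var/local/ceph-rpms
    enabled=1
    gpgcheck=0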


  1. [root@cell01 tmp]# cp xxx.repo /etc/yum.repos.d/xxx.repo
  2. [root@cell01 tmp]# yum repolist all
     Add a user. The root account or any other user would work, but for consistency a dedicated ceph user is added on each server.


  1. [root@cell01 /]# useradd -d /home/ceph -m ceph <<---- add the user
  2. [root@cell01 /]# passwd ceph <<---- set its password
  3. Changing password for user ceph.
  4. New password:
  5. BAD PASSWORD: it is based on a dictionary word
  6. BAD PASSWORD: is too simple
  7. Retype new password:
  8. passwd: all authentication tokens updated successfully.
  9. [root@cell01 /]# echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph <<-------- allow the ceph user to sudo without a password
  10. ceph ALL = (root) NOPASSWD:ALL
  11. [root@cell01 /]# chmod 0440 /etc/sudoers.d/ceph <<---- set the required permissions on the sudoers drop-in
  12. [root@cell01 /]# su - ceph
  13. [ceph@cell01 ~]$ ll
  14. total 0
  15. [ceph@cell01 ~]$ pwd
  16. /home/ceph
  17. [ceph@cell01 ~]$ ll -al
  18. total 24
  19. drwx------ 3 ceph ceph 4096 Sep 1 09:34 .
  20. drwxr-xr-x. 4 root root 4096 Sep 1 09:34 ..
  21. -rw-r--r-- 1 ceph ceph 18 Jul 18 2013 .bash_logout
  22. -rw-r--r-- 1 ceph ceph 176 Jul 18 2013 .bash_profile
  23. -rw-r--r-- 1 ceph ceph 124 Jul 18 2013 .bashrc
  24. drwxr-xr-x 4 ceph ceph 4096 Aug 31 16:29 .mozilla
  25. [ceph@cell01 ~]$ exit
  26. logout
  27. [root@cell01 /]# ssh 192.168.1.213
  28. The authenticity of host '192.168.1.213 (192.168.1.213)' can't be established.
  29. RSA key fingerprint is d5:12:f2:92:34:28:22:06:20:a3:1d:56:9e:cc:d6:b7.
  30. Are you sure you want to continue connecting (yes/no)? yes
  31. Warning: Permanently added '192.168.1.213' (RSA) to the list of known hosts.
  32. root@192.168.1.213's password:
  33. Last login: Mon Aug 31 17:06:49 2015 from 10.45.34.73
  34. [root@cell02 ~]# useradd -d /home/ceph -m ceph
  35. [root@cell02 ~]# passwd ceph
  36. Changing password for user ceph.
  37. New password:
  38. BAD PASSWORD: it is based on a dictionary word
  39. BAD PASSWORD: is too simple
  40. Retype new password:
  41. passwd: all authentication tokens updated successfully.
  42. [root@cell02 ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph
  43. ceph ALL = (root) NOPASSWD:ALL
  44. [root@cell02 ~]# chmod 0440 /etc/sudoers.d/ceph
  45. [root@cell02 ~]# exit
  46. logout
  47. Connection to 192.168.1.213 closed.
  48. [root@cell01 /]# ssh 192.168.1.214
  49. root@192.168.1.214's password:
  50. Last login: Mon Aug 31 16:50:39 2015 from 192.168.1.212
  51. [root@cell03 ~]# useradd -d /home/ceph -m ceph
  52. [root@cell03 ~]# passwd ceph
  53. Changing password for user ceph.
  54. New password:
  55. BAD PASSWORD: it is based on a dictionary word
  56. BAD PASSWORD: is too simple
  57. Retype new password:
  58. passwd: all authentication tokens updated successfully.
  59. [root@cell03 ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph
  60. ceph ALL = (root) NOPASSWD:ALL
  61. [root@cell03 ~]# chmod 0440 /etc/sudoers.d/ceph
  62. [root@cell03 ~]# exit
  63. logout
  64. Connection to 192.168.1.214 closed.
      After adding the user, passwordless ssh between the nodes is required. Since everything is done as the ceph user, run ssh-keygen in the ceph user's home directory to generate the key pair, then copy the public key to the other nodes.


  1. [root@cell01 /]# su - ceph
  2. [ceph@cell01 ~]$ ssh-keygen <<---- generate the key pair (run on the admin node)
  3. Generating public/private rsa key pair.
  4. Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
  5. Created directory '/home/ceph/.ssh'.
  6. Enter passphrase (empty for no passphrase):
  7. Enter same passphrase again:
  8. Your identification has been saved in /home/ceph/.ssh/id_rsa.
  9. Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
  10. The key fingerprint is:
  11. 9b:ea:55:fd:a0:a8:34:18:e0:3d:1f:1e:bb:8c:de:9a ceph@cell01
  12. The key's randomart image is:
  13. +--[ RSA 2048]----+
  14. | |
  15. | |
  16. | . |
  17. | . o . |
  18. | . + o S . o |
  19. | * + = . o |
  20. | . * = . . |
  21. | * * |
  22. | .EoB |
  23. +-----------------+
  24. [ceph@cell01 ~]$ ssh-copy-id ceph@cell02 <<------ copy the public key to the other server nodes
  25. ssh: Could not resolve hostname cell02: Name or service not known <<---- fails because the peer hostnames are not in /etc/hosts yet
  26. [ceph@cell01 ~]$ exit
  27. logout
  28. [root@cell01 /]# vi /etc/hosts <<----- add the host entries
  29. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  30. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  31. 192.168.1.212 cell01 ireadmin
  32. 192.168.1.213 cell02
  33. 192.168.1.214 cell03
  34. ~
  35. "/etc/hosts" 6L, 225C written
  36. [root@cell01 /]# su - ceph
  37. [ceph@cell01 ~]$ ssh-copy-id ceph@cell02 <<------ copy the public key to the other server nodes
  38. The authenticity of host 'cell02 (192.168.1.213)' can't be established.
  39. RSA key fingerprint is d5:12:f2:92:34:28:22:06:20:a3:1d:56:9e:cc:d6:b7.
  40. Are you sure you want to continue connecting (yes/no)? yes
  41. Warning: Permanently added 'cell02,192.168.1.213' (RSA) to the list of known hosts.
  42. ceph@cell02's password:
  43. Now try logging into the machine, with "ssh 'ceph@cell02'", and check in:
  44. .ssh/authorized_keys
  45. to make sure we haven't added extra keys that you weren't expecting.
  46. [ceph@cell01 ~]$ ssh-copy-id ceph@cell03 <<------ copy the public key to the other server nodes
  47. The authenticity of host 'cell03 (192.168.1.214)' can't be established.
  48. RSA key fingerprint is 04:bc:35:fd:e5:3b:dd:d1:3a:7a:15:06:05:b4:95:5e.
  49. Are you sure you want to continue connecting (yes/no)? yes
  50. Warning: Permanently added 'cell03,192.168.1.214' (RSA) to the list of known hosts.
  51. ceph@cell03's password:
  52. Now try logging into the machine, with "ssh 'ceph@cell03'", and check in:
  53. .ssh/authorized_keys
  54. to make sure we haven't added extra keys that you weren't expecting.
  55. [ceph@cell01 ~]$ exit
  56. logout
  57. [root@cell01 /]# vi /etc/sudoers <<----- allow sudo without a tty over ssh: change Defaults requiretty to Defaults:ceph !requiretty
  58. ## Sudoers allows particular users to run various commands as
  59. ## the root user, without needing the root password.
  60. ##
  61. ## Examples are provided at the bottom of the file for collections
  62. ## of related commands, which can then be delegated out to particular
  63. ## users or groups.
  64. ##
  65. ## This file must be edited with the 'visudo' command.
  66. ## Host Aliases
  67. ## Groups of machines. You may prefer to use hostnames (perhaps using
  68. ## wildcards for entire domains) or IP addresses instead.
  69. # Host_Alias FILESERVERS = fs1, fs2
  70. /Defaults
  71. ## Processes
  72. # Cmnd_Alias PROCESSES = /bin/nice, /bin/kill, /usr/bin/kill, /usr/bin/killall
  73. ## Drivers
  74. # Cmnd_Alias DRIVERS = /sbin/modprobe
  75. # Defaults specification
  76. #
  77. # Disable "ssh hostname sudo <cmd>", because it will show the password in clear.
  78. # You have to run "ssh -t hostname sudo <cmd>".
  79. #
  80. Defaults:ceph !requiretty
  81. #
  82. # Refuse to run if unable to disable echo on the tty. This setting should also be
  83. # changed in order to be able to use sudo without a tty. See requiretty above.
  84. #
  85. Defaults !visiblepw
  86. [root@cell01 /]# su - ceph
  87. [ceph@cell01 ~]$ vi ./.ssh/config <<------------ create the ceph user's ssh config file so the nodes can be reached by alias
  88. Host cell01
  89. Hostname 192.168.1.212
  90. User ceph
  91. Host cell02
  92. Hostname 192.168.1.213
  93. User ceph
  94. Host cell03
  95. Hostname 192.168.1.214
  96. User ceph
  97. ~
  98. ~
  99. "./.ssh/config" 10L, 206C written
  100. [ceph@cell01 ~]$ ll ./.ssh/config
  101. -rw-rw-r-- 1 ceph ceph 206 Sep 1 10:46 ./.ssh/config
  102. [ceph@cell01 ~]$ ssh cell02 <<----- log in using the alias
  103. Bad owner or permissions on /home/ceph/.ssh/config
  104. [ceph@cell01 ~]$ chmod 600 /home/ceph/.ssh/config <<------- the config file must be mode 600, otherwise ssh refuses to use it
  105. [ceph@cell01 ~]$ ssh cell02
  106. [ceph@cell02 ~]$ exit
  107. logout
  108. Connection to 192.168.1.213 closed.
  109. [ceph@cell01 ~]$
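
      At this point it is worth confirming that both passwordless ssh and tty-less sudo work from the admin node, since ceph-deploy relies on both (a quick hypothetical check):

    [ceph@cell01 ~]$ ssh cell02 hostname      # should print cell02 without prompting for a password
    [ceph@cell01 ~]$ ssh cell03 sudo whoami   # should print root thanks to NOPASSWD and !requiretty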

      Install the required software: ceph-deploy, ntp, and related components.


  1. [ceph@cell01 ~]$ sudo yum update && sudo yum install ceph-deploy <<---- install ceph-deploy
  2. Loaded plugins: fastestmirror, security
  3. Loading mirror speeds from cached hostfile
  4. Setting up Update Process
  5. No Packages marked for Update
  6. Loaded plugins: fastestmirror, security
  7. Loading mirror speeds from cached hostfile
  8. Setting up Install Process
  9. Package ceph-deploy-1.5.19-0.noarch already installed and latest version
  10. Nothing to do
  11. [ceph@cell01 ~]$ sudo yum install ntp ntpupdate ntp-doc
  12. Loaded plugins: fastestmirror, security
  13. Loading mirror speeds from cached hostfile
  14. Setting up Install Process
  15. Package ntp-4.2.6p5-1.el6.centos.x86_64 already installed and latest version
  16. No package ntpupdate available.
  17. Resolving Dependencies
  18. --> Running transaction check
  19. ---> Package ntp-doc.noarch 0:4.2.6p5-1.el6.centos will be installed
  20. --> Finished Dependency Resolution
  21. Dependencies Resolved
  22. =========================================================================================================
  23. Package Arch Version Repository Size
  24. =========================================================================================================
  25. Installing:
  26. ntp-doc noarch 4.2.6p5-1.el6.centos addons 1.0 M
  27. Transaction Summary
  28. =========================================================================================================
  29. Install 1 Package(s)
  30. Total download size: 1.0 M
  31. Installed size: 1.6 M
  32. Is this ok [y/N]: y
  33. Downloading Packages:
  34. ntp-doc-4.2.6p5-1.el6.centos.noarch.rpm | 1.0 MB 00:00
  35. Running rpm_check_debug
  36. Running Transaction Test
  37. Transaction Test Succeeded
  38. Running Transaction
  39. Installing : ntp-doc-4.2.6p5-1.el6.centos.noarch 1/1
  40. Verifying : ntp-doc-4.2.6p5-1.el6.centos.noarch 1/1
  41. Installed:
  42. ntp-doc.noarch 0:4.2.6p5-1.el6.centos
  43. Complete!
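
      Note that the package is actually named ntpdate, not ntpupdate (hence the "No package ntpupdate available" message above). Since monitors are sensitive to clock skew, it is also worth starting and enabling ntpd on every node, roughly:

    [ceph@cell01 ~]$ sudo service ntpd start
    [ceph@cell01 ~]$ sudo chkconfig ntpd on
    [ceph@cell01 ~]$ ntpq -p    # verify that the time sources are reachable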
      With the preparation above complete, the RADOS cluster can now be deployed.

Monitor deployment

     First, on the admin node create the directory /home/ceph/my-cluster (mkdir my-cluster) and enter it (cd ./my-cluster); then create the cluster and initialize the monitors.


  1. [ceph@cell01 my-cluster]$ cd ./my-cluster <<----- an empty working directory on the admin node
  2. [ceph@cell01 my-cluster]$ ll
  3. total 0
  4. [ceph@cell01 my-cluster]$ ceph-deploy new cell01 cell02 cell03 <<----- create the cluster; the arguments are the nodes that will run monitors
  5. [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
  6. [ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy new cell01 cell02 cell03
  7. [ceph_deploy.new][DEBUG ] Creating new cluster named ceph
  8. [ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
  9. [cell01][DEBUG ] connection detected need for sudo
  10. [cell01][DEBUG ] connected to host: cell01
  11. [cell01][DEBUG ] detect platform information from remote host
  12. [cell01][DEBUG ] detect machine type
  13. [cell01][DEBUG ] find the location of an executable
  14. [cell01][INFO ] Running command: sudo /sbin/ip link show
  15. [cell01][INFO ] Running command: sudo /sbin/ip addr show
  16. [cell01][DEBUG ] IP addresses found: ['192.168.1.212']
  17. [ceph_deploy.new][DEBUG ] Resolving host cell01
  18. [ceph_deploy.new][DEBUG ] Monitor cell01 at 192.168.1.212
  19. [ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
  20. [cell02][DEBUG ] connected to host: cell01
  21. [cell02][INFO ] Running command: ssh -CT -o BatchMode=yes cell02
  22. [cell02][DEBUG ] connection detected need for sudo
  23. [cell02][DEBUG ] connected to host: cell02
  24. [cell02][DEBUG ] detect platform information from remote host
  25. [cell02][DEBUG ] detect machine type
  26. [cell02][DEBUG ] find the location of an executable
  27. [cell02][INFO ] Running command: sudo /sbin/ip link show
  28. [cell02][INFO ] Running command: sudo /sbin/ip addr show
  29. [cell02][DEBUG ] IP addresses found: ['172.16.10.213', '192.168.1.213']
  30. [ceph_deploy.new][DEBUG ] Resolving host cell02
  31. [ceph_deploy.new][DEBUG ] Monitor cell02 at 192.168.1.213
  32. [ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
  33. [cell03][DEBUG ] connected to host: cell01
  34. [cell03][INFO ] Running command: ssh -CT -o BatchMode=yes cell03
  35. [cell03][DEBUG ] connection detected need for sudo
  36. [cell03][DEBUG ] connected to host: cell03
  37. [cell03][DEBUG ] detect platform information from remote host
  38. [cell03][DEBUG ] detect machine type
  39. [cell03][DEBUG ] find the location of an executable
  40. [cell03][INFO ] Running command: sudo /sbin/ip link show
  41. [cell03][INFO ] Running command: sudo /sbin/ip addr show
  42. [cell03][DEBUG ] IP addresses found: ['192.168.1.214']
  43. [ceph_deploy.new][DEBUG ] Resolving host cell03
  44. [ceph_deploy.new][DEBUG ] Monitor cell03 at 192.168.1.214
  45. [ceph_deploy.new][DEBUG ] Monitor initial members are ['cell01', 'cell02', 'cell03']
  46. [ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.212', '192.168.1.213', '192.168.1.214']
  47. [ceph_deploy.new][DEBUG ] Creating a random mon key...
  48. [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
  49. [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
  50. Error in sys.exitfunc:
  51. [ceph@cell01 my-cluster]$ ll <<------------- the cluster-wide config file, the monitor keyring, etc. have been generated
  52. total 12
  53. -rw-rw-r-- 1 ceph ceph 276 Sep 1 17:09 ceph.conf
  54. -rw-rw-r-- 1 ceph ceph 2689 Sep 1 17:09 ceph.log
  55. -rw-rw-r-- 1 ceph ceph 73 Sep 1 17:09 ceph.mon.keyring
  56. [ceph@cell01 my-cluster]$ ceph-deploy --overwrite-conf mon create cell01 cell02 cell03 <<------- create the monitors; the syntax is mon create {monitor-node1} [{monitor-node2} ...]
  57. [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
  58. [ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy --overwrite-conf mon create cell01 cell02 cell03
  59. [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts cell01 cell02 cell03
  60. [ceph_deploy.mon][DEBUG ] detecting platform for host cell01 ...
  61. [cell01][DEBUG ] connection detected need for sudo
  62. [cell01][DEBUG ] connected to host: cell01 <<---------- start deploying the monitor on cell01
  63. [cell01][DEBUG ] detect platform information from remote host
  64. [cell01][DEBUG ] detect machine type
  65. [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
  66. [cell01][DEBUG ] determining if provided host has same hostname in remote
  67. [cell01][DEBUG ] get remote short hostname
  68. [cell01][DEBUG ] deploying mon to cell01 <<---- deploy the monitor
  69. [cell01][DEBUG ] get remote short hostname
  70. [cell01][DEBUG ] remote hostname: cell01
  71. [cell01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf <<--- write the monitor's config file
  72. [cell01][DEBUG ] create the mon path if it does not exist
  73. [cell01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-cell01/done
  74. [cell01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-cell01/done
  75. [cell01][INFO ] creating tmp path: /var/lib/ceph/tmp
  76. [cell01][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-cell01.mon.keyring
  77. [cell01][DEBUG ] create the monitor keyring file
  78. [cell01][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i cell01 --keyring /var/lib/ceph/tmp/ceph-cell01.mon.keyring <<--- create the mon filesystem; the resulting directory mainly holds a db, a keyring, etc.
  79. [cell01][DEBUG ] ceph-mon: mon.noname-a 192.168.1.212:6789/0 is local, renaming to mon.cell01 <<---- rename the mon
  80. [cell01][DEBUG ] ceph-mon: set fsid to 9061096f-d9f9-4946-94f1-296ab5080a97 <<---- all monitors share the same fsid
  81. [cell01][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-cell01 for mon.cell01
  82. [cell01][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-cell01.mon.keyring <<---- remove the temporary keyring file
  83. [cell01][DEBUG ] create a done file to avoid re-doing the mon deployment
  84. [cell01][DEBUG ] create the init path if it does not exist
  85. [cell01][DEBUG ] locating the `service` executable...
  86. [cell01][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.cell01 <<---- start the monitor
  87. [cell01][DEBUG ] === mon.cell01 ===
  88. [cell01][DEBUG ] Starting Ceph mon.cell01 on cell01...
  89. [cell01][DEBUG ] Starting ceph-create-keys on cell01...
  90. [cell01][WARNIN] No data was received after 7 seconds, disconnecting...
  91. [cell01][INFO ] Running command: sudo chkconfig ceph on
  92. [cell01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell01.asok mon_status <<--- query the current monitor status
  93. [cell01][DEBUG ] ********************************************************************************
  94. [cell01][DEBUG ] status for monitor: mon.cell01
  95. [cell01][DEBUG ] {
  96. [cell01][DEBUG ] "election_epoch": 0,
  97. [cell01][DEBUG ] "extra_probe_peers": [
  98. [cell01][DEBUG ] "192.168.1.213:6789/0",
  99. [cell01][DEBUG ] "192.168.1.214:6789/0"
  100. [cell01][DEBUG ] ],
  101. [cell01][DEBUG ] "monmap": {
  102. [cell01][DEBUG ] "created": "0.000000",
  103. [cell01][DEBUG ] "epoch": 0,
  104. [cell01][DEBUG ] "fsid": "9061096f-d9f9-4946-94f1-296ab5080a97",
  105. [cell01][DEBUG ] "modified": "0.000000",
  106. [cell01][DEBUG ] "mons": [
  107. [cell01][DEBUG ] {
  108. [cell01][DEBUG ] "addr": "192.168.1.212:6789/0",
  109. [cell01][DEBUG ] "name": "cell01",
  110. [cell01][DEBUG ] "rank": 0
  111. [cell01][DEBUG ] },
  112. [cell01][DEBUG ] {
  113. [cell01][DEBUG ] "addr": "0.0.0.0:0/1",
  114. [cell01][DEBUG ] "name": "cell02",
  115. [cell01][DEBUG ] "rank": 1
  116. [cell01][DEBUG ] },
  117. [cell01][DEBUG ] {
  118. [cell01][DEBUG ] "addr": "0.0.0.0:0/2",
  119. [cell01][DEBUG ] "name": "cell03",
  120. [cell01][DEBUG ] "rank": 2
  121. [cell01][DEBUG ] }
  122. [cell01][DEBUG ] ]
  123. [cell01][DEBUG ] },
  124. [cell01][DEBUG ] "name": "cell01",
  125. [cell01][DEBUG ] "outside_quorum": [
  126. [cell01][DEBUG ] "cell01"
  127. [cell01][DEBUG ] ],
  128. [cell01][DEBUG ] "quorum": [],
  129. [cell01][DEBUG ] "rank": 0,
  130. [cell01][DEBUG ] "state": "probing",
  131. [cell01][DEBUG ] "sync_provider": []
  132. [cell01][DEBUG ] }
  133. [cell01][DEBUG ] ********************************************************************************
  134. [cell01][INFO ] monitor: mon.cell01 is running
  135. [cell01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell01.asok mon_status
  136. [ceph_deploy.mon][DEBUG ] detecting platform for host cell02 ...
  137. [cell02][DEBUG ] connection detected need for sudo
  138. [cell02][DEBUG ] connected to host: cell02 <<------ deploy the monitor on cell02
  139. [cell02][DEBUG ] detect platform information from remote host
  140. [cell02][DEBUG ] detect machine type
  141. [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
  142. [cell02][DEBUG ] determining if provided host has same hostname in remote
  143. [cell02][DEBUG ] get remote short hostname
  144. [cell02][DEBUG ] deploying mon to cell02
  145. [cell02][DEBUG ] get remote short hostname
  146. [cell02][DEBUG ] remote hostname: cell02
  147. [cell02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  148. [cell02][DEBUG ] create the mon path if it does not exist
  149. [cell02][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-cell02/done
  150. [cell02][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-cell02/done
  151. [cell02][INFO ] creating tmp path: /var/lib/ceph/tmp
  152. [cell02][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-cell02.mon.keyring
  153. [cell02][DEBUG ] create the monitor keyring file
  154. [cell02][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i cell02 --keyring /var/lib/ceph/tmp/ceph-cell02.mon.keyring <<----- run mkfs for the mon in its own directory
  155. [cell02][DEBUG ] ceph-mon: mon.noname-b 192.168.1.213:6789/0 is local, renaming to mon.cell02
  156. [cell02][DEBUG ] ceph-mon: set fsid to 9061096f-d9f9-4946-94f1-296ab5080a97
  157. [cell02][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-cell02 for mon.cell02 <<----- the mon filesystem has been created; the directory mainly holds a db, a keyring, etc.
  158. [cell02][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-cell02.mon.keyring
  159. [cell02][DEBUG ] create a done file to avoid re-doing the mon deployment
  160. [cell02][DEBUG ] create the init path if it does not exist
  161. [cell02][DEBUG ] locating the `service` executable...
  162. [cell02][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.cell02
  163. [cell02][DEBUG ] === mon.cell02 ===
  164. [cell02][DEBUG ] Starting Ceph mon.cell02 on cell02...
  165. [cell02][DEBUG ] Starting ceph-create-keys on cell02...
  166. [cell02][WARNIN] No data was received after 7 seconds, disconnecting...
  167. [cell02][INFO ] Running command: sudo chkconfig ceph on
  168. [cell02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
  169. [cell02][DEBUG ] ********************************************************************************
  170. [cell02][DEBUG ] status for monitor: mon.cell02
  171. [cell02][DEBUG ] {
  172. [cell02][DEBUG ] "election_epoch": 1,
  173. [cell02][DEBUG ] "extra_probe_peers": [
  174. [cell02][DEBUG ] "192.168.1.212:6789/0",
  175. [cell02][DEBUG ] "192.168.1.214:6789/0"
  176. [cell02][DEBUG ] ],
  177. [cell02][DEBUG ] "monmap": {
  178. [cell02][DEBUG ] "created": "0.000000",
  179. [cell02][DEBUG ] "epoch": 0,
  180. [cell02][DEBUG ] "fsid": "9061096f-d9f9-4946-94f1-296ab5080a97",
  181. [cell02][DEBUG ] "modified": "0.000000",
  182. [cell02][DEBUG ] "mons": [
  183. [cell02][DEBUG ] {
  184. [cell02][DEBUG ] "addr": "192.168.1.212:6789/0",
  185. [cell02][DEBUG ] "name": "cell01",
  186. [cell02][DEBUG ] "rank": 0
  187. [cell02][DEBUG ] },
  188. [cell02][DEBUG ] {
  189. [cell02][DEBUG ] "addr": "192.168.1.213:6789/0",
  190. [cell02][DEBUG ] "name": "cell02",
  191. [cell02][DEBUG ] "rank": 1
  192. [cell02][DEBUG ] },
  193. [cell02][DEBUG ] {
  194. [cell02][DEBUG ] "addr": "0.0.0.0:0/2",
  195. [cell02][DEBUG ] "name": "cell03",
  196. [cell02][DEBUG ] "rank": 2
  197. [cell02][DEBUG ] }
  198. [cell02][DEBUG ] ]
  199. [cell02][DEBUG ] },
  200. [cell02][DEBUG ] "name": "cell02",
  201. [cell02][DEBUG ] "outside_quorum": [],
  202. [cell02][DEBUG ] "quorum": [],
  203. [cell02][DEBUG ] "rank": 1,
  204. [cell02][DEBUG ] "state": "electing",
  205. [cell02][DEBUG ] "sync_provider": []
  206. [cell02][DEBUG ] }
  207. [cell02][DEBUG ] ********************************************************************************
  208. [cell02][INFO ] monitor: mon.cell02 is running
  209. [cell02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
  210. [ceph_deploy.mon][DEBUG ] detecting platform for host cell03 ...
  211. [cell03][DEBUG ] connection detected need for sudo
  212. [cell03][DEBUG ] connected to host: cell03 <<------ deploy the monitor on cell03
  213. [cell03][DEBUG ] detect platform information from remote host
  214. [cell03][DEBUG ] detect machine type
  215. [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
  216. [cell03][DEBUG ] determining if provided host has same hostname in remote
  217. [cell03][DEBUG ] get remote short hostname
  218. [cell03][DEBUG ] deploying mon to cell03
  219. [cell03][DEBUG ] get remote short hostname
  220. [cell03][DEBUG ] remote hostname: cell03
  221. [cell03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  222. [cell03][DEBUG ] create the mon path if it does not exist
  223. [cell03][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-cell03/done
  224. [cell03][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-cell03/done
  225. [cell03][INFO ] creating tmp path: /var/lib/ceph/tmp
  226. [cell03][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-cell03.mon.keyring
  227. [cell03][DEBUG ] create the monitor keyring file
  228. [cell03][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i cell03 --keyring /var/lib/ceph/tmp/ceph-cell03.mon.keyring <<-----mkfs
  229. [cell03][DEBUG ] ceph-mon: mon.noname-c 192.168.1.214:6789/0 is local, renaming to mon.cell03
  230. [cell03][DEBUG ] ceph-mon: set fsid to 9061096f-d9f9-4946-94f1-296ab5080a97
  231. [cell03][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-cell03 for mon.cell03
  232. [cell03][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-cell03.mon.keyring
  233. [cell03][DEBUG ] create a done file to avoid re-doing the mon deployment
  234. [cell03][DEBUG ] create the init path if it does not exist
  235. [cell03][DEBUG ] locating the `service` executable...
  236. [cell03][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.cell03
  237. [cell03][DEBUG ] === mon.cell03 ===
  238. [cell03][DEBUG ] Starting Ceph mon.cell03 on cell03...
  239. [cell03][DEBUG ] Starting ceph-create-keys on cell03...
  240. [cell03][WARNIN] No data was received after 7 seconds, disconnecting...
  241. [cell03][INFO ] Running command: sudo chkconfig ceph on
  242. [cell03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell03.asok mon_status
  243. [cell03][DEBUG ] ********************************************************************************
  244. [cell03][DEBUG ] status for monitor: mon.cell03
  245. [cell03][DEBUG ] {
  246. [cell03][DEBUG ] "election_epoch": 5,
  247. [cell03][DEBUG ] "extra_probe_peers": [
  248. [cell03][DEBUG ] "192.168.1.212:6789/0",
  249. [cell03][DEBUG ] "192.168.1.213:6789/0"
  250. [cell03][DEBUG ] ],
  251. [cell03][DEBUG ] "monmap": {
  252. [cell03][DEBUG ] "created": "0.000000",
  253. [cell03][DEBUG ] "epoch": 1,
  254. [cell03][DEBUG ] "fsid": "9061096f-d9f9-4946-94f1-296ab5080a97",
  255. [cell03][DEBUG ] "modified": "0.000000",
  256. [cell03][DEBUG ] "mons": [
  257. [cell03][DEBUG ] {
  258. [cell03][DEBUG ] "addr": "192.168.1.212:6789/0",
  259. [cell03][DEBUG ] "name": "cell01",
  260. [cell03][DEBUG ] "rank": 0
  261. [cell03][DEBUG ] },
  262. [cell03][DEBUG ] {
  263. [cell03][DEBUG ] "addr": "192.168.1.213:6789/0",
  264. [cell03][DEBUG ] "name": "cell02",
  265. [cell03][DEBUG ] "rank": 1
  266. [cell03][DEBUG ] },
  267. [cell03][DEBUG ] {
  268. [cell03][DEBUG ] "addr": "192.168.1.214:6789/0",
  269. [cell03][DEBUG ] "name": "cell03",
  270. [cell03][DEBUG ] "rank": 2
  271. [cell03][DEBUG ] }
  272. [cell03][DEBUG ] ]
  273. [cell03][DEBUG ] },
  274. [cell03][DEBUG ] "name": "cell03",
  275. [cell03][DEBUG ] "outside_quorum": [],
  276. [cell03][DEBUG ] "quorum": [],
  277. [cell03][DEBUG ] "rank": 2,
  278. [cell03][DEBUG ] "state": "electing",
  279. [cell03][DEBUG ] "sync_provider": []
  280. [cell03][DEBUG ] }
  281. [cell03][DEBUG ] ********************************************************************************
  282. [cell03][INFO ] monitor: mon.cell03 is running
  283. [cell03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell03.asok mon_status
  284. Error in sys.exitfunc:
  285. [ceph@cell01 my-cluster]$ ll
  286. total 24
  287. -rw-rw-r-- 1 ceph ceph 276 Sep 1 17:09 ceph.conf
  288. -rw-rw-r-- 1 ceph ceph 15344 Sep 1 17:10 ceph.log
  289. -rw-rw-r-- 1 ceph ceph 73 Sep 1 17:09 ceph.mon.keyring
  290. [ceph@cell01 my-cluster]$ ceph-deploy mon create-initial <<---- initialize the monitors according to the deployed configuration file
  291. [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
  292. [ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy mon create-initial
  293. [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts cell01 cell02 cell03
  294. [ceph_deploy.mon][DEBUG ] detecting platform for host cell01 ...
  295. [cell01][DEBUG ] connection detected need for sudo
  296. [cell01][DEBUG ] connected to host: cell01
  297. [cell01][DEBUG ] detect platform information from remote host
  298. [cell01][DEBUG ] detect machine type
  299. [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
  300. [cell01][DEBUG ] determining if provided host has same hostname in remote
  301. [cell01][DEBUG ] get remote short hostname
  302. [cell01][DEBUG ] deploying mon to cell01
  303. [cell01][DEBUG ] get remote short hostname
  304. [cell01][DEBUG ] remote hostname: cell01
  305. [cell01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  306. [cell01][DEBUG ] create the mon path if it does not exist
  307. [cell01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-cell01/done
  308. [cell01][DEBUG ] create a done file to avoid re-doing the mon deployment
  309. [cell01][DEBUG ] create the init path if it does not exist
  310. [cell01][DEBUG ] locating the `service` executable...
  311. [cell01][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.cell01
  312. [cell01][DEBUG ] === mon.cell01 ===
  313. [cell01][DEBUG ] Starting Ceph mon.cell01 on cell01...already running
  314. [cell01][INFO ] Running command: sudo chkconfig ceph on
  315. [cell01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell01.asok mon_status
  316. [cell01][DEBUG ] ********************************************************************************
  317. [cell01][DEBUG ] status for monitor: mon.cell01
  318. [cell01][DEBUG ] {
  319. [cell01][DEBUG ] "election_epoch": 8,
  320. [cell01][DEBUG ] "extra_probe_peers": [
  321. [cell01][DEBUG ] "192.168.1.213:6789/0",
  322. [cell01][DEBUG ] "192.168.1.214:6789/0"
  323. [cell01][DEBUG ] ],
  324. [cell01][DEBUG ] "monmap": {
  325. [cell01][DEBUG ] "created": "0.000000",
  326. [cell01][DEBUG ] "epoch": 1,
  327. [cell01][DEBUG ] "fsid": "9061096f-d9f9-4946-94f1-296ab5080a97",
  328. [cell01][DEBUG ] "modified": "0.000000",
  329. [cell01][DEBUG ] "mons": [
  330. [cell01][DEBUG ] {
  331. [cell01][DEBUG ] "addr": "192.168.1.212:6789/0",
  332. [cell01][DEBUG ] "name": "cell01",
  333. [cell01][DEBUG ] "rank": 0
  334. [cell01][DEBUG ] },
  335. [cell01][DEBUG ] {
  336. [cell01][DEBUG ] "addr": "192.168.1.213:6789/0",
  337. [cell01][DEBUG ] "name": "cell02",
  338. [cell01][DEBUG ] "rank": 1
  339. [cell01][DEBUG ] },
  340. [cell01][DEBUG ] {
  341. [cell01][DEBUG ] "addr": "192.168.1.214:6789/0",
  342. [cell01][DEBUG ] "name": "cell03",
  343. [cell01][DEBUG ] "rank": 2
  344. [cell01][DEBUG ] }
  345. [cell01][DEBUG ] ]
  346. [cell01][DEBUG ] },
  347. [cell01][DEBUG ] "name": "cell01",
  348. [cell01][DEBUG ] "outside_quorum": [],
  349. [cell01][DEBUG ] "quorum": [
  350. [cell01][DEBUG ] 0,
  351. [cell01][DEBUG ] 1,
  352. [cell01][DEBUG ] 2
  353. [cell01][DEBUG ] ],
  354. [cell01][DEBUG ] "rank": 0,
  355. [cell01][DEBUG ] "state": "leader",
  356. [cell01][DEBUG ] "sync_provider": []
  357. [cell01][DEBUG ] }
  358. [cell01][DEBUG ] ********************************************************************************
  359. [cell01][INFO ] monitor: mon.cell01 is running
  360. [cell01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell01.asok mon_status
  361. [ceph_deploy.mon][DEBUG ] detecting platform for host cell02 ...
  362. [cell02][DEBUG ] connection detected need for sudo
  363. [cell02][DEBUG ] connected to host: cell02
  364. [cell02][DEBUG ] detect platform information from remote host
  365. [cell02][DEBUG ] detect machine type
  366. [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
  367. [cell02][DEBUG ] determining if provided host has same hostname in remote
  368. [cell02][DEBUG ] get remote short hostname
  369. [cell02][DEBUG ] deploying mon to cell02
  370. [cell02][DEBUG ] get remote short hostname
  371. [cell02][DEBUG ] remote hostname: cell02
  372. [cell02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  373. [cell02][DEBUG ] create the mon path if it does not exist
  374. [cell02][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-cell02/done
  375. [cell02][DEBUG ] create a done file to avoid re-doing the mon deployment
  376. [cell02][DEBUG ] create the init path if it does not exist
  377. [cell02][DEBUG ] locating the `service` executable...
  378. [cell02][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.cell02
  379. [cell02][DEBUG ] === mon.cell02 ===
  380. [cell02][DEBUG ] Starting Ceph mon.cell02 on cell02...already running
  381. [cell02][INFO ] Running command: sudo chkconfig ceph on
  382. [cell02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
  383. [cell02][DEBUG ] ********************************************************************************
  384. [cell02][DEBUG ] status for monitor: mon.cell02
  385. [cell02][DEBUG ] {
  386. [cell02][DEBUG ] "election_epoch": 8,
  387. [cell02][DEBUG ] "extra_probe_peers": [
  388. [cell02][DEBUG ] "192.168.1.212:6789/0",
  389. [cell02][DEBUG ] "192.168.1.214:6789/0"
  390. [cell02][DEBUG ] ],
  391. [cell02][DEBUG ] "monmap": {
  392. [cell02][DEBUG ] "created": "0.000000",
  393. [cell02][DEBUG ] "epoch": 1,
  394. [cell02][DEBUG ] "fsid": "9061096f-d9f9-4946-94f1-296ab5080a97",
  395. [cell02][DEBUG ] "modified": "0.000000",
  396. [cell02][DEBUG ] "mons": [
  397. [cell02][DEBUG ] {
  398. [cell02][DEBUG ] "addr": "192.168.1.212:6789/0",
  399. [cell02][DEBUG ] "name": "cell01",
  400. [cell02][DEBUG ] "rank": 0
  401. [cell02][DEBUG ] },
  402. [cell02][DEBUG ] {
  403. [cell02][DEBUG ] "addr": "192.168.1.213:6789/0",
  404. [cell02][DEBUG ] "name": "cell02",
  405. [cell02][DEBUG ] "rank": 1
  406. [cell02][DEBUG ] },
  407. [cell02][DEBUG ] {
  408. [cell02][DEBUG ] "addr": "192.168.1.214:6789/0",
  409. [cell02][DEBUG ] "name": "cell03",
  410. [cell02][DEBUG ] "rank": 2
  411. [cell02][DEBUG ] }
  412. [cell02][DEBUG ] ]
  413. [cell02][DEBUG ] },
  414. [cell02][DEBUG ] "name": "cell02",
  415. [cell02][DEBUG ] "outside_quorum": [],
  416. [cell02][DEBUG ] "quorum": [
  417. [cell02][DEBUG ] 0,
  418. [cell02][DEBUG ] 1,
  419. [cell02][DEBUG ] 2
  420. [cell02][DEBUG ] ],
  421. [cell02][DEBUG ] "rank": 1,
  422. [cell02][DEBUG ] "state": "peon",
  423. [cell02][DEBUG ] "sync_provider": []
  424. [cell02][DEBUG ] }
  425. [cell02][DEBUG ] ********************************************************************************
  426. [cell02][INFO ] monitor: mon.cell02 is running
  427. [cell02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
  428. [ceph_deploy.mon][DEBUG ] detecting platform for host cell03 ...
  429. [cell03][DEBUG ] connection detected need for sudo
  430. [cell03][DEBUG ] connected to host: cell03
  431. [cell03][DEBUG ] detect platform information from remote host
  432. [cell03][DEBUG ] detect machine type
  433. [ceph_deploy.mon][INFO ] distro info: CentOS 6.5 Final
  434. [cell03][DEBUG ] determining if provided host has same hostname in remote
  435. [cell03][DEBUG ] get remote short hostname
  436. [cell03][DEBUG ] deploying mon to cell03
  437. [cell03][DEBUG ] get remote short hostname
  438. [cell03][DEBUG ] remote hostname: cell03
  439. [cell03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  440. [cell03][DEBUG ] create the mon path if it does not exist
  441. [cell03][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-cell03/done
  442. [cell03][DEBUG ] create a done file to avoid re-doing the mon deployment
  443. [cell03][DEBUG ] create the init path if it does not exist
  444. [cell03][DEBUG ] locating the `service` executable...
  445. [cell03][INFO ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.cell03
  446. [cell03][DEBUG ] === mon.cell03 ===
  447. [cell03][DEBUG ] Starting Ceph mon.cell03 on cell03...already running
  448. [cell03][INFO ] Running command: sudo chkconfig ceph on
  449. [cell03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell03.asok mon_status
  450. [cell03][DEBUG ] ********************************************************************************
  451. [cell03][DEBUG ] status for monitor: mon.cell03
  452. [cell03][DEBUG ] {
  453. [cell03][DEBUG ] "election_epoch": 8,
  454. [cell03][DEBUG ] "extra_probe_peers": [
  455. [cell03][DEBUG ] "192.168.1.212:6789/0",
  456. [cell03][DEBUG ] "192.168.1.213:6789/0"
  457. [cell03][DEBUG ] ],
  458. [cell03][DEBUG ] "monmap": {
  459. [cell03][DEBUG ] "created": "0.000000",
  460. [cell03][DEBUG ] "epoch": 1,
  461. [cell03][DEBUG ] "fsid": "9061096f-d9f9-4946-94f1-296ab5080a97",
  462. [cell03][DEBUG ] "modified": "0.000000",
  463. [cell03][DEBUG ] "mons": [
  464. [cell03][DEBUG ] {
  465. [cell03][DEBUG ] "addr": "192.168.1.212:6789/0",
  466. [cell03][DEBUG ] "name": "cell01",
  467. [cell03][DEBUG ] "rank": 0
  468. [cell03][DEBUG ] },
  469. [cell03][DEBUG ] {
  470. [cell03][DEBUG ] "addr": "192.168.1.213:6789/0",
  471. [cell03][DEBUG ] "name": "cell02",
  472. [cell03][DEBUG ] "rank": 1
  473. [cell03][DEBUG ] },
  474. [cell03][DEBUG ] {
  475. [cell03][DEBUG ] "addr": "192.168.1.214:6789/0",
  476. [cell03][DEBUG ] "name": "cell03",
  477. [cell03][DEBUG ] "rank": 2
  478. [cell03][DEBUG ] }
  479. [cell03][DEBUG ] ]
  480. [cell03][DEBUG ] },
  481. [cell03][DEBUG ] "name": "cell03",
  482. [cell03][DEBUG ] "outside_quorum": [],
  483. [cell03][DEBUG ] "quorum": [
  484. [cell03][DEBUG ] 0,
  485. [cell03][DEBUG ] 1,
  486. [cell03][DEBUG ] 2
  487. [cell03][DEBUG ] ],
  488. [cell03][DEBUG ] "rank": 2,
  489. [cell03][DEBUG ] "state": "peon",
  490. [cell03][DEBUG ] "sync_provider": []
  491. [cell03][DEBUG ] }
  492. [cell03][DEBUG ] ********************************************************************************
  493. [cell03][INFO ] monitor: mon.cell03 is running
  494. [cell03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell03.asok mon_status
  495. [ceph_deploy.mon][INFO ] processing monitor mon.cell01
  496. [cell01][DEBUG ] connection detected need for sudo
  497. [cell01][DEBUG ] connected to host: cell01
  498. [cell01][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell01.asok mon_status
  499. [ceph_deploy.mon][INFO ] mon.cell01 monitor has reached quorum!
  500. [ceph_deploy.mon][INFO ] processing monitor mon.cell02
  501. [cell02][DEBUG ] connection detected need for sudo
  502. [cell02][DEBUG ] connected to host: cell02
  503. [cell02][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
  504. [ceph_deploy.mon][INFO ] mon.cell02 monitor has reached quorum!
  505. [ceph_deploy.mon][INFO ] processing monitor mon.cell03
  506. [cell03][DEBUG ] connection detected need for sudo
  507. [cell03][DEBUG ] connected to host: cell03
  508. [cell03][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell03.asok mon_status
  509. [ceph_deploy.mon][INFO ] mon.cell03 monitor has reached quorum!
  510. [ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
  511. [ceph_deploy.mon][INFO ] Running gatherkeys...
  512. [ceph_deploy.gatherkeys][DEBUG ] Checking cell01 for /etc/ceph/ceph.client.admin.keyring
  513. [cell01][DEBUG ] connection detected need for sudo
  514. [cell01][DEBUG ] connected to host: cell01
  515. [cell01][DEBUG ] detect platform information from remote host
  516. [cell01][DEBUG ] detect machine type
  517. [cell01][DEBUG ] fetch remote file
  518. [ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from cell01.
  519. [ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring // the keyring has been retrieved
  520. [ceph_deploy.gatherkeys][DEBUG ] Checking cell01 for /var/lib/ceph/bootstrap-osd/ceph.keyring
  521. [cell01][DEBUG ] connection detected need for sudo
  522. [cell01][DEBUG ] connected to host: cell01
  523. [cell01][DEBUG ] detect platform information from remote host
  524. [cell01][DEBUG ] detect machine type
  525. [cell01][DEBUG ] fetch remote file
  526. [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from cell01.
  527. [ceph_deploy.gatherkeys][DEBUG ] Checking cell01 for /var/lib/ceph/bootstrap-mds/ceph.keyring
  528. [cell01][DEBUG ] connection detected need for sudo
  529. [cell01][DEBUG ] connected to host: cell01
  530. [cell01][DEBUG ] detect platform information from remote host
  531. [cell01][DEBUG ] detect machine type
  532. [cell01][DEBUG ] fetch remote file
  533. [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from cell01.
  534. Error in sys.exitfunc:
  535. [ceph@cell01 my-cluster]$ ll <<----- the authentication keyrings gathered from the nodes
  536. total 48
  537. -rw-rw-r-- 1 ceph ceph 71 Sep 1 17:11 ceph.bootstrap-mds.keyring
  538. -rw-rw-r-- 1 ceph ceph 71 Sep 1 17:11 ceph.bootstrap-osd.keyring
  539. -rw-rw-r-- 1 ceph ceph 63 Sep 1 17:11 ceph.client.admin.keyring
  540. -rw-rw-r-- 1 ceph ceph 276 Sep 1 17:09 ceph.conf
  541. -rw-rw-r-- 1 ceph ceph 28047 Sep 1 17:11 ceph.log
  542. -rw-rw-r-- 1 ceph ceph 73 Sep 1 17:09 ceph.mon.keyring
  543. [ceph@cell01 my-cluster]$ ll /var/lib/ceph/
  544. bootstrap-mds/ bootstrap-osd/ mon/ tmp/
  545. [ceph@cell01 my-cluster]$ ll /var/lib/ceph/mon/ceph-cell01/ <<---- files generated during the mon mkfs step
  546. done keyring store.db/ sysvinit
  547. [ceph@cell01 my-cluster]$ ll /var/lib/ceph/mon/ceph-cell01/
  548. done keyring store.db/ sysvinit
  549. [ceph@cell01 my-cluster]$ sudo ceph daemon mon.`hostname` mon_status <<---- check the current monitor status
  550. { "name": "cell01",
  551. "rank": 0,
  552. "state": "leader",
  553. "election_epoch": 6,
  554. "quorum": [
  555. 0,
  556. 1,
  557. 2],
  558. "outside_quorum": [],
  559. "extra_probe_peers": [
  560. "192.168.1.213:6789\/0",
  561. "192.168.1.214:6789\/0"],
  562. "sync_provider": [],
  563. "monmap": { "epoch": 2,
  564. "fsid": "32a0c6a4-7076-4c31-a625-a73480746d5e",
  565. "modified": "2015-09-02 16:01:58.239429",
  566. "created": "0.000000",
  567. "mons": [
  568. { "rank": 0,
  569. "name": "cell01",
  570. "addr": "192.168.1.212:6789\/0"},
  571. { "rank": 1,
  572. "name": "cell02",
  573. "addr": "192.168.1.213:6789\/0"},
  574. { "rank": 2,
  575. "name": "cell03",
  576. "addr": "192.168.1.214:6789\/0"}]}}
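
      For reference, the ceph.conf that ceph-deploy new writes into the working directory looks roughly like this (fsid and addresses taken from the log above; the exact option list can differ between versions):

    [global]
    fsid = 9061096f-d9f9-4946-94f1-296ab5080a97
    mon_initial_members = cell01, cell02, cell03
    mon_host = 192.168.1.212,192.168.1.213,192.168.1.214
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    filestore_xattr_use_omap = true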
     With the monitors deployed, the next step is deploying the OSD nodes.

OSD deployment
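
In short, the flow below is: prepare each disk with ceph-deploy osd prepare, create the OSD mount directories, then activate with ceph-deploy osd activate; afterwards the new OSDs should appear in the CRUSH tree:

    ceph-deploy --overwrite-conf osd prepare cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1
    ceph-deploy osd activate cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1
    ceph osd tree    # confirm osd.0/1/2 are up and in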


  1. [ceph@cell01 my-cluster]$ ceph-deploy --overwrite-conf osd prepare cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1 <<----- prepare the disks for the OSDs
  2. [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
  3. [ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy --overwrite-conf osd prepare cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1
  4. [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks cell01:/dev/sdb1: cell02:/dev/sdb1: cell03:/dev/sdb1:
  5. [cell01][DEBUG ] connection detected need for sudo
  6. [cell01][DEBUG ] connected to host: cell01
  7. [cell01][DEBUG ] detect platform information from remote host
  8. [cell01][DEBUG ] detect machine type
  9. [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
  10. [ceph_deploy.osd][DEBUG ] Deploying osd to cell01
  11. [cell01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  12. [cell01][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
  13. [ceph_deploy.osd][DEBUG ] Preparing host cell01 disk /dev/sdb1 journal None activate False
  14. [cell01][INFO ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdb1
  15. [cell01][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=73105406 blks
  16. [cell01][DEBUG ] = sectsz=512 attr=2, projid32bit=0
  17. [cell01][DEBUG ] data = bsize=4096 blocks=292421623, imaxpct=5
  18. [cell01][DEBUG ] = sunit=0 swidth=0 blks
  19. [cell01][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0
  20. [cell01][DEBUG ] log =internal log bsize=4096 blocks=142783, version=2
  21. [cell01][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
  22. [cell01][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
  23. [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
  24. [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
  25. [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
  26. [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
  27. [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
  28. [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
  29. [cell01][WARNIN] DEBUG:ceph-disk:OSD data device /dev/sdb1 is a partition
  30. [cell01][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
  31. [cell01][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1 <<--- format the disk as xfs
  32. [cell01][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.VVjTnb with options noatime,inode64
  33. [cell01][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.VVjTnb <<--- mount the freshly formatted disk
  34. [cell01][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.VVjTnb <<--- create the osd data directory on the disk
  35. [cell01][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.VVjTnb
  36. [cell01][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.VVjTnb <<--- unmount the disk
  37. [cell01][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb1
  38. [cell01][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
  39. [cell01][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdb1 <<---- register the partition with the kernel
  40. [cell01][WARNIN] last arg is not the whole disk
  41. [cell01][WARNIN] call: partx -opts device wholedisk
  42. [cell01][INFO ] checking OSD status...
  43. [cell01][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
  44. [ceph_deploy.osd][DEBUG ] Host cell01 is now ready for osd use. <<--- the disk is ready
  45. [cell02][DEBUG ] connection detected need for sudo
  46. [cell02][DEBUG ] connected to host: cell02
  47. [cell02][DEBUG ] detect platform information from remote host
  48. [cell02][DEBUG ] detect machine type
  49. [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
  50. [ceph_deploy.osd][DEBUG ] Deploying osd to cell02
  51. [cell02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  52. [cell02][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
  53. [ceph_deploy.osd][DEBUG ] Preparing host cell02 disk /dev/sdb1 journal None activate False
  54. [cell02][INFO ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdb1
  55. [cell02][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=18310545 blks
  56. [cell02][DEBUG ] = sectsz=512 attr=2, projid32bit=0
  57. [cell02][DEBUG ] data = bsize=4096 blocks=73242179, imaxpct=25
  58. [cell02][DEBUG ] = sunit=0 swidth=0 blks
  59. [cell02][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0
  60. [cell02][DEBUG ] log =internal log bsize=4096 blocks=35762, version=2
  61. [cell02][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
  62. [cell02][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
  63. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
  64. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
  65. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
  66. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
  67. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
  68. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
  69. [cell02][WARNIN] DEBUG:ceph-disk:OSD data device /dev/sdb1 is a partition
  70. [cell02][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
  71. [cell02][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
  72. [cell02][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.iBaG75 with options noatime,inode64
  73. [cell02][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.iBaG75
  74. [cell02][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.iBaG75
  75. [cell02][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.iBaG75
  76. [cell02][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.iBaG75
  77. [cell02][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb1
  78. [cell02][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
  79. [cell02][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdb1
  80. [cell02][WARNIN] last arg is not the whole disk
  81. [cell02][WARNIN] call: partx -opts device wholedisk
  82. [cell02][INFO ] checking OSD status...
  83. [cell02][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
  84. [ceph_deploy.osd][DEBUG ] Host cell02 is now ready for osd use.
  85. [cell03][DEBUG ] connection detected need for sudo
  86. [cell03][DEBUG ] connected to host: cell03
  87. [cell03][DEBUG ] detect platform information from remote host
  88. [cell03][DEBUG ] detect machine type
  89. [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
  90. [ceph_deploy.osd][DEBUG ] Deploying osd to cell03
  91. [cell03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  92. [cell03][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
  93. [ceph_deploy.osd][DEBUG ] Preparing host cell03 disk /dev/sdb1 journal None activate False
  94. [cell03][INFO ] Running command: sudo ceph-disk -v prepare --fs-type xfs --cluster ceph -- /dev/sdb1
  95. [cell03][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=18276350 blks
  96. [cell03][DEBUG ] = sectsz=512 attr=2, projid32bit=0
  97. [cell03][DEBUG ] data = bsize=4096 blocks=73105399, imaxpct=25
  98. [cell03][DEBUG ] = sunit=0 swidth=0 blks
  99. [cell03][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0
  100. [cell03][DEBUG ] log =internal log bsize=4096 blocks=35695, version=2
  101. [cell03][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
  102. [cell03][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
  103. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
  104. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
  105. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
  106. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
  107. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
  108. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
  109. [cell03][WARNIN] DEBUG:ceph-disk:OSD data device /dev/sdb1 is a partition
  110. [cell03][WARNIN] DEBUG:ceph-disk:Creating xfs fs on /dev/sdb1
  111. [cell03][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
  112. [cell03][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.IqJ2rs with options noatime,inode64
  113. [cell03][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.IqJ2rs
  114. [cell03][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.IqJ2rs
  115. [cell03][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.IqJ2rs
  116. [cell03][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.IqJ2rs
  117. [cell03][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb1
  118. [cell03][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors
  119. [cell03][WARNIN] INFO:ceph-disk:Running command: /sbin/partx -a /dev/sdb1
  120. [cell03][WARNIN] last arg is not the whole disk
  121. [cell03][WARNIN] call: partx -opts device wholedisk
  122. [cell03][INFO ] checking OSD status...
  123. [cell03][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
  124. [ceph_deploy.osd][DEBUG ] Host cell03 is now ready for osd use.
  125. Error in sys.exitfunc:
  126. [ceph@cell01 my-cluster]$ sudo mkdir -p /var/lib/ceph/osd/ceph-0 <<----- create the osd mount directory; the other nodes create their own osd directories as well
  127. [ceph@cell01 my-cluster]$ ssh cell02
  128. [ceph@cell02 ~]$ sudo mkdir -p /var/lib/ceph/osd/ceph-1
  129. [ceph@cell02 ~]$ exit
  130. [ceph@cell01 my-cluster]$ ssh cell03
  131. [ceph@cell03 ~]$ sudo mkdir -p /var/lib/ceph/osd/ceph-2
  132. [ceph@cell03 ~]$ exit
  133. [ceph@cell01 my-cluster]$ ceph-deploy osd activate cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1 <<---- activate the osd disks
  134. [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
  135. [ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy osd activate cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1
  136. [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks cell01:/dev/sdb1: cell02:/dev/sdb1: cell03:/dev/sdb1:
  137. [cell01][DEBUG ] connection detected need for sudo
  138. [cell01][DEBUG ] connected to host: cell01
  139. [cell01][DEBUG ] detect platform information from remote host
  140. [cell01][DEBUG ] detect machine type
  141. [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
  142. [ceph_deploy.osd][DEBUG ] activating host cell01 disk /dev/sdb1
  143. [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
  144. [cell01][INFO ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
  145. [cell01][DEBUG ] === osd.0 ===
  146. [cell01][DEBUG ] Starting Ceph osd.0 on cell01...
  147. [cell01][DEBUG ] starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
  148. [cell01][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1 <<-- get the uuid and filesystem type
  149. [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
  150. [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
  151. [cell01][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.aMjvT5 with options noatime,inode64
  152. [cell01][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.aMjvT5 <<--- mount to a temporary directory
  153. [cell01][WARNIN] DEBUG:ceph-disk:Cluster uuid is 32a0c6a4-7076-4c31-a625-a73480746d5e
  154. [cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid <<--- query ceph-osd for the cluster fsid
  155. [cell01][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
  156. [cell01][WARNIN] DEBUG:ceph-disk:OSD uuid is 333bf1d3-bb1d-4c57-b4b1-679dddbfdce8
  157. [cell01][WARNIN] DEBUG:ceph-disk:OSD id is 0
  158. [cell01][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
  159. [cell01][WARNIN] DEBUG:ceph-disk:ceph osd.0 data dir is ready at /var/lib/ceph/tmp/mnt.aMjvT5
  160. [cell01][WARNIN] DEBUG:ceph-disk:Moving mount to final location...
  161. [cell01][WARNIN] INFO:ceph-disk:Running command: /bin/mount -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/osd/ceph-0 <<---- mount the disk at its final osd directory, so the osd's data can be inspected through that directory
  162. [cell01][WARNIN] INFO:ceph-disk:Running command: /bin/umount -l -- /var/lib/ceph/tmp/mnt.aMjvT5 <<--- unmount the temporary mount point
  163. [cell01][WARNIN] DEBUG:ceph-disk:Starting ceph osd.0...
  164. [cell01][WARNIN] INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.0 <<--- start the ceph osd service
  165. [cell01][WARNIN] libust[18436/18436]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
  166. [cell01][WARNIN] create-or-move updating item name 'osd.0' weight 1.09 at location {host=cell01,root=default} to crush map <<--- update this osd's weight in the crush map
  167. [cell01][WARNIN] libust[18489/18489]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
  168. [cell01][INFO ] checking OSD status...
  169. [cell01][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
  170. [cell01][INFO ] Running command: sudo chkconfig ceph on
  171. [cell02][DEBUG ] connection detected need for sudo
  172. [cell02][DEBUG ] connected to host: cell02
  173. [cell02][DEBUG ] detect platform information from remote host
  174. [cell02][DEBUG ] detect machine type
  175. [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
  176. [ceph_deploy.osd][DEBUG ] activating host cell02 disk /dev/sdb1
  177. [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
  178. [cell02][INFO ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
  179. [cell02][DEBUG ] === osd.1 ===
  180. [cell02][DEBUG ] Starting Ceph osd.1 on cell02...
  181. [cell02][DEBUG ] starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
  182. [cell02][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1 <<--- get the partition's filesystem type
  183. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
  184. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
  185. [cell02][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.4g84Gq with options noatime,inode64
  186. [cell02][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.4g84Gq <<---- mount onto a temporary directory
  187. [cell02][WARNIN] DEBUG:ceph-disk:Cluster uuid is 32a0c6a4-7076-4c31-a625-a73480746d5e
  188. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
  189. [cell02][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
  190. [cell02][WARNIN] DEBUG:ceph-disk:OSD uuid is e4be0dd1-6c20-41ad-9dec-42467ba8c23a
  191. [cell02][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
  192. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise e4be0dd1-6c20-41ad-9dec-42467ba8c23a <<--- allocate an osd id
  193. [cell02][WARNIN] DEBUG:ceph-disk:OSD id is 1
  194. [cell02][WARNIN] DEBUG:ceph-disk:Initializing OSD...
  195. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.4g84Gq/activate.monmap <<--- fetch the monmap
  196. [cell02][WARNIN] got monmap epoch 2
  197. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 1 --monmap /var/lib/ceph/tmp/mnt.4g84Gq/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.4g84Gq --osd-journal /var/lib/ceph/tmp/mnt.4g84Gq/journal --osd-uuid e4be0dd1-6c20-41ad-9dec-42467ba8c23a --keyring /var/lib/ceph/tmp/mnt.4g84Gq/keyring <<----- initialize the osd data store (mkfs) and create its key
  198. [cell02][WARNIN] 2015-09-02 16:12:15.841276 7f6b50da8800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
  199. [cell02][WARNIN] 2015-09-02 16:12:16.026779 7f6b50da8800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
  200. [cell02][WARNIN] 2015-09-02 16:12:16.027262 7f6b50da8800 -1 filestore(/var/lib/ceph/tmp/mnt.4g84Gq) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
  201. [cell02][WARNIN] 2015-09-02 16:12:16.248841 7f6b50da8800 -1 created object store /var/lib/ceph/tmp/mnt.4g84Gq journal /var/lib/ceph/tmp/mnt.4g84Gq/journal for osd.1 fsid 32a0c6a4-7076-4c31-a625-a73480746d5e
  202. [cell02][WARNIN] 2015-09-02 16:12:16.248909 7f6b50da8800 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.4g84Gq/keyring: can't open /var/lib/ceph/tmp/mnt.4g84Gq/keyring: (2) No such file or directory
  203. [cell02][WARNIN] 2015-09-02 16:12:16.249063 7f6b50da8800 -1 created new key in keyring /var/lib/ceph/tmp/mnt.4g84Gq/keyring
  204. [cell02][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
  205. [cell02][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
  206. [cell02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.1 -i /var/lib/ceph/tmp/mnt.4g84Gq/keyring osd allow * mon allow profile osd <<---- register the osd's authentication key
  207. [cell02][WARNIN] added key for osd.1
  208. [cell02][WARNIN] DEBUG:ceph-disk:ceph osd.1 data dir is ready at /var/lib/ceph/tmp/mnt.4g84Gq
  209. [cell02][WARNIN] DEBUG:ceph-disk:Moving mount to final location...
  210. [cell02][WARNIN] INFO:ceph-disk:Running command: /bin/mount -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/osd/ceph-1 <<---- mount at the final directory; the osd's data can be inspected there
  211. [cell02][WARNIN] INFO:ceph-disk:Running command: /bin/umount -l -- /var/lib/ceph/tmp/mnt.4g84Gq <<--- unmount from the temporary mount point
  212. [cell02][WARNIN] DEBUG:ceph-disk:Starting ceph osd.1...
  213. [cell02][WARNIN] INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.1
  214. [cell02][WARNIN] libust[30705/30705]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
  215. [cell02][WARNIN] create-or-move updating item name 'osd.1' weight 0.27 at location {host=cell02,root=default} to crush map <<---- update this osd's weight in the crush map
  216. [cell02][WARNIN] libust[30797/30797]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
  217. [cell02][INFO ] checking OSD status...
  218. [cell02][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
  219. [cell02][WARNIN] there is 1 OSD down
  220. [cell02][WARNIN] there is 1 OSD out
  221. [cell02][INFO ] Running command: sudo chkconfig ceph on
  222. [cell03][DEBUG ] connection detected need for sudo
  223. [cell03][DEBUG ] connected to host: cell03
  224. [cell03][DEBUG ] detect platform information from remote host
  225. [cell03][DEBUG ] detect machine type
  226. [ceph_deploy.osd][INFO ] Distro info: CentOS 6.5 Final
  227. [ceph_deploy.osd][DEBUG ] activating host cell03 disk /dev/sdb1
  228. [ceph_deploy.osd][DEBUG ] will use init type: sysvinit
  229. [cell03][INFO ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
  230. [cell03][DEBUG ] === osd.2 ===
  231. [cell03][DEBUG ] Starting Ceph osd.2 on cell03...
  232. [cell03][DEBUG ] starting osd.2 at :/0 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
  233. [cell03][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1
  234. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
  235. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
  236. [cell03][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.5n81s2 with options noatime,inode64
  237. [cell03][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.5n81s2
  238. [cell03][WARNIN] DEBUG:ceph-disk:Cluster uuid is 32a0c6a4-7076-4c31-a625-a73480746d5e
  239. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
  240. [cell03][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
  241. [cell03][WARNIN] DEBUG:ceph-disk:OSD uuid is cd6ac8bc-5d7f-4963-aba5-43d2bf84127a
  242. [cell03][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
  243. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise cd6ac8bc-5d7f-4963-aba5-43d2bf84127a
  244. [cell03][WARNIN] DEBUG:ceph-disk:OSD id is 2
  245. [cell03][WARNIN] DEBUG:ceph-disk:Initializing OSD...
  246. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.5n81s2/activate.monmap
  247. [cell03][WARNIN] got monmap epoch 2
  248. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 2 --monmap /var/lib/ceph/tmp/mnt.5n81s2/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.5n81s2 --osd-journal /var/lib/ceph/tmp/mnt.5n81s2/journal --osd-uuid cd6ac8bc-5d7f-4963-aba5-43d2bf84127a --keyring /var/lib/ceph/tmp/mnt.5n81s2/keyring
  249. [cell03][WARNIN] 2015-09-02 16:13:00.015228 7f76a277a800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
  250. [cell03][WARNIN] 2015-09-02 16:13:00.021221 7f76a277a800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
  251. [cell03][WARNIN] 2015-09-02 16:13:00.021866 7f76a277a800 -1 filestore(/var/lib/ceph/tmp/mnt.5n81s2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
  252. [cell03][WARNIN] 2015-09-02 16:13:00.049203 7f76a277a800 -1 created object store /var/lib/ceph/tmp/mnt.5n81s2 journal /var/lib/ceph/tmp/mnt.5n81s2/journal for osd.2 fsid 32a0c6a4-7076-4c31-a625-a73480746d5e
  253. [cell03][WARNIN] 2015-09-02 16:13:00.049269 7f76a277a800 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.5n81s2/keyring: can't open /var/lib/ceph/tmp/mnt.5n81s2/keyring: (2) No such file or directory
  254. [cell03][WARNIN] 2015-09-02 16:13:00.049424 7f76a277a800 -1 created new key in keyring /var/lib/ceph/tmp/mnt.5n81s2/keyring
  255. [cell03][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
  256. [cell03][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
  257. [cell03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.2 -i /var/lib/ceph/tmp/mnt.5n81s2/keyring osd allow * mon allow profile osd
  258. [cell03][WARNIN] added key for osd.2
  259. [cell03][WARNIN] DEBUG:ceph-disk:ceph osd.2 data dir is ready at /var/lib/ceph/tmp/mnt.5n81s2
  260. [cell03][WARNIN] DEBUG:ceph-disk:Moving mount to final location...
  261. [cell03][WARNIN] INFO:ceph-disk:Running command: /bin/mount -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/osd/ceph-2
  262. [cell03][WARNIN] INFO:ceph-disk:Running command: /bin/umount -l -- /var/lib/ceph/tmp/mnt.5n81s2
  263. [cell03][WARNIN] DEBUG:ceph-disk:Starting ceph osd.2...
  264. [cell03][WARNIN] INFO:ceph-disk:Running command: /sbin/service ceph --cluster ceph start osd.2
  265. [cell03][WARNIN] libust[27410/27410]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
  266. [cell03][WARNIN] create-or-move updating item name 'osd.2' weight 0.27 at location {host=cell03,root=default} to crush map
  267. [cell03][WARNIN] libust[27454/27454]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
  268. [cell03][INFO ] checking OSD status...
  269. [cell03][INFO ] Running command: sudo ceph --cluster=ceph osd stat --format=json
  270. [cell03][WARNIN] there is 1 OSD down
  271. [cell03][WARNIN] there is 1 OSD out
  272. [cell03][INFO ] Running command: sudo chkconfig ceph on
  273. Error in sys.exitfunc
  274. [ceph@cell01 my-cluster]$ ceph-deploy admin ireadmin cell01 cell02 cell03 <<--- use ceph-deploy to copy the configuration file and admin keyring to the admin node and all Ceph nodes
  275. [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
  276. [ceph_deploy.cli][INFO ] Invoked (1.5.19): /usr/bin/ceph-deploy admin ireadmin cell01 cell02 cell03
  277. [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ireadmin
  278. ceph@192.168.1.212's password:
  279. [ireadmin][DEBUG ] connection detected need for sudo
  280. ceph@192.168.1.212's password:
  281. [ireadmin][DEBUG ] connected to host: ireadmin
  282. [ireadmin][DEBUG ] detect platform information from remote host
  283. [ireadmin][DEBUG ] detect machine type
  284. [ireadmin][DEBUG ] get remote short hostname
  285. [ireadmin][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  286. [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cell01
  287. [cell01][DEBUG ] connection detected need for sudo
  288. [cell01][DEBUG ] connected to host: cell01
  289. [cell01][DEBUG ] detect platform information from remote host
  290. [cell01][DEBUG ] detect machine type
  291. [cell01][DEBUG ] get remote short hostname
  292. [cell01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf <<---- write the cluster configuration to the config file
  293. [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cell02 <<---- push the admin keyring and configuration to this node
  294. [cell02][DEBUG ] connection detected need for sudo
  295. [cell02][DEBUG ] connected to host: cell02
  296. [cell02][DEBUG ] detect platform information from remote host
  297. [cell02][DEBUG ] detect machine type
  298. [cell02][DEBUG ] get remote short hostname
  299. [cell02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  300. [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cell03
  301. [cell03][DEBUG ] connection detected need for sudo
  302. [cell03][DEBUG ] connected to host: cell03
  303. [cell03][DEBUG ] detect platform information from remote host
  304. [cell03][DEBUG ] detect machine type
  305. [cell03][DEBUG ] get remote short hostname
  306. [cell03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
  307. Error in sys.exitfunc:
  308. [ceph@cell01 my-cluster]$ chmod +r /etc/ceph/ceph.client.admin.keyring <<---- make the client admin keyring readable (note it needs sudo, as shown next)
  309. chmod: changing permissions of `/etc/ceph/ceph.client.admin.keyring': Operation not permitted
  310. [ceph@cell01 my-cluster]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
  311. [ceph@cell01 my-cluster]$ ll
  312. total 88
  313. -rw-rw-r-- 1 ceph ceph 71 Sep 2 16:02 ceph.bootstrap-mds.keyring
  314. -rw-rw-r-- 1 ceph ceph 71 Sep 2 16:02 ceph.bootstrap-osd.keyring
  315. -rw-rw-r-- 1 ceph ceph 63 Sep 2 16:02 ceph.client.admin.keyring
  316. -rw-rw-r-- 1 ceph ceph 302 Sep 2 16:01 ceph.conf
  317. -rw-rw-r-- 1 ceph ceph 61825 Sep 2 16:26 ceph.log
  318. -rw-rw-r-- 1 ceph ceph 73 Sep 2 16:00 ceph.mon.keyring
  319. [ceph@cell01 my-cluster]$ ceph -s <<----- check the current cluster status
  320. cluster 32a0c6a4-7076-4c31-a625-a73480746d5e
  321. health HEALTH_WARN clock skew detected on mon.cell02, mon.cell03 <<---- clock skew between the monitors, caused by missing time synchronization
  322. monmap e2: 3 mons at {cell01=192.168.1.212:6789/0,cell02=192.168.1.213:6789/0,cell03=192.168.1.214:6789/0}, election epoch 6, quorum 0,1,2 cell01,cell02,cell03
  323. osdmap e14: 3 osds: 3 up, 3 in
  324. pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
  325. 15459 MB used, 1657 GB / 1672 GB avail
  326. 64 active+clean
  327. [ceph@cell01 my-cluster]$ sudo ceph -s
  328. cluster 32a0c6a4-7076-4c31-a625-a73480746d5e
  329. health HEALTH_WARN clock skew detected on mon.cell02, mon.cell03
  330. monmap e2: 3 mons at {cell01=192.168.1.212:6789/0,cell02=192.168.1.213:6789/0,cell03=192.168.1.214:6789/0}, election epoch 6, quorum 0,1,2 cell01,cell02,cell03
  331. osdmap e14: 3 osds: 3 up, 3 in
  332. pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
  333. 15459 MB used, 1657 GB / 1672 GB avail
  334. 64 active+clean
     Before creating block devices, let's take a closer look at the OSDs. As the log above shows, once an OSD has been deployed its disk is mounted under /var/lib/ceph/osd/ceph-<id>/, so the OSD's data can be examined in that directory.
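     A quick sanity check with standard tools confirms that the partition really is mounted at the final location before we list its contents; a minimal sketch (paths are the ones used in this deployment):

mount | grep /var/lib/ceph/osd     # should show /dev/sdb1 mounted on /var/lib/ceph/osd/ceph-0
df -h /var/lib/ceph/osd/ceph-0     # capacity and usage of the osd data partition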


  1. [ceph@cell01 my-cluster]$ ll /var/lib/ceph/osd/ceph-0/
  2. total 5242920
  3. -rw-r--r-- 1 root root 490 Sep 2 16:08 activate.monmap <<---- monmap used during activation
  4. -rw-r--r-- 1 root root 3 Sep 2 16:08 active
  5. -rw-r--r-- 1 root root 37 Sep 2 16:06 ceph_fsid <<---- fsid of the cluster
  6. drwxr-xr-x 69 root root 1080 Sep 2 16:28 current <<---- current data directory; the actual object data lives here
  7. -rw-r--r-- 1 root root 37 Sep 2 16:06 fsid <<---- fsid (uuid) of this osd
  8. -rw-r--r-- 1 root root 5368709120 Sep 6 09:50 journal <<---- the osd journal; for better performance the journal is usually written to a separate disk
  9. -rw------- 1 root root 56 Sep 2 16:08 keyring <<----- authentication key of this osd
  10. -rw-r--r-- 1 root root 21 Sep 2 16:06 magic
  11. -rw-r--r-- 1 root root 6 Sep 2 16:08 ready
  12. -rw-r--r-- 1 root root 4 Sep 2 16:08 store_version
  13. -rw-r--r-- 1 root root 53 Sep 2 16:08 superblock
  14. -rw-r--r-- 1 root root 0 Sep 2 16:11 sysvinit
  15. -rw-r--r-- 1 root root 2 Sep 2 16:08 whoami
  16. [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/c
  17. ceph_fsid current/
  18. [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/ceph_fsid
  19. 32a0c6a4-7076-4c31-a625-a73480746d5e
  20. [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/fsid
  21. 333bf1d3-bb1d-4c57-b4b1-679dddbfdce8
  22. [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/magic
  23. ceph osd volume v026
  24. [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/ready
  25. ready
  26. [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/whoami
  27. 0
  28. [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/store_version
  29. [ceph@cell01 my-cluster]$
  30. [ceph@cell01 my-cluster]$
  31. [ceph@cell01 my-cluster]$ sudo cat /var/lib/ceph/osd/ceph-0/active
  32. ok
  33. [ceph@cell01 my-cluster]$
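     As noted next to the journal file above, the journal is usually placed on a separate, faster device to improve write performance. ceph-deploy expresses this with the host:data:journal form of the osd prepare/activate arguments; the sketch below is only an illustration and assumes each node has a spare SSD partition /dev/sdc1 (a placeholder name, not present in this setup):

ceph-deploy osd prepare cell01:/dev/sdb1:/dev/sdc1    # data on sdb1, journal on the hypothetical sdc1
ceph-deploy osd activate cell01:/dev/sdb1:/dev/sdc1   # activate with the same data:journal pair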

Creating block devices


  1. [ceph@cell01 my-cluster]$ rados mkpool xxx <<---- create a pool named xxx
  2. successfully created pool xxx
  3. [ceph@cell01 my-cluster]$ rados df <<---- check pool usage
  4. pool name category KB objects clones degraded unfound rd rd KB wr wr KB
  5. xxx - 0 0 0 0 0 0 0 0 0
  6. rbd - 0 0 0 0 0 0 0 0 0
  7. total used 15831216 0
  8. total avail 1738388628
  9. total space 1754219844
  10. [ceph@cell01 my-cluster]$ rados ls xxx
  11. pool name was not specified
  12. [ceph@cell01 my-cluster]$ ceph osd lspools <<---- list the current pools
  13. 0 rbd,1 xxx,
  14. [ceph@cell01 my-cluster]
  15. [ceph@cell01 my-cluster]$ rbd -p xxx create node01 --size 124000 <<---- create block device image node01 of 124000 MB
  16. [ceph@cell01 my-cluster]$ rbd -p xxx create node02 --size 124000
  17. [ceph@cell01 my-cluster]$ rbd -p xxx create node03 --size 124000
  18. [ceph@cell01 my-cluster]$ rbd -p xxx create node04 --size 124000
  19. [ceph@cell01 my-cluster]$ rbd -p xxx create node05 --size 124000
  20. [ceph@cell01 my-cluster]$ rbd ls xxx <<----- list the block device images in the pool
  21. node01
  22. node02
  23. node03
  24. node04
  25. node05
  26. [ceph@cell01 my-cluster]$ rbd info xxx/node01 <<---- show details of a specific image
  27. rbd image 'node01':
  28. size 121 GB in 31000 objects
  29. order 22 (4096 kB objects)
  30. block_name_prefix: rb.0.1057.74b0dc51
  31. format: 1
  32. [ceph@cell01 my-cluster]$ rbd info node01
  33. 2015-09-06 16:54:03.237243 7fe04f93a7e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
  34. rbd: error opening image node01: (2) No such file or directory
  35. [ceph@cell01 my-cluster]$ rbd info xxx/node01
  36. rbd image 'node01':
  37. size 121 GB in 31000 objects <<--- image size and the number of objects it is striped over
  38. order 22 (4096 kB objects) <<--- object size (2^22 bytes = 4 MB)
  39. block_name_prefix: rb.0.1057.74b0dc51 <<---- prefix of the object names
  40. format: 1 <<--- image format; 1 is the old format
  41. [ceph@cell01 my-cluster]$ rbd info xxx/node02
  42. rbd image 'node02':
  43. size 121 GB in 31000 objects
  44. order 22 (4096 kB objects)
  45. block_name_prefix: rb.0.105a.74b0dc51
  46. format: 1
  47. [ceph@cell01 my-cluster]$ rbd info xxx/node03
  48. rbd image 'node03':
  49. size 121 GB in 31000 objects
  50. order 22 (4096 kB objects)
  51. block_name_prefix: rb.0.109d.74b0dc51
  52. format: 1
  53. [ceph@cell01 my-cluster]$ rbd info xxx/node04
  54. rbd image 'node04':
  55. size 121 GB in 31000 objects
  56. order 22 (4096 kB objects)
  57. block_name_prefix: rb.0.105d.2ae8944a
  58. format: 1
  59. [ceph@cell01 my-cluster]$ rbd info xxx/node05
  60. rbd image 'node05':
  61. size 121 GB in 31000 objects
  62. order 22 (4096 kB objects)
  63. block_name_prefix: rb.0.10ce.74b0dc51
  64. format: 1
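Once the images exist they can be mapped on a client and used like an ordinary disk. A minimal sketch, assuming the client kernel provides the rbd module (the stock CentOS 6 kernel may not) and that the admin keyring is readable; /mnt/node01 is just an example mount point, and mkfs destroys any data already on the image:

sudo rbd map xxx/node01            # map the image to a local block device
sudo rbd showmapped                # confirm which /dev/rbdX it was mapped to
sudo mkfs.xfs /dev/rbd0            # create a filesystem (assuming it came up as /dev/rbd0)
sudo mkdir -p /mnt/node01
sudo mount /dev/rbd0 /mnt/node01   # from here on it behaves like any other block device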

Troubleshooting

     (1) During the initial deployment a monitor fails to join the quorum, so the keyrings cannot be generated in the working directory on the admin node; ceph-deploy keeps warning that some node has not yet reached quorum, as shown below:
[cell02][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
[ceph_deploy.mon][WARNIN] mon.cell02 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[cell02][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell02.asok mon_status
[ceph_deploy.mon][WARNIN] mon.cell02 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][INFO  ] processing monitor mon.cell03
[cell03][DEBUG ] connection detected need for sudo
[cell03][DEBUG ] connected to host: cell03 
[cell03][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.cell03.asok mon_status
[ceph_deploy.mon][INFO  ] mon.cell03 monitor has reached quorum!
[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
[ceph_deploy.mon][ERROR ] cell02
       This usually happens because configuration left over from a previous deployment still exists on the node, which prevents the new deployment from generating the authentication keys. Go through every node to be deployed, clear out the contents of /etc/ceph and /var/lib/ceph, and then deploy again; this normally resolves the problem. A ceph-deploy based clean-up is sketched below.
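       Instead of removing those directories by hand, the clean-up can also be driven from the admin node with ceph-deploy itself; a hedged sketch, assuming the old deployment really is disposable:

ceph-deploy purgedata cell01 cell02 cell03   # wipe the ceph data under /var/lib/ceph and /etc/ceph on the nodes
ceph-deploy forgetkeys                       # discard the old keyrings kept in the local working directory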

      (2) While deploying the OSDs, errors such as OSError: [Errno 2] No such file or directory: '/var/lib/ceph/osd/ceph-0' may appear. The fix is to create the corresponding directory by hand (as was done above) and then run the activation again. It would be more convenient if this directory were created automatically during deployment; why ceph-disk does not do so is unclear to me.
      (3) Normally a whole disk is used as the osd device, but sometimes a single partition of a disk is used instead; in that case activation may fail as follows:
[ceph@cell01 my-cluster]$ ceph-deploy osd activate cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.19): /usr/bin/ceph-deploy osd activate cell01:/dev/sdb1 cell02:/dev/sdb1 cell03:/dev/sdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks cell01:/dev/sdb1: cell02:/dev/sdb1: cell03:/dev/sdb1:
[cell01][DEBUG ] connection detected need for sudo
[cell01][DEBUG ] connected to host: cell01 
[cell01][DEBUG ] detect platform information from remote host
[cell01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final
[ceph_deploy.osd][DEBUG ] activating host cell01 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[cell01][INFO  ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
[cell01][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdb1
[cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[cell01][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.rRQkAk with options noatime,inode64
[cell01][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.rRQkAk
[cell01][WARNIN] DEBUG:ceph-disk:Cluster uuid is 9061096f-d9f9-4946-94f1-296ab5080a97
[cell01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[cell01][WARNIN] ERROR:ceph-disk:Failed to activate
[cell01][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.rRQkAk
[cell01][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.rRQkAk
[cell01][WARNIN] ceph-disk: Error: No cluster conf found in /etc/ceph with fsid 9061096f-d9f9-4946-94f1-296ab5080a97
[cell01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init sysvinit --mount /dev/sdb1
       I avoided this problem in my deployment by using a whole disk as the osd. If you do use a partition as the osd device, the log above can serve as a reference; one possible way to diagnose the failure is sketched below.
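       The error means the fsid stamped on the partition does not match any cluster configured in /etc/ceph. A sketch of one way to check and recover, under the assumption that the data on the partition is disposable (/mnt is only used as a temporary mount point):

sudo mount /dev/sdb1 /mnt && cat /mnt/ceph_fsid && sudo umount /mnt   # fsid stamped on the partition
grep fsid /etc/ceph/ceph.conf                                         # fsid of the current cluster
sudo mkfs.xfs -f /dev/sdb1                                            # if they differ, destroy the stale osd data
ceph-deploy osd prepare cell01:/dev/sdb1                              # prepare the partition again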

        (4) Because the nodes' clocks are not synchronized, the time difference between the monitors becomes too large, and checking the cluster health reports the following:
[ceph@cell01 my-cluster]$ ceph health
HEALTH_WARN clock skew detected on mon.cell02, mon.cell03
Use ntp to synchronize the clocks between the nodes; after a while the cluster returns to a healthy state (a sketch of the ntp setup follows the output below):
[ceph@cell01 my-cluster]$ ceph -s
    cluster 32a0c6a4-7076-4c31-a625-a73480746d5e
     health HEALTH_OK
     monmap e2: 3 mons at {cell01=192.168.1.212:6789/0,cell02=192.168.1.213:6789/0,cell03=192.168.1.214:6789/0}, election epoch 10, quorum 0,1,2 cell01,cell02,cell03
     osdmap e16: 3 osds: 3 up, 3 in
      pgmap v244: 72 pgs, 2 pools, 8 bytes data, 1 objects
            15460 MB used, 1657 GB / 1672 GB avail
                  72 active+clean
[ceph@cell01 my-cluster]$ ceph health
HEALTH_OK
[ceph@cell01 my-cluster]$
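A sketch of one way to set up the synchronization on CentOS 6, run on every node; ntp.example.com is only a placeholder for whatever NTP server is reachable in your network:

sudo yum install -y ntp            # install ntpdate/ntpd
sudo ntpdate ntp.example.com       # one-shot correction of the clock
sudo service ntpd start            # keep the clock in sync from now on
sudo chkconfig ntpd on             # start ntpd automatically at boot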


