Category: Servers & Storage

2015-09-14 15:18:20

What follows is the complete terminal session of tearing down and redeploying a small Ceph cluster with ceph-deploy 1.5.28 on CentOS 7.1: purge the old Ceph (hammer, 0.94.3) packages from all four hosts, forget the old keys, create a new cluster with one monitor (ceph_monitor) and two directory-backed OSDs (ceph_node1 and ceph_node2, each using /osd), push the admin keyring everywhere, and check cluster health.

[talen@ceph_admin ~]$ ceph-deploy -v --username talen purge ceph_admin ceph_node1 ceph_node2 ceph_monitor             
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy -v --username talen purge ceph_admin ceph_node1 ceph_node2 ceph_monitor
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : talen
[ceph_deploy.cli][INFO  ]  verbose                       : True
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph_admin', 'ceph_node1', 'ceph_node2', 'ceph_monitor']
[ceph_deploy.cli][INFO  ]  func                          :
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.install][INFO  ] note that some dependencies *will not* be removed because they can cause issues with qemu-kvm
[ceph_deploy.install][INFO  ] like: librbd1 and librados2
[ceph_deploy.install][DEBUG ] Purging from cluster ceph hosts ceph_admin ceph_node1 ceph_node2 ceph_monitor
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph_admin ...
[ceph_admin][DEBUG ] connection detected need for sudo
[ceph_admin][DEBUG ] connected to host: talen@ceph_admin
[ceph_admin][DEBUG ] detect platform information from remote host
[ceph_admin][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_admin][INFO  ] purging host ... ceph_admin
[ceph_admin][INFO  ] Running command: sudo yum -y -q remove ceph ceph-release ceph-common ceph-radosgw
[ceph_admin][WARNIN] No Match for argument: ceph
[ceph_admin][WARNIN] No Match for argument: ceph-release
[ceph_admin][WARNIN] No Match for argument: ceph-common
[ceph_admin][WARNIN] No Match for argument: ceph-radosgw
[ceph_admin][INFO  ] Running command: sudo yum clean all
[ceph_admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_admin][DEBUG ] Cleaning repos: base epel extras updates
[ceph_admin][DEBUG ] Cleaning up everything
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph_node1 ...
[ceph_node1][DEBUG ] connection detected need for sudo
[ceph_node1][DEBUG ] connected to host: talen@ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_node1][INFO  ] purging host ... ceph_node1
[ceph_node1][INFO  ] Running command: sudo yum -y -q remove ceph ceph-release ceph-common ceph-radosgw
[ceph_node1][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[ceph_node1][INFO  ] Running command: sudo yum clean all
[ceph_node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_node1][DEBUG ] Cleaning repos: base epel extras updates
[ceph_node1][DEBUG ] Cleaning up everything
[ceph_node1][DEBUG ] Cleaning up list of fastest mirrors
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph_node2 ...
[ceph_node2][DEBUG ] connection detected need for sudo
[ceph_node2][DEBUG ] connected to host: talen@ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_node2][INFO  ] purging host ... ceph_node2
[ceph_node2][INFO  ] Running command: sudo yum -y -q remove ceph ceph-release ceph-common ceph-radosgw
[ceph_node2][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[ceph_node2][INFO  ] Running command: sudo yum clean all
[ceph_node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_node2][DEBUG ] Cleaning repos: base epel extras updates
[ceph_node2][DEBUG ] Cleaning up everything
[ceph_node2][DEBUG ] Cleaning up list of fastest mirrors
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph_monitor ...
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: talen@ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_monitor][INFO  ] purging host ... ceph_monitor
[ceph_monitor][INFO  ] Running command: sudo yum -y -q remove ceph ceph-release ceph-common ceph-radosgw
[ceph_monitor][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[ceph_monitor][INFO  ] Running command: sudo yum clean all
[ceph_monitor][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_monitor][DEBUG ] Cleaning repos: base epel extras updates
[ceph_monitor][DEBUG ] Cleaning up everything
[ceph_monitor][DEBUG ] Cleaning up list of fastest mirrors
[talen@ceph_admin ~]$
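
Note: "purge" removes only the Ceph packages; data and configuration are left on disk. ceph-deploy has a companion "purgedata" subcommand for that. It was not run in this session, and as the fsid mismatch later in the log suggests, skipping it can let old monitor and OSD state survive into the supposedly new cluster. An illustrative invocation (not part of the captured session):

$ ceph-deploy --username talen purgedata ceph_admin ceph_node1 ceph_node2 ceph_monitor
  # wipes /var/lib/ceph and the contents of /etc/ceph on every listed host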
[talen@ceph_admin ~]$ ceph-deploy --username talen -v forgetkeys                                                      
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy --username talen -v forgetkeys
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : talen
[ceph_deploy.cli][INFO  ]  verbose                       : True
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          :
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
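
"forgetkeys" deletes the keyrings ceph-deploy keeps in its working directory (ceph.mon.keyring, ceph.client.admin.keyring and the ceph.bootstrap-* keyrings), so the next "ceph-deploy new" starts from a clean slate. A quick illustrative check:

$ ls ceph.*.keyring
ls: cannot access ceph.*.keyring: No such file or directory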
[talen@ceph_admin ~]$ ceph-deploy new ceph_monitor
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy new ceph_monitor
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          :
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph_monitor']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph_monitor][DEBUG ] connected to host: ceph_admin
[ceph_monitor][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph_monitor
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_monitor][DEBUG ] find the location of an executable
[ceph_monitor][INFO  ] Running command: sudo /usr/sbin/ip link show
[ceph_monitor][INFO  ] Running command: sudo /usr/sbin/ip addr show
[ceph_monitor][DEBUG ] IP addresses found: ['10.0.2.33', '192.168.100.141']
[ceph_deploy.new][DEBUG ] Resolving host ceph_monitor
[ceph_deploy.new][DEBUG ] Monitor ceph_monitor at 10.0.2.33
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph_monitor']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.0.2.33']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[talen@ceph_admin ~]$ vim ceph.conf
[talen@ceph_admin ~]$ cat ceph.conf
[global]
fsid = 58514e13-d332-4a7e-9760-e3fccb9e2c76
mon_initial_members = ceph_monitor
mon_host = 10.0.2.33
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true

osd_pool_default_size = 2

[talen@ceph_admin ~]$
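
The one manual addition that matters here is osd_pool_default_size = 2: with only two OSD hosts, hammer's default of 3 replicas would leave every placement group permanently undersized. Other [global] options commonly set at this stage (illustrative values, not from this session):

osd_pool_default_min_size = 1    # keep serving I/O with a single surviving replica
public_network = 10.0.2.0/24     # assumed subnet, inferred from the monitor address 10.0.2.33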
[talen@ceph_admin ~]$ ceph-deploy install ceph_node1 ceph_node2 ceph_monitor ceph_admin
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy install ceph_node1 ceph_node2 ceph_monitor ceph_admin
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : True
[ceph_deploy.cli][INFO  ]  func                          :
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['ceph_node1', 'ceph_node2', 'ceph_monitor', 'ceph_admin']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : None
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version hammer on cluster ceph hosts ceph_node1 ceph_node2 ceph_monitor ceph_admin
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph_node1 ...
[ceph_node1][DEBUG ] connection detected need for sudo
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_node1][INFO  ] installing Ceph on ceph_node1
[ceph_node1][INFO  ] Running command: sudo yum clean all
[ceph_node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_node1][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[ceph_node1][DEBUG ] Cleaning up everything
[ceph_node1][INFO  ] Running command: sudo yum -y install epel-release
[ceph_node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_node1][DEBUG ] Determining fastest mirrors
[ceph_node1][DEBUG ]  * base: mirrors.skyshe.cn
[ceph_node1][DEBUG ]  * epel: mirrors.ustc.edu.cn
[ceph_node1][DEBUG ]  * extras: mirrors.skyshe.cn
[ceph_node1][DEBUG ]  * updates: mirrors.163.com
[ceph_node1][DEBUG ] 36 packages excluded due to repository priority protections
[ceph_node1][DEBUG ] Package epel-release-7-5.noarch already installed and latest version
[ceph_node1][DEBUG ] Nothing to do
[ceph_node1][INFO  ] Running command: sudo yum -y install yum-plugin-priorities
[ceph_node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_node1][DEBUG ] Loading mirror speeds from cached hostfile
[ceph_node1][DEBUG ]  * base: mirrors.skyshe.cn
[ceph_node1][DEBUG ]  * epel: mirrors.ustc.edu.cn
[ceph_node1][DEBUG ]  * extras: mirrors.skyshe.cn
[ceph_node1][DEBUG ]  * updates: mirrors.163.com
[ceph_node1][DEBUG ] 36 packages excluded due to repository priority protections
[ceph_node1][DEBUG ] Package yum-plugin-priorities-1.1.31-29.el7.noarch already installed and latest version
[ceph_node1][DEBUG ] Nothing to do
[ceph_node1][DEBUG ] Configure Yum priorities to include obsoletes
[ceph_node1][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[ceph_node1][INFO  ] Running command: sudo rpm --import
[ceph_node1][INFO  ] Running command: sudo rpm -Uvh --replacepkgs
[ceph_node1][DEBUG ] Retrieving
[ceph_node1][DEBUG ] Preparing...                          ########################################
[ceph_node1][DEBUG ] Updating / installing...
[ceph_node1][DEBUG ] ceph-release-1-1.el7                  ########################################
[ceph_node1][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_node1][WARNIN] altered ceph.repo priorities to contain: priority=1
[ceph_node1][INFO  ] Running command: sudo yum -y install ceph ceph-radosgw
[ceph_node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_node1][DEBUG ] Loading mirror speeds from cached hostfile
[ceph_node1][DEBUG ]  * base: mirrors.skyshe.cn
[ceph_node1][DEBUG ]  * epel: mirrors.ustc.edu.cn
[ceph_node1][DEBUG ]  * extras: mirrors.skyshe.cn
[ceph_node1][DEBUG ]  * updates: mirrors.163.com
[ceph_node1][DEBUG ] 36 packages excluded due to repository priority protections
[ceph_node1][DEBUG ] Package 1:ceph-0.94.3-0.el7.x86_64 already installed and latest version
[ceph_node1][DEBUG ] Package 1:ceph-radosgw-0.94.3-0.el7.x86_64 already installed and latest version
[ceph_node1][DEBUG ] Nothing to do
[ceph_node1][INFO  ] Running command: sudo ceph --version
[ceph_node1][DEBUG ] ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph_node2 ...
[ceph_node2][DEBUG ] connection detected need for sudo
[ceph_node2][DEBUG ] connected to host: ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_node2][INFO  ] installing Ceph on ceph_node2
[ceph_node2][INFO  ] Running command: sudo yum clean all
[ceph_node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_node2][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[ceph_node2][DEBUG ] Cleaning up everything
[ceph_node2][DEBUG ] Cleaning up list of fastest mirrors
[ceph_node2][INFO  ] Running command: sudo yum -y install epel-release
[ceph_node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_node2][DEBUG ] Determining fastest mirrors
[ceph_node2][DEBUG ]  * base: mirrors.163.com
[ceph_node2][DEBUG ]  * epel: ftp.cuhk.edu.hk
[ceph_node2][DEBUG ]  * extras: mirrors.skyshe.cn
[ceph_node2][DEBUG ]  * updates: mirrors.163.com
[ceph_node2][DEBUG ] Package epel-release-7-5.noarch already installed and latest version
[ceph_node2][DEBUG ] Nothing to do
[ceph_node2][INFO  ] Running command: sudo yum -y install yum-plugin-priorities
[ceph_node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_node2][DEBUG ] Loading mirror speeds from cached hostfile
[ceph_node2][DEBUG ]  * base: mirrors.163.com
[ceph_node2][DEBUG ]  * epel: ftp.cuhk.edu.hk
[ceph_node2][DEBUG ]  * extras: mirrors.skyshe.cn
[ceph_node2][DEBUG ]  * updates: mirrors.163.com
[ceph_node2][DEBUG ] Package yum-plugin-priorities-1.1.31-29.el7.noarch already installed and latest version
[ceph_node2][DEBUG ] Nothing to do
[ceph_node2][DEBUG ] Configure Yum priorities to include obsoletes
[ceph_node2][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[ceph_node2][INFO  ] Running command: sudo rpm --import
[ceph_node2][INFO  ] Running command: sudo rpm -Uvh --replacepkgs
[ceph_node2][DEBUG ] Retrieving
[ceph_node2][DEBUG ] Preparing...                          ########################################
[ceph_node2][DEBUG ] Updating / installing...
[ceph_node2][DEBUG ] ceph-release-1-1.el7                  ########################################
[ceph_node2][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_node2][WARNIN] altered ceph.repo priorities to contain: priority=1
[ceph_node2][INFO  ] Running command: sudo yum -y install ceph ceph-radosgw
[ceph_node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_node2][DEBUG ] Loading mirror speeds from cached hostfile
[ceph_node2][DEBUG ]  * base: mirrors.163.com
[ceph_node2][DEBUG ]  * epel: ftp.cuhk.edu.hk
[ceph_node2][DEBUG ]  * extras: mirrors.skyshe.cn
[ceph_node2][DEBUG ]  * updates: mirrors.163.com
[ceph_node2][DEBUG ] 36 packages excluded due to repository priority protections
[ceph_node2][DEBUG ] Package 1:ceph-0.94.3-0.el7.x86_64 already installed and latest version
[ceph_node2][DEBUG ] Resolving Dependencies
[ceph_node2][DEBUG ] --> Running transaction check
[ceph_node2][DEBUG ] ---> Package ceph-radosgw.x86_64 1:0.94.3-0.el7 will be installed
[ceph_node2][DEBUG ] --> Finished Dependency Resolution
[ceph_node2][DEBUG ]
[ceph_node2][DEBUG ] Dependencies Resolved
[ceph_node2][DEBUG ]
[ceph_node2][DEBUG ] ================================================================================
[ceph_node2][DEBUG ]  Package              Arch           Version                 Repository    Size
[ceph_node2][DEBUG ] ================================================================================
[ceph_node2][DEBUG ] Installing:
[ceph_node2][DEBUG ]  ceph-radosgw         x86_64         1:0.94.3-0.el7          Ceph         2.3 M
[ceph_node2][DEBUG ]
[ceph_node2][DEBUG ] Transaction Summary
[ceph_node2][DEBUG ] ================================================================================
[ceph_node2][DEBUG ] Install  1 Package
[ceph_node2][DEBUG ]
[ceph_node2][DEBUG ] Total download size: 2.3 M
[ceph_node2][DEBUG ] Installed size: 8.3 M
[ceph_node2][DEBUG ] Downloading packages:
[ceph_node2][DEBUG ] Running transaction check
[ceph_node2][DEBUG ] Running transaction test
[ceph_node2][DEBUG ] Transaction test succeeded
[ceph_node2][DEBUG ] Running transaction
[ceph_node2][DEBUG ]   Installing : 1:ceph-radosgw-0.94.3-0.el7.x86_64                           1/1
[ceph_node2][DEBUG ]   Verifying  : 1:ceph-radosgw-0.94.3-0.el7.x86_64                           1/1
[ceph_node2][DEBUG ]
[ceph_node2][DEBUG ] Installed:
[ceph_node2][DEBUG ]   ceph-radosgw.x86_64 1:0.94.3-0.el7                                            
[ceph_node2][DEBUG ]
[ceph_node2][DEBUG ] Complete!
[ceph_node2][INFO  ] Running command: sudo ceph --version
[ceph_node2][DEBUG ] ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph_monitor ...
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_monitor][INFO  ] installing Ceph on ceph_monitor
[ceph_monitor][INFO  ] Running command: sudo yum clean all
[ceph_monitor][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_monitor][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[ceph_monitor][DEBUG ] Cleaning up everything
[ceph_monitor][DEBUG ] Cleaning up list of fastest mirrors
[ceph_monitor][INFO  ] Running command: sudo yum -y install epel-release
[ceph_monitor][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_monitor][DEBUG ] Determining fastest mirrors
[ceph_monitor][DEBUG ]  * base: mirrors.skyshe.cn
[ceph_monitor][DEBUG ]  * epel: mirrors.ustc.edu.cn
[ceph_monitor][DEBUG ]  * extras: mirrors.skyshe.cn
[ceph_monitor][DEBUG ]  * updates: mirrors.163.com
[ceph_monitor][DEBUG ] Package epel-release-7-5.noarch already installed and latest version
[ceph_monitor][DEBUG ] Nothing to do
[ceph_monitor][INFO  ] Running command: sudo yum -y install yum-plugin-priorities
[ceph_monitor][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_monitor][DEBUG ] Loading mirror speeds from cached hostfile
[ceph_monitor][DEBUG ]  * base: mirrors.skyshe.cn
[ceph_monitor][DEBUG ]  * epel: mirrors.ustc.edu.cn
[ceph_monitor][DEBUG ]  * extras: mirrors.skyshe.cn
[ceph_monitor][DEBUG ]  * updates: mirrors.163.com
[ceph_monitor][DEBUG ] Package yum-plugin-priorities-1.1.31-29.el7.noarch already installed and latest version
[ceph_monitor][DEBUG ] Nothing to do
[ceph_monitor][DEBUG ] Configure Yum priorities to include obsoletes
[ceph_monitor][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[ceph_monitor][INFO  ] Running command: sudo rpm --import
[ceph_monitor][INFO  ] Running command: sudo rpm -Uvh --replacepkgs
[ceph_monitor][DEBUG ] Retrieving
[ceph_monitor][DEBUG ] Preparing...                          ########################################
[ceph_monitor][DEBUG ] Updating / installing...
[ceph_monitor][DEBUG ] ceph-release-1-1.el7                  ########################################
[ceph_monitor][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_monitor][WARNIN] altered ceph.repo priorities to contain: priority=1
[ceph_monitor][INFO  ] Running command: sudo yum -y install ceph ceph-radosgw
[ceph_monitor][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_monitor][DEBUG ] Loading mirror speeds from cached hostfile
[ceph_monitor][DEBUG ]  * base: mirrors.skyshe.cn
[ceph_monitor][DEBUG ]  * epel: mirrors.ustc.edu.cn
[ceph_monitor][DEBUG ]  * extras: mirrors.skyshe.cn
[ceph_monitor][DEBUG ]  * updates: mirrors.163.com
[ceph_monitor][DEBUG ] 36 packages excluded due to repository priority protections
[ceph_monitor][DEBUG ] Package 1:ceph-0.94.3-0.el7.x86_64 already installed and latest version
[ceph_monitor][DEBUG ] Resolving Dependencies
[ceph_monitor][DEBUG ] --> Running transaction check
[ceph_monitor][DEBUG ] ---> Package ceph-radosgw.x86_64 1:0.94.3-0.el7 will be installed
[ceph_monitor][DEBUG ] --> Finished Dependency Resolution
[ceph_monitor][DEBUG ]
[ceph_monitor][DEBUG ] Dependencies Resolved
[ceph_monitor][DEBUG ]
[ceph_monitor][DEBUG ] ================================================================================
[ceph_monitor][DEBUG ]  Package              Arch           Version                 Repository    Size
[ceph_monitor][DEBUG ] ================================================================================
[ceph_monitor][DEBUG ] Installing:
[ceph_monitor][DEBUG ]  ceph-radosgw         x86_64         1:0.94.3-0.el7          Ceph         2.3 M
[ceph_monitor][DEBUG ]
[ceph_monitor][DEBUG ] Transaction Summary
[ceph_monitor][DEBUG ] ================================================================================
[ceph_monitor][DEBUG ] Install  1 Package
[ceph_monitor][DEBUG ]
[ceph_monitor][DEBUG ] Total download size: 2.3 M
[ceph_monitor][DEBUG ] Installed size: 8.3 M
[ceph_monitor][DEBUG ] Downloading packages:
[ceph_monitor][DEBUG ] Running transaction check
[ceph_monitor][DEBUG ] Running transaction test
[ceph_monitor][DEBUG ] Transaction test succeeded
[ceph_monitor][DEBUG ] Running transaction
[ceph_monitor][DEBUG ]   Installing : 1:ceph-radosgw-0.94.3-0.el7.x86_64                           1/1
[ceph_monitor][DEBUG ]   Verifying  : 1:ceph-radosgw-0.94.3-0.el7.x86_64                           1/1
[ceph_monitor][DEBUG ]
[ceph_monitor][DEBUG ] Installed:
[ceph_monitor][DEBUG ]   ceph-radosgw.x86_64 1:0.94.3-0.el7                                            
[ceph_monitor][DEBUG ]
[ceph_monitor][DEBUG ] Complete!
[ceph_monitor][INFO  ] Running command: sudo ceph --version
[ceph_monitor][DEBUG ] ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph_admin ...
[ceph_admin][DEBUG ] connection detected need for sudo
[ceph_admin][DEBUG ] connected to host: ceph_admin
[ceph_admin][DEBUG ] detect platform information from remote host
[ceph_admin][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_admin][INFO  ] installing Ceph on ceph_admin
[ceph_admin][INFO  ] Running command: sudo yum clean all
[ceph_admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_admin][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[ceph_admin][DEBUG ] Cleaning up everything
[ceph_admin][DEBUG ] Cleaning up list of fastest mirrors
[ceph_admin][INFO  ] Running command: sudo yum -y install epel-release
[ceph_admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_admin][DEBUG ] Determining fastest mirrors
[ceph_admin][DEBUG ]  * base: mirrors.163.com
[ceph_admin][DEBUG ]  * epel: ftp.cuhk.edu.hk
[ceph_admin][DEBUG ]  * extras: mirrors.skyshe.cn
[ceph_admin][DEBUG ]  * updates: mirrors.163.com
[ceph_admin][DEBUG ] Package epel-release-7-5.noarch already installed and latest version
[ceph_admin][DEBUG ] Nothing to do
[ceph_admin][INFO  ] Running command: sudo yum -y install yum-plugin-priorities
[ceph_admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_admin][DEBUG ] Loading mirror speeds from cached hostfile
[ceph_admin][DEBUG ]  * base: mirrors.163.com
[ceph_admin][DEBUG ]  * epel: ftp.cuhk.edu.hk
[ceph_admin][DEBUG ]  * extras: mirrors.skyshe.cn
[ceph_admin][DEBUG ]  * updates: mirrors.163.com
[ceph_admin][DEBUG ] Package yum-plugin-priorities-1.1.31-29.el7.noarch already installed and latest version
[ceph_admin][DEBUG ] Nothing to do
[ceph_admin][DEBUG ] Configure Yum priorities to include obsoletes
[ceph_admin][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[ceph_admin][INFO  ] Running command: sudo rpm --import
[ceph_admin][INFO  ] Running command: sudo rpm -Uvh --replacepkgs
[ceph_admin][DEBUG ] Retrieving
[ceph_admin][DEBUG ] Preparing...                          ########################################
[ceph_admin][DEBUG ] Updating / installing...
[ceph_admin][DEBUG ] ceph-release-1-1.el7                  ########################################
[ceph_admin][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_admin][WARNIN] altered ceph.repo priorities to contain: priority=1
[ceph_admin][INFO  ] Running command: sudo yum -y install ceph ceph-radosgw
[ceph_admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[ceph_admin][DEBUG ] Loading mirror speeds from cached hostfile
[ceph_admin][DEBUG ]  * base: mirrors.163.com
[ceph_admin][DEBUG ]  * epel: ftp.cuhk.edu.hk
[ceph_admin][DEBUG ]  * extras: mirrors.skyshe.cn
[ceph_admin][DEBUG ]  * updates: mirrors.163.com
[ceph_admin][DEBUG ] 36 packages excluded due to repository priority protections
[ceph_admin][DEBUG ] Package 1:ceph-0.94.3-0.el7.x86_64 already installed and latest version
[ceph_admin][DEBUG ] Resolving Dependencies
[ceph_admin][DEBUG ] --> Running transaction check
[ceph_admin][DEBUG ] ---> Package ceph-radosgw.x86_64 1:0.94.3-0.el7 will be installed
[ceph_admin][DEBUG ] --> Finished Dependency Resolution
[ceph_admin][DEBUG ]
[ceph_admin][DEBUG ] Dependencies Resolved
[ceph_admin][DEBUG ]
[ceph_admin][DEBUG ] ================================================================================
[ceph_admin][DEBUG ]  Package              Arch           Version                 Repository    Size
[ceph_admin][DEBUG ] ================================================================================
[ceph_admin][DEBUG ] Installing:
[ceph_admin][DEBUG ]  ceph-radosgw         x86_64         1:0.94.3-0.el7          Ceph         2.3 M
[ceph_admin][DEBUG ]
[ceph_admin][DEBUG ] Transaction Summary
[ceph_admin][DEBUG ] ================================================================================
[ceph_admin][DEBUG ] Install  1 Package
[ceph_admin][DEBUG ]
[ceph_admin][DEBUG ] Total download size: 2.3 M
[ceph_admin][DEBUG ] Installed size: 8.3 M
[ceph_admin][DEBUG ] Downloading packages:
[ceph_admin][DEBUG ] Running transaction check
[ceph_admin][DEBUG ] Running transaction test
[ceph_admin][DEBUG ] Transaction test succeeded
[ceph_admin][DEBUG ] Running transaction
[ceph_admin][DEBUG ]   Installing : 1:ceph-radosgw-0.94.3-0.el7.x86_64                           1/1
[ceph_admin][DEBUG ]   Verifying  : 1:ceph-radosgw-0.94.3-0.el7.x86_64                           1/1
[ceph_admin][DEBUG ]
[ceph_admin][DEBUG ] Installed:
[ceph_admin][DEBUG ]   ceph-radosgw.x86_64 1:0.94.3-0.el7                                            
[ceph_admin][DEBUG ]
[ceph_admin][DEBUG ] Complete!
[ceph_admin][INFO  ] Running command: sudo ceph --version
[ceph_admin][DEBUG ] ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
[talen@ceph_admin ~]$
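
No --release flag was given, so ceph-deploy 1.5.28 fell back to its default stable series ("Installing stable version hammer" above). To make the run reproducible, the release can be pinned explicitly, e.g.:

$ ceph-deploy install --release hammer ceph_node1 ceph_node2 ceph_monitor ceph_admin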

[talen@ceph_admin ~]$ ceph-deploy mon create-initial                                        
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          :
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph_monitor
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph_monitor ...
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_monitor][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.1.1503 Core
[ceph_monitor][DEBUG ] determining if provided host has same hostname in remote
[ceph_monitor][DEBUG ] get remote short hostname
[ceph_monitor][DEBUG ] deploying mon to ceph_monitor
[ceph_monitor][DEBUG ] get remote short hostname
[ceph_monitor][DEBUG ] remote hostname: ceph_monitor
[ceph_monitor][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_monitor][DEBUG ] create the mon path if it does not exist
[ceph_monitor][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph_monitor/done
[ceph_monitor][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph_monitor][DEBUG ] create the init path if it does not exist
[ceph_monitor][DEBUG ] locating the `service` executable...
[ceph_monitor][INFO  ] Running command: sudo /usr/sbin/service ceph -c /etc/ceph/ceph.conf start mon.ceph_monitor
[ceph_monitor][DEBUG ] === mon.ceph_monitor ===
[ceph_monitor][DEBUG ] Starting Ceph mon.ceph_monitor on ceph_monitor...
[ceph_monitor][WARNIN] Running as unit run-16324.service.
[ceph_monitor][DEBUG ] Starting ceph-create-keys on ceph_monitor...
[ceph_monitor][INFO  ] Running command: sudo systemctl enable ceph
[ceph_monitor][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[ceph_monitor][WARNIN] Executing /sbin/chkconfig ceph on
[ceph_monitor][WARNIN] The unit files have no [Install] section. They are not meant to be enabled
[ceph_monitor][WARNIN] using systemctl.
[ceph_monitor][WARNIN] Possible reasons for having this kind of units are:
[ceph_monitor][WARNIN] 1) A unit may be statically enabled by being symlinked from another unit's
[ceph_monitor][WARNIN]    .wants/ or .requires/ directory.
[ceph_monitor][WARNIN] 2) A unit's purpose may be to act as a helper for some other unit which has
[ceph_monitor][WARNIN]    a requirement dependency on it.
[ceph_monitor][WARNIN] 3) A unit may be started when needed via activation (socket, path, timer,
[ceph_monitor][WARNIN]    D-Bus, udev, scripted systemctl call, ...).
[ceph_monitor][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_monitor.asok mon_status
[ceph_monitor][DEBUG ] ********************************************************************************
[ceph_monitor][DEBUG ] status for monitor: mon.ceph_monitor
[ceph_monitor][DEBUG ] {
[ceph_monitor][DEBUG ]   "election_epoch": 1,
[ceph_monitor][DEBUG ]   "extra_probe_peers": [],
[ceph_monitor][DEBUG ]   "monmap": {
[ceph_monitor][DEBUG ]     "created": "0.000000",
[ceph_monitor][DEBUG ]     "epoch": 1,
[ceph_monitor][DEBUG ]     "fsid": "f35a65ad-1a6a-4e8d-8f7e-cb5f113c0a02",
[ceph_monitor][DEBUG ]     "modified": "0.000000",
[ceph_monitor][DEBUG ]     "mons": [
[ceph_monitor][DEBUG ]       {
[ceph_monitor][DEBUG ]         "addr": "10.0.2.33:6789/0",
[ceph_monitor][DEBUG ]         "name": "ceph_monitor",
[ceph_monitor][DEBUG ]         "rank": 0
[ceph_monitor][DEBUG ]       }
[ceph_monitor][DEBUG ]     ]
[ceph_monitor][DEBUG ]   },
[ceph_monitor][DEBUG ]   "name": "ceph_monitor",
[ceph_monitor][DEBUG ]   "outside_quorum": [],
[ceph_monitor][DEBUG ]   "quorum": [
[ceph_monitor][DEBUG ]     0
[ceph_monitor][DEBUG ]   ],
[ceph_monitor][DEBUG ]   "rank": 0,
[ceph_monitor][DEBUG ]   "state": "leader",
[ceph_monitor][DEBUG ]   "sync_provider": []
[ceph_monitor][DEBUG ] }
[ceph_monitor][DEBUG ] ********************************************************************************
[ceph_monitor][INFO  ] monitor: mon.ceph_monitor is running
[ceph_monitor][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_monitor.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph_monitor
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_monitor][DEBUG ] find the location of an executable
[ceph_monitor][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_monitor.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph_monitor monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][DEBUG ] Checking ceph_monitor for /etc/ceph/ceph.client.admin.keyring
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_monitor][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from ceph_monitor.
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Checking ceph_monitor for /var/lib/ceph/bootstrap-osd/ceph.keyring
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_monitor][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from ceph_monitor.
[ceph_deploy.gatherkeys][DEBUG ] Checking ceph_monitor for /var/lib/ceph/bootstrap-mds/ceph.keyring
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_monitor][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from ceph_monitor.
[ceph_deploy.gatherkeys][DEBUG ] Checking ceph_monitor for /var/lib/ceph/bootstrap-rgw/ceph.keyring
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_monitor][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-rgw.keyring key from ceph_monitor.
[talen@ceph_admin ~]$
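
"mon create-initial" ends by gathering the admin and bootstrap keyrings into the working directory. Judging from the gatherkeys output above, it should now contain something like this (illustrative listing):

$ ls -1 ceph.conf ceph.*.keyring
ceph.bootstrap-mds.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-rgw.keyring
ceph.client.admin.keyring
ceph.conf
ceph.mon.keyring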
[talen@ceph_admin ~]$ ceph-deploy osd prepare ceph_node1:/osd ceph_node2:/osd
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy osd prepare ceph_node1:/osd ceph_node2:/osd
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('ceph_node1', '/osd', None), ('ceph_node2', '/osd', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          :
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph_node1:/osd: ceph_node2:/osd:
[ceph_node1][DEBUG ] connection detected need for sudo
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph_node1
[ceph_node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node1][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph_node1 disk /osd journal None activate False
[ceph_node1][INFO  ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type xfs -- /osd
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[ceph_node1][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /osd
[ceph_node1][INFO  ] checking OSD status...
[ceph_node1][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph_node1 is now ready for osd use.
[ceph_node2][DEBUG ] connection detected need for sudo
[ceph_node2][DEBUG ] connected to host: ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph_node2
[ceph_node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node2][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph_node2 disk /osd journal None activate False
[ceph_node2][INFO  ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type xfs -- /osd
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[ceph_node2][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /osd
[ceph_node2][INFO  ] checking OSD status...
[ceph_node2][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph_node2 is now ready for osd use.
[talen@ceph_admin ~]$
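
Because /osd is a plain directory rather than a raw disk, ceph-disk only prepares a data dir inside it; the directory itself must already exist on both nodes. That step is not in the captured session, but it would look something like:

$ ssh ceph_node1 'sudo mkdir -p /osd'
$ ssh ceph_node2 'sudo mkdir -p /osd'

(Under hammer's sysvinit scripts the daemons run as root, so no ownership change is needed.)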
[talen@ceph_admin ~]$ ceph-deploy osd activate ceph_node1:/osd ceph_node2:/osd       
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy osd activate ceph_node1:/osd ceph_node2:/osd
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          :
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('ceph_node1', '/osd', None), ('ceph_node2', '/osd', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph_node1:/osd: ceph_node2:/osd:
[ceph_node1][DEBUG ] connection detected need for sudo
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] activating host ceph_node1 disk /osd
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph_node1][INFO  ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /osd
[ceph_node1][WARNIN] DEBUG:ceph-disk:Cluster uuid is 58514e13-d332-4a7e-9760-e3fccb9e2c76
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph_node1][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[ceph_node1][WARNIN] DEBUG:ceph-disk:OSD uuid is 6eda93ce-54d5-4d1b-a80c-eb1acd1308b7
[ceph_node1][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 6eda93ce-54d5-4d1b-a80c-eb1acd1308b7
[ceph_node1][WARNIN] DEBUG:ceph-disk:OSD id is 5
[ceph_node1][WARNIN] DEBUG:ceph-disk:Initializing OSD...
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /osd/activate.monmap
[ceph_node1][WARNIN] got monmap epoch 1
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 5 --monmap /osd/activate.monmap --osd-data /osd --osd-journal /osd/journal --osd-uuid 6eda93ce-54d5-4d1b-a80c-eb1acd1308b7 --keyring /osd/keyring
[ceph_node1][WARNIN] 2015-09-14 15:09:22.545380 7f5c1386e880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ceph_node1][WARNIN] 2015-09-14 15:09:22.749323 7f5c1386e880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ceph_node1][WARNIN] 2015-09-14 15:09:22.750085 7f5c1386e880 -1 filestore(/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[ceph_node1][WARNIN] 2015-09-14 15:09:22.960137 7f5c1386e880 -1 created object store /osd journal /osd/journal for osd.5 fsid f35a65ad-1a6a-4e8d-8f7e-cb5f113c0a02
[ceph_node1][WARNIN] 2015-09-14 15:09:22.960233 7f5c1386e880 -1 auth: error reading file: /osd/keyring: can't open /osd/keyring: (2) No such file or directory
[ceph_node1][WARNIN] 2015-09-14 15:09:22.960471 7f5c1386e880 -1 created new key in keyring /osd/keyring
[ceph_node1][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[ceph_node1][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.5 -i /osd/keyring osd allow * mon allow profile osd
[ceph_node1][WARNIN] added key for osd.5
[ceph_node1][WARNIN] DEBUG:ceph-disk:ceph osd.5 data dir is ready at /osd
[ceph_node1][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/osd/ceph-5 -> /osd
[ceph_node1][WARNIN] DEBUG:ceph-disk:Starting ceph osd.5...
[ceph_node1][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/service ceph --cluster ceph start osd.5
[ceph_node1][DEBUG ] === osd.5 ===
[ceph_node1][WARNIN] create-or-move updating item name 'osd.5' weight 0.01 at location {host=ceph_node1,root=default} to crush map
[ceph_node1][DEBUG ] Starting Ceph osd.5 on ceph_node1...
[ceph_node1][WARNIN] Running as unit run-12687.service.
[ceph_node1][INFO  ] checking OSD status...
[ceph_node1][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_node1][INFO  ] Running command: sudo systemctl enable ceph
[ceph_node1][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[ceph_node1][WARNIN] Executing /sbin/chkconfig ceph on
[ceph_node1][WARNIN] The unit files have no [Install] section. They are not meant to be enabled
[ceph_node1][WARNIN] using systemctl.
[ceph_node1][WARNIN] Possible reasons for having this kind of units are:
[ceph_node1][WARNIN] 1) A unit may be statically enabled by being symlinked from another unit's
[ceph_node1][WARNIN]    .wants/ or .requires/ directory.
[ceph_node1][WARNIN] 2) A unit's purpose may be to act as a helper for some other unit which has
[ceph_node1][WARNIN]    a requirement dependency on it.
[ceph_node1][WARNIN] 3) A unit may be started when needed via activation (socket, path, timer,
[ceph_node1][WARNIN]    D-Bus, udev, scripted systemctl call, ...).
[ceph_node2][DEBUG ] connection detected need for sudo
[ceph_node2][DEBUG ] connected to host: ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] activating host ceph_node2 disk /osd
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph_node2][INFO  ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /osd
[ceph_node2][WARNIN] DEBUG:ceph-disk:Cluster uuid is 58514e13-d332-4a7e-9760-e3fccb9e2c76
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph_node2][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[ceph_node2][WARNIN] DEBUG:ceph-disk:OSD uuid is bf596f68-8b31-4e98-a861-f7d25949cd84
[ceph_node2][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise bf596f68-8b31-4e98-a861-f7d25949cd84
[ceph_node2][WARNIN] DEBUG:ceph-disk:OSD id is 6
[ceph_node2][WARNIN] DEBUG:ceph-disk:Initializing OSD...
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /osd/activate.monmap
[ceph_node2][WARNIN] got monmap epoch 1
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 6 --monmap /osd/activate.monmap --osd-data /osd --osd-journal /osd/journal --osd-uuid bf596f68-8b31-4e98-a861-f7d25949cd84 --keyring /osd/keyring
[ceph_node2][WARNIN] 2015-09-14 15:09:33.949827 7f5914f19880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ceph_node2][WARNIN] 2015-09-14 15:09:34.249571 7f5914f19880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ceph_node2][WARNIN] 2015-09-14 15:09:34.252323 7f5914f19880 -1 filestore(/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[ceph_node2][WARNIN] 2015-09-14 15:09:34.448365 7f5914f19880 -1 created object store /osd journal /osd/journal for osd.6 fsid f35a65ad-1a6a-4e8d-8f7e-cb5f113c0a02
[ceph_node2][WARNIN] 2015-09-14 15:09:34.448459 7f5914f19880 -1 auth: error reading file: /osd/keyring: can't open /osd/keyring: (2) No such file or directory
[ceph_node2][WARNIN] 2015-09-14 15:09:34.448632 7f5914f19880 -1 created new key in keyring /osd/keyring
[ceph_node2][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[ceph_node2][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.6 -i /osd/keyring osd allow * mon allow profile osd
[ceph_node2][WARNIN] added key for osd.6
[ceph_node2][WARNIN] DEBUG:ceph-disk:ceph osd.6 data dir is ready at /osd
[ceph_node2][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/osd/ceph-6 -> /osd
[ceph_node2][WARNIN] DEBUG:ceph-disk:Starting ceph osd.6...
[ceph_node2][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/service ceph --cluster ceph start osd.6
[ceph_node2][DEBUG ] === osd.6 ===
[ceph_node2][WARNIN] create-or-move updating item name 'osd.6' weight 0.01 at location {host=ceph_node2,root=default} to crush map
[ceph_node2][DEBUG ] Starting Ceph osd.6 on ceph_node2...
[ceph_node2][WARNIN] Running as unit run-12526.service.
[ceph_node2][INFO  ] checking OSD status...
[ceph_node2][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_node2][INFO  ] Running command: sudo systemctl enable ceph
[ceph_node2][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[ceph_node2][WARNIN] Executing /sbin/chkconfig ceph on
[ceph_node2][WARNIN] The unit files have no [Install] section. They are not meant to be enabled
[ceph_node2][WARNIN] using systemctl.
[ceph_node2][WARNIN] Possible reasons for having this kind of units are:
[ceph_node2][WARNIN] 1) A unit may be statically enabled by being symlinked from another unit's
[ceph_node2][WARNIN]    .wants/ or .requires/ directory.
[ceph_node2][WARNIN] 2) A unit's purpose may be to act as a helper for some other unit which has
[ceph_node2][WARNIN]    a requirement dependency on it.
[ceph_node2][WARNIN] 3) A unit may be started when needed via activation (socket, path, timer,
[ceph_node2][WARNIN]    D-Bus, udev, scripted systemctl call, ...).
[talen@ceph_admin ~]$  
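
Note that activation allocated OSD ids 5 and 6 rather than 0 and 1, a first hint that the monitor is not as fresh as intended: ids 0-4 evidently still exist in its OSD map. Two standard sanity checks worth running at this point (not part of the session):

$ ceph osd tree    # osd.5 and osd.6 should be up/in under their hosts; leftover ids would show as down
$ ceph -s          # overall cluster status and PG summary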
[talen@ceph_admin ~]$ ceph-deploy admin ceph_admin ceph_node1 ceph_node2 ceph_monitor              
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy admin ceph_admin ceph_node1 ceph_node2 ceph_monitor
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph_admin', 'ceph_node1', 'ceph_node2', 'ceph_monitor']
[ceph_deploy.cli][INFO  ]  func                          :
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph_admin
[ceph_admin][DEBUG ] connection detected need for sudo
[ceph_admin][DEBUG ] connected to host: ceph_admin
[ceph_admin][DEBUG ] detect platform information from remote host
[ceph_admin][DEBUG ] detect machine type
[ceph_admin][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph_node1
[ceph_node1][DEBUG ] connection detected need for sudo
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph_node2
[ceph_node2][DEBUG ] connection detected need for sudo
[ceph_node2][DEBUG ] connected to host: ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph_monitor
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_monitor][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[talen@ceph_admin ~]$

[talen@ceph_admin ~]$ ceph health
2015-09-14 15:11:55.503854 7fdc2297f700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2015-09-14 15:11:55.503876 7fdc2297f700  0 librados: client.admin initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound
[talen@ceph_admin ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
[talen@ceph_admin ~]$ ceph health
HEALTH_WARN 64 pgs stale; 64 pgs stuck stale
[talen@ceph_admin ~]$
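
Two separate problems surface here. First, /etc/ceph/ceph.client.admin.keyring is installed without world-read permission, so running ceph as an unprivileged user fails with the missing-keyring error; chmod +r (or simply running the client via sudo) fixes that. Second, HEALTH_WARN 64 pgs stale: 64 matches the PG count of the default rbd pool, and together with the fsid mismatch in the logs (ceph.conf says fsid 58514e13-..., while the monitor reports f35a65ad-...) and the OSD ids starting at 5, the likely explanation is that the old monitor data under /var/lib/ceph survived the package purge, carrying the previous cluster's pools and now-dead OSDs into this one. Illustrative commands to confirm the diagnosis:

$ ceph pg dump_stuck stale    # list the stuck PGs and the OSDs they last mapped to
$ ceph osd tree               # leftover OSDs (ids 0-4) should appear as down/out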