
Category: Servers & Storage

2015-09-14 15:28:50

Prepare a directory-backed OSD (directory /osd on host ceph_monitor) with ceph-deploy:

[talen@ceph_admin ~]$ ceph-deploy osd prepare ceph_monitor:/osd
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy osd prepare ceph_monitor:/osd
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('ceph_monitor', '/osd', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          :
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph_monitor:/osd:
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_monitor][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph_monitor
[ceph_monitor][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_monitor][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph_monitor disk /osd journal None activate False
[ceph_monitor][INFO  ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type xfs -- /osd
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /osd
[ceph_monitor][INFO  ] checking OSD status...
[ceph_monitor][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_monitor][WARNIN] there are 5 OSDs down
[ceph_monitor][WARNIN] there are 5 OSDs out
[ceph_deploy.osd][DEBUG ] Host ceph_monitor is now ready for osd use.
[talen@ceph_admin ~]$
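The prepare step above only initializes /osd as an OSD data directory; the daemon is not started until activate. As a quick sanity check, the prepared directory can be probed for its marker files before activating. A minimal sketch, assuming the marker file names (`ceph_fsid`, `fsid`, `magic`) that ceph-disk wrote for directory-backed OSDs at the time; verify them against your Ceph release:

```python
import os
import tempfile
import uuid

# Marker files ceph-disk is assumed to write when preparing a
# directory-backed OSD data dir (check against your Ceph version).
MARKERS = ("ceph_fsid", "fsid", "magic")

def osd_dir_prepared(path):
    """Return True if the directory looks like a prepared OSD data dir."""
    return all(os.path.isfile(os.path.join(path, m)) for m in MARKERS)

# Demonstration with a throwaway directory standing in for /osd.
demo = tempfile.mkdtemp()
print(osd_dir_prepared(demo))           # no markers yet -> False
for name in MARKERS:
    with open(os.path.join(demo, name), "w") as f:
        f.write(str(uuid.uuid4()) + "\n")
print(osd_dir_prepared(demo))           # all markers present -> True
```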

Next, activate the prepared directory; this allocates an OSD id, runs mkfs, authorizes the key, and starts the daemon:

[talen@ceph_admin ~]$ ceph-deploy osd activate ceph_monitor:/osd
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /bin/ceph-deploy osd activate ceph_monitor:/osd
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          :
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('ceph_monitor', '/osd', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph_monitor:/osd:
[ceph_monitor][DEBUG ] connection detected need for sudo
[ceph_monitor][DEBUG ] connected to host: ceph_monitor
[ceph_monitor][DEBUG ] detect platform information from remote host
[ceph_monitor][DEBUG ] detect machine type
[ceph_monitor][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.1.1503 Core
[ceph_deploy.osd][DEBUG ] activating host ceph_monitor disk /osd
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph_monitor][INFO  ] Running command: sudo ceph-disk -v activate --mark-init sysvinit --mount /osd
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Cluster uuid is 58514e13-d332-4a7e-9760-e3fccb9e2c76
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[ceph_monitor][WARNIN] DEBUG:ceph-disk:OSD uuid is 86491f86-a204-4cee-acb5-9d7a7c26f784
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 86491f86-a204-4cee-acb5-9d7a7c26f784
[ceph_monitor][WARNIN] DEBUG:ceph-disk:OSD id is 7
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Initializing OSD...
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /osd/activate.monmap
[ceph_monitor][WARNIN] got monmap epoch 1
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 7 --monmap /osd/activate.monmap --osd-data /osd --osd-journal /osd/journal --osd-uuid 86491f86-a204-4cee-acb5-9d7a7c26f784 --keyring /osd/keyring
[ceph_monitor][WARNIN] 2015-09-14 15:22:49.300887 7f3da9997880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ceph_monitor][WARNIN] 2015-09-14 15:22:49.667066 7f3da9997880 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ceph_monitor][WARNIN] 2015-09-14 15:22:49.669354 7f3da9997880 -1 filestore(/osd) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[ceph_monitor][WARNIN] 2015-09-14 15:22:49.917080 7f3da9997880 -1 created object store /osd journal /osd/journal for osd.7 fsid f35a65ad-1a6a-4e8d-8f7e-cb5f113c0a02
[ceph_monitor][WARNIN] 2015-09-14 15:22:49.917200 7f3da9997880 -1 auth: error reading file: /osd/keyring: can't open /osd/keyring: (2) No such file or directory
[ceph_monitor][WARNIN] 2015-09-14 15:22:49.917469 7f3da9997880 -1 created new key in keyring /osd/keyring
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Marking with init system sysvinit
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.7 -i /osd/keyring osd allow * mon allow profile osd
[ceph_monitor][WARNIN] added key for osd.7
[ceph_monitor][WARNIN] DEBUG:ceph-disk:ceph osd.7 data dir is ready at /osd
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/osd/ceph-7 -> /osd
[ceph_monitor][WARNIN] DEBUG:ceph-disk:Starting ceph osd.7...
[ceph_monitor][WARNIN] INFO:ceph-disk:Running command: /usr/sbin/service ceph --cluster ceph start osd.7
[ceph_monitor][DEBUG ] === osd.7 ===
[ceph_monitor][WARNIN] create-or-move updating item name 'osd.7' weight 0.01 at location {host=ceph_monitor,root=default} to crush map
[ceph_monitor][DEBUG ] Starting Ceph osd.7 on ceph_monitor...
[ceph_monitor][WARNIN] Running as unit run-12576.service.
[ceph_monitor][INFO  ] checking OSD status...
[ceph_monitor][INFO  ] Running command: sudo ceph --cluster=ceph osd stat --format=json
[ceph_monitor][WARNIN] there are 5 OSDs down
[ceph_monitor][WARNIN] there are 5 OSDs out
[ceph_monitor][INFO  ] Running command: sudo systemctl enable ceph
[ceph_monitor][WARNIN] ceph.service is not a native service, redirecting to /sbin/chkconfig.
[ceph_monitor][WARNIN] Executing /sbin/chkconfig ceph on
[ceph_monitor][WARNIN] The unit files have no [Install] section. They are not meant to be enabled
[ceph_monitor][WARNIN] using systemctl.
[ceph_monitor][WARNIN] Possible reasons for having this kind of units are:
[ceph_monitor][WARNIN] 1) A unit may be statically enabled by being symlinked from another unit's
[ceph_monitor][WARNIN]    .wants/ or .requires/ directory.
[ceph_monitor][WARNIN] 2) A unit's purpose may be to act as a helper for some other unit which has
[ceph_monitor][WARNIN]    a requirement dependency on it.
[ceph_monitor][WARNIN] 3) A unit may be started when needed via activation (socket, path, timer,
[ceph_monitor][WARNIN]    D-Bus, udev, scripted systemctl call, ...).
[talen@ceph_admin ~]$
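In the activate transcript above, the monitor allocated id 7 to the new OSD and ceph-disk symlinked /var/lib/ceph/osd/ceph-7 to /osd, because Ceph expects each OSD's data dir at {cluster}-{id} under that base path. A small illustrative sketch of that naming convention (the helper name is my own, not a Ceph API):

```python
import os

def osd_data_path(cluster, osd_id, base="/var/lib/ceph/osd"):
    """Path where Ceph looks for an OSD's data dir, i.e. the link
    target convention ceph-disk used when creating the symlink."""
    return os.path.join(base, "{}-{}".format(cluster, osd_id))

print(osd_data_path("ceph", 7))   # /var/lib/ceph/osd/ceph-7
```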


Finally, watch the cluster pick up the new OSD with ceph -w; osd.7 boots and the osdmap moves from 2 up/2 in to 3 up/3 in:

[talen@ceph_admin ~]$ ceph -w
    cluster f35a65ad-1a6a-4e8d-8f7e-cb5f113c0a02
     health HEALTH_WARN
            64 pgs stale
            64 pgs stuck stale
     monmap e1: 1 mons at {ceph_monitor=10.0.2.33:6789/0}
            election epoch 1, quorum 0 ceph_monitor
     osdmap e47: 7 osds: 2 up, 2 in
      pgmap v1677: 64 pgs, 1 pools, 0 bytes data, 0 objects
            10305 MB used, 6056 MB / 16362 MB avail
                  64 stale+active+clean

2015-09-14 15:11:44.150240 mon.0 [INF] pgmap v1677: 64 pgs: 64 stale+active+clean; 0 bytes data, 10305 MB used, 6056 MB / 16362 MB avail
2015-09-14 15:22:48.442026 mon.0 [INF] from='client.? 10.0.2.33:0/1012278' entity='client.bootstrap-osd' cmd=[{"prefix": "osd create", "uuid": "86491f86-a204-4cee-acb5-9d7a7c26f784"}]: dispatch
2015-09-14 15:22:48.552414 mon.0 [INF] from='client.? 10.0.2.33:0/1012278' entity='client.bootstrap-osd' cmd='[{"prefix": "osd create", "uuid": "86491f86-a204-4cee-acb5-9d7a7c26f784"}]': finished
2015-09-14 15:22:48.665616 mon.0 [INF] osdmap e48: 8 osds: 2 up, 2 in
2015-09-14 15:22:48.685799 mon.0 [INF] pgmap v1678: 64 pgs: 64 stale+active+clean; 0 bytes data, 10305 MB used, 6056 MB / 16362 MB avail
2015-09-14 15:22:50.271971 mon.0 [INF] from='client.? 10.0.2.33:0/1012361' entity='client.bootstrap-osd' cmd=[{"prefix": "auth add", "entity": "osd.7", "caps": ["osd", "allow *", "mon", "allow profile osd"]}]: dispatch
2015-09-14 15:22:50.310775 mon.0 [INF] from='client.? 10.0.2.33:0/1012361' entity='client.bootstrap-osd' cmd='[{"prefix": "auth add", "entity": "osd.7", "caps": ["osd", "allow *", "mon", "allow profile osd"]}]': finished
2015-09-14 15:22:51.148684 mon.0 [INF] from='client.? 10.0.2.33:0/1012532' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "args": ["host=ceph_monitor", "root=default"], "id": 7, "weight": 0.01}]: dispatch
2015-09-14 15:22:51.895119 mon.0 [INF] from='client.? 10.0.2.33:0/1012532' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "args": ["host=ceph_monitor", "root=default"], "id": 7, "weight": 0.01}]': finished
2015-09-14 15:22:51.936453 mon.0 [INF] osdmap e49: 8 osds: 2 up, 2 in
2015-09-14 15:22:52.015062 mon.0 [INF] pgmap v1679: 64 pgs: 64 stale+active+clean; 0 bytes data, 10305 MB used, 6056 MB / 16362 MB avail
2015-09-14 15:22:53.077570 mon.0 [INF] osd.7 10.0.2.33:6800/12581 boot
2015-09-14 15:22:53.178958 mon.0 [INF] osdmap e50: 8 osds: 3 up, 3 in
2015-09-14 15:22:53.197124 mon.0 [INF] pgmap v1680: 64 pgs: 64 stale+active+clean; 0 bytes data, 10305 MB used, 6056 MB / 16362 MB avail
2015-09-14 15:22:58.593083 mon.0 [INF] pgmap v1681: 64 pgs: 64 stale+active+clean; 0 bytes data, 15460 MB used, 9082 MB / 24543 MB avail
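The "there are 5 OSDs down / out" warnings printed by ceph-deploy above come from comparing totals in `ceph osd stat --format=json`. A minimal sketch of that arithmetic; the JSON key names (`num_osds`, `num_up_osds`, `num_in_osds`) are an assumption based on hammer-era output and should be checked against your release:

```python
import json

# Sample output of `ceph osd stat --format=json`, matching the
# "8 osds: 3 up, 3 in" state shown in the transcript (key names assumed).
sample = '{"epoch": 50, "num_osds": 8, "num_up_osds": 3, "num_in_osds": 3}'

def count_problem_osds(stat_json):
    """Return (down, out) counts the way ceph-deploy derives its warnings."""
    s = json.loads(stat_json)
    down = s["num_osds"] - s["num_up_osds"]
    out = s["num_osds"] - s["num_in_osds"]
    return down, out

print(count_problem_osds(sample))   # (5, 5)
```

This matches the transcript: 8 registered OSDs minus 3 up leaves 5 down, and likewise 5 out.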