The `ceph-deploy mon add` failures below were caused by insufficient free space on the filesystem holding the Ceph monitor's data directory. The problem was resolved after cleaning up disk space.
[talen@ceph_admin mycluster]$ ceph-deploy mon add ceph_node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /bin/ceph-deploy mon add ceph_node1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : add
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] mon : ['ceph_node1']
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] address : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mon][INFO ] ensuring configuration of new mon host: ceph_node1
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph_node1
[ceph_node1][DEBUG ] connection detected need for sudo
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host ceph_node1
[ceph_deploy.mon][DEBUG ] using mon address by resolving host: 10.0.2.31
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph_node1 ...
[ceph_node1][DEBUG ] connection detected need for sudo
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_node1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.1.1503 Core
[ceph_node1][DEBUG ] determining if provided host has same hostname in remote
[ceph_node1][DEBUG ] get remote short hostname
[ceph_node1][DEBUG ] adding mon to ceph_node1
[ceph_node1][DEBUG ] get remote short hostname
[ceph_node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node1][DEBUG ] create the mon path if it does not exist
[ceph_node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph_node1/done
[ceph_node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph_node1/done
[ceph_node1][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph_node1.mon.keyring
[ceph_node1][DEBUG ] create the monitor keyring file
[ceph_node1][INFO ] Running command: sudo ceph mon getmap -o /var/lib/ceph/tmp/ceph.ceph_node1.monmap
[ceph_node1][WARNIN] got monmap epoch 1
[ceph_node1][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph_node1 --monmap /var/lib/ceph/tmp/ceph.ceph_node1.monmap --keyring /var/lib/ceph/tmp/ceph-ceph_node1.mon.keyring
[ceph_node1][DEBUG ] ceph-mon: set fsid to 08416be1-f6e7-4c5a-b7b3-7eb148b0c467
[ceph_node1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph_node1 for mon.ceph_node1
[ceph_node1][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph_node1.mon.keyring
[ceph_node1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph_node1][DEBUG ] create the init path if it does not exist
[ceph_node1][INFO ] Running command: sudo ceph-mon -i ceph_node1 --public-addr 10.0.2.31
[ceph_node1][WARNIN] error: monitor data filesystem reached concerning levels of available storage space (available: 3% 226 MB)
[ceph_node1][WARNIN] you may adjust 'mon data avail crit' to a lower value to make this go away (default: 5%)
[ceph_node1][WARNIN]
[ceph_node1][ERROR ] RuntimeError: command returned non-zero exit status: 28
[ceph_deploy.mon][ERROR ] Failed to execute command: ceph-mon -i ceph_node1 --public-addr 10.0.2.31
[ceph_deploy][ERROR ] GenericError: Failed to add monitor to host: ceph_node1
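Exit status 28 is ENOSPC: ceph-mon refuses to start when the filesystem holding its data directory has less free space than `mon data avail crit` (default 5%, as the warning says; here only 3% / 226 MB was available). A minimal sketch of the same check (`mon_free_pct` is our own helper, not a Ceph tool):

```shell
# Sketch of the check ceph-mon performs at startup: free-space percentage on
# the filesystem holding the monitor data directory vs. 'mon data avail crit'.
# mon_free_pct is our own helper, not part of Ceph.
mon_free_pct() {
    # df -P column 5 is Use%; free% = 100 - Use%
    used=$(df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    echo $((100 - used))
}

dir=/var/lib/ceph/mon            # mon data path seen in the log above
[ -d "$dir" ] || dir=/           # fall back so the sketch runs anywhere
free=$(mon_free_pct "$dir")
if [ "$free" -le 5 ]; then
    echo "CRIT: ${free}% free on $dir - ceph-mon would exit with status 28"
else
    echo "OK: ${free}% free on $dir"
fi
```

Lowering `mon data avail crit`, as the warning suggests, only silences the check; freeing space is the real fix.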
[talen@ceph_admin mycluster]$ ceph-deploy mon add ceph_node2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /bin/ceph-deploy mon add ceph_node2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : add
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] mon : ['ceph_node2']
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] address : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mon][INFO ] ensuring configuration of new mon host: ceph_node2
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph_node2
[ceph_node2][DEBUG ] connection detected need for sudo
[ceph_node2][DEBUG ] connected to host: ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host ceph_node2
[ceph_deploy.mon][DEBUG ] using mon address by resolving host: 10.0.2.32
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph_node2 ...
[ceph_node2][DEBUG ] connection detected need for sudo
[ceph_node2][DEBUG ] connected to host: ceph_node2
[ceph_node2][DEBUG ] detect platform information from remote host
[ceph_node2][DEBUG ] detect machine type
[ceph_node2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.1.1503 Core
[ceph_node2][DEBUG ] determining if provided host has same hostname in remote
[ceph_node2][DEBUG ] get remote short hostname
[ceph_node2][DEBUG ] adding mon to ceph_node2
[ceph_node2][DEBUG ] get remote short hostname
[ceph_node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node2][DEBUG ] create the mon path if it does not exist
[ceph_node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph_node2/done
[ceph_node2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph_node2/done
[ceph_node2][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph_node2.mon.keyring
[ceph_node2][DEBUG ] create the monitor keyring file
[ceph_node2][INFO ] Running command: sudo ceph mon getmap -o /var/lib/ceph/tmp/ceph.ceph_node2.monmap
[ceph_node2][WARNIN] got monmap epoch 1
[ceph_node2][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i ceph_node2 --monmap /var/lib/ceph/tmp/ceph.ceph_node2.monmap --keyring /var/lib/ceph/tmp/ceph-ceph_node2.mon.keyring
[ceph_node2][DEBUG ] ceph-mon: set fsid to 08416be1-f6e7-4c5a-b7b3-7eb148b0c467
[ceph_node2][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph_node2 for mon.ceph_node2
[ceph_node2][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph_node2.mon.keyring
[ceph_node2][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph_node2][DEBUG ] create the init path if it does not exist
[ceph_node2][INFO ] Running command: sudo ceph-mon -i ceph_node2 --public-addr 10.0.2.32
[ceph_node2][WARNIN] error: monitor data filesystem reached concerning levels of available storage space (available: 3% 227 MB)
[ceph_node2][WARNIN] you may adjust 'mon data avail crit' to a lower value to make this go away (default: 5%)
[ceph_node2][WARNIN]
[ceph_node2][ERROR ] RuntimeError: command returned non-zero exit status: 28
[ceph_deploy.mon][ERROR ] Failed to execute command: ceph-mon -i ceph_node2 --public-addr 10.0.2.32
[ceph_deploy][ERROR ] GenericError: Failed to add monitor to host: ceph_node2
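Both nodes are in the same state: about 3% free on the monitor filesystem. The post does not record exactly which files were removed; a non-destructive way to find candidates (`biggest` is our own helper) is:

```shell
# List the largest top-level entries under a directory, to decide what to
# clean. 'biggest' is our own helper; on a CentOS 7 VM, /var (logs, yum
# cache, /var/lib/ceph) is the usual place to look.
biggest() {
    du -xd1 "$1" 2>/dev/null | sort -rn | head -"${2:-5}"
}
biggest /var 3
```

After freeing enough space to get back above the 5% `mon data avail crit` threshold, re-running the same `ceph-deploy mon add` command succeeds, as the next session shows.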
[talen@ceph_admin mycluster]$ ceph-deploy mon add ceph_node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/talen/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /bin/ceph-deploy mon add ceph_node1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : add
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] mon : ['ceph_node1']
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] address : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mon][INFO ] ensuring configuration of new mon host: ceph_node1
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph_node1
[ceph_node1][DEBUG ] connection detected need for sudo
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host ceph_node1
[ceph_deploy.mon][DEBUG ] using mon address by resolving host: 10.0.2.31
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph_node1 ...
[ceph_node1][DEBUG ] connection detected need for sudo
[ceph_node1][DEBUG ] connected to host: ceph_node1
[ceph_node1][DEBUG ] detect platform information from remote host
[ceph_node1][DEBUG ] detect machine type
[ceph_node1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.1.1503 Core
[ceph_node1][DEBUG ] determining if provided host has same hostname in remote
[ceph_node1][DEBUG ] get remote short hostname
[ceph_node1][DEBUG ] adding mon to ceph_node1
[ceph_node1][DEBUG ] get remote short hostname
[ceph_node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_node1][DEBUG ] create the mon path if it does not exist
[ceph_node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph_node1/done
[ceph_node1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph_node1][DEBUG ] create the init path if it does not exist
[ceph_node1][INFO ] Running command: sudo ceph-mon -i ceph_node1 --public-addr 10.0.2.31
[ceph_node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_node1.asok mon_status
[ceph_node1][WARNIN] ceph_node1 is not defined in `mon initial members`
[ceph_node1][WARNIN] monitor ceph_node1 does not exist in monmap
[ceph_node1][WARNIN] neither `public_addr` nor `public_network` keys are defined for monitors
[ceph_node1][WARNIN] monitors may not be able to form quorum
[ceph_node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph_node1.asok mon_status
[ceph_node1][DEBUG ] ********************************************************************************
[ceph_node1][DEBUG ] status for monitor: mon.ceph_node1
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "election_epoch": 1,
[ceph_node1][DEBUG ] "extra_probe_peers": [],
[ceph_node1][DEBUG ] "monmap": {
[ceph_node1][DEBUG ] "created": "0.000000",
[ceph_node1][DEBUG ] "epoch": 2,
[ceph_node1][DEBUG ] "fsid": "08416be1-f6e7-4c5a-b7b3-7eb148b0c467",
[ceph_node1][DEBUG ] "modified": "2015-09-14 18:51:20.885745",
[ceph_node1][DEBUG ] "mons": [
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "10.0.2.31:6789/0",
[ceph_node1][DEBUG ] "name": "ceph_node1",
[ceph_node1][DEBUG ] "rank": 0
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] {
[ceph_node1][DEBUG ] "addr": "10.0.2.33:6789/0",
[ceph_node1][DEBUG ] "name": "ceph_monitor",
[ceph_node1][DEBUG ] "rank": 1
[ceph_node1][DEBUG ] }
[ceph_node1][DEBUG ] ]
[ceph_node1][DEBUG ] },
[ceph_node1][DEBUG ] "name": "ceph_node1",
[ceph_node1][DEBUG ] "outside_quorum": [],
[ceph_node1][DEBUG ] "quorum": [],
[ceph_node1][DEBUG ] "rank": 0,
[ceph_node1][DEBUG ] "state": "electing",
[ceph_node1][DEBUG ] "sync_provider": []
[ceph_node1][DEBUG ] }
[ceph_node1][DEBUG ] ********************************************************************************
[ceph_node1][INFO ] monitor: mon.ceph_node1 is running
[ceph_node2][INFO ] monitor: mon.ceph_node2 is running
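The WARNIN lines in this run ("not defined in `mon initial members`", "neither `public_addr` nor `public_network` keys are defined") are about ceph.conf, not disk space; the monitors still form quorum, but the warnings can be addressed by listing the new mons and the public network in the `[global]` section. A hedged sketch, using the names and 10.0.2.x addresses from this session (verify against your own ceph.conf before pushing):

```ini
[global]
fsid = 08416be1-f6e7-4c5a-b7b3-7eb148b0c467
mon initial members = ceph_monitor, ceph_node1, ceph_node2
mon host = 10.0.2.33, 10.0.2.31, 10.0.2.32
public network = 10.0.2.0/24
```

The updated file can then be distributed with `ceph-deploy --overwrite-conf config push <host>` and the monitors restarted.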
[talen@ceph_admin mycluster]$ ceph quorum_status --format json-pretty
{
"election_epoch": 6,
"quorum": [
0,
1,
2
],
"quorum_names": [
"ceph_node1",
"ceph_node2",
"ceph_monitor"
],
"quorum_leader_name": "ceph_node1",
"monmap": {
"epoch": 3,
"fsid": "08416be1-f6e7-4c5a-b7b3-7eb148b0c467",
"modified": "2015-09-14 18:52:15.332101",
"created": "0.000000",
"mons": [
{
"rank": 0,
"name": "ceph_node1",
"addr": "10.0.2.31:6789\/0"
},
{
"rank": 1,
"name": "ceph_node2",
"addr": "10.0.2.32:6789\/0"
},
{
"rank": 2,
"name": "ceph_monitor",
"addr": "10.0.2.33:6789\/0"
}
]
}
}
[talen@ceph_admin mycluster]$
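As a quick sanity check on the `quorum_status` output, the `quorum` array should list all three ranks. A portable sketch using POSIX tools only (the JSON here is an abbreviated copy of the output above):

```shell
# Count the ranks in quorum_status's "quorum" array. The JSON is abbreviated
# from the session above; on a live cluster substitute
#   quorum_json=$(ceph quorum_status --format json)
quorum_json='{"election_epoch":6,"quorum":[0,1,2],"quorum_leader_name":"ceph_node1"}'
members=$(printf '%s' "$quorum_json" \
    | sed 's/.*"quorum":\[\([^]]*\)\].*/\1/' \
    | tr ',' '\n' | grep -c .)
echo "monitors in quorum: $members"   # -> monitors in quorum: 3
```

With all three monitors (ceph_node1, ceph_node2, ceph_monitor) in quorum, the deployment is complete.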