Category: Servers & Storage
2017-03-30 14:40:00
Cluster layout:

    ## ceph monitor
    192.168.1.86    mon1 mds01
    192.168.1.172   swarm2
    192.168.1.164   mon3 mds02
    ## ceph mds
    192.168.1.86    mds01
    192.168.1.164   mds02
    ### ceph osd
    192.168.1.210   osd1
    192.168.1.230   osd2
    192.168.1.174   osd3
    192.168.1.175   osd4
    ## ceph admin
    192.168.1.171   swarm1
    ## ceph clients
    192.168.1.171   swarm1
    192.168.1.172   swarm2
    192.168.1.173   swarm3
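Every node should be able to resolve these hostnames; one common way is to append the mapping to /etc/hosts on each machine. A minimal sketch from the admin node (swarm1), assuming passwordless SSH with sudo rights and that the entries above are saved in a local file called ceph-hosts (a hypothetical name):

```
# Push the host entries to every node in the cluster.
# "ceph-hosts" is a hypothetical local file holding the mapping above.
for node in mon1 mon3 osd1 osd2 osd3 osd4 swarm1 swarm2 swarm3; do
    ssh "$node" "sudo tee -a /etc/hosts" < ceph-hosts
done
```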
    ceph-deploy osd prepare osd3:/dev/sdb osd3:/dev/sdc osd3:/dev/sdd osd3:/dev/sde

ceph-deploy automatically creates two partitions on each disk: partition 1 for data and partition 2 for the journal.
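To see the partition layout that prepare created, a quick check on osd3 (a sketch; device names follow the example above, actual sizes will differ):

```
# On osd3: each prepared disk should now carry two partitions,
# e.g. sdb1 (data, XFS) and sdb2 (journal)
lsblk /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo blkid /dev/sdb1 /dev/sdb2
```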
Then activate the OSDs on the data partitions:

    ceph-deploy osd activate osd3:/dev/sdb1 osd3:/dev/sdc1 osd3:/dev/sdd1 osd3:/dev/sde1
The OSD count has gone from 6 to 10:

    osdmap e85: 10 osds: 10 up, 10 in

Running `sudo ceph osd tree` shows:

    ......
    -4 1.65735     host osd3
     6 0.12799         osd.6   up  1.00000  1.00000
     7 0.44919         osd.7   up  1.00000  1.00000
     8 0.54008         osd.8   up  1.00000  1.00000
     9 0.54008         osd.9   up  1.00000  1.00000
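Once the new OSDs are in, Ceph starts backfilling data onto them. A sketch of the standard commands for watching that from the admin node (swarm1):

```
# Cluster health and PG states while backfill/recovery runs
sudo ceph -s
sudo ceph health detail
# Per-OSD utilisation once rebalancing has settled
sudo ceph osd df
```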
Add osd4 the same way:

    ceph-deploy install --osd osd4
    ceph-deploy osd prepare osd4:/dev/sdb osd4:/dev/sdc osd4:/dev/sdd
    ceph-deploy osd activate osd4:/dev/sdb1 osd4:/dev/sdc1 osd4:/dev/sdd1
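A quick check that osd4 registered correctly (same idea as before, run from the admin node):

```
# osd4 should show up as a new host bucket holding three OSDs
sudo ceph osd tree
sudo ceph osd stat
```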
Create a second MDS (mds02) on mon3:

    ceph-deploy mds create mon3:mds02

Then check the cluster status:

    fsmap e8: 1/1/1 up {0=mds01=up:active}, 1 up:standby
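The MDS side can also be inspected directly; a minimal check (the CephFS filesystem itself is assumed to have been created earlier, which the fsmap output implies):

```
# List filesystems and show which MDS daemons are active or standby
sudo ceph fs ls
sudo ceph mds stat
```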
Manually stop mds01:

    systemctl stop ceph-mds.target

Check the cluster status with `sudo ceph -s`:

    fsmap e13: 1/1/1 up {0=mds02=up:active}

mds02 has taken over as the active MDS. After starting mds01 again, check the cluster status once more:

    fsmap e14: 1/1/1 up {0=mds02=up:active}, 1 up:standby

The cluster is back to its active/standby (hot standby) state.
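To see the failover from a client's point of view, CephFS can stay mounted on one of the clients (swarm1–swarm3) while the active MDS is stopped. A minimal kernel-client mount sketch, assuming the admin key has been saved on the client as /etc/ceph/admin.secret (the path is an assumption):

```
# On a client such as swarm3: mount CephFS via the monitors
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 192.168.1.86:6789,192.168.1.164:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret   # secret path is an assumption
# I/O should resume after a brief pause once the standby MDS takes over
df -h /mnt/cephfs
```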