On the admin (deploy) node, add the Ceph repository as /etc/yum.repos.d/ceph.repo, replacing {ceph-release} and {distro} with your Ceph release and distribution and filling in the gpgkey URL for the Ceph release key:

[ceph-noarch]
name=Ceph noarch packages
baseurl={ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=
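One way to put this file in place in a single step is a heredoc piped to tee (a sketch; the placeholders and the gpgkey URL still need to be filled in):

cat <<'EOF' | sudo tee /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl={ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=
EOF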
Then install ceph-deploy on the deploy node:

sudo yum install ceph-deploy
Edit /etc/hosts on each node so that it contains the following:

#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
127.0.0.1      dataprovider
192.168.40.107 mdsnode
192.168.40.108 osdnode1
192.168.40.148 osdnode2
::1            localhost localhost.localdomain localhost6 localhost6.localdomain6
Install NTP and the SSH server on every node:

yum install -y ntp ntpdate ntp-doc
yum install -y openssh-server
For simplicity, create a leadorceph user on every node and set its password to leadorceph:

sudo useradd -d /home/leadorceph -m leadorceph
sudo passwd leadorceph

Give the user passwordless sudo:

echo "leadorceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/leadorceph
sudo chmod 0440 /etc/sudoers.d/leadorceph
If everything is configured correctly, sudo now works for this user without prompting for a password.
Next, generate an SSH key pair as leadorceph on the deploy node (accept the defaults and leave the passphrase empty):

[leadorceph@dataprovider root]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/leadorceph/.ssh/id_rsa):
Created directory '/home/leadorceph/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/leadorceph/.ssh/id_rsa.
Your public key has been saved in /home/leadorceph/.ssh/id_rsa.pub.
The key fingerprint is:
a8:fb:b8:3a:bb:62:dc:25:a6:58:7b:9d:97:00:30:ec leadorceph@dataprovider
The key's randomart image is:
+--[ RSA 2048]----+
|       .         |
|      +          |
|     . o         |
|    E . .        |
|   ..   S        |
|   .o o.         |
|o.+.+. o .       |
|o+o..oo o        |
|..+*+o..         |
+-----------------+

Copy the public key to every node:
ssh-copy-id leadorceph@mdsnode
ssh-copy-id leadorceph@osdnode1
ssh-copy-id leadorceph@osdnode2
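Optionally, so that ceph-deploy logs in to the other nodes as leadorceph without needing --username on every call, you can add the hosts to ~/.ssh/config of the deploy user (a short sketch using the hostnames from this post):

Host mdsnode
    User leadorceph
Host osdnode1
    User leadorceph
Host osdnode2
    User leadorceph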
Create a configuration directory on the deploy node; all of the following deploy-node operations are performed in this directory:

mkdir my-cluster
cd my-cluster
If the deployment fails, you can clean up and start over with the following commands:

ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
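If you also need to remove the Ceph packages themselves, ceph-deploy has a purge subcommand as well (same placeholder node names as above):

ceph-deploy purge {ceph-node} [{ceph-node}]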
On the admin node, use ceph-deploy to create the cluster; the hostname after new is that of the mds node, which also serves as the initial monitor:
ceph-deploy new mdsnode
After the command succeeds, three new files appear in the directory: ceph.conf, a monitor secret keyring, and a log file.
Edit ceph.conf and set osd_pool_default_size to 2, since this cluster only has two OSDs.
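In practice this means adding a single line under the [global] section that ceph-deploy new generated (the other generated settings, such as fsid and mon_initial_members, are left untouched):

osd_pool_default_size = 2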
Install Ceph on all of the nodes:

ceph-deploy install deploynode mdsnode osdnode1 osdnode2
Once the installation has succeeded on all nodes, create the monitor and gather the keys:

ceph-deploy mon create-initial
ceph-deploy mon create mdsnode
ceph-deploy gatherkeys mdsnode

After these commands succeed, the keyring files (the client admin keyring and the bootstrap keyrings) appear in the working directory.
Create a data directory for each OSD on the OSD nodes:

ssh osdnode1
sudo mkdir /var/local/osd0
exit
ssh osdnode2
sudo mkdir /var/local/osd1
exit
Back on the deploy node, prepare and then activate the OSDs:

ceph-deploy osd prepare osdnode1:/var/local/osd0 osdnode2:/var/local/osd1
ceph-deploy osd activate osdnode1:/var/local/osd0 osdnode2:/var/local/osd1
Push the configuration file and admin keyring to all nodes, so that the ceph CLI can be used on them without specifying the monitor address and keyring each time:

ceph-deploy admin dataprovider mdsnode osdnode1 osdnode2
Make sure the admin keyring is readable on each node:

sudo chmod +r /etc/ceph/ceph.client.admin.keyring
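To verify the deployment, check the cluster state from any node that has the admin keyring; a healthy cluster eventually reports HEALTH_OK with all placement groups active+clean:

ceph health
ceph -s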