Preface
You have surely heard of "cloud hosts": pick the configuration you want on a web page, and a virtual machine of your own is provisioned in minutes and ready to log in to, saving a great deal of physical resources. But how is this actually done? This article walks through a hands-on deployment of an OpenStack Icehouse private cloud.
OpenStack
Overview
OpenStack is an open-source project launched jointly by the hosting provider Rackspace and NASA. Its goal is a solution for every type of cloud that is easy to deploy, massively scalable, and feature-rich, letting any company or individual build their own IaaS environment and breaking the monopoly once held by Amazon and a few other companies.
Architecture
Workflow
OpenStack Deployment
Lab environment
| Role | Hostname | Network interfaces | OS |
| --- | --- | --- | --- |
| Controller Node | controller.scholar.com | management eth0: 192.168.10.123, external eth1: 172.16.10.123 | CentOS 6.6 |
| Compute Node | compute.scholar.com | management eth0: 192.168.10.124, tunnel eth1: 10.0.10.124 | CentOS 6.6 |
| Network Node | network.scholar.com | management eth0: 192.168.10.125, external eth1: 172.16.0.0/16, tunnel eth2: 10.0.10.125 | CentOS 6.6 |
| Block Storage Node | block.scholar.com | management eth0: 192.168.10.126, external eth1: 172.16.10.126 | CentOS 6.6 |
Lab topology
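Before installing anything, every node should be able to resolve the others' hostnames. A minimal sketch, assuming /etc/hosts-based name resolution with the management addresses from the table above (repeat on all four nodes):
[root@controller ~]# vim /etc/hosts
192.168.10.123 controller.scholar.com controller
192.168.10.124 compute.scholar.com compute
192.168.10.125 network.scholar.com network
192.168.10.126 block.scholar.com block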
Installing the OpenStack yum repository
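Roughly, on every node, assuming the RDO Icehouse repository (the exact release-RPM version number is illustrative and may differ):
[root@controller ~]# yum install -y http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
[root@controller ~]# yum install -y openstack-utils   # provides the openstack-config helper used throughout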
Installing and configuring the Identity service
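A sketch of this step, assuming the stock RDO packages:
[root@controller ~]# yum install openstack-keystone python-keystoneclient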
Edit the keystone main configuration file so that it uses MySQL as its data store
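Roughly, assuming a keystone database password of keystone (matching the user:pass@controller pattern this post uses for the other services):
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf \
> database connection mysql://keystone:keystone@controller/keystone
[root@controller ~]# openstack-db --init --service keystone --password keystone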
Configure the admin token
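A minimal sketch, generating a random token and storing it in keystone.conf:
[root@controller ~]# ADMIN_TOKEN=$(openssl rand -hex 10)
[root@controller ~]# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN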
Set up the PKI certificates OpenStack uses
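Roughly:
[root@controller ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# chown -R keystone:keystone /etc/keystone/ssl
[root@controller ~]# chmod -R o-rwx /etc/keystone/ssl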
Start the service
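A sketch of starting keystone and then bootstrapping the admin and service tenants with the admin token (the admin password here is a placeholder of my choosing):
[root@controller ~]# service openstack-keystone start
[root@controller ~]# chkconfig openstack-keystone on
[root@controller ~]# export OS_SERVICE_TOKEN=$ADMIN_TOKEN
[root@controller ~]# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
[root@controller ~]# keystone tenant-create --name admin --description "Admin Tenant"
[root@controller ~]# keystone user-create --name admin --pass admin
[root@controller ~]# keystone role-create --name admin
[root@controller ~]# keystone user-role-add --user admin --tenant admin --role admin
[root@controller ~]# keystone tenant-create --name service --description "Service Tenant"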
Register Keystone as an API endpoint
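Roughly, using the standard Identity ports:
[root@controller ~]# keystone service-create --name keystone --type identity --description "OpenStack Identity"
[root@controller ~]# keystone endpoint-create \
> --service-id $(keystone service-list | awk '/ identity / {print $2}') \
> --publicurl http://controller:5000/v2.0 \
> --internalurl http://controller:5000/v2.0 \
> --adminurl http://controller:35357/v2.0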
Enable username-based authentication
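A sketch of the admin-openrc.sh file that later sections of this post source (the password matches the placeholder chosen above):
[root@controller ~]# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
[root@controller ~]# vim admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
[root@controller ~]# . admin-openrc.sh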
Image Service
Install the required packages
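Roughly:
[root@controller ~]# yum install openstack-glance python-glanceclient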
Initialize the glance database
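Roughly, following the openstack-db pattern this post uses for cinder later on:
[root@controller ~]# openstack-db --init --service glance --password glance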
Create the glance admin user
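A sketch, matching the user/email pattern used for the neutron and cinder users later in this post:
[root@controller ~]# keystone user-create --name glance --pass glance --email glance@scholar.com
[root@controller ~]# keystone user-role-add --user glance --tenant service --role admin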
Configure the Glance service to authenticate through the Identity service
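Roughly, applying the same keystone_authtoken settings to both glance config files:
[root@controller ~]# for conf in /etc/glance/glance-api.conf /etc/glance/glance-registry.conf; do
> openstack-config --set $conf keystone_authtoken auth_uri http://controller:5000
> openstack-config --set $conf keystone_authtoken auth_host controller
> openstack-config --set $conf keystone_authtoken auth_protocol http
> openstack-config --set $conf keystone_authtoken auth_port 35357
> openstack-config --set $conf keystone_authtoken admin_tenant_name service
> openstack-config --set $conf keystone_authtoken admin_user glance
> openstack-config --set $conf keystone_authtoken admin_password glance
> openstack-config --set $conf paste_deploy flavor keystone
> done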
Register the glance service in keystone
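Roughly, using glance's standard port 9292:
[root@controller ~]# keystone service-create --name glance --type image --description "OpenStack Image Service"
[root@controller ~]# keystone endpoint-create \
> --service-id $(keystone service-list | awk '/ image / {print $2}') \
> --publicurl http://controller:9292 \
> --internalurl http://controller:9292 \
> --adminurl http://controller:9292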
Start the services
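Roughly:
[root@controller ~]# service openstack-glance-api start
[root@controller ~]# service openstack-glance-registry start
[root@controller ~]# chkconfig openstack-glance-api on
[root@controller ~]# chkconfig openstack-glance-registry on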
Compute Service
Installing and configuring the Compute service
Install and start qpid
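Roughly, with authentication disabled for this lab:
[root@controller ~]# yum install qpid-cpp-server
[root@controller ~]# vim /etc/qpidd.conf
auth=no
[root@controller ~]# service qpidd start
[root@controller ~]# chkconfig qpidd on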
Install and configure the compute service (controller node)
Install the required packages
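Roughly, the controller-side nova packages:
[root@controller ~]# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor \
> openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient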
Configure the nova service
Initialize the nova database
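Roughly, following the openstack-db pattern used elsewhere in this post:
[root@controller ~]# openstack-db --init --service nova --password nova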
Configure nova's database connection information
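A sketch of the database connection and message-queue settings (the nova password is assumed to match the database initialization above):
[root@controller ~]# openstack-config --set /etc/nova/nova.conf \
> database connection mysql://nova:nova@controller/nova
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller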
Next, set the my_ip, vncserver_listen, and vncserver_proxyclient_address parameters to the address of the "management network" interface
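Roughly, using the controller's management address from the table above:
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.123
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.10.123
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.10.123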
Create the nova user account
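A sketch of creating the nova user, pointing nova at keystone, and registering the compute endpoint (passwords and email follow the pattern used for the other services here):
[root@controller ~]# keystone user-create --name nova --pass nova --email nova@scholar.com
[root@controller ~]# keystone user-role-add --user nova --tenant service --role admin
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova
[root@controller ~]# keystone service-create --name nova --type compute --description "OpenStack Compute"
[root@controller ~]# keystone endpoint-create \
> --service-id $(keystone service-list | awk '/ compute / {print $2}') \
> --publicurl http://controller:8774/v2/%\(tenant_id\)s \
> --internalurl http://controller:8774/v2/%\(tenant_id\)s \
> --adminurl http://controller:8774/v2/%\(tenant_id\)s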
Start the services
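Roughly, mirroring the restart loop used later in this post:
[root@controller ~]# for svc in api cert consoleauth scheduler conductor novncproxy; \
> do service openstack-nova-${svc} start; chkconfig openstack-nova-${svc} on; done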
Install the required packages (compute node)
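Roughly:
[root@compute ~]# yum install openstack-nova-compute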
Configure the nova service
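A sketch of the compute-node nova.conf, assuming the same passwords as on the controller and the addresses from the table above:
[root@compute ~]# openstack-config --set /etc/nova/nova.conf \
> database connection mysql://nova:nova@controller/nova
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
[root@compute ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
[root@compute ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
[root@compute ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
[root@compute ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
[root@compute ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
[root@compute ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
[root@compute ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.124
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.10.124
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://controller:6080/vnc_auto.html
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller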
Set the hypervisor this host supports
KVM is recommended here, but it requires hardware-assisted virtualization support in the compute node's CPU. If the node being configured does not support hardware-assisted virtualization, specify the qemu hypervisor type instead
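Roughly:
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo   # 0 means no hardware-assisted virtualization
[root@compute ~]# openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu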
Start the services
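Roughly:
[root@compute ~]# service libvirtd start
[root@compute ~]# service messagebus start
[root@compute ~]# service openstack-nova-compute start
[root@compute ~]# chkconfig libvirtd on
[root@compute ~]# chkconfig messagebus on
[root@compute ~]# chkconfig openstack-nova-compute on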
Verify on the controller that the newly added compute node is ready for use
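Roughly, the new node should appear in the hypervisor list:
[root@controller ~]# nova hypervisor-list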
Networking Service
The neutron server node
In a real deployment, neutron is split across three roles: the neutron server, the network node, and the compute node. We start with the neutron server.
Install the required packages
This configures the neutron server; per the earlier plan, it runs on the controller node.
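Roughly, assuming the stock RDO Icehouse packages:
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 python-neutronclient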
Create the neutron database
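A sketch, assuming credentials matching the connection URL configured below:
[root@controller ~]# mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';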
Create the neutron user in keystone
[root@controller ~]# keystone user-create --name neutron --pass neutron --email neutron@scholar.com
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | neutron@scholar.com |
| enabled | True |
| id | cf9145eebce046c09e6255b4fced91b9 |
| name | neutron |
| username | neutron |
+----------+----------------------------------+
[root@controller ~]# keystone user-role-add --user neutron --tenant service --role admin
Create the neutron service and its endpoints
[root@controller ~]# keystone service-create --name neutron --type network --description "OpenStack Networking"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 4edd4521801a4e40829c11b5c0b379f8 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create \
> --service-id $(keystone service-list | awk '/ network / {print $2}') \
> --publicurl http://controller:9696 \
> --adminurl http://controller:9696 \
> --internalurl http://controller:9696
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://controller:9696 |
| id | 41307aad4b2e4ce8a62144c79a4da632 |
| internalurl | http://controller:9696 |
| publicurl | http://controller:9696 |
| region | regionOne |
| service_id | 4edd4521801a4e40829c11b5c0b379f8 |
+-------------+----------------------------------+
Configure the neutron server
Configure the URL neutron uses to connect to the database
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf database connection \
> mysql://neutron:neutron@controller/neutron
Configure the neutron server to authenticate through keystone
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> auth_host controller
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> auth_protocol http
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> auth_port 35357
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> admin_user neutron
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> admin_password neutron
Configure the message queue service used by the neutron server
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> rpc_backend neutron.openstack.common.rpc.impl_qpid
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> qpid_hostname controller
Configure the neutron server to notify compute nodes of network definition changes
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> notify_nova_on_port_status_changes True
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> notify_nova_on_port_data_changes True
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> nova_url http://controller:8774/v2
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> nova_admin_username nova
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> nova_admin_password nova
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> nova_admin_auth_url http://controller:35357/v2.0
Configure use of the Modular Layer 2 (ML2) plug-in and related services
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> core_plugin ml2
[root@controller ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> service_plugins router
Configure the ML2 (Modular Layer 2) plug-in
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
> type_drivers gre
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
> tenant_network_types gre
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
> mechanism_drivers openvswitch
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \
> tunnel_id_ranges 1:1000
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
> firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@controller ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
> enable_security_group True
# Note: to have ML2 support more driver types, replace the first and second commands above with:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers local,flat,vlan,gre,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vlan,gre,vxlan
Configure the Compute service
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> network_api_class nova.network.neutronv2.api.API
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_url http://controller:9696
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_admin_username neutron
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_admin_password neutron
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_admin_auth_url http://controller:35357/v2.0
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> security_group_api neutron
Create the symlink
The Networking service init scripts expect a symbolic link, /etc/neutron/plugin.ini, pointing at the plug-in configuration in use
[root@controller neutron]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Restart the services
[root@controller ~]# for svc in api scheduler conductor; \
> do service openstack-nova-${svc} restart; done
Start the service
[root@controller ~]# service neutron-server start
Starting neutron: [ OK ]
[root@controller ~]# chkconfig neutron-server on
Network Node
Configure kernel network parameters
[root@network ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
[root@network ~]# sysctl -p
Install the required packages
[root@network ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
Configure the keystone connection
[root@network ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> auth_strategy keystone
[root@network ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> auth_uri http://controller:5000
[root@network ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> auth_host controller
[root@network ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> auth_protocol http
[root@network ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> auth_port 35357
[root@network ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> admin_tenant_name service
[root@network ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> admin_user neutron
[root@network ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> admin_password neutron
Configure the message queue service it uses
[root@network ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> rpc_backend neutron.openstack.common.rpc.impl_qpid
[root@network ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> qpid_hostname controller
Configure use of ML2
[root@network ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> core_plugin ml2
[root@network ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> service_plugins router
Configure the Layer-3 (L3) agent
[root@network ~]# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \
> interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
[root@network ~]# openstack-config --set /etc/neutron/l3_agent.ini DEFAULT \
> use_namespaces True
Configure the DHCP agent
[root@network ~]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
> interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
[root@network ~]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
> dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
[root@network ~]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
> use_namespaces True
Configure neutron's DHCP service to use a custom configuration file
[root@network ~]# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
> dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
# create the configuration file
[root@network ~]# vim /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1454
Configure the metadata agent
[root@network ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
> auth_url http://controller:5000/v2.0
[root@network ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
> auth_region regionOne
[root@network ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
> admin_tenant_name service
[root@network ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
> admin_user neutron
[root@network ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
> admin_password neutron
[root@network ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
> nova_metadata_ip controller
[root@network ~]# openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT \
> metadata_proxy_shared_secret METADATA_SECRET
Run the following on the controller node, replacing METADATA_SECRET with the secret chosen above
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> service_neutron_metadata_proxy true
[root@controller ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_metadata_proxy_shared_secret METADATA_SECRET
[root@controller ~]# service openstack-nova-api restart
Stopping openstack-nova-api: [ OK ]
Starting openstack-nova-api: [ OK ]
Configure the ML2 plug-in
Run the following commands to configure the ML2 plug-in, where 10.0.10.125 is the tunnel interface address
[root@network ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
> type_drivers gre
[root@network ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
> tenant_network_types gre
[root@network ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
> mechanism_drivers openvswitch
[root@network ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \
> tunnel_id_ranges 1:1000
[root@network ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
> local_ip 10.0.10.125
[root@network ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
> tunnel_type gre
[root@network ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
> enable_tunneling True
[root@network ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
> firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@network ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
> enable_security_group True
Configure the Open vSwitch service
# start the service
[root@network ~]# service openvswitch start
[root@network ~]# chkconfig openvswitch on
# add the integration bridge
[root@network ~]# ovs-vsctl add-br br-int
# add the external bridge
[root@network ~]# ovs-vsctl add-br br-ex
# attach the external network interface to the external bridge; eth1 is the actual external physical interface
[root@network ~]# ovs-vsctl add-port br-ex eth1
# set the bridge-id attribute of bridge br-ex to br-ex
[root@network ~]# ovs-vsctl br-set-external-id br-ex bridge-id br-ex
Configure and start the services
[root@network ~]# cd /etc/neutron/
[root@network neutron]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@network ~]# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
[root@network ~]# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
[root@network ~]# for svc in openvswitch-agent l3-agent dhcp-agent metadata-agent; \
> do service neutron-${svc} start; chkconfig neutron-${svc} on; done
Starting neutron-openvswitch-agent: [ OK ]
Starting neutron-l3-agent: [ OK ]
Starting neutron-dhcp-agent: [ OK ]
Starting neutron-metadata-agent: [ OK ]
Compute Node
Configure kernel network parameters
[root@compute ~]# vim /etc/sysctl.conf
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
[root@compute ~]# sysctl -p
Install the required packages
[root@compute ~]# yum install openstack-neutron-ml2 openstack-neutron-openvswitch
Configure the keystone connection
[root@compute ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> auth_strategy keystone
[root@compute ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> auth_uri http://controller:5000
[root@compute ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> auth_host controller
[root@compute ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> auth_protocol http
[root@compute ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> auth_port 35357
[root@compute ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> admin_tenant_name service
[root@compute ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> admin_user neutron
[root@compute ~]# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
> admin_password neutron
Configure the message queue service it uses
[root@compute ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> rpc_backend neutron.openstack.common.rpc.impl_qpid
[root@compute ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> qpid_hostname controller
Configure use of the Modular Layer 2 (ML2) plug-in and related services
[root@compute ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> core_plugin ml2
[root@compute ~]# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
> service_plugins router
Configure the ML2 plug-in
The following commands configure the ML2 plug-in, where 10.0.10.124 is this node's "tunnel interface" address
[root@compute ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
> type_drivers gre
[root@compute ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
> tenant_network_types gre
[root@compute ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 \
> mechanism_drivers openvswitch
[root@compute ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre \
> tunnel_id_ranges 1:1000
[root@compute ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
> local_ip 10.0.10.124
[root@compute ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
> tunnel_type gre
[root@compute ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs \
> enable_tunneling True
[root@compute ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
> firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@compute ~]# openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup \
> enable_security_group True
Configure the Open vSwitch service
[root@compute ~]# service openvswitch start
[root@compute ~]# chkconfig openvswitch on
[root@compute ~]# ovs-vsctl add-br br-int
Configure Compute to use the Networking service
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> network_api_class nova.network.neutronv2.api.API
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_url http://controller:9696
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_auth_strategy keystone
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_admin_tenant_name service
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_admin_username neutron
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_admin_password neutron
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> neutron_admin_auth_url http://controller:35357/v2.0
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@compute ~]# openstack-config --set /etc/nova/nova.conf DEFAULT \
> security_group_api neutron
Configure and start the services
[root@compute ~]# cd /etc/neutron/
[root@compute neutron]# ln -s plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@compute ~]# cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
[root@compute ~]# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
[root@compute ~]# service openstack-nova-compute restart
Stopping openstack-nova-compute: [ OK ]
Starting openstack-nova-compute: [ OK ]
[root@compute ~]# service neutron-openvswitch-agent start
Starting neutron-openvswitch-agent: [ OK ]
[root@compute ~]# chkconfig neutron-openvswitch-agent on
Create the external network
Run the following on the Controller
[root@controller ~]# . admin-openrc.sh
[root@controller ~]# neutron net-create ext-net --shared --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | d44c19c2-2fe1-40e8-b07d-094111fe1a5e |
| name | ext-net |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 1 |
| router:external | True |
| shared | True |
| status | ACTIVE |
| subnets | |
| tenant_id | 684ae003069d41d883f9cd0fcb252ae7 |
+---------------------------+--------------------------------------+
Create a subnet in the external network
[root@controller ~]# neutron subnet-create ext-net --name ext-subnet \
> --allocation-pool start=172.16.20.12,end=172.16.20.61 \
> --disable-dhcp --gateway 172.16.0.1 172.16.0.0/16
Created a new subnet:
+------------------+--------------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "172.16.20.12", "end": "172.16.20.61"} |
| cidr | 172.16.0.0/16 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 172.16.0.1 |
| host_routes | |
| id | 07fe3ef7-118a-483f-b53e-df7f6629454c |
| ip_version | 4 |
| name | ext-subnet |
| network_id | d44c19c2-2fe1-40e8-b07d-094111fe1a5e |
| tenant_id | 684ae003069d41d883f9cd0fcb252ae7 |
+------------------+--------------------------------------------------+
Tenant network
The tenant network gives instances an internal channel to reach one another; this mechanism also keeps the different tenants' networks isolated from each other
[root@controller ~]# neutron net-create demo-net
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | a71cc567-08ad-4000-b273-e1b300fa642b |
| name | demo-net |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 2 |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 684ae003069d41d883f9cd0fcb252ae7 |
+---------------------------+--------------------------------------+
Create a subnet for the demo-net network
[root@controller ~]# neutron subnet-create demo-net --name demo-subnet \
> --gateway 192.168.22.1 192.168.22.0/24
Created a new subnet:
+------------------+----------------------------------------------------+
| Field | Value |
+------------------+----------------------------------------------------+
| allocation_pools | {"start": "192.168.22.2", "end": "192.168.22.254"} |
| cidr | 192.168.22.0/24 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.22.1 |
| host_routes | |
| id | 5aa02cca-4c51-4606-939f-5f5623374ce0 |
| ip_version | 4 |
| name | demo-subnet |
| network_id | a71cc567-08ad-4000-b273-e1b300fa642b |
| tenant_id | 684ae003069d41d883f9cd0fcb252ae7 |
+------------------+----------------------------------------------------+
Create a router for demo-net and attach it to the external network and demo-net
[root@controller ~]# neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| external_gateway_info | |
| id | a8752270-67da-4118-a053-2858b0ba1762 |
| name | demo-router |
| status | ACTIVE |
| tenant_id | 684ae003069d41d883f9cd0fcb252ae7 |
+-----------------------+--------------------------------------+
[root@controller ~]# neutron router-interface-add demo-router demo-subnet
Added interface 7a619ab8-91fd-4f55-be0c-94603afbfbcb to router demo-router.
[root@controller ~]# neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router
Dashboard
Install the required packages
[root@controller ~]# yum install memcached python-memcached mod_wsgi openstack-dashboard
Configure the dashboard
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
# use the local memcached instance as the session cache
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
# configure which hosts may access the dashboard
ALLOWED_HOSTS = ['*', 'localhost']
# point at the controller node
OPENSTACK_HOST = "controller"
# set the time zone
TIME_ZONE = "Asia/Shanghai"
Start the services
[root@controller ~]# service memcached start
Starting memcached: [ OK ]
[root@controller ~]# service httpd start
Starting httpd: [ OK ]
[root@controller ~]# chkconfig memcached on
[root@controller ~]# chkconfig httpd on
Testing
View the network topology
Launching an instance
SSH public key injection
[root@controller ~]# ssh-keygen
[root@controller ~]# nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key
[root@controller ~]# nova keypair-list
+----------+-------------------------------------------------+
| Name | Fingerprint |
+----------+-------------------------------------------------+
| demo-key | e1:36:ed:57:2c:26:96:6c:81:8c:2d:63:d2:15:2f:09 |
+----------+-------------------------------------------------+
Launch an instance
Launching an instance in OpenStack requires a VM configuration template (flavor); first, list the available flavors
[root@controller ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
Create a flavor with a smaller memory footprint for booting the cirros test image
[root@controller ~]# nova flavor-create --is-public true m1.cirros 6 256 1 1
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 6 | m1.cirros | 256 | 1 | 0 | | 1 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
List all available image files
[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 6a820f7e-dcb8-40c8-af8b-27297f2673a3 | cirros-0.3.4-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+
List all available networks
[root@controller ~]# neutron net-list
+--------------------------------------+----------+------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+------------------------------------------------------+
| a71cc567-08ad-4000-b273-e1b300fa642b | demo-net | 5aa02cca-4c51-4606-939f-5f5623374ce0 192.168.22.0/24 |
| d44c19c2-2fe1-40e8-b07d-094111fe1a5e | ext-net | 07fe3ef7-118a-483f-b53e-df7f6629454c 172.16.0.0/16 |
+--------------------------------------+----------+------------------------------------------------------+
Boot the instance
[root@controller ~]# nova boot --flavor m1.cirros --image cirros-0.3.4-x86_64 --nic net-id=a71cc567-08ad-4000-b273-e1b300fa642b \
> --security-group default --key-name demo-key demo-i1
View the instance
[root@controller ~]# nova list
+--------------------------------------+---------+--------+------------+-------------+-----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-----------------------+
| 15a35c37-2be2-4998-b98e-e2e472df0142 | demo-i1 | ACTIVE | - | Running | demo-net=192.168.22.2 |
+--------------------------------------+---------+--------+------------+-------------+-----------------------+
Open the console and log in
After logging in, I found the instance had not obtained an IP address; no idea why, so I just configured one manually.
Test network connectivity
Ping the virtual internal gateway, the virtual external gateway, and the real external gateway in turn.
These tests show the instance's networking is fine. But can external hosts reach the instance?
As it turns out, external hosts cannot yet communicate with the instance; solving this requires the floating IP mechanism.
Floating IP
Simply put, a floating IP works through a router virtualized out of a network namespace. The router's external interface is bridged onto the bridge device that reaches the outside world through a physical interface, while its internal interface acts as the gateway for the VMs attached to the internal bridge. An IP address configured on the external side is then translated via DNAT to a designated internal host; in the reverse direction, packets sent by that internal host are translated by the router via SNAT to a specific address on the external interface. This is how external networks and internal VMs communicate.
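You can watch this mechanism at work by inspecting the router's network namespace on the network node; a sketch, where the qrouter namespace name is derived from the demo-router id created above:
[root@network ~]# ip netns list   # shows a qrouter-<router-id> namespace for demo-router
[root@network ~]# ip netns exec qrouter-a8752270-67da-4118-a053-2858b0ba1762 iptables -t nat -S
# the DNAT/SNAT rules for any floating IPs appear here once they are associated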
Create a floating IP
Still on the Controller node
[root@controller ~]# neutron floatingip-create ext-net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 172.16.20.13 |
| floating_network_id | d44c19c2-2fe1-40e8-b07d-094111fe1a5e |
| id | de133088-d319-4094-9a2e-0b1762c85061 |
| port_id | |
| router_id | |
| status | DOWN |
| tenant_id | 684ae003069d41d883f9cd0fcb252ae7 |
+---------------------+--------------------------------------+
Bind the floating IP to the target instance
[root@controller ~]# nova floating-ip-associate demo-i1 172.16.20.13
[root@controller ~]# nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------------+
| 15a35c37-2be2-4998-b98e-e2e472df0142 | demo-i1 | ACTIVE | - | Running | demo-net=192.168.22.2, 172.16.20.13 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------------+
Modify the default security group rules
[root@controller ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
Hosts on the external network can now reach the instance at 172.16.20.13.
The replies actually come from 192.168.22.2; I won't capture packets to prove it here.
If you also want to SSH to 172.16.20.13, run the following as well:
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
At this point, the basic private cloud is up. Next up is another core component: cinder, the storage service.
Block Storage Service
Without shared storage, terminating an instance means deleting it, and its image files are deleted with it. For files that users create inside an instance to survive its re-creation, the compute nodes simply need to be backed by shared storage.
Controller Node
Install the required packages
[root@controller ~]# yum install openstack-cinder
Initialize the cinder database
[root@controller ~]# openstack-db --init --service cinder --password cinder
Configure the cinder service
Configure the database connection URL
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf \
> database connection mysql://cinder:cinder@controller/cinder
Create the cinder user in keystone
[root@controller ~]# keystone user-create --name cinder --pass cinder --email cinder@scholar.com
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | cinder@scholar.com |
| enabled | True |
| id | 57ec93556e744300a1f0217c26fd912b |
| name | cinder |
| username | cinder |
+----------+----------------------------------+
[root@controller ~]# keystone user-role-add --user=cinder --tenant=service --role=admin
Keystone connection configuration
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT \
> auth_strategy keystone
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> auth_uri http://controller:5000
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> auth_host controller
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> auth_protocol http
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> auth_port 35357
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> admin_user cinder
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> admin_tenant_name service
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> admin_password cinder
Configure the message queue it uses
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf \
> DEFAULT rpc_backend qpid
[root@controller ~]# openstack-config --set /etc/cinder/cinder.conf \
> DEFAULT qpid_hostname controller
Register the cinder services in keystone
[root@controller ~]# keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 15cbd46094f541e49f5d7a717d65101a |
| name | cinder |
| type | volume |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create \
> --service-id=$(keystone service-list | awk '/ volume / {print $2}') \
> --publicurl=http://controller:8776/v1/%\(tenant_id\)s \
> --internalurl=http://controller:8776/v1/%\(tenant_id\)s \
> --adminurl=http://controller:8776/v1/%\(tenant_id\)s
+-------------+-----------------------------------------+
| Property | Value |
+-------------+-----------------------------------------+
| adminurl | http://controller:8776/v1/%(tenant_id)s |
| id | 0e71b9f2dad24f699dce6be1ce8f40be |
| internalurl | http://controller:8776/v1/%(tenant_id)s |
| publicurl | http://controller:8776/v1/%(tenant_id)s |
| region | regionOne |
| service_id | 15cbd46094f541e49f5d7a717d65101a |
+-------------+-----------------------------------------+
[root@controller ~]# keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage v2 |
| enabled | True |
| id | dbd3b5d766f546cfb54dfc8a75f56a8e |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
[root@controller ~]# keystone endpoint-create \
> --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') \
> --publicurl=http://controller:8776/v2/%\(tenant_id\)s \
> --internalurl=http://controller:8776/v2/%\(tenant_id\)s \
> --adminurl=http://controller:8776/v2/%\(tenant_id\)s
+-------------+-----------------------------------------+
| Property | Value |
+-------------+-----------------------------------------+
| adminurl | http://controller:8776/v2/%(tenant_id)s |
| id | 40edb783979842e99f95d75cfc5abbe8 |
| internalurl | http://controller:8776/v2/%(tenant_id)s |
| publicurl | http://controller:8776/v2/%(tenant_id)s |
| region | regionOne |
| service_id | dbd3b5d766f546cfb54dfc8a75f56a8e |
+-------------+-----------------------------------------+
Start the services
[root@controller ~]# service openstack-cinder-api start
Starting openstack-cinder-api: [ OK ]
[root@controller ~]# service openstack-cinder-scheduler start
Starting openstack-cinder-scheduler: [ OK ]
[root@controller ~]# chkconfig openstack-cinder-api on
[root@controller ~]# chkconfig openstack-cinder-scheduler on
Configuring the storage node
Prepare the logical volumes
[root@block ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
[root@block ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
Install and configure the cinder volume service
Install the required packages
[root@block ~]# yum install openstack-cinder scsi-target-utils
Keystone-related configuration
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT \
> auth_strategy keystone
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> auth_uri http://controller:5000
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> auth_host controller
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> auth_protocol http
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> auth_port 35357
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> admin_user cinder
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> admin_tenant_name service
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf keystone_authtoken \
> admin_password cinder
Message queue configuration
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf \
> DEFAULT rpc_backend qpid
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf \
> DEFAULT qpid_hostname controller
Database connection configuration
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf \
> database connection mysql://cinder:cinder@controller/cinder
Configure the interface used by this node's cinder-volume service
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.10.126
Specify the Glance service node
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf \
> DEFAULT glance_host controller
Specify where volume information files are stored
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf \
> DEFAULT volumes_dir /etc/cinder/volumes
Configure the SCSI target service
[root@block ~]# openstack-config --set /etc/cinder/cinder.conf \
> DEFAULT iscsi_helper tgtadm
[root@block ~]# vim /etc/tgt/targets.conf
include /etc/cinder/volumes/*
Start the services
On Fedora's EPEL repository, the Icehouse openstack-cinder package's openstack-cinder-volume service reads /usr/share/cinder/cinder-dist.conf first by default, and that file's contents are wrong. Starting the service with it in place leaves newly created volumes unable to attach to instances, so prevent the service from reading this file.
[root@block ~]# service openstack-cinder-volume start
Starting openstack-cinder-volume: [ OK ]
[root@block ~]# service tgtd start
Starting SCSI target daemon: [ OK ]
[root@block ~]# chkconfig openstack-cinder-volume on
[root@block ~]# chkconfig tgtd on
Volume creation test
Run the following on the cinder controller node to create a 5 GB volume named demoVolume
[root@controller ~]# cinder create --display-name demoVolume 5
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2015-07-27T15:08:11.145570 |
| display_description | None |
| display_name | demoVolume |
| encrypted | False |
| id | ab0d03a8-4e89-4a17-8dc3-3432426f07a2 |
| metadata | {} |
| size | 5 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
List all volumes
[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ab0d03a8-4e89-4a17-8dc3-3432426f07a2 | available | demoVolume | 5 | None | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
Attach the volume to the specified instance
[root@controller ~]# nova volume-attach demo-i1 ab0d03a8-4e89-4a17-8dc3-3432426f07a2
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | ab0d03a8-4e89-4a17-8dc3-3432426f07a2 |
| serverId | 15a35c37-2be2-4998-b98e-e2e472df0142 |
| volumeId | ab0d03a8-4e89-4a17-8dc3-3432426f07a2 |
+----------+--------------------------------------+
Check the attachment result
[root@controller ~]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ab0d03a8-4e89-4a17-8dc3-3432426f07a2 | in-use | demoVolume | 5 | None | false | 15a35c37-2be2-4998-b98e-e2e472df0142 |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
The attach succeeded. You can now open the instance's console, check the newly attached disk, and work with it; I won't demonstrate that here.
The end
Finally done! I'll skip the non-core components; this post is long enough as it is. Running this lab was no small feat: with limited memory everything crawled, and I kept half expecting the machine to lock up entirely; luckily I had more than 2 GB of RAM to work with. The above is just my own study notes; if there are errors or omissions, go easy on me.