
OpenStack Folsom Install Guide

Version: 3.0
Source:
Keywords: Multi node OpenStack, Folsom, Quantum, Nova, Keystone, Glance, Horizon, Cinder, OpenVSwitch, KVM, Ubuntu Server 12.10 (64 bits).

Authors

Copyright (C) Bilel Msekni <>

Contributors

Roy Sowa <> Stephen Gran <>
Dennis E Miyoshi <> Marco Consonni <>
Houssem Medhioub <> Djamal Zeghlache <>

Want to contribute? Read the guide, send your contribution, and get your name listed ;)

Table of Contents

0. What is it?
1. Requirements
2. Controller Node
3. Network Node
4. Compute Node
5. Start your first VM
6. Licensing
7. Contacts
8. Acknowledgement
9. Credits
10. To do

0. What is it?

OpenStack Folsom Install Guide is an easy, tested way to build your own OpenStack platform.

Version 3.0

Status: Testing

1. Requirements

Node roles and NICs:
Control Node: eth0 (100.10.10.51), eth1 (192.168.100.51)
Network Node: eth0 (100.10.10.52), eth2 (192.168.100.52)
Compute Node: eth0 (100.10.10.53)

Note 1: Compute and Controller nodes can be merged into one node.

Note 2: If you are not interested in Quantum, you can still use this guide, but you must follow the nova section from the non-Quantum version of this guide instead of the one written here.

Note 3: This is my current network architecture; you can add as many compute nodes as you wish.

2. Controller Node

2.1. Preparing Ubuntu 12.10

  • After installing Ubuntu 12.10 Server 64-bit, switch to a root shell and stay in it until the end of this guide:

    sudo su
    
  • Update your system:

    apt-get update
    apt-get upgrade
    apt-get dist-upgrade
    

2.2. Networking

  • Only one NIC of the controller should be connected to the internet. Configure /etc/network/interfaces as follows:

    #Exposes OpenStack API to the internet
    auto eth1
    iface eth1 inet static
    address 192.168.100.51
    netmask 255.255.255.0
    gateway 192.168.100.1
    dns-nameservers 8.8.8.8
    
    #Management & network configuration
    auto eth0
    iface eth0 inet static
    address 100.10.10.51
    netmask 255.255.255.0
    
  • Restart your networking services:

    service networking restart
    

2.3. MySQL & RabbitMQ

  • Install MySQL:

    apt-get install mysql-server python-mysqldb
    
  • Configure MySQL to accept connections on all interfaces (the sed below rewrites the bind-address in /etc/mysql/my.cnf):

    sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
    service mysql restart
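
  • (Optional) A quick check that MySQL now listens on all interfaces (assumes the net-tools package is present):

    netstat -ntlp | grep 3306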
    
  • Install RabbitMQ:

    apt-get install rabbitmq-server
    

2.4. Node synchronization

  • Install the NTP service:

    apt-get install ntp
    
  • Configure the NTP server to synchronize between your compute nodes and the controller node:

    sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
    service ntp restart
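
  • (Optional) Verify that the NTP daemon sees its peers:

    ntpq -p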
    

2.5. Others

  • Install other services:

    apt-get install vlan bridge-utils
    
  • Enable IP forwarding:

    nano /etc/sysctl.conf
    # Uncomment net.ipv4.ip_forward=1. To apply it without rebooting, also run:
    sysctl net.ipv4.ip_forward=1
    

2.6. Keystone

This is how we install OpenStack's identity service:

  • Start with the keystone packages:

    apt-get install keystone
    
  • Create a new MySQL database for keystone:

    mysql -u root -p
    CREATE DATABASE keystone;
    GRANT ALL ON keystone.* TO 'keystoneUser'@'%' IDENTIFIED BY 'keystonePass';
    quit;
    
  • Adapt the connection attribute in /etc/keystone/keystone.conf to point to the new database:

    connection = mysql://keystoneUser:keystonePass@100.10.10.51/keystone
    
  • Restart the identity service then synchronize the database:

    service keystone restart
    keystone-manage db_sync
    
  • Populate the keystone database using the two scripts (keystone_basic.sh and keystone_endpoints_basic.sh) shipped in this git repository. Beware that you MUST comment out every part related to Quantum if you don't intend to install it; otherwise you will have trouble with your dashboard later:

    #Modify the HOST_IP and HOST_IP_EXT variable before executing the scripts
    
    chmod +x keystone_basic.sh
    chmod +x keystone_endpoints_basic.sh
    
    ./keystone_basic.sh
    ./keystone_endpoints_basic.sh
    
  • Create a simple credential file and load it so you won't be bothered later:

    nano creds
    #Paste the following:
    export OS_TENANT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=admin_pass
    export OS_AUTH_URL="http://100.10.10.51:5000/v2.0/"
    # Load it:
    source creds
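
  • (Optional) With the credentials loaded, a quick sanity check:

    keystone user-list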
    
  • To test Keystone, we use a simple curl request:

    apt-get install curl openssl
    curl http://100.10.10.51:35357/v2.0/endpoints -H 'x-auth-token: ADMIN'
    

2.7. Glance

  • After installing Keystone, we continue with the image service, a.k.a. Glance:

    apt-get install glance
    
  • Create a new MySQL database for Glance:

    mysql -u root -p
    CREATE DATABASE glance;
    GRANT ALL ON glance.* TO 'glanceUser'@'%' IDENTIFIED BY 'glancePass';
    quit;
    
  • Update /etc/glance/glance-api-paste.ini with:

    [filter:authtoken]
    paste.filter_factory = keystone.middleware.auth_token:filter_factory
    auth_host = 100.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = glance
    admin_password = service_pass
    
  • Update the /etc/glance/glance-registry-paste.ini with:

    [filter:authtoken]
    paste.filter_factory = keystone.middleware.auth_token:filter_factory
    auth_host = 100.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = glance
    admin_password = service_pass
    
  • Update /etc/glance/glance-api.conf with:

    sql_connection = mysql://glanceUser:glancePass@100.10.10.51/glance
    
  • And:

    [paste_deploy]
    flavor = keystone
    
  • Update the /etc/glance/glance-registry.conf with:

    sql_connection = mysql://glanceUser:glancePass@100.10.10.51/glance
    
  • And:

    [paste_deploy]
    flavor = keystone
    
  • Restart the glance-api and glance-registry services:

    service glance-api restart; service glance-registry restart
    
  • Synchronize the glance database:

    glance-manage db_sync
    
  • To test that Glance is installed correctly, upload a new image to the store. Start by downloading the CirrOS cloud image to your node, then upload it to Glance:

    mkdir images
    cd images
    wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
    glance image-create --name myFirstImage --is-public true --container-format bare --disk-format qcow2 < cirros-0.3.0-x86_64-disk.img
    
  • Now list the images to see what you have just uploaded:

    glance image-list
    

2.8. Quantum

  • Install the Quantum server and the OpenVSwitch package collection:

    apt-get install quantum-server quantum-plugin-openvswitch
    
  • Create a database:

    mysql -u root -p
    CREATE DATABASE quantum;
    GRANT ALL ON quantum.* TO 'quantumUser'@'%' IDENTIFIED BY 'quantumPass';
    quit;
    
  • Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:

    #Under the database section
    [DATABASE]
    sql_connection = mysql://quantumUser:quantumPass@100.10.10.51/quantum
    
    #Under the OVS section
    [OVS]
    tenant_network_type = gre
    tunnel_id_ranges = 1:1000
    enable_tunneling = True
    
  • Edit /etc/quantum/api-paste.ini

    [filter:authtoken]
    paste.filter_factory = keystone.middleware.auth_token:filter_factory
    auth_host = 100.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = quantum
    admin_password = service_pass
    
  • Restart the quantum server:

    service quantum-server restart
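
  • (Optional) With your creds file loaded, verify that the Quantum server answers (the list will be empty for now):

    quantum net-list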
    

2.9. Nova

  • Start by installing nova components:

    apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy
    
  • Prepare a MySQL database for Nova:

    mysql -u root -p
    CREATE DATABASE nova;
    GRANT ALL ON nova.* TO 'novaUser'@'%' IDENTIFIED BY 'novaPass';
    quit;
    
  • Now modify the authtoken section in the /etc/nova/api-paste.ini file to this:

    [filter:authtoken]
    paste.filter_factory = keystone.middleware.auth_token:filter_factory
    auth_host = 100.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = nova
    admin_password = service_pass
    signing_dirname = /tmp/keystone-signing-nova
    
  • Modify the /etc/nova/nova.conf like this:

    [DEFAULT]
    logdir=/var/log/nova
    state_path=/var/lib/nova
    lock_path=/run/lock/nova
    verbose=True
    api_paste_config=/etc/nova/api-paste.ini
    scheduler_driver=nova.scheduler.simple.SimpleScheduler
    s3_host=100.10.10.51
    ec2_host=100.10.10.51
    ec2_dmz_host=100.10.10.51
    rabbit_host=100.10.10.51
    cc_host=100.10.10.51
    dmz_cidr=169.254.169.254/32
    metadata_host=100.10.10.51
    metadata_listen=0.0.0.0
    nova_url=http://100.10.10.51:8774/v1.1/
    sql_connection=mysql://novaUser:novaPass@100.10.10.51/nova
    ec2_url=http://100.10.10.51:8773/services/Cloud
    root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
    
    # Auth
    use_deprecated_auth=false
    auth_strategy=keystone
    keystone_ec2_url=http://100.10.10.51:5000/v2.0/ec2tokens
    # Imaging service
    glance_api_servers=100.10.10.51:9292
    image_service=nova.image.glance.GlanceImageService
    
    # Vnc configuration
    novnc_enabled=true
    novncproxy_base_url=http://192.168.100.51:6080/vnc_auto.html
    novncproxy_port=6080
    vncserver_proxyclient_address=192.168.100.51
    vncserver_listen=0.0.0.0
    
    # Network settings
    network_api_class=nova.network.quantumv2.api.API
    quantum_url=http://100.10.10.51:9696
    quantum_auth_strategy=keystone
    quantum_admin_tenant_name=service
    quantum_admin_username=quantum
    quantum_admin_password=service_pass
    quantum_admin_auth_url=http://100.10.10.51:35357/v2.0
    libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
    linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
    firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
    
    # Compute #
    compute_driver=libvirt.LibvirtDriver
    
    # Cinder #
    volume_api_class=nova.volume.cinder.API
    osapi_volume_listen_port=5900
    
  • Synchronize your database:

    nova-manage db sync
    
  • Restart nova-* services:

    cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
    
  • Check for the smiling faces on nova-* services to confirm your installation:

    nova-manage service list
    

2.10. Cinder

  • Install the required packages:

    apt-get install cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms
    
  • Enable the iSCSI target service in its defaults file:

    sed -i 's/false/true/g' /etc/default/iscsitarget
    
  • Start the services:

    service iscsitarget start
    service open-iscsi start
    
  • Prepare a MySQL database for Cinder:

    mysql -u root -p
    CREATE DATABASE cinder;
    GRANT ALL ON cinder.* TO 'cinderUser'@'%' IDENTIFIED BY 'cinderPass';
    quit;
    
  • Configure /etc/cinder/api-paste.ini like the following:

    [filter:authtoken]
    paste.filter_factory = keystone.middleware.auth_token:filter_factory
    service_protocol = http
    service_host = 192.168.100.51
    service_port = 5000
    auth_host = 100.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = cinder
    admin_password = service_pass
    
  • Edit the /etc/cinder/cinder.conf to:

    [DEFAULT]
    rootwrap_config=/etc/cinder/rootwrap.conf
    sql_connection = mysql://cinderUser:cinderPass@100.10.10.51/cinder
    api_paste_config = /etc/cinder/api-paste.ini
    iscsi_helper=ietadm
    volume_name_template = volume-%s
    volume_group = cinder-volumes
    verbose = True
    auth_strategy = keystone
    #osapi_volume_listen_port=5900
    
  • Then, synchronize your database:

    cinder-manage db sync
    
  • Finally, don't forget to create a volume group and name it cinder-volumes:

    dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G
    losetup /dev/loop2 cinder-volumes
    fdisk /dev/loop2
    #Type in the following:
    n
    p
    1
    ENTER
    ENTER
    t
    8e
    w
    
  • Proceed to create the physical volume then the volume group:

    pvcreate /dev/loop2
    vgcreate cinder-volumes /dev/loop2
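
  • (Optional) Confirm that the volume group exists:

    vgdisplay cinder-volumes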
    

Note: Beware that this volume group is lost after a system reboot, because the loop device does not persist; see the sketch below for one way to restore it at boot.
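
A minimal sketch: re-attach the loop device from /etc/rc.local. The /root/cinder-volumes path below is an assumption; use the path where you actually ran the dd command above:

    nano /etc/rc.local
    # Add these lines before the final "exit 0"
    # (the /root/cinder-volumes path is an assumption):
    losetup /dev/loop2 /root/cinder-volumes
    service cinder-volume restart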

  • Restart the cinder services:

    service cinder-volume restart
    service cinder-api restart
    

2.11. Horizon

  • To install Horizon, proceed as follows:

    apt-get install openstack-dashboard memcached
    
  • If you don't like the OpenStack Ubuntu theme, you can disable it and go back to the default look:

    nano /etc/openstack-dashboard/local_settings.py
    #Comment these lines
    #Enable the Ubuntu theme if it is present.
    #try:
    #    from ubuntu_theme import *
    #except ImportError:
    #    pass
    
  • Reload Apache and memcached:

    service apache2 restart; service memcached restart
    

You can now access your OpenStack dashboard at http://192.168.100.51/horizon with the credentials admin:admin_pass.

Note: A reboot might be needed for a successful login

3. Network Node

3.1. Preparing the Node

  • Update your system:

    apt-get update
    apt-get upgrade
    apt-get dist-upgrade
    
  • Install the NTP service:

    apt-get install ntp
    
  • Configure the NTP server to follow the controller node:

    sed -i 's/server ntp.ubuntu.com/server 100.10.10.51/g' /etc/ntp.conf
    service ntp restart
    
  • Install other services:

    apt-get install vlan bridge-utils
    
  • Enable IP forwarding:

    nano /etc/sysctl.conf
    # Uncomment net.ipv4.ip_forward=1. To apply it without rebooting, also run:
    sysctl net.ipv4.ip_forward=1
    

3.2. Networking

  • Put the internet-connected NIC (eth2) into promiscuous mode via /etc/network/interfaces:

    auto eth2
    iface eth2 inet manual
    up ifconfig $IFACE 0.0.0.0 up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ifconfig $IFACE down
    
    auto eth0
    iface eth0 inet static
    address 100.10.10.52
    netmask 255.255.255.0
    

3.3. OpenVSwitch

  • Install Open vSwitch:

    apt-get install -y openvswitch-switch openvswitch-datapath-dkms
    
  • Create the bridges:

    #br-int is used for VM integration
    ovs-vsctl add-br br-int
    
    #br-ex is used for accessing internet.
    ovs-vsctl add-br br-ex
    ovs-vsctl br-set-external-id br-ex bridge-id br-ex
    ovs-vsctl add-port br-ex eth2
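
  • (Optional) Verify the bridge layout:

    ovs-vsctl show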
    

3.4. Quantum

We need to install the L3 agent, the DHCP agent, and the Open vSwitch plugin agent.

  • Install quantum DHCP and l3 agents:

    apt-get -y install quantum-dhcp-agent quantum-l3-agent quantum-plugin-openvswitch-agent
    
  • Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:

    #Under the database section
    [DATABASE]
    sql_connection = mysql://quantumUser:quantumPass@100.10.10.51/quantum
    
    #Under the OVS section
    [OVS]
    tenant_network_type = gre
    tunnel_id_ranges = 1:1000
    integration_bridge = br-int
    tunnel_bridge = br-tun
    local_ip = 100.10.10.52
    enable_tunneling = True
    
  • In addition, update the /etc/quantum/l3_agent.ini:

    auth_url = http://100.10.10.51:35357/v2.0
    auth_region = RegionOne
    admin_tenant_name = service
    admin_user = quantum
    admin_password = service_pass
    metadata_ip = 192.168.100.51
    metadata_port = 8775
    use_namespaces = False
    
  • Edit /etc/quantum/dhcp_agent.ini:

    use_namespaces = False
    
  • Make sure that your RabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node:

    rabbit_host = 100.10.10.51
    
  • To get the l3_agent to function properly, you need to perform an additional operation described in the original guide.

  • Restart all the services:

    service quantum-dhcp-agent restart
    service quantum-l3-agent restart
    service quantum-plugin-openvswitch-agent restart
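
  • (Optional) With tunneling enabled, the plugin agent creates the br-tun bridge on startup; confirm it appeared:

    ovs-vsctl list-br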
    

4. Compute Node

4.1. Preparing the Node

  • Update your system:

    apt-get update
    apt-get upgrade
    apt-get dist-upgrade
    
  • Install the NTP service:

    apt-get install ntp
    
  • Configure the NTP server to follow the controller node:

    sed -i 's/server ntp.ubuntu.com/server 100.10.10.51/g' /etc/ntp.conf
    service ntp restart
    
  • Install other services:

    apt-get install vlan bridge-utils
    
  • Enable IP forwarding:

    nano /etc/sysctl.conf
    # Uncomment net.ipv4.ip_forward=1. To apply it without rebooting, also run:
    sysctl net.ipv4.ip_forward=1
    

4.2. Networking

  • Configure /etc/network/interfaces as follows:

    # OpenStack management
    auto eth0
    iface eth0 inet static
    address 100.10.10.53
    netmask 255.255.255.0
    

4.3. KVM

  • Make sure that your hardware supports virtualization:

    apt-get install cpu-checker
    kvm-ok
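
  • On capable hardware, kvm-ok should report something like the following (exact wording may vary with the package version):

    INFO: /dev/kvm exists
    KVM acceleration can be used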
    
  • If the response is positive, move on to install KVM and configure it:

    apt-get install -y kvm libvirt-bin pm-utils
    
  • Edit the cgroup_device_acl array in the /etc/libvirt/qemu.conf file to:

    cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet","/dev/net/tun"
    ]
    
  • Delete the default virtual bridge:

    virsh net-destroy default
    virsh net-undefine default
    
  • Enable live migration by updating the /etc/libvirt/libvirtd.conf file:

    listen_tls = 0
    listen_tcp = 1
    auth_tcp = "none"
    
  • Edit the libvirtd_opts variable in the /etc/init/libvirt-bin.conf file:

    env libvirtd_opts="-d -l"
    
  • Edit the /etc/default/libvirt-bin file:

    libvirtd_opts="-d -l"
    
  • Restart the libvirt service to load the new values:

    service libvirt-bin restart
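
  • (Optional) Verify that libvirtd now listens on its TCP port (16509 by default):

    netstat -ntlp | grep libvirtd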
    

4.4. OpenVSwitch

  • Install Open vSwitch:

    apt-get install -y openvswitch-switch openvswitch-datapath-dkms
    
  • Create the bridges:

    #br-int will be used for VM integration
    ovs-vsctl add-br br-int
    

4.5. Quantum

  • Install the Quantum openvswitch agent:

    apt-get -y install quantum-plugin-openvswitch-agent
    
  • Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini with:

    #Under the database section
    [DATABASE]
    sql_connection = mysql://quantumUser:quantumPass@100.10.10.51/quantum
    
    #Under the OVS section
    [OVS]
    tenant_network_type = gre
    tunnel_id_ranges = 1:1000
    integration_bridge = br-int
    tunnel_bridge = br-tun
    local_ip = 100.10.10.53
    enable_tunneling = True
    
  • Make sure that your RabbitMQ IP in /etc/quantum/quantum.conf is set to the controller node:

    rabbit_host = 100.10.10.51
    
  • Restart all the services:

    service quantum-plugin-openvswitch-agent restart
    

4.6. Nova

  • Install nova's required components for the compute node:

    apt-get install nova-compute-kvm
    
  • Now modify the authtoken section in the /etc/nova/api-paste.ini file to this:

    [filter:authtoken]
    paste.filter_factory = keystone.middleware.auth_token:filter_factory
    auth_host = 100.10.10.51
    auth_port = 35357
    auth_protocol = http
    admin_tenant_name = service
    admin_user = nova
    admin_password = service_pass
    signing_dirname = /tmp/keystone-signing-nova
    
  • Edit the /etc/nova/nova-compute.conf file:

    [DEFAULT]
    libvirt_type=kvm
    libvirt_ovs_bridge=br-int
    libvirt_vif_type=ethernet
    libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
    libvirt_use_virtio_for_bridges=True
    
  • Modify the /etc/nova/nova.conf like this:

    [DEFAULT]
    logdir=/var/log/nova
    state_path=/var/lib/nova
    lock_path=/run/lock/nova
    verbose=True
    api_paste_config=/etc/nova/api-paste.ini
    scheduler_driver=nova.scheduler.simple.SimpleScheduler
    s3_host=100.10.10.51
    ec2_host=100.10.10.51
    ec2_dmz_host=100.10.10.51
    rabbit_host=100.10.10.51
    cc_host=100.10.10.51
    dmz_cidr=169.254.169.254/32
    metadata_host=100.10.10.51
    metadata_listen=0.0.0.0
    nova_url=http://100.10.10.51:8774/v1.1/
    sql_connection=mysql://novaUser:novaPass@100.10.10.51/nova
    ec2_url=http://100.10.10.51:8773/services/Cloud
    root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
    
    # Auth
    use_deprecated_auth=false
    auth_strategy=keystone
    keystone_ec2_url=http://100.10.10.51:5000/v2.0/ec2tokens
    # Imaging service
    glance_api_servers=100.10.10.51:9292
    image_service=nova.image.glance.GlanceImageService
    
    # Vnc configuration
    novnc_enabled=true
    novncproxy_base_url=http://192.168.100.51:6080/vnc_auto.html
    novncproxy_port=6080
    vncserver_proxyclient_address=100.10.10.53
    vncserver_listen=0.0.0.0
    
    # Network settings
    network_api_class=nova.network.quantumv2.api.API
    quantum_url=http://100.10.10.51:9696
    quantum_auth_strategy=keystone
    quantum_admin_tenant_name=service
    quantum_admin_username=quantum
    quantum_admin_password=service_pass
    quantum_admin_auth_url=http://100.10.10.51:35357/v2.0
    libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
    linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
    firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
    
    # Compute #
    compute_driver=libvirt.LibvirtDriver
    
    # Cinder #
    volume_api_class=nova.volume.cinder.API
    osapi_volume_listen_port=5900
    
  • Restart nova-* services:

    cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
    
  • Check for the smiling faces on nova-* services to confirm your installation:

    nova-manage service list
    

5. Start your first VM

To start your first VM, we first need to create a new tenant, a new user, and internal and external networks. SSH into your controller node and perform the following.

  • Create a new tenant:

    keystone tenant-create --name project_one
    
  • Create a new user and assign the member role to it in the new tenant (keystone role-list to get the appropriate id):

    keystone user-create --name=user_one --pass=user_one --tenant-id $put_id_of_project_one --email=user_one@domain.com
    keystone user-role-add --tenant-id $put_id_of_project_one  --user-id $put_id_of_user_one --role-id $put_id_of_member_role
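
  • If you prefer not to copy the ids by hand, the $put_id_... placeholders used above can be filled with awk; a small sketch, assuming the default table output of the keystone client and that the member role is named Member (as created by keystone_basic.sh):

    put_id_of_project_one=$(keystone tenant-list | awk '/ project_one / { print $2 }')
    put_id_of_user_one=$(keystone user-list | awk '/ user_one / { print $2 }')
    put_id_of_member_role=$(keystone role-list | awk '/ Member / { print $2 }')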
    
  • Create a new network for the tenant:

    quantum net-create --tenant-id $put_id_of_project_one net_proj_one
    
  • Create a new subnet inside the new tenant network:

    quantum subnet-create --tenant-id $put_id_of_project_one net_proj_one 50.50.1.0/24
    
  • Create a router for the new tenant:

    quantum router-create --tenant-id $put_id_of_project_one router_proj_one
    
  • Add the router to the subnet:

    quantum router-interface-add $put_router_proj_one_id_here $put_subnet_id_here
    

You can now start creating VMs, but they will not be accessible from the internet. If you want them to be, perform the following:

  • Create your external network with the tenant id belonging to the service tenant (keystone tenant-list to get the appropriate id):

    quantum net-create --tenant-id $put_id_of_service_tenant ext_net --router:external=True
    
  • Go back to the /etc/quantum/l3_agent.ini file and edit it:

    gateway_external_net_id = $id_of_ext_net
    router_id = $router_proj_one_id
    
  • Restart l3-agent:

    service quantum-l3-agent restart
    
  • Create a subnet containing your floating IPs:

    quantum subnet-create --tenant-id $put_id_of_service_tenant --allocation-pool start=192.168.100.102,end=192.168.100.126 --gateway 192.168.100.1 ext_net 192.168.100.100/24 --enable_dhcp=False
    
  • Set the router's gateway to the external network:

    quantum router-gateway-set $put_router_proj_one_id_here $put_id_of_ext_net_here
    

VMs gain access to the metadata server running on the controller node via the external network. To create that necessary connection, perform the following:

  • Get the IP address of the router_proj_one gateway port:

    quantum port-list -- --device_id $put_router_proj_one_id_here --device_owner network:router_gateway
    
  • Add the following route on controller node only:

    route add -net 50.50.1.0/24 gw $router_proj_one_IP
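
  • This route is lost at reboot. One way to persist it, as a sketch, is an up hook under the external NIC stanza (eth1, per section 2.2) in the controller's /etc/network/interfaces; substitute the literal router IP, since the variable will not expand there:

    # under "iface eth1 inet static" in /etc/network/interfaces:
    up route add -net 50.50.1.0/24 gw $router_proj_one_IP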
    

Unfortunately, you can't use the dashboard to assign floating IPs to VMs, so you need to get your hands a bit dirty to give your VM a public IP.

  • Start by allocating a floating IP to the project_one tenant:

    quantum floatingip-create --tenant-id $put_id_of_project_one ext_net
    
  • Pick the id of the port corresponding to your VM:

    quantum port-list
    
  • Associate the floating IP to your VM:

    quantum floatingip-associate $put_id_floating_ip $put_id_vm_port
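
  • Verify the association and test connectivity:

    quantum floatingip-list
    ping $put_your_floating_ip_here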
    

That's it! You can now ping your VM and start administering your OpenStack!

I hope you enjoyed this guide. If you have any feedback, don't hesitate to send it.

6. Licensing

OpenStack Folsom Install Guide by Bilel Msekni is licensed under a Creative Commons Attribution 3.0 Unported License.

To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/.

7. Contacts

Bilel Msekni: 

8. Acknowledgement

This work has been supported by:

  • CompatibleOne Project (French FUI project)
  • Easi-Clouds (ITEA2 project)

9. Credits

This work has been based on:

  • Emilien Macchi's Folsom guide
  • OpenStack Documentation
  • OpenStack Quantum Install

10. To do

This guide is just a starting point. Your suggestions are always welcome.

Some things this guide still needs:

  • Define more Quantum configurations to cover all possible use cases.