INN694 – Project
OpenStack
Semester 2, 2014
STUDENT NAME STUDENT ID
Fabien Chastel n8745064
SUPERVISOR
Dr Vicky Liu
Executive Summary
The purpose of this document is to provide a step-by-step installation procedure for OpenStack. This includes the configuration of the primary environment, such as the network time synchronisation, the database that stores all the data the OpenStack services need, the OpenStack packages and the messaging service used by the OpenStack services to communicate. It then explains how to install the core components needed to run a basic instance, which are the Identity service, the Image service, the Compute service, a compute node and the Networking service.
TABLE OF CONTENTS
EXECUTIVE SUMMARY
1 - INTRODUCTION
2 - OPENSTACK ENVIRONMENT
2.1 - HARDWARE REQUIREMENTS
2.2 - SOFTWARE REQUIREMENTS
3 - INSTALLATION OF OPENSTACK
3.1 - PRIMARY ENVIRONMENT CONFIGURATION
3.2 - IDENTITY SERVICE: KEYSTONE
3.3 - IMAGE SERVICE: GLANCE
3.4 - COMPUTE SERVICE: NOVA
3.5 - NETWORKING SERVICE: NEUTRON
3.6 - DASHBOARD: HORIZON
3.7 - BLOCK STORAGE: CINDER
3.8 - ORCHESTRATION: HEAT
4 - TROUBLESHOOTING
5 - USEFUL COMMANDS
5.1 - GENERAL COMMANDS
5.2 - KEYSTONE
5.3 - GLANCE
6 - TUTORIAL
6.1 - LAUNCH AN INSTANCE
6.2 - PROVIDE PUBLIC ADDRESS TO INSTANCES
6.3 - CREATE A NEW SECURITY GROUP (ACL)
6.4 - FORCE THE MTU
7 - TABLE OF FIGURES
8 - TABLE OF TABLES
9 - REFERENCES
1 - Introduction
OpenStack is a worldwide association of developers and cloud computing technologists, managed by the
OpenStack Foundation, that produces a ubiquitous open source computing platform for public and private
clouds [1]. Cloud computing is about sharing resources such as RAM, CPU and storage among several machines.
For instance, if you have two computers, one with a 2-core CPU, 4 GB of RAM and 100 GB of storage and the
other with a 4-core CPU, 16 GB of RAM and 500 GB of storage, the resources are aggregated and the user
perceives them as one server with a 6-core CPU, 20 GB of RAM and 600 GB of storage (in theory). In this case,
OpenStack was installed on three computers provided by QUT; the specifications of those computers are
listed later in the report.
OpenStack is an open-source cloud computing platform mainly focused on IaaS (Infrastructure
as a Service). It can control large pools of compute, storage and networking resources of an entire datacentre
through a single web-based dashboard.
Figure 1 - OpenStack overview [1]
2 - OpenStack environment
The OpenStack environment is highly scalable and depends on the needs of each company. The
scale of an OpenStack deployment will probably always differ from one company to another according to the needs of the
company, and "no one solution meets everyone's scalability goals" [2]. For instance, some companies need
a plethora of big instances that require many vCPUs and a lot of RAM but little storage, whereas other companies
only need small instances using few vCPUs and little RAM but a huge amount of storage. OpenStack has
been designed to be horizontally scalable in order to suit the cloud paradigm [2]. This means that after the initial
installation of OpenStack, it is possible to add more compute power or storage simply by adding another server to
the cloud.
OpenStack can be installed in a virtual machine using software like VMware or VirtualBox for experimental
purposes, to run a few small instances, just as it can be installed in a multinational enterprise running thousands
of instances, small or big, such as Amazon with its Amazon Cloud Services.
2.1 - Hardware requirements
A basic environment does not need a huge amount of resources to be functional. However, there is a
minimum requirement in order to support several minimal CirrOS instances. This minimum is as below:
Node Processor Memory Storage
Controller 1 2 GB 5 GB
Network 1 512 MB 5 GB
Compute 1 2 GB 10 GB
Table 1 - Hardware requirements
2.2 - Software requirement
OpenStack needs to be installed on top of a Linux distribution; the list of compatible distributions is as
follows:
 Debian
 openSUSE and SUSE Linux Enterprise Server
 Red Hat Enterprise Linux
 CentOS
 Fedora
 Ubuntu
Before installing OpenStack, it is necessary to have a good base system in order to install the OpenStack
services with few or no problems. Firstly, it is strongly recommended to start from a minimal
installation of the Linux distribution, to leave more resources for OpenStack and reduce
confusion, and it is highly recommended to use a 64-bit version of Linux, for a number of reasons such as the
RAM limit of 32-bit systems (3-4 GB) and the increased capability of the processor. It also allows the creation of
64-bit instances as well as 32-bit ones.
Secondly, the network topology should reflect the needs of the company and the IP addressing should be
chosen quite carefully. All automatic IP assignment should be disabled and the addresses manually configured on each
node. In addition to the addressing, it is better to have a DNS server with a record for every node; it is
not compulsory, as the same result can be achieved with the "hosts" file located in the "/etc" folder, but this becomes
complicated to manage as the network grows. Then, the time should be synchronised amongst all nodes
from the controller, using an application like NTP.
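For instance, with the addressing used in this report, the "/etc/hosts" file on each node could contain entries like the following (the short host names "controller", "network" and "compute" are illustrative; use the names chosen for the actual deployment):
192.168.1.11 controller.labqut-osuc.com controller
192.168.1.12 network.labqut-osuc.com network
192.168.1.13 compute.labqut-osuc.com compute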
The following step is to install a database, as the majority of the OpenStack services need one in order
to store their information. The database must be installed, preferably on the controller, as well as the Python library
related to the chosen database. The Python library also needs to be installed on all additional nodes that
access this database through the OpenStack APIs. The recommended database software and Python
library for OpenStack are MySQL and the MySQL Python library.
Then the OpenStack packages need to be installed on each server/node. The OpenStack packages can be
installed by adding a specific repository and using the normal install command such as "apt-get install" or "yum
install". Some recent Linux distributions, such as Ubuntu 14.04, include those packages in their repositories.
The final step before installing the main services is to install a message broker. Indeed, to coordinate
operations and status information among services, OpenStack uses a message broker. Several message brokers
are compatible with OpenStack, yet the most commonly used is RabbitMQ. As with the database, it is
preferable to install the message broker on the main controller.
3 - Installation of OpenStack
The first stage of the installation of OpenStack is to install the operating system that will host the cloud.
This documentation assumes that a basic installation of Linux was done using Ubuntu 14.04, that the system was
updated (apt-get update) and upgraded (apt-get upgrade) and that a DNS was implemented to resolve all the IPs.
In addition, it is important to test the configuration by following the verification in each section, or by checking
the respective log after restarting a service; if a problem occurs, it is recommended to fix it before continuing.
After following Sections 3.1 to 3.5, all the core components necessary to launch a basic instance will be
installed; the rest is optional.
All the commands contain values (shown in red in the original document) that need to be changed according to the actual network. For instance, most
passwords were generated using a command ("openssl rand -hex 10"), also shown in section 3.2.1.5.
Table 2 shows the list of passwords that will be needed during the installation. The passwords that need to be
remembered, such as the system and MySQL root passwords, were chosen carefully to be easy to remember, like
"osuc@123456"; all the other passwords created for the OpenStack services in Keystone or MySQL were
generated on the command line for security reasons.
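As a sketch, the generated passwords of Table 2 can be produced in one go with a small shell loop (the service list is illustrative):
for svc in keystone glance nova horizon cinder neutron heat; do echo "$svc: $(openssl rand -hex 10)"; done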
Location Username Password Description
System root root_password Password for Ubuntu
YourUsername Your_Username_Password Password for Ubuntu
MySQL root MySQL_Root_Password Root password for the database
dbu_keystone MySQL_Keystone_Password Database user for Identity service
dbu_glance MySQL_Glance_Password Database user for Image Service
dbu_nova MySQL_Nova_Password Database user for Compute service
dbu_horizon MySQL_Horizon_Password Database user for the dashboard
dbu_cinder MySQL_Cinder_Password Database user for the Block Storage service
dbu_neutron MySQL_Neutron_Password Database user for the Networking service
dbu_heat MySQL_Heat_Password Database user for the Orchestration service
RabbitMQ guest Rabbit_guest_password User guest of RabbitMQ
YourUsername Rabbit_Strong_Password Another account of RabbitMQ
Keystone admin Keystone_Admin_Password Main user
glance Keystone_Glance_Password User for Image Service
nova Keystone_Nova_Password User for Compute service
cinder Keystone_Cinder_Password User for Block Storage service
neutron Keystone_Neutron_Password User for Networking service
heat Keystone_Heat_Password User for Orchestration service
Table 2 - List of passwords
Figure 2 - Network topology: the Controller node, Network node and Compute node are connected through the Management network (IPv4 192.168.1.0/24, IPv6 2402:ec00:face:1::/64); the Network node and Compute node are also connected through the Instance network (IPv4 192.168.2.0/24, IPv6 2402:ec00:face:2::/64); the Network node additionally connects to the External network towards QUT/Internet (IPv4 10.0.0.0/24, IPv6 2402:ec00:face:10::/64). The domain for all networks is labqut-osuc.com.
3.1 - Primary environment configuration
3.1.1 - Network configuration
3.1.1.1 - Controller node
vi /etc/network/interfaces
auto eth0
iface eth0 inet static
address 192.168.1.11
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.12
dns-nameservers 192.168.1.12
dns-search labqut-osuc.com
iface eth0 inet6 static
pre-up modprobe ipv6
address 2402:ec00:face:1::11
netmask 64
gateway 2402:ec00:face:1::1
3.1.1.2 - Compute node
vi /etc/network/interfaces
# The primary network interface
auto em1
iface em1 inet static
address 192.168.1.13
netmask 255.255.255.0
gateway 192.168.1.12
dns-nameservers 192.168.1.12
dns-search labqut-osuc.com
iface em1 inet6 static
pre-up modprobe ipv6
address 2402:ec00:face:1::13
netmask 64
gateway 2402:ec00:face:1::1
auto p4p1
iface p4p1 inet static
address 192.168.2.13
netmask 255.255.255.0
iface p4p1 inet6 static
pre-up modprobe ipv6
address 2402:ec00:face:2::13
netmask 64
3.1.1.3 - Network node
vi /etc/network/interfaces
# The br-ex interface will be created in section 3.5.2.9 - Step 8: Setup the Open vSwitch (OVS) service
auto br-ex
iface br-ex inet static
address 10.0.0.2
netmask 255.255.255.0
gateway 10.0.0.1
dns-nameservers 192.168.1.12
auto eth0
iface eth0 inet static
address 192.168.1.12
netmask 255.255.255.0
iface eth0 inet6 static
pre-up modprobe ipv6
address 2402:ec00:face:1::12
netmask 64
#This interface can be on DHCP for the beginning of the installation
auto eth1
iface eth1 inet manual
up ip link set $IFACE up
down ip link set $IFACE down
auto eth2
iface eth2 inet static
address 192.168.2.12
netmask 255.255.255.0
iface eth2 inet6 static
pre-up modprobe ipv6
address 2402:ec00:face:2::12
netmask 64
3.1.1.4 - Test the connectivity
The simple wayto verify thatthe network configurationhas been done correctlyif to usethe command
ping. It is important to make sure that all nodes are able to ping the nodes that are on the same network.
For instance the Compute node must be able to ping both interfaces of the network node in IPv4 and
IPv6. Regarding the compute node, it should only ping one interface of the network and the compute
node:
From the compute node:
ping 192.168.1.12
ping 192.168.1.13
ping6 2402:ec00:face:1::12
ping6 2402:ec00:face:1::13
From the network node:
ping 192.168.1.11
ping 192.168.1.13
ping 192.168.2.13
ping6 2402:ec00:face:1::11
ping6 2402:ec00:face:1::13
ping6 2402:ec00:face:2::13
From the Compute node:
ping 192.168.1.11
ping 192.168.1.12
ping 192.168.2.12
ping6 2402:ec00:face:1::11
ping6 2402:ec00:face:1::12
ping6 2402:ec00:face:2::12
3.1.2 - Network Time Protocol
It is important to synchronise the clocks of all machines and NTP will do it automatically. It is suggested to
synchronise the time of all additional nodes from the controller.
3.1.2.1 - Step 1: Install the package
sudo apt-get install ntp
3.1.2.2 - Step 2: Remove the deprecated package ntpdate
sudo apt-get remove ntpdate
3.1.2.3 - Step 3: Setup the server
sudo vi /etc/ntp.conf
#Add iburst at the end of line for your favourite server
server 0.ubuntu.pool.ntp.org iburst
server 1.ubuntu.pool.ntp.org
server 2.ubuntu.pool.ntp.org
server 3.ubuntu.pool.ntp.org
#Ubuntu's ntp server
server ntp.ubuntu.com
# ...
# Authorise your own network to communicate with the server.
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
3.1.2.4 - Step 4: Setup the client(s)
sudo vi /etc/ntp.conf
#Comment the lines that start with "server"
#server 0.ubuntu.pool.ntp.org
#server 1.ubuntu.pool.ntp.org
#server 2.ubuntu.pool.ntp.org
#server 3.ubuntu.pool.ntp.org
server IP/HostnameOfController iburst
#Leave the fallback, Ubuntu's NTP server, in case your own server breaks down
server ntp.ubuntu.com
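On both the server and the clients, restart the NTP service so that the new configuration is taken into account:
sudo service ntp restart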
3.1.2.5 - Test
The time synchronisation can be tested by running the command "date" on all servers; the results should be
within one or two seconds of each other.
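A more detailed check can be done with the "ntpq" utility shipped with the ntp package; on a client, the controller should appear in the list of peers (an asterisk marks the currently selected time source):
ntpq -p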
3.1.3 - Database
3.1.3.1 - Step 1: Install the packages
sudo apt-get install python-mysqldb mysql-server
3.1.3.2 - Step 2: Adapt MySQL to work with OpenStack
sudo vi /etc/mysql/my.cnf
[mysqld]
...
#Allow other nodes to connect to the local database
bind-address = IP/HostnameOfMySQLServer
...
#enable InnoDB, UTF-8 character set, and UTF-8 collation by default
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
3.1.3.3 - Step 3: Restart MySQL
sudo service mysql restart
3.1.3.4 - Step 4: Delete the anonymous users (some connection problems might happen if they are still present)
sudo mysql_install_db (optional: to be used if the next command fails)
sudo mysql_secure_installation (answer "Yes" to all questions unless you have a good reason to answer no)
3.1.3.5 - Step 5: Install the MySQL Python library on the additional nodes (Optional)
sudo apt-get install python-mysqldb
3.1.4 - OpenStack packages
The latest version of OpenStack packages can be downloaded through the Ubuntu Cloud Archive,
which is a special repository.
3.1.4.1 - Step 1: Install python software
sudo apt-get install python-software-properties
Remark: The following steps are not required for Ubuntu 14.04.
3.1.4.2 - Step 2: Add the Ubuntu Cloud archive for Icehouse (optional)
sudo add-apt-repository cloud-archive:icehouse
3.1.4.3 - Step 3: Update the packages list and upgrade the system (optional)
sudo apt-get update
sudo apt-get dist-upgrade
3.1.4.4 - Step 4: Install the "Backported Linux Kernel" (only for Ubuntu 12.04: improves stability)
sudo apt-get install linux-image-generic-lts-saucy linux-headers-generic-lts-saucy
3.1.4.5 - Step 5: Restart the system
sudo reboot
3.1.5 - Messaging server
This documentation does not explain all the options of RabbitMQ; for more information about RabbitMQ
access control, please follow the website provided in reference [7].
3.1.5.1 - Step 1: Install the package
sudo apt-get install rabbitmq-server
3.1.5.2 - Step 2: Change default password of the existing user (guest/guest)
Remark: it is strongly recommended to change the password of the guest user for security purposes.
sudo rabbitmqctl change_password guest Rabbit_Guest_Password
3.1.5.3 - Step 3: Create a unique account
Remark: it is possible to use the guest username and password for each OpenStack service, but it is
not recommended.
sudo rabbitmqctl add_user YourUserName StrongPassword
3.1.5.4 - Step 4: Set up the access control
sudo rabbitmqctl add_vhost NameVHost
sudo rabbitmqctl set_user_tags YourUserName administrator
sudo rabbitmqctl set_permissions -p NameVHost YourUserName ".*" ".*" ".*"
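The result can be verified by listing the users and the permissions on the new virtual host:
sudo rabbitmqctl list_users
sudo rabbitmqctl list_permissions -p NameVHost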
3.2 - Identity service: Keystone
3.2.1 - Installation of Keystone
3.2.1.1 - Step 1: Install the package
sudo apt-get install keystone
3.2.1.2 - Step 2: Connect keystone to the MySQL database
sudo vi /etc/keystone/keystone.conf
[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://dbu_keystone:MySQL_Keystone_Password@IP/HostnameOfController/keystone
3.2.1.3 - Step 3: Create the database and the user with MySQL
mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'dbu_keystone'@'localhost'
IDENTIFIED BY 'MySQL_Keystone_Password';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'dbu_keystone'@'%'
IDENTIFIED BY 'MySQL_Keystone_Password';
mysql> exit
3.2.1.4 - Step 4: Create the tables
su -s /bin/sh -c "keystone-manage db_sync" keystone
3.2.1.5 - Step 5: Define the authorization token used for communication between the Identity Service and the other OpenStack
services, and the log directory
The following command generates a random shared key; it should also be used to generate all the passwords
used for the OpenStack services:
openssl rand -hex 10
Result: 3856bdace7abac9cfc78
sudo vi /etc/keystone/keystone.conf
[DEFAULT]
admin_token = 3856bdace7abac9cfc78
log_dir = /var/log/keystone
3.2.1.6 - Step 6: Restart the service
sudo service keystone restart
3.2.1.7 - Step 7: Purge the expired tokens every hour
The Identity service keeps all expired tokens in the local database without ever erasing them. This
can be helpful for auditing in a production environment, but it increases the size of the database as well
as affecting the performance of the other services. It is therefore recommended to purge the expired tokens every hour
using cron.
(crontab -l -u keystone 2>&1 | grep -q token_flush) || 
echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/crontabs/keystone
3.2.2 - Set environment variables
In order to use the OpenStack command-line clients such as "keystone", "neutron" and
others, you need to provide the address of the identity service (--os-auth-url), a username (--os-username)
and a password (--os-password). However, the first time the identity service is used, no user exists yet.
Therefore, you need to connect to the service using the token generated before, by exporting the
following variables:
3.2.2.1 - Export Variable for initial installation of keystone
export OS_SERVICE_TOKEN=3856bdace7abac9cfc78
export OS_SERVICE_ENDPOINT=http://IP/HostnameOfController:35357/v2.0
After creating the username and password needed to connect to the identity service, it is possible to set
the environment variables using an OpenStack RC file ("openrc.sh"), a project-specific environment file
that contains the credentials needed by all OpenStack services. To load these variables, you run the
command "source" followed by the name of the file. However, this solution is only possible once at least one user,
one tenant and the endpoint for the administration have been created.
3.2.2.2 - Step 1: Create the file and add the information for the authentication
sudo vi /home/NameOfProject-openrc.sh
export OS_USERNAME=Username
export OS_PASSWORD=Keystone_Username_Password
export OS_TENANT_NAME=NameOfProject
export OS_AUTH_URL=http://IP/HostnameOfController:35357/v2.0
3.2.2.3 - Step 2: Export the variables using the command "source" (the command must run in the current shell, hence no sudo)
source /home/NameOfProject-openrc.sh
3.2.3 - Users, tenants and roles
3.2.3.1 - Step 1: Create an "admin" user
sudo keystone user-create --name=admin --pass=Keystone_Admin_Password --email=infos@connetwork.com.au
3.2.3.2 - Step 2: Create an “admin” role
sudo keystone role-create --name=admin
3.2.3.3 - Step 3: Create an “admin” tenant
sudo keystone tenant-create --name=admin --description="Description of Admin Tenant"
3.2.3.4 - Step 4: Link the user, the role and the tenant together
sudo keystone user-role-add --user=admin --tenant=admin --role=admin
3.2.3.5 - Step 5: Link the user, the _member_ role and the tenant together
sudo keystone user-role-add --user=admin --role=_member_ --tenant=admin
3.2.3.6 - Step 6: Create a common user (you can create as many users as you want)
sudo keystone user-create --name=Username --pass=StrongPassword --email=infos@connetwork.com.au
3.2.3.7 - Step 7: Create a normal tenant (one tenant can have many users)
sudo keystone tenant-create --name=NameOfTenant --description="The description of your tenant"
3.2.3.8 - Step 8: Link them together
sudo keystone user-role-add --user=Username --role=_member_ --tenant=NameOfTenant
3.2.3.9 - Step 9: Create a tenant called "service" to access the OpenStack services
sudo keystone tenant-create --name=service --description="Service Tenant"
3.2.4 - Service and endpoints
3.2.4.1 - Step 1: Create a service entry for the Identity Service
sudo keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
3.2.4.2 - Step 2: Specify an API endpoint for the Identity service
sudo keystone endpoint-create 
--service-id=$(keystone service-list | awk '/ identity / {print $2}') 
--publicurl=http://IP/HostnameOfController:5000/v2.0 
--internalurl=http://IP/HostnameOfController:5000/v2.0 
--adminurl=http://IP/HostnameOfController:35357/v2.0
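As a quick sanity check (a sketch using the admin account created in section 3.2.3), unset the bootstrap variables, load the RC file, then request a token and list the users:
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
source /home/NameOfProject-openrc.sh
keystone token-get
keystone user-list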
3.3 - Image service: Glance
3.3.1 - Step 1: Install the packages
sudo apt-get install glance python-glanceclient
3.3.2 - Step 2: Add the database section in the files glance-api.conf and glance-registry.conf
sudo vi /etc/glance/glance-api.conf
...
[database]
connection = mysql://dbu_glance:MySQL_Glance_Password@IP/HostnameOfController/glance
sudo vi /etc/glance/glance-registry.conf
...
[database]
connection = mysql://dbu_glance:MySQL_Glance_Password@IP/HostnameOfController/glance
3.3.3 - Step 3: Add the information about the message broker in the file glance-api.conf
sudo vi /etc/glance/glance-api.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = IP/HostnameOfController
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
rabbit_notification_exchange = glance
rabbit_notification_topic = notifications
rabbit_durable_queues = False
3.3.4 - Step 4: Delete the default database
sudo rm /var/lib/glance/glance.sqlite
3.3.5 - Step 5: Create the Database and the user using MySQL
sudo mysql -u root -p
mysql> CREATE DATABASE glance;
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'dbu_glance'@'localhost'
IDENTIFIED BY 'MySQL_Glance_Password';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'dbu_glance'@'%'
IDENTIFIED BY 'MySQL_Glance_Password';
3.3.6 - Step 6: Create tables
su -s /bin/sh -c "glance-manage db_sync" glance
3.3.7 - Step 7: Create the user "glance" in the Identity service
sudo keystone user-create --name=glance --pass=Keystone_Glance_Password --email=infos@connetwork.com.au
sudo keystone user-role-add --user=glance --tenant=service --role=admin
3.3.8 - Step 8: Add the authentication for the Identity Service in the files glance-api.conf and glance-registry.conf
sudo vi /etc/glance/glance-api.conf
[keystone_authtoken]
auth_uri = http://IP/HostnameOfController:5000
auth_host =IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = Keystone_Glance_Password
...
[paste_deploy]
...
flavor = keystone
sudo vi /etc/glance/glance-registry.conf
[keystone_authtoken]
auth_uri = http://IP/HostnameOfController:5000
auth_host =IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = Keystone_Glance_Password
...
[paste_deploy]
...
flavor = keystone
3.3.9 - Step 9: Register the Image Service with the Identity service
sudo keystone service-create --name=glance --type=image --description="OpenStack Image Service"
sudo keystone endpoint-create \
--service-id=$(keystone service-list | awk '/ image / {print $2}') 
--publicurl=http://IP/HostnameOfController:9292 
--internalurl=http://IP/HostnameOfController:9292 
--adminurl=http://IP/HostnameOfController:9292
3.3.10 - Step 10: Restart the services
sudo service glance-registry restart
sudo service glance-api restart
3.3.11 - Verify the installation
In order to verify the installation of glance, it is necessary to download at least one virtual machine image
onto the server, using any method such as "wget", "scp" or other. This example assumes that the server
has an internet connection and downloads a CirrOS image.
3.3.11.1 - Step 1: create a temporary folder
mkdir /home/iso
3.3.11.2 - Step 2: change the directory
cd /home/iso
3.3.11.3 - Step 3: Download the image
wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
3.3.11.4 - Step 4: Source the OpenStack RC file
source /home/NameOfProject-openrc.sh
3.3.11.5 - Step 5: Add the image into glance
sudo glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 
--container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img
3.3.11.6 - Step 6: Check if the image has been successfully added to glance:
sudo glance image-list
3.4 - Compute service: Nova
3.4.1 - Service
3.4.1.1 - Step 1: Install the packages
sudo apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
3.4.1.2 - Step 2: Add the MySQL connection in the file "nova.conf" and set up RabbitMQ
sudo vi /etc/nova/nova.conf
[DEFAULT]
#Use the Identity service (keystone) for authentication
auth_strategy = keystone
#Set up the message broker
rpc_backend = rabbit
rabbit_host = IP/HostnameOfController
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
#Addresses used for the VNC console
my_ip = IP/HostnameOfController
vncserver_listen = IP/HostnameOfController
vncserver_proxyclient_address = IP/HostnameOfController
[database]
connection = mysql://dbu_nova:MySQL_Nova_Password@IP/HostnameOfController/nova
[keystone_authtoken]
auth_uri = http://IP/HostnameOfController:5000
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = Keystone_Nova_Password
3.4.1.3 - Step 3: Create the database and user
mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'dbu_nova'@'localhost' 
IDENTIFIED BY 'MySQL_Nova_Password';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'dbu_nova'@'%' 
IDENTIFIED BY 'MySQL_Nova_Password';
3.4.1.4 - Step 4: Create the tables
su -s /bin/sh -c "nova-manage db sync" nova
3.4.1.5 - Step 5: Create the user on the Identity service
sudo keystone user-create --name=nova --pass=Keystone_Nova_Password --email=infos@connetwork.com.au
sudo keystone user-role-add --user=nova --tenant=service --role=admin
3.4.1.6 - Step 6: Create the service and endpoint
sudo keystone service-create --name=nova --type=compute --description="OpenStack Compute"
sudo keystone endpoint-create 
--service-id=$(keystone service-list | awk '/ compute / {print $2}') 
--publicurl=http://IP/HostnameOfController:8774/v2/%(tenant_id)s 
--internalurl=http://IP/HostnameOfController:8774/v2/%(tenant_id)s 
--adminurl=http://IP/HostnameOfController:8774/v2/%(tenant_id)s
3.4.1.7 - Step 7: Restart all the Nova services
sudo service nova-api restart
sudo service nova-cert restart
sudo service nova-consoleauth restart
sudo service nova-scheduler restart
sudo service nova-conductor restart
sudo service nova-novncproxy restart
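To verify that the controller-side services registered correctly, list them from a shell with the RC file loaded (a sketch; nova-compute will only appear once the compute node of section 3.4.2 is configured):
source /home/NameOfProject-openrc.sh
nova service-list
nova image-list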
3.4.2 - Compute node
The compute node can be installed on the same server as the controller. However, it is recommended to
install it on a separate server.
3.4.2.1 - Step 1: Install the packages
sudo apt-get install nova-compute-kvm python-guestfs libguestfs-tools qemu-system
3.4.2.2 - Step 2: Make the kernel readable by hypervisor services such as qemu or libguestfs
The kernel is not readable by default by unprivileged users, for security reasons, but hypervisor services need
to read it in order to work properly:
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)
The previous command makes the current kernel readable, but the change is not permanent: when the kernel is
updated, the new kernel will not be readable. Therefore, you need to create a file that reapplies the
override on every kernel update:
vi /etc/kernel/postinst.d/statoverride
#!/bin/sh
version="$1"
# passing the kernel version is required
[ -z "${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}
3.4.2.3 - Step 3: Edit the nova.conf
vi /etc/nova/nova.conf
[DEFAULT]
#Use the Identity service (keystone) for authentication
auth_strategy = keystone
#Set up the message broker
rpc_backend = rabbit
rabbit_host = IP/HostnameOfController
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
#Interface for the console (these two values are the compute node's own management address)
my_ip = IP/HostnameOfComputeNode
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = IP/HostnameOfComputeNode
novncproxy_base_url = http://IP/HostnameOfController:6080/vnc_auto.html
#Location of the image service (Glance)
glance_host = IP/HostnameOfController
[database]
connection = mysql://dbu_nova:MySQL_Nova_Password@IP/HostnameOfController/nova
[keystone_authtoken]
auth_uri = http://IP/HostnameOfController:5000
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = Keystone_Nova_Password
3.4.2.4 - Step 4: Check if your system supports hardware acceleration
sudo egrep -c '(vmx|svm)' /proc/cpuinfo
If the result is greater than 0 (1 or more), you can skip the following step.
3.4.2.5 - Step 5: Only if the result of the previous command is "0", configure nova-compute to use QEMU instead of KVM
vi /etc/nova/nova-compute.conf
[libvirt]
...
virt_type = qemu
3.4.2.6 - Step 6: Restart the service
sudo service nova-compute restart
3.5 - Networking service: Neutron
3.5.1 - Controller node
3.5.1.1 - Step 1: Create the database
mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'dbu_neutron'@'localhost'
IDENTIFIED BY 'MySQL_Neutron_Password';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'dbu_neutron'@'%'
IDENTIFIED BY 'MySQL_Neutron_Password';
3.5.1.2 - Step 2: Create the user in the Identity service
sudo keystone user-create --name neutron --pass Keystone_Neutron_Password --email infos@connetwork.com.au
3.5.1.3 - Step 3: Link the user to the service tenant and admin role
sudo keystone user-role-add --user neutron --tenant service --role admin
3.5.1.4 - Step 4: Create the service for Neutron in the Identity service
sudo keystone service-create --name neutron --type network --description "OpenStack Networking"
3.5.1.5 - Step 5: Create the service endpoint in the identity service
sudo keystone endpoint-create 
--service-id $(keystone service-list | awk '/ network / {print $2}') 
--publicurl http://IP/HostnameOfController:9696 
--adminurl http://IP/HostnameOfController:9696 
--internalurl http://IP/HostnameOfController:9696
3.5.1.6 - Step 6: Install the networking components (packages)
sudo apt-get install neutron-server neutron-plugin-ml2
3.5.1.7 - Step 7: Get the service tenant identifier (SERVICE_TENANT_ID)
sudo keystone tenant-get service
Example:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Service Tenant |
| enabled | True |
| id | 032ff6f1056a4d82b51a87ff106c8185 |
| name | service |
+-------------+----------------------------------+
3.5.1.8 - Step 8: Edit neutron configuration file (neutron.conf)
sudo vi /etc/neutron/neutron.conf
[DEFAULT]
#Rabbit information
rabbit_host = IP/HostnameOfController
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
rabbit_notification_exchange = neutron
rabbit_notification_topic = notifications
#type of authentication
auth_strategy = keystone
#communication with the service nova
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://IP/HostnameOfController:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = SERVICE_TENANT_ID
nova_admin_password = Keystone_Nova_Password
nova_admin_auth_url = http://IP/HostnameOfController:35357/v2.0
#Configuration of the Modular Layer 2 (ML2)
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
#connection to the database
[database]
connection = mysql://dbu_neutron:MySQL_Neutron_Password@IP/HostnameOfController/neutron
[keystone_authtoken]
#Authentication information:
auth_uri = http://IP/HostnameOfController:5000
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = Keystone_Neutron_Password
3.5.1.9 - Step 9: Edit the nova.conf to configure compute to use Networking
sudo vi /etc/nova/nova.conf
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://IP/HostnameOfController:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = Keystone_Neutron_Password
neutron_admin_auth_url = http://IP/HostnameOfController:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
3.5.1.10 - Step 10: Restart the necessary services
sudo service nova-api restart
sudo service nova-scheduler restart
sudo service nova-conductor restart
sudo service neutron-server restart
3.5.2 - Network node
3.5.2.1 - Pre-step: Enable a few networking functions
sudo vi /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Update the changes
sudo sysctl -p
3.5.2.2 - Step 1: Install the networking components (packages)
sudo apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent openvswitch-datapath-dkms neutron-l3-agent neutron-dhcp-agent
3.5.2.3 - Step 2: Edit neutron configuration file (neutron.conf)
sudo vi /etc/neutron/neutron.conf
[DEFAULT]
#Rabbit information
rabbit_host = IP/HostnameOfController
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
rabbit_notification_exchange = neutron
rabbit_notification_topic = notifications
#type of authentication
auth_strategy = keystone
#Configuration of the Modular Layer 2 (ML2)
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[keystone_authtoken]
auth_uri = http://IP/HostnameOfController:5000
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = Keystone_Neutron_Password
3.5.2.4 - Step 3: Setup the Layer-3 (L3) Agent
vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
3.5.2.5 - Step 4: Setup the DHCP Agent
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
3.5.2.6 - Step 5: Setup the metadata Agent
vi /etc/neutron/metadata_agent.ini
[DEFAULT]
auth_url = http://IP/HostnameOfController:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = Keystone_Neutron_Password
nova_metadata_ip = IP/HostnameOfController
metadata_proxy_shared_secret = Metadata_Secret_Key
#Uncomment the next line for troubleshooting
#verbose = True
3.5.2.7 - Step 6: Setup the nova service with the metadata proxy information
Remark: this part needs to be done on the controller node, and the Metadata_Secret_Key must
be the same as the one set in the previous step in the file "metadata_agent.ini".
sudo vi /etc/nova/nova.conf
[DEFAULT]
...
#Metadata proxy information between Neutron and Nova
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = Metadata_Secret_Key
3.5.2.8 - Step 7: Setup the Modular Layer 2 (ML2) plug-in
sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
#the local IP address (or a hostname that resolves to it) of the network interface on the instance network
tunnel_type = gre
enable_tunneling = True
3.5.2.9 - Step 8: Setup the Open vSwitch (OVS) service
The OVS service provides the virtual networking framework for instances. It creates a virtual bridge
between the external network (e.g. the Internet) and the internal network (e.g. used for instances). The
external bridge (named br-ex in this tutorial) needs to be connected to a physical network interface in order
to communicate with the external network.
Restart the service:
sudo service openvswitch-switch restart
Add the integration bridge:
sudo ovs-vsctl add-br br-int
Add the external bridge:
sudo ovs-vsctl add-br br-ex
Add a physical network interface to the external bridge (Ex: eth0, eth1 …)
sudo ovs-vsctl add-port br-ex INTERFACE_NAME
3.5.2.10 - Step 9: Restart the necessary services
sudo service neutron-plugin-openvswitch-agent restart
sudo service neutron-l3-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-metadata-agent restart
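To confirm that the agents registered with the controller, list them from a machine with the admin credentials loaded; the running agents should be marked alive (":-)"):
source /home/NameOfProject-openrc.sh
neutron agent-list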
3.5.3 - Compute node
3.5.3.1 - Pre-step: Enable a few networking functions
sudo vi /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Update the changes
sudo sysctl -p
3.5.3.2 - Step 1: Install the networking components (packages)
sudo apt-get install neutron-common neutron-plugin-ml2 neutron-plugin-openvswitch-agent openvswitch-datapath-dkms
3.5.3.3 - Step 2: Edit neutron configuration file (neutron.conf)
sudo vi /etc/neutron/neutron.conf
[DEFAULT]
#Rabbit information
rabbit_host = IP/HostnameOfController
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
rabbit_notification_exchange = neutron
rabbit_notification_topic = notifications
#type of authentication
auth_strategy = keystone
#Configuration of the Modular Layer 2 (ML2)
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[keystone_authtoken]
#Authentication information:
auth_uri = http://IP/HostnameOfController:5000
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = Keystone_Neutron_Password
3.5.3.4 - Step 3: Edit the nova.conf to configure compute to use Networking
sudo vi /etc/nova/nova.conf
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://IP/HostnameOfController:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = Keystone_Neutron_Password
neutron_admin_auth_url = http://IP/HostnameOfController:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
3.5.3.5 - Step 4: Setup the Modular Layer 2 (ML2) plug-in
sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
#the local IP address (or a hostname that resolves to it) of the network interface on the instance network
tunnel_type = gre
enable_tunneling = True
3.5.3.6 - Step 5: Setup the Open vSwitch (OVS) service
sudo service openvswitch-switch restart
sudo ovs-vsctl add-br br-int
3.5.3.7 - Step 6: Restart the necessary services
sudo service nova-compute restart
sudo service neutron-plugin-openvswitch-agent restart
3.5.4 - Create an initial network
3.5.4.1 - Source OpenStack RC file
source /home/NameOfProject-openrc.sh
3.5.4.2 - Create the external network
neutron net-create NameOfExternalNetwork --shared --router:external=True
3.5.4.3 - Create a subnet for the external network
The external subnet needs to be chosen carefully and should use a range of IP addresses that does not overlap
with the range distributed on the actual external network. For instance, if the network is 10.0.0.0/24 and the DHCP
server on the external router allocates 10.0.0.2 to 10.0.0.99, the allocation pool on OpenStack could go from 10.0.0.100 to 10.0.0.150.
neutron subnet-create NameOfExternalNetwork --name NameOfExternalSubnet 
--allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END 
--disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR
For example:
neutron subnet-create ext-net --name ext-subnet 
--allocation-pool start=10.0.0.100,end=10.0.0.150 
--disable-dhcp --gateway 10.0.0.1 10.0.0.0/24
3.5.4.4 - Create the internal (tenant) network
neutron net-create NameOfInternalNetwork
3.5.4.5 - Create a subnet for the internal network
neutron subnet-create NameOfInternalNetwork --name NameOfInternalSubnet \
--gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR
For example:
neutron subnet-create demo-net --name demo-subnet \
--gateway 192.168.1.1 192.168.1.0/24
3.5.4.6 - Create a virtual router
neutron router-create MyRouter
3.5.4.7 - Attach the router to the internal network (the command takes the subnet, not the network)
neutron router-interface-add MyRouter NameOfInternalSubnet
3.5.4.8 - Attach the router to the external network by specifying it as the gateway
neutron router-gateway-set MyRouter NameOfExternalNetwork
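As a hedged verification, the router gateway port should have taken the first address of the external allocation pool (10.0.0.100 in the example above), and that address should answer pings from the external network:
neutron router-port-list MyRouter
ping 10.0.0.100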
3.6 - Dashboard: Horizon
3.6.1 - Step 1: Install the packages
sudo apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard
When the dashboard is installed using the packages from the Ubuntu repository, it comes with an Ubuntu
theme that changes the look of the dashboard. To remove it, use the following command:
sudo apt-get remove --purge openstack-dashboard-ubuntu-theme
3.6.2 - Step 2: Change the "LOCATION" value to match the one in the file /etc/memcached.conf
sudo vi /etc/openstack-dashboard/local_settings.py
CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211'
}
}
3.6.3 - Step 3: Update ALLOWED_HOSTS to include your computer (only if you wish to restrict access to the
dashboard to a specific list of computers) and set the address of the controller
sudo vi /etc/openstack-dashboard/local_settings.py
ALLOWED_HOSTS = ['localhost', 'Your-computer']
OPENSTACK_HOST = "IP/HostnameOfController"
3.6.4 - Step 4: Restart the services
sudo service apache2 restart
sudo service memcached restart
3.6.5 - Step 5: Access the dashboard with your favourite web browser
“http://IP/HostnameOfController/horizon”
Figure 3 - Dashboard login
3.7 - Block storage: Cinder
3.7.1 - On the controller
3.7.1.1 - Step 1: Install the packages
sudo apt-get install cinder-api cinder-scheduler
3.7.1.2 - Step 2: Set up the connection to the database
sudo vi /etc/cinder/cinder.conf
[database]
connection = mysql://dbu_cinder:MySQL_Cinder_Password@IP/HostnameOfController/cinder
3.7.1.3 - Step 3: Create the database and user
mysql -u root -p
mysql> CREATE DATABASE cinder;
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'dbu_cinder'@'localhost' 
IDENTIFIED BY 'MySQL_Cinder_Password';
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'dbu_cinder'@'%' 
IDENTIFIED BY 'MySQL_Cinder_Password';
3.7.1.4 - Step 4: Create the tables
su -s /bin/sh -c "cinder-manage db sync" cinder
3.7.1.5 - Step 5: Create the user on the Identity service
keystone user-create --name=cinder --pass=Keystone_Cinder_Password --email=cinder@connetwork.com.au
keystone user-role-add --user=cinder --tenant=service --role=admin
3.7.1.6 - Step 6: Add information about the identity service and the message broker
vi /etc/cinder/cinder.conf
[DEFAULT]
#Set up the message broker
rpc_backend = rabbit
rabbit_host = IP/HostnameOfController
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
#add the permission to connect to the identity service
[keystone_authtoken]
auth_uri = http://IP/HostnameOfController:5000
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = Keystone_Cinder_Password
3.7.1.7 - Step 7: Create the service and endpoint
keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
keystone endpoint-create 
--service-id=$(keystone service-list | awk '/ volume / {print $2}') 
--publicurl=http://IP/HostnameOfController:8776/v1/%(tenant_id)s 
--internalurl=http://IP/HostnameOfController:8776/v1/%(tenant_id)s 
--adminurl=http://IP/HostnameOfController:8776/v1/%(tenant_id)s
3.7.1.8 - Step 8: Create the service and endpoint for the version 2
keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
keystone endpoint-create 
--service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') 
--publicurl=http://IP/HostnameOfController:8776/v2/%(tenant_id)s 
--internalurl=http://IP/HostnameOfController:8776/v2/%(tenant_id)s 
--adminurl=http://IP/HostnameOfController:8776/v2/%(tenant_id)s
3.7.1.9 - Step 9: Restart all necessary services
service cinder-scheduler restart
service cinder-api restart
3.7.2 - On a storage node (can be done on any machine)
This part assumes that the partition type of sda3 is LVM.
3.7.2.1 - Step 1: Install the LVM package
apt-get install lvm2
3.7.2.2 - Step 2: Create a physical volume
pvcreate /dev/sda3
3.7.2.3 - Step 3: Create a volume group called "cinder-volumes"
Note: if cinder is installed on more than one host, the name of the volume group should be different on every
host
vgcreate cinder-volumes /dev/sda3
3.7.2.4 - Step 4: Change the configuration of LVM
vi /etc/lvm/lvm.conf
devices {
...
filter = [ "a/sda1/", "a/sda3/", "r/.*/"]
...
}
3.7.2.5 - Step 5: Test the configuration
pvdisplay
3.7.2.6 - Step 6: Install the packages
apt-get install cinder-volume
3.7.2.7 - Step 7: Add information about the identity service and the message broker
vi /etc/cinder/cinder.conf
[DEFAULT]
#Set up the message broker
rpc_backend = rabbit
rabbit_host = IP/HostnameOfController
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
#add the permission to connect to the identity service
enabled_backends=lvmdriver-NameOfDriver
[lvmdriver-NameOfDriver]
volume_group= NameOfVolumeGroup
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name= NameOfBackEnd
[keystone_authtoken]
auth_uri = http://IP/HostnameOfController:5000
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = Keystone_Cinder_Password
glance_host = IP/HostnameOfController
[database]
connection = mysql://dbu_cinder:MySQL_Cinder_Password@IP/HostnameOfController/cinder
3.7.2.8 - Step 8: Restart the Cinder services
service cinder-volume restart
service tgt restart
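As an end-to-end test (a sketch; run it from a machine with the admin credentials loaded), create a 1 GB volume and check that its status becomes "available":
source /home/NameOfProject-openrc.sh
cinder create --display-name demo-volume1 1
cinder list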
3.8 - Orchestration: Heat
3.8.1 - Step 1: Install the packages
apt-get install heat-api heat-api-cfn heat-engine
3.8.2 - Step 2: Set up the connection to the database
vi /etc/heat/heat.conf
[database]
connection = mysql://dbu_heat:MySQL_Heat_Password@IP/HostnameOfController/heat
3.8.3 - Step 3: Create the database and user
mysql -u root -p
mysql> CREATE DATABASE heat;
mysql> GRANT ALL PRIVILEGES ON heat.* TO 'dbu_heat'@'localhost' \
IDENTIFIED BY 'MySQL_Heat_Password';
mysql> GRANT ALL PRIVILEGES ON heat.* TO 'dbu_heat'@'%' \
IDENTIFIED BY 'MySQL_Heat_Password';
3.8.4 - Step 4: Create the tables
su -s /bin/sh -c "heat-manage db_sync" heat
3.8.5 - Step 5: Change the configuration file
vi /etc/heat/heat.conf
[DEFAULT]
#logging
verbose = True
log_dir = /var/log/heat
#Rabbit information
rpc_backend = rabbit
rabbit_host = IP/HostnameOfController
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
[keystone_authtoken]
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
auth_uri = http://IP/HostnameOfController:5000/v2.0
admin_tenant_name = service
admin_user = heat
admin_password = Keystone_Heat_Password
[ec2authtoken]
auth_uri = http://IP/HostnameOfController:5000/v2.0
3.8.6 - Step 6: Create the user on the Identity service
keystone user-create --name=heat --pass=Keystone_Heat_Password --email=heat@connetwork.com.au
keystone user-role-add --user=heat --tenant=service --role=admin
3.8.7 - Step 7: Create the service and endpoint
keystone service-create --name=heat --type=orchestration --description="Orchestration"
keystone endpoint-create 
--service-id=$(keystone service-list | awk '/ orchestration / {print $2}') 
--publicurl=http://IP/HostnameOfController:8004/v1/%(tenant_id)s \
--internalurl=http://IP/HostnameOfController:8004/v1/%(tenant_id)s \
--adminurl=http://IP/HostnameOfController:8004/v1/%(tenant_id)s
keystone service-create --name=heat-cfn --type=cloudformation 
--description="Orchestration CloudFormation"
keystone endpoint-create 
--service-id=$(keystone service-list | awk '/ cloudformation / {print $2}') \
--publicurl=http://IP/HostnameOfController:8000/v1 \
--internalurl=http://IP/HostnameOfController:8000/v1 \
--adminurl=http://IP/HostnameOfController:8000/v1
3.8.8 - Step 8: Create the heat_stack_user role.
This role is used as the default role for users created by the Orchestration module.
keystone role-create --name heat_stack_user
3.8.9 - Step 9: Setup the URL of the metadata server
vi /etc/heat/heat.conf
[DEFAULT]
...
# URL of the Heat metadata server. (string value)
heat_metadata_server_url = http://IP/HostnameOfController:8000
# URL of the Heat waitcondition server. (string value)
heat_waitcondition_server_url = http://IP/HostnameOfController:8000/v1/waitcondition
3.8.10 - Step 10: Restart all necessary services
service heat-api restart
service heat-api-cfn restart
service heat-engine restart
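A minimal check that the Orchestration API answers (run with the admin credentials loaded; an empty list simply means that no stack has been created yet):
source /home/NameOfProject-openrc.sh
heat stack-list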
4 - Troubleshooting
When a problem occurs, it is important to remember that all errors are recorded in logs, located under
"/var/log" in a folder specific to each service. For instance, the logs for Nova are located in
"/var/log/nova", the logs for Neutron in "/var/log/neutron" and so on. Most of the time,
the logs are quite explicit and the problem can be fixed quickly.
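For example, following Nova's API log while reproducing a problem often shows the cause directly (the folder contains one log per Nova service; nova-api.log is used here as an illustration):
tail -f /var/log/nova/nova-api.log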
Error: Host not found
Possible solution: The host of the requested service cannot be resolved; check the DNS records or
the "/etc/hosts" file on every node.

Error: AMQP server on controller:5672 is unreachable: Socket closed
Possible solution: The RabbitMQ information is not correct; the following values in the
configuration file need to be checked:
- Username
- Password
- Virtual host
- Can the hostname of the controller be resolved?

Error: Cannot ping or ssh the instance
Possible solution: Check that the security group has an ICMP rule (and a TCP rule for port 22).

Error: A webserver is installed in the instance, which can be pinged and reached over ssh; an nmap
of the instance shows port 80 as open, but accessing the website takes a long time and nothing
happens.
Possible solution: Check the MTU of the virtual machine and make sure to follow the section
"6.4 - Force the MTU of the virtual machine".
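For the AMQP error above, the RabbitMQ side can be inspected directly on the controller; these are standard rabbitmqctl calls, with NameVHost standing for the virtual host chosen during the installation:
sudo rabbitmqctl list_users
sudo rabbitmqctl list_vhosts
sudo rabbitmqctl list_permissions -p NameVHost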
5 - Useful command
5.1 - General command
Command                   Description
openssl rand -hex 10      Generate a random password
ping                      Test the connectivity
  -s NUMBER               Specify the size of the packet (useful to test the MTU)
tail -f                   Show the end of a file and follow any upcoming update
rabbitmqctl list_users    RabbitMQ: list the users
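As an example of the -s option, the path MTU towards the network node can be probed; the -M do flag of the Linux ping utility forbids fragmentation, so oversized packets fail explicitly (the address is the network node of the topology used in this report):
#1372 bytes of payload + 28 bytes of ICMP/IP headers = a 1400-byte packet
ping -s 1372 -M do 192.168.1.12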
5.2 - Keystone
Argument        Option            Description
user-create     --name            Name of the user
                --pass            Password
                --email           Email of the user
endpoint-list                     List of endpoints
endpoint-get    NameOfEndpoint    Information about one endpoint
role-list                         List of roles
role-get        NameOfRole        Information about one role
service-list                      List of services
service-get     NameOfService     Information about one service
tenant-list                       List of tenants
tenant-get      NameOfTenant      Information about one tenant
user-list                         List of users
user-get        NameOfUser        Information about one user
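These commands combine naturally with shell tools, and the guide already uses this pattern when creating endpoints. For instance, to fetch the identifier of the service tenant:
keystone tenant-list | awk '/ service / {print $2}'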
5.3 - Glance
The “glance” command is used to manage virtual machine images from the command line. A list of
possible options and arguments is given below:
Argument        Option                  Description
image-create    --name                  Name of the image for OpenStack
                --disk-format           Format of the image file:
                                        qcow2, raw, vhd, vmdk, vdi, iso, aki, ari, and ami
                --container-format      Format of the container (1):
                                        bare, ovf, aki and ami
                --is-public             Make the image available to all tenants (True/False)
                < LocationOfTheImage    Read the image from the given file
Example (within 1 line):
sudo glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 --container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img
1 Specify bare to indicate that the image file is not in a file format that contains metadata about the virtual machine. Although this
field is currently required, it is not actually used by any of the OpenStack services and has no effect on system behaviour. Because
the value is not used anywhere, it is safe to always specify bare as the container format [8].
6 - Tutorial
6.1 - Launch an instance
After starting an instance and providing a public IP address, the instance will still not be
accessible from the outside network. The security group needs to be set up and the ports need to
be opened according to the needs.
6.1.1 - Command line
6.1.1.1 - Source OpenStack RC file
source /home/NameOfProject-openrc.sh
6.1.1.2 - Generate the key
ssh-keygen
6.1.1.3 - Add the public key to Nova
nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key
6.1.1.4 - Verify if the key has been added
nova keypair-list
6.1.1.5 - Check the list of flavours
nova flavor-list
6.1.1.6 - Check the list of images
nova image-list
6.1.1.7 - Check the list of networks
neutron net-list
6.1.1.8 - Check the list of security groups
nova secgroup-list
6.1.1.9 - Start the instance according to the previous information gathered
nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=IDofNetwork \
--security-group default --key-name demo-key NameOfInstance
6.1.1.10 - Verify if the instance is started
nova list
6.1.1.11 - Get the URL of the VNC console
nova get-vnc-console NameOfInstance novnc
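The command prints a small table containing the console URL, which can then be opened in a browser; the output below is indicative only and the token is a placeholder:
+-------+---------------------------------------------------------------------------+
| Type  | Url                                                                       |
+-------+---------------------------------------------------------------------------+
| novnc | http://IP/HostnameOfController:6080/vnc_auto.html?token=PlaceholderToken |
+-------+---------------------------------------------------------------------------+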
6.1.2 - Dashboard
Click on Project  Compute  Instances
Click on “Launch Instance”
On the Details tab:
- Set the availability zone (if more than one)
- Write the name of the instance
- Select the flavor (the resources that are given to the instance)
- Set the number of instances
- Select “Boot from image”
- Select your image
On the “Access and security” tab:
- If the image was made specifically for the cloud, select a key pair; if the image has a default
username and password, you do not need to select any key
- If the key does not exist, click on the “+”
- Select the security group
On the “Networking” tab:
- Select the internal network
Click on the “Launch” button.
To see the virtual machine, click on “More” and then “Console”.
The console is not managed very well when not in full screen, therefore click on “Click here to
show only console”.
6.2 - Provide a public address to instances
6.2.1 - Command line
6.2.1.1 - Create a floating IP
neutron floatingip-create NameOfExternalNetwork
6.2.1.2 - Associate the floating IP to the instance
nova floating-ip-associate MyInstance IPAddress
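Putting the two commands together, a minimal end-to-end sketch could look as follows; the network name NameOfExternalNetwork, the instance name MyInstance, the address 10.0.0.101 and the cirros login are placeholders that depend on the actual deployment:
neutron floatingip-create NameOfExternalNetwork   #returns a floating IP, e.g. 10.0.0.101
nova floating-ip-associate MyInstance 10.0.0.101
ssh cirros@10.0.0.101                             #test the new address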
6.2.2 - Dashboard
Assign an external IP address:
- Click on the arrow next to “More”
- Click on “Associate Floating IP”
If no IP is available, click on the “+”.
Select the external network and click on “Allocate IP”.
Select the floating IP that was allocated in the previous step and the port to which the IP needs
to be associated, then click on “Associate”.
6.3 - Create a new security group (ACL)
6.3.1 - Command line
6.3.1.1 - Create a security group
nova secgroup-create NameOfSecurityGroup "Description of the security group"
6.3.1.2 - Add a rule to allow ping
nova secgroup-add-rule NameOfSecurityGroup icmp -1 -1 0.0.0.0/0
6.3.1.3 - Add a rule to allow ssh
nova secgroup-add-rule NameOfSecurityGroup tcp 22 22 0.0.0.0/0
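Following the same pattern, a rule for the webserver scenario from the troubleshooting section would open TCP port 80; this line is only an illustration of the syntax:
nova secgroup-add-rule NameOfSecurityGroup tcp 80 80 0.0.0.0/0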
6.3.2 - Dashboard
6.3.2.1 - Go to the tab “Access & Security”
Click on Project  Compute  Access & Security
6.3.2.2 - Click on “Create Security Group”
6.3.2.3 - Specify the name (no space) and the description and click “Create Security Group”
6.3.2.4 - Click on “Manage Rules”
6.3.2.5 - Click on “Add Rule”
6.3.2.6 - Select the right options depending on the service running on the instance
Rule: a list of well-known services such as SSH, HTTP and so on, as well as custom rules where the
user can specify the port number
Direction:
- Ingress: from outside to the VM
- Egress: from the VM to outside
Remote: the type of remote source (CIDR or security group)
CIDR: the address range the rule applies to
6.4 - Force the MTU of the virtual machine
6.4.1 - Setup the DHCP agent:
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
...
#add the following line
dnsmasq_config_file=/etc/neutron/dnsmasq/dnsmasq-neutron.conf
6.4.2 - Create the dnsmasq-neutron.conf
vi /etc/neutron/dnsmasq/dnsmasq-neutron.conf
dhcp-option-force=26,1400
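DHCP option 26 is the interface MTU advertised to the instances. After changing these files the DHCP agent has to be restarted on the network node; the verification step assumes an image, such as CirrOS, that provides the ip utility:
service neutron-dhcp-agent restart
#From inside a newly booted instance, check the advertised MTU:
ip link show eth0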
7 - Table of figures
Figure 1 - OpenStack overview [1]
Figure 2 - Network topology
Figure 3 - Dashboard login
8 - Table of tables
Table 1 - Hardware requirement
Table 2 - List of passwords
9 - References
1. OpenStack. [Online]. Available from: https://www.openstack.org/.
2. OpenStack Foundation. OpenStack documentation - Chapter 5. Scaling. [Online]. Available from:
http://docs.openstack.org/openstack-ops/content/scaling.html.
3. rabbitmqctl(1) manual page. [Online]. Available from:
http://www.rabbitmq.com/man/rabbitmqctl.1.man.html.
4. Verify the Image Service installation. [Online]. Available from:
http://docs.openstack.org/icehouse/install-guide/install/apt/content/glance-verify.html.
OpenStack – Final report
INN694 – Project Page |43/43|

More Related Content

What's hot

OpenStack Benelux - Cloud, OpenStack and a Market In Motion - Sept 2015final
OpenStack Benelux -  Cloud, OpenStack and a Market In Motion - Sept 2015final OpenStack Benelux -  Cloud, OpenStack and a Market In Motion - Sept 2015final
OpenStack Benelux - Cloud, OpenStack and a Market In Motion - Sept 2015final John Zannos
 
How logging makes a private cloud a better cloud - OpenStack最新情報セミナー(2016年12月)
How logging makes a private cloud a better cloud - OpenStack最新情報セミナー(2016年12月)How logging makes a private cloud a better cloud - OpenStack最新情報セミナー(2016年12月)
How logging makes a private cloud a better cloud - OpenStack最新情報セミナー(2016年12月)VirtualTech Japan Inc.
 
Securing OpenStack and Beyond with Ansible
Securing OpenStack and Beyond with AnsibleSecuring OpenStack and Beyond with Ansible
Securing OpenStack and Beyond with AnsibleMajor Hayden
 
OpenStack - An Overview
OpenStack - An OverviewOpenStack - An Overview
OpenStack - An Overviewgraziol
 
GlusterFS Update and OpenStack Integration
GlusterFS Update and OpenStack IntegrationGlusterFS Update and OpenStack Integration
GlusterFS Update and OpenStack IntegrationEtsuji Nakai
 
Cloud Foundry and OpenStack - A Marriage Made in Heaven! (Cloud Foundry Summi...
Cloud Foundry and OpenStack - A Marriage Made in Heaven! (Cloud Foundry Summi...Cloud Foundry and OpenStack - A Marriage Made in Heaven! (Cloud Foundry Summi...
Cloud Foundry and OpenStack - A Marriage Made in Heaven! (Cloud Foundry Summi...VMware Tanzu
 
OpenStack at NTT Resonant: Lessons Learned in Web Infrastructure
OpenStack at NTT Resonant: Lessons Learned in Web InfrastructureOpenStack at NTT Resonant: Lessons Learned in Web Infrastructure
OpenStack at NTT Resonant: Lessons Learned in Web InfrastructureTomoya Hashimoto
 
Ceph Day LA: Building your own disaster? The safe way to make Ceph storage re...
Ceph Day LA: Building your own disaster? The safe way to make Ceph storage re...Ceph Day LA: Building your own disaster? The safe way to make Ceph storage re...
Ceph Day LA: Building your own disaster? The safe way to make Ceph storage re...Ceph Community
 
TripleO Lightning Talk
TripleO Lightning TalkTripleO Lightning Talk
TripleO Lightning Talkcmsj1
 
Mastering OpenStack - Episode 04 - Provisioning and Deployment
Mastering OpenStack - Episode 04 - Provisioning and DeploymentMastering OpenStack - Episode 04 - Provisioning and Deployment
Mastering OpenStack - Episode 04 - Provisioning and DeploymentRoozbeh Shafiee
 
NTTドコモ様 導入事例 OpenStack Summit 2016 Barcelona 講演「Expanding and Deepening NTT D...
NTTドコモ様 導入事例 OpenStack Summit 2016 Barcelona 講演「Expanding and Deepening NTT D...NTTドコモ様 導入事例 OpenStack Summit 2016 Barcelona 講演「Expanding and Deepening NTT D...
NTTドコモ様 導入事例 OpenStack Summit 2016 Barcelona 講演「Expanding and Deepening NTT D...VirtualTech Japan Inc.
 
OpenStack DevStack Configuration localrc local.conf Tutorial
OpenStack DevStack Configuration localrc local.conf TutorialOpenStack DevStack Configuration localrc local.conf Tutorial
OpenStack DevStack Configuration localrc local.conf TutorialSaju Madhavan
 
CloudStack and cloud-init
CloudStack and cloud-initCloudStack and cloud-init
CloudStack and cloud-initMarcusS13
 
Ceph Day Chicago - Brining Ceph Storage to the Enterprise
Ceph Day Chicago - Brining Ceph Storage to the Enterprise Ceph Day Chicago - Brining Ceph Storage to the Enterprise
Ceph Day Chicago - Brining Ceph Storage to the Enterprise Ceph Community
 
OpenStack Summit Tokyo - Know-how of Challlenging Deploy/Operation NTT DOCOMO...
OpenStack Summit Tokyo - Know-how of Challlenging Deploy/Operation NTT DOCOMO...OpenStack Summit Tokyo - Know-how of Challlenging Deploy/Operation NTT DOCOMO...
OpenStack Summit Tokyo - Know-how of Challlenging Deploy/Operation NTT DOCOMO...Masaaki Nakagawa
 
Top Ten Security Considerations when Setting up your OpenNebula Cloud
Top Ten Security Considerations when Setting up your OpenNebula CloudTop Ten Security Considerations when Setting up your OpenNebula Cloud
Top Ten Security Considerations when Setting up your OpenNebula CloudNETWAYS
 
[OpenStack Day in Korea 2015] Track 3-6 - Archiectural Overview of the Open S...
[OpenStack Day in Korea 2015] Track 3-6 - Archiectural Overview of the Open S...[OpenStack Day in Korea 2015] Track 3-6 - Archiectural Overview of the Open S...
[OpenStack Day in Korea 2015] Track 3-6 - Archiectural Overview of the Open S...OpenStack Korea Community
 
20121204 open technet_openstack_이틀만하면나처럼할수있다
20121204 open technet_openstack_이틀만하면나처럼할수있다20121204 open technet_openstack_이틀만하면나처럼할수있다
20121204 open technet_openstack_이틀만하면나처럼할수있다Nalee Jang
 

What's hot (20)

OpenStack Benelux - Cloud, OpenStack and a Market In Motion - Sept 2015final
OpenStack Benelux -  Cloud, OpenStack and a Market In Motion - Sept 2015final OpenStack Benelux -  Cloud, OpenStack and a Market In Motion - Sept 2015final
OpenStack Benelux - Cloud, OpenStack and a Market In Motion - Sept 2015final
 
How logging makes a private cloud a better cloud - OpenStack最新情報セミナー(2016年12月)
How logging makes a private cloud a better cloud - OpenStack最新情報セミナー(2016年12月)How logging makes a private cloud a better cloud - OpenStack最新情報セミナー(2016年12月)
How logging makes a private cloud a better cloud - OpenStack最新情報セミナー(2016年12月)
 
Securing OpenStack and Beyond with Ansible
Securing OpenStack and Beyond with AnsibleSecuring OpenStack and Beyond with Ansible
Securing OpenStack and Beyond with Ansible
 
OpenStack - An Overview
OpenStack - An OverviewOpenStack - An Overview
OpenStack - An Overview
 
GlusterFS Update and OpenStack Integration
GlusterFS Update and OpenStack IntegrationGlusterFS Update and OpenStack Integration
GlusterFS Update and OpenStack Integration
 
Cloud Foundry and OpenStack - A Marriage Made in Heaven! (Cloud Foundry Summi...
Cloud Foundry and OpenStack - A Marriage Made in Heaven! (Cloud Foundry Summi...Cloud Foundry and OpenStack - A Marriage Made in Heaven! (Cloud Foundry Summi...
Cloud Foundry and OpenStack - A Marriage Made in Heaven! (Cloud Foundry Summi...
 
OpenStack at NTT Resonant: Lessons Learned in Web Infrastructure
OpenStack at NTT Resonant: Lessons Learned in Web InfrastructureOpenStack at NTT Resonant: Lessons Learned in Web Infrastructure
OpenStack at NTT Resonant: Lessons Learned in Web Infrastructure
 
Ceph Day LA: Building your own disaster? The safe way to make Ceph storage re...
Ceph Day LA: Building your own disaster? The safe way to make Ceph storage re...Ceph Day LA: Building your own disaster? The safe way to make Ceph storage re...
Ceph Day LA: Building your own disaster? The safe way to make Ceph storage re...
 
TripleO Lightning Talk
TripleO Lightning TalkTripleO Lightning Talk
TripleO Lightning Talk
 
Mastering OpenStack - Episode 04 - Provisioning and Deployment
Mastering OpenStack - Episode 04 - Provisioning and DeploymentMastering OpenStack - Episode 04 - Provisioning and Deployment
Mastering OpenStack - Episode 04 - Provisioning and Deployment
 
NTTドコモ様 導入事例 OpenStack Summit 2016 Barcelona 講演「Expanding and Deepening NTT D...
NTTドコモ様 導入事例 OpenStack Summit 2016 Barcelona 講演「Expanding and Deepening NTT D...NTTドコモ様 導入事例 OpenStack Summit 2016 Barcelona 講演「Expanding and Deepening NTT D...
NTTドコモ様 導入事例 OpenStack Summit 2016 Barcelona 講演「Expanding and Deepening NTT D...
 
OpenStack DevStack Configuration localrc local.conf Tutorial
OpenStack DevStack Configuration localrc local.conf TutorialOpenStack DevStack Configuration localrc local.conf Tutorial
OpenStack DevStack Configuration localrc local.conf Tutorial
 
CloudStack and cloud-init
CloudStack and cloud-initCloudStack and cloud-init
CloudStack and cloud-init
 
Openstack deployment-with ubuntu
Openstack deployment-with ubuntuOpenstack deployment-with ubuntu
Openstack deployment-with ubuntu
 
Ceph Day Chicago - Brining Ceph Storage to the Enterprise
Ceph Day Chicago - Brining Ceph Storage to the Enterprise Ceph Day Chicago - Brining Ceph Storage to the Enterprise
Ceph Day Chicago - Brining Ceph Storage to the Enterprise
 
OpenStack Summit Tokyo - Know-how of Challlenging Deploy/Operation NTT DOCOMO...
OpenStack Summit Tokyo - Know-how of Challlenging Deploy/Operation NTT DOCOMO...OpenStack Summit Tokyo - Know-how of Challlenging Deploy/Operation NTT DOCOMO...
OpenStack Summit Tokyo - Know-how of Challlenging Deploy/Operation NTT DOCOMO...
 
Internship presentation
Internship presentationInternship presentation
Internship presentation
 
Top Ten Security Considerations when Setting up your OpenNebula Cloud
Top Ten Security Considerations when Setting up your OpenNebula CloudTop Ten Security Considerations when Setting up your OpenNebula Cloud
Top Ten Security Considerations when Setting up your OpenNebula Cloud
 
[OpenStack Day in Korea 2015] Track 3-6 - Archiectural Overview of the Open S...
[OpenStack Day in Korea 2015] Track 3-6 - Archiectural Overview of the Open S...[OpenStack Day in Korea 2015] Track 3-6 - Archiectural Overview of the Open S...
[OpenStack Day in Korea 2015] Track 3-6 - Archiectural Overview of the Open S...
 
20121204 open technet_openstack_이틀만하면나처럼할수있다
20121204 open technet_openstack_이틀만하면나처럼할수있다20121204 open technet_openstack_이틀만하면나처럼할수있다
20121204 open technet_openstack_이틀만하면나처럼할수있다
 

Similar to INN694-2014-OpenStack installation process V5

Openstack starter-guide-diablo
Openstack starter-guide-diabloOpenstack starter-guide-diablo
Openstack starter-guide-diablobabycat_feifei
 
Openstack starter-guide-diablo
Openstack starter-guide-diabloOpenstack starter-guide-diablo
Openstack starter-guide-diablo锐 张
 
Openstack_administration
Openstack_administrationOpenstack_administration
Openstack_administrationAshish Sharma
 
Cloud computing lab open stack
Cloud computing lab open stackCloud computing lab open stack
Cloud computing lab open stackarunuiet
 
Build a Basic Cloud Using RDO-manager
Build a Basic Cloud Using RDO-managerBuild a Basic Cloud Using RDO-manager
Build a Basic Cloud Using RDO-managerK Rain Leander
 
Introduction to openstack
Introduction to openstackIntroduction to openstack
Introduction to openstackYaniv Zadka
 
Introduction to Orchestration and DevOps with OpenStack
Introduction to Orchestration and DevOps with OpenStackIntroduction to Orchestration and DevOps with OpenStack
Introduction to Orchestration and DevOps with OpenStackAbderrahmane TEKFI
 
Ceph Object Storage Reference Architecture Performance and Sizing Guide
Ceph Object Storage Reference Architecture Performance and Sizing GuideCeph Object Storage Reference Architecture Performance and Sizing Guide
Ceph Object Storage Reference Architecture Performance and Sizing GuideKaran Singh
 
OpenStack on the Fabric - OpenStack Korea January Seminar 2014
OpenStack on the Fabric - OpenStack Korea January Seminar 2014OpenStack on the Fabric - OpenStack Korea January Seminar 2014
OpenStack on the Fabric - OpenStack Korea January Seminar 2014Jun Lee
 
How to Become Cloud Backup Provider
How to Become Cloud Backup ProviderHow to Become Cloud Backup Provider
How to Become Cloud Backup ProviderCloudian
 
How to become cloud backup provider
How to become cloud backup providerHow to become cloud backup provider
How to become cloud backup providerCLOUDIAN KK
 
Openstack install-guide-apt-kilo
Openstack install-guide-apt-kiloOpenstack install-guide-apt-kilo
Openstack install-guide-apt-kiloduchant
 
Mastering OpenStack - Episode 11 - Scaling Out
Mastering OpenStack - Episode 11 - Scaling OutMastering OpenStack - Episode 11 - Scaling Out
Mastering OpenStack - Episode 11 - Scaling OutRoozbeh Shafiee
 
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...OpenStack
 
Cloud Foundry and OpenStack – Marriage Made in Heaven !
Cloud Foundry and OpenStack – Marriage Made in Heaven !Cloud Foundry and OpenStack – Marriage Made in Heaven !
Cloud Foundry and OpenStack – Marriage Made in Heaven ! Animesh Singh
 
Webinar "Introduction to OpenStack"
Webinar "Introduction to OpenStack"Webinar "Introduction to OpenStack"
Webinar "Introduction to OpenStack"CREATE-NET
 
Survey of open source cloud architectures
Survey of open source cloud architecturesSurvey of open source cloud architectures
Survey of open source cloud architecturesabhinav vedanbhatla
 
Open stack presentation
Open stack presentationOpen stack presentation
Open stack presentationFrikha Nour
 

Similar to INN694-2014-OpenStack installation process V5 (20)

tr-4537
tr-4537tr-4537
tr-4537
 
Openstack starter-guide-diablo
Openstack starter-guide-diabloOpenstack starter-guide-diablo
Openstack starter-guide-diablo
 
Openstack starter-guide-diablo
Openstack starter-guide-diabloOpenstack starter-guide-diablo
Openstack starter-guide-diablo
 
Lenovo midokura
Lenovo midokuraLenovo midokura
Lenovo midokura
 
Openstack_administration
Openstack_administrationOpenstack_administration
Openstack_administration
 
Cloud computing lab open stack
Cloud computing lab open stackCloud computing lab open stack
Cloud computing lab open stack
 
Build a Basic Cloud Using RDO-manager
Build a Basic Cloud Using RDO-managerBuild a Basic Cloud Using RDO-manager
Build a Basic Cloud Using RDO-manager
 
Introduction to openstack
Introduction to openstackIntroduction to openstack
Introduction to openstack
 
Introduction to Orchestration and DevOps with OpenStack
Introduction to Orchestration and DevOps with OpenStackIntroduction to Orchestration and DevOps with OpenStack
Introduction to Orchestration and DevOps with OpenStack
 
Ceph Object Storage Reference Architecture Performance and Sizing Guide
Ceph Object Storage Reference Architecture Performance and Sizing GuideCeph Object Storage Reference Architecture Performance and Sizing Guide
Ceph Object Storage Reference Architecture Performance and Sizing Guide
 
OpenStack on the Fabric - OpenStack Korea January Seminar 2014
OpenStack on the Fabric - OpenStack Korea January Seminar 2014OpenStack on the Fabric - OpenStack Korea January Seminar 2014
OpenStack on the Fabric - OpenStack Korea January Seminar 2014
 
How to Become Cloud Backup Provider
How to Become Cloud Backup ProviderHow to Become Cloud Backup Provider
How to Become Cloud Backup Provider
 
How to become cloud backup provider
How to become cloud backup providerHow to become cloud backup provider
How to become cloud backup provider
 
Openstack install-guide-apt-kilo
Openstack install-guide-apt-kiloOpenstack install-guide-apt-kilo
Openstack install-guide-apt-kilo
 
Mastering OpenStack - Episode 11 - Scaling Out
Mastering OpenStack - Episode 11 - Scaling OutMastering OpenStack - Episode 11 - Scaling Out
Mastering OpenStack - Episode 11 - Scaling Out
 
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
Building a GPU-enabled OpenStack Cloud for HPC - Blair Bethwaite, Monash Univ...
 
Cloud Foundry and OpenStack – Marriage Made in Heaven !
Cloud Foundry and OpenStack – Marriage Made in Heaven !Cloud Foundry and OpenStack – Marriage Made in Heaven !
Cloud Foundry and OpenStack – Marriage Made in Heaven !
 
Webinar "Introduction to OpenStack"
Webinar "Introduction to OpenStack"Webinar "Introduction to OpenStack"
Webinar "Introduction to OpenStack"
 
Survey of open source cloud architectures
Survey of open source cloud architecturesSurvey of open source cloud architectures
Survey of open source cloud architectures
 
Open stack presentation
Open stack presentationOpen stack presentation
Open stack presentation
 

INN694-2014-OpenStack installation process V5

  • 1. INN694 – Project OpenStack Semester 2, 2014 STUDENT NAME STUDENT ID Fabien Chastel n8745064 SUPERVISOR Dr Vicky Liu
  • 2. OpenStack – Final report INN694 – Project Page |1/43| Executive Summary The purpose of this documentation is to provide a step by step installation procedure of OpenStack. This include the configuration of the primary environment such as the network time synchronisation, the database that will store all data that OpenStack services need, the OpenStack package and the messaging service use by OpenStack services in order to communicate. Then it will explain how to install the core components that are needed to run a basic instancewhichare the Identityservice, theImage service, thecompute service, a compute node and the network services.
  • 3. OpenStack – Final report INN694 – Project Page |2/43| TABLE OF CONTENTS EXECUTIVE SUMMARY .................................................................................................................................................................1 1 - INTRODUCTION .........................................................................................................................................................................3 2 - OPENSTACK ENVIRONMENT .............................................................................................................................................3 2.1 - HARDWARE REQUIREMENT................................................................................................................................................................... 4 2.2 - SOFTWARE REQUIREMENT.................................................................................................................................................................... 4 3 - INSTALLATION OF OPENSTACK........................................................................................................................................5 3.1 - PRIMARY ENVIRONMENT CONFIGURATION........................................................................................................................................ 6 3.2 - IDENTITY SERVICE................................................................................................................................................................................ 11 3.3 - IMAGE SERVICE: GLANCE.................................................................................................................................................................... 14 3.4 - COMPUTE SERVICE: NOVA.................................................................................................................................................................. 16 3.5 - NETWORKING SERVICE: NEUTRON................................................................................................................................................... 19 3.6 - DASHBOARD: HORIZON...................................................................................................................................................................... 26 3.7 - BLOCK STORAGE: CINDER................................................................................................................................................................... 28 4 - TROUBLESHOOTING.............................................................................................................................................................33 5 - USEFUL COMMAND ................................................................................................................................................................34 5.1 - GENERAL COMMAND........................................................................................................................................................................... 34 5.2 - GLANCE ................................................................................................................................................................................................. 
34 6 - TUTORIAL ....................................................................................................................................................................................35 6.1 - LUNCH AN INSTANCE: .......................................................................................................................................................................... 35 6.2 - PROVIDE PUBLIC ADDRESS TO INSTANCES........................................................................................................................................ 38 6.3 - CREATE A NEW SECURITY GROUP (ACL)........................................................................................................................................... 39 6.4 - FORCE THE MTU.................................................................................................................................................................................. 41 6.5 - CREATE A VIRTUAL MACHINE USING “VIRTINST”...............................................................ERROR!BOOKMARKNOTDEFINED. 7 - TABLE OF FIGURES.................................................................................................................................................................41 8 - TABLE OF TABLES ...................................................................................................................................................................41 9 - REFERENCES .............................................................................................................................................................................42
  • 4. OpenStack – Final report INN694 – Project Page |3/43| 1 - Introduction OpenStack is a worldwide association of developers and cloud computing technologists, managed by the OpenStack Foundation, that produce the omnipresent open source computing platform for public and private clouds [1]. Cloud computing is about sharing resource such as RAM, CPU and other among several machine. For instance, if you have two computers, one with 2 core CPU, 4GB of RAM and 100GB of storage and this other one has 4 core CPU, 16GB of RAM and 500GB of storage, it will summarised the resources and the use will perceive it as one server of 6 core CPU, 20GB of RAM and 600GB of storage (in theory). In this case, the OpenStack was installed on three computer provided by QUT, the specification of those computer will be listed later on the report. OpenStack is an open-source software cloud computing platform mainly focused on IaaS (Infrastructure as a Service). It can control a big pools of compute, storage and networking resources of an entire datacentre using a single web-based dashboard. Figure 1 - OpenStack overview [1] 2 - OpenStack environment The environment of OpenStack is highly scalable and depend on the needs of each companies. The OpenStack scalability will probably always differ from one company to another according to the need of this company and “no one Solution meet everyone’s scalability goals [2]. For instance, some companies will need to have a plethora of big instancesthatneed a lot of VPCU and RAM but lessstorage whereasother companies will need only small instances using few VCPU and RAM but need a huge amount of storage. OpenStack has been designed to be horizontally scalable in order to suit the cloud paradigm [2]. It means that after the initial installation of OpenStack, it is possible to add more power of or storable simply by adding another server on the cloud OpenStack can be install in a virtual machine using software like VMware or VirtualBox for experiment purposes in order to run few smallinstances asitcan be installedina multinationalenterprisecanrun thousands of instances, small or big, such as Amazon with it Amazon Cloud Services.
  • 5. OpenStack – Final report INN694 – Project Page |4/43| 2.1 - Hardware requirement A basic environment does not need a huge amount of resource to be functional. However, there is a minimum requirement in order to support several minimal CirrOS instances. This minimum are as bellow: Node Processor Memory Storage Controller 1 2 BG 5 GB Network 1 512 MB 5 GB Compute 1 2 GB 10 GB Table 1 - Hardware requirement 2.2 - Software requirement OpenStack need to be install on the top of a Linux distribution, the list of compatible distribution is as follow:  Debian  openSUSE and SUSE Linux Enterprise Server  Red Hat Enterprise  Centos  Fedora  Ubuntu Before installing OpenStack, it is necessary to have a good base in order to be able to install OpenStack services without or with only few problems. Firstly, it is strongly recommended to have only a minimal installation of Linux distribution on the purpose of allowing more resource for OpenStack and reduce confusion and it is highly recommended to have a 64-bit version of Linux for a number reason such as the limit of RAM (3-4GB) and the increase of the capability of the processor. It will also allow to create a 64-bit instances as well as 32-bit. Secondly, the network topology should reflect the needs of the company and the IP addressing should be chosen quit carefully. All automatic IP assignments should be disable and be manually configure on each nodes. In addition to the addressing, it is better to have a DNS server with the record of all nodes but it is not compulsory as it can be done using the “hosts” files located on the folder “/etc”, however it become complicate to manage when the network grow up. Then, the time should be synchronised amongst all nodes from the controller using application like NTP. The following step is to install a database as the majority of OpenStack services need a database in order to store information. A database must be installed, preferably on the controller, as well as the Python library related to the database chosen. The Python library also need to be installed on all additional nodes that need to access this database using the API from OpenStack. The recommended database software and python library for OpenStack is MySQL and MySQL Python library. Then the OpenStack package need to be installed on each servers/nodes. The OpenStack packagecan be install by adding a specific repository and use the normal install command like “apt-get install” or “yum install”. Some recent Linux distribution such as Ubuntu 14.04 include those packages on their repository. The final step before installing the main services is to install a message broker. Indeed, to coordinate operation and status information among services, OpenStack use a message broker. Several message brokers are compatible with OpenStack, yet the most commonly used is RabbitMQ. Same as the database, it is preferable to install the message broker on the main controller.
  • 6. OpenStack – Final report INN694 – Project Page |5/43| 3 - Installation of OpenStack The first stage of the installation of OpenStack is to install the operating system that will store the cloud. This documentation assume that a basic installation of Linux was done using Ubuntu 14.04 where system was updated (apt-get update) and upgraded (apt-get upgrade) and a DNS was implemented to resolve all the IPs. In addition, it is important to testthe configuration by following the verificationon eachsection or by checking the respective log after restarting a service and if a problem occur, it is recommended to fix it before continue. After follow the Section from 10.1 to 10.5, all the core components necessary to launch a basic instance will be install. The rest will be optional. All the command in red colour need to be change according to the actual network. For instance, most passwords were generated using a command (“openssl rand -hex 10”), also shown in section 3.2.1.5-. The Table 2 show the list of password that will be needed during the installation. The password that need to be remembered such as the system and MySQL root password were chosen carefully and easy to remember like “osuc@123456”, otherwise all other passwords created for OpenStack services in Keystone or MySQL were generated using a command line for security reasons. Location Username Password Description System root root_password Password for Ubuntu YourUsername Your_Username_Password Password for Ubuntu MySQL root MySQL_Root_Password Root password for the database dbu_keystone MySQL_Keystone_Password Database user for Identity service dbu_glance MySQL_Glance_Password Database user for Image Service dbu_nova MySQL_Nova_Password Database user for Computeservice dbu_horizon MySQL_Horizon_Password Database user for the dashboard dbu_cinder MySQL_Cinder_Password Database user for the Block Storage service dbu_neutron MySQL_Neutron_Password Database user for the Networking service dbu_heat MySQL_Heat_Password Database user for the Orchestration service RabbitMQ guest Rabbit_guest_password User guest of RabbitMQ YourUsername Rabbit_Strong_Password Another account of RabbitMQ Keystone admin Keystone_Admin_Password Main user glance Keystone_Glance_Password User for Image Service nova Keystone_Nova_Password User for Computeservice cinder Keystone_Cinder_Password User for Block Storage service neutron Keystone_Neutron_Password User for Networking service heat Keystone_Heat_Password User for Orchestration service Table 2 - List of passwords
  • 7. OpenStack – Final report INN694 – Project Page |6/43| Controller-node Network-node Compute-node Management Network Network: - IPv4: 192.168.1.0/24 - IPv6 2402:ec00:face:1::/64 Domain: labqut-osuc.com Instance Network Network: - IPv4: 192.168.2.0/24 - IPv6 2402:ec00:face:2::/64 Domain: labqut-osuc.com QUT/Internet External Network Network: - IPv4: 10.0.0.0/24 - IPv6 2402:ec00:face:10::/64 Domain: labqut-osuc.com Figure 2 - Network topology
  • 8. OpenStack – Final report INN694 – Project Page |7/43| 3.1 - Primary environment configuration 3.1.1 - Network configuration 3.1.1.1 - Controller node vi /etc/network/interface auto eth0 iface eth0 inet static address 192.168.1.11 netmask 255.255.255.0 network 192.168.1.0 broadcast 192.168.1.255 gateway 192.168.1.12 dns-nameservers 192.168.1.12 dns-search labqut-osuc.com iface eth0 inet6 static pre-up modprobe ipv6 address 2402:ec00:face:1::11 netmask 64 gateway 2402:ec00:face:1::1 3.1.1.2 - Compute node vi /etc/network/interface # The primary network interface auto em1 iface em1 inet static address 192.168.1.13 netmask 255.255.255.0 gateway 192.168.1.12 dns-nameservers 192.168.1.12 dns-search labqut-osuc.com iface em1 inet6 static pre-up modprobe ipv6 address 2402:ec00:face:1::13 netmask 64 gateway 2402:ec00:face:1::1 auto p4p1 iface p4p1 inet static address 192.168.2.13 netmask 255.255.255.0 iface p4p1 inet6 static pre-up modprobe ipv6 address 2402:ec00:face:2::13 netmask 64
  • 9. OpenStack – Final report INN694 – Project Page |8/43| 3.1.1.3 - Network node vi /etc/network/interface # The network interface that will be created after the section 3.5.2.9 - Step 8: Setup the Open vSwitch (OVS) service auto br-ex iface br-ex inet static address 10.0.0.2 netmask 255.255.255.0 gateway 10.0.0.1 dns-nameservers 192.168.1.12 auto eth0 iface eth0 inet static address 192.168.1.12 netmask 255.255.255.0 iface eth0 inet6 static pre-up modprobe ipv6 address 2402:ec00:face:1::12 netmask 64 #This interface can be on DHCP for the beginning of the installation auto eth1 iface eth1 inet manual up ip link set $IFACE up down ip link set $IFACE down auto eth2 iface eth2 inet static address 192.168.2.12 netmask 255.255.255.0 iface eth2 inet6 static pre-up modprobe ipv6 address 2402:ec00:face:2::12 netmask 64
  • 10. OpenStack – Final report INN694 – Project Page |9/43| 3.1.1.4 - Test the connectivity The simple wayto verify thatthe network configurationhas been done correctlyif to usethe command ping. It is important to make sure that all nodes are able to ping the nodes that are on the same network. For instance the Compute node must be able to ping both interfaces of the network node in IPv4 and IPv6. Regarding the compute node, it should only ping one interface of the network and the compute node: From the compute node: ping 192.168.1.12 ping 192.168.1.13 ping6 2402:ec00:face:1::12 ping6 2402:ec00:face:1::13 From the network node: ping 192.168.1.11 ping 192.168.1.13 ping 192.168.2.13 ping6 2402:ec00:face:1::11 ping6 2402:ec00:face:1::13 ping6 2402:ec00:face:2::13 From the Compute node: ping 192.168.1.11 ping 192.168.1.12 ping 192.168.2.12 ping6 2402:ec00:face:1::11 ping6 2402:ec00:face:1::12 ping6 2402:ec00:face:2::12 3.1.2 - Network Time Protocol It is important to synchroniseserviceson allmachinesand NTP willdo it automatically.Itis suggested to synchronise the time of all additional nodes from the controller. 3.1.2.1 - Step 1: Install the package sudo apt-get install ntp 3.1.2.2 - Step 2: Remove the deprecated package ntpdate sudo apt-get remove ntpdate 3.1.2.3 - Step 3: Setup the server sudo vi /etc/ntp.conf #Add iburst at the end of line for your favourite server server 0.ubuntu.pool.ntp.org iburst server 1.ubuntu.pool.ntp.org server 2.ubuntu.pool.ntp.org server 3.ubuntu.pool.ntp.org #Ubuntu's ntp server server ntp.ubuntu.com
  • 11. OpenStack – Final report INN694 – Project Page |10/43| # ... # Authorise your own network to communicate with the server. restrict 192.168.1.0 mask 255.255.255.224 nomodify notrap 3.1.2.4 - Step 4: Setup the client(s) sudo vi /etc/ntp.conf #comment the line that start with "server" #server 0.ubuntu.pool.ntp.org #server 1.ubuntu.pool.ntp.org #server 2.ubuntu.pool.ntp.org #server 3.ubuntu.pool.ntp.org server IP/HostnameOfController iburst #Leave the fallback, which is the Ubuntu's ntp server in case of your server break down server ntp.ubuntu.com 3.1.2.5 - Test The synchronisation of the time can be test by running the command “date” on all server within one or two second of delay. 3.1.3 - Database 3.1.3.1 - Step 1: Install the packages sudo apt-get install python-mysqldb mysql-server 3.1.3.2 - Step 2: Adapt MySQL to work with OpenStack sudo vi /etc/mysql/my.conf [mysqld] ... #Allow other nodes to connect to the local database bind-address = IP/HostnameOfMySQLServer ... #enable InnoDB, UTF-8 character set, and UTF-8 collation by default default-storage-engine = innodb innodb_file_per_table collation-server = utf8_general_ci init-connect = 'SET NAMES utf8' character-set-server = utf8 3.1.3.3 - Step 3: Restart MySQL sudo service mysql restart 3.1.3.4 - Step 4: Delete the anonymous users (some connection problems might happen if still present) sudo mysql_install_db (Optional: to be usedif the next command fail) sudo mysql_secure_installation (Answer "Yes" to all question unless you have a good reason to answerno) 3.1.3.5 - Step 5: Install the MySQL Python library on the additional nodes (Optional) sudo apt-get install python-mysqldb
  • 12. OpenStack – Final report INN694 – Project Page |11/43| 3.1.4 - OpenStack packages The latest version of OpenStack packages can be downloaded through the Ubuntu Cloud Archive, which is a special repository. 3.1.4.1 - Step 1: Install python software sudo apt-get install python-software-properties Remark: The following steps are not require for Ubuntu 14.04. 3.1.4.2 - Step 2: Add the Ubuntu Cloud archive for Icehouse (optional) sudo add-apt-repository cloud-archive:icehouse 3.1.4.3 - Step 3: Update the packages list and upgrade the system (optional) sudo apt-get update sudo apt-get dist-upgrade 3.1.4.4 - Step 3: Install “Backported Linux Kernel” (Only for Ubuntu 12.04: improve the stability) sudo apt-get install linux-image-generic-lts-saucy linux-headers-generic-lts-saucy 3.1.4.5 - Step 4: Restart the system sudo reboot 3.1.5 - Messaging server This documentationdo not explainallthe option of RabbitMQ, for more information about RabbitMQ access control please follow the website provided in the reference [7]. 3.1.5.1 - Step 1: Install the package sudo apt-get install rabbitmq-server 3.1.5.2 - Step 2: Change default password of the existing user (guest/guest) Remark: it is strongly recommended to change the password for the guest user for security purpose. sudo rabbitmqctl change_password guest Rabbit_Guest_Password 3.1.5.3 - Step 3: Create a unique account Remark: it is possible to use the guest user-name and password for each OpenStack service, but it is not recommended. sudo rabbitmqctl add_user YourUserName StrongPassword 3.1.5.4 - Step 4: Set up the access control sudo rabbitmqctl add_vhost NameVHost sudo rabbitmqctl set_user_tags NameVHost administrator rabbitmqctl set_permissions -p /NameVHost YourUserName ".*" ".*" ".*" 3.2 - Identity service 3.2.1 - Installation of Keystone 3.2.1.1 - Step 1: Install the package sudo apt-get install keystone 3.2.1.2 - Step 2: Connect keystone to the MySQL database sudo vi /etc/keystone/keystone.conf
  • 13. OpenStack – Final report INN694 – Project Page |12/43| [database] # The SQLAlchemy connection string used to connect to the database connection = mysql://dbu_keystone:MySQL_Keystone_Password@IP/HostnameOfController/keystone 3.2.1.3 - Step 3: Create the database and the user with MySQL mysql -u root -p mysql> CREATE DATABASE keystone; mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'dbu_keystone'@'localhost' IDENTIFIED BY 'MySQL_Keystone_Password'; mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'dbu_keystone'@'%' IDENTIFIED BY 'MySQL_Keystone_Password'; mysql> exit 3.2.1.4 - Step 4: Create the tables su -s /bin/sh -c "keystone-manage db_sync" keystone 3.2.1.5 - Step 5: Define the authorization token to communicate between the Identity Service and other OpenStack services and the log The following command generate a random shared key and should be usedto generateallthe password use for OpenStack services: openssl rand -hex 10 Result: 3856bdace7abac9cfc78 sudo vi /etc/keystone/keystone.conf [DEFAULT] admin_token = 3856bdace7abac9cfc78 log_dir = /var/log/keystone 3.2.1.6 - Step 6: Restart the service sudo service keystone restart 3.2.1.7 - Step 7: Purge the expired token every hours The identity service save all expired tokens in the local database without erase them at any time. This could be helpful for auditing in production environment but it will increase the size of the databaseas well as affect the performance of other services. It is recommended to purge the expired token every hour using cron. (crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone- tokenflush.log 2>&1' >> /var/spool/cron/crontabs/keystone 3.2.2 - Set environment variable In order to use the command related to OpenStack command-line such as “Keystone”, “Neutron” and other, you need to provide the address of the identity service (–os-auth-url), an username (–os-username) and a password (–os-password). However, when it is the first time to use the identity service, so there is no user yet. Therefore, you need to connect to the service using the token generate before and export the following variable:
  • 14. OpenStack – Final report INN694 – Project Page |13/43| 3.2.2.1 - Export Variable for initial installation of keystone export OS_SERVICE_TOKEN=3856bdace7abac9cfc78 export OS_SERVICE_ENDPOINT=http://IP/HostnameOfController:35357/v2.0 After created the username and password needed to connect to the identity service, it is possible to set environment variable using the OpenStack RC file (“Openrc.sh”), it is a project-specific environment files that contain the credentials needed by all OpenStack services. To use this variable, you need to run the command “source” and the name of the file. However, this solution is possible only when at least one user, one tenant and the endpoint for the administration are created. 3.2.2.2 - Create the file and add the information for the authentication sudo vi /home/NameOfProject-openrc.sh export OS_USERNAME=Username export OS_PASSWORD=Keystone_Username_Password export OS_TENANT_NAME=NameOfProject export OS_AUTH_URL=http://IP/HostnameOfController:35357/v2.0 3.2.2.3 - Step 2: Export the variable using the command “source” sudo source /home/NameOfProject-openrc.sh 3.2.3 - Users, tenants and roles 3.2.3.1 - Step 1: Create and “admin” user (within 1 line) sudo keystone user-create --name=admin --pass=Keystone_Admin_Password -- email=infos@connetwork.com.au 3.2.3.2 - Step 2: Create an “admin” role sudo keystone role-create --name=admin 3.2.3.3 - Step 3: Create an “admin” tenant sudo keystone tenant-create --name=admin --description="Description of Admin Tenant" 3.2.3.4 - Step 4: Link the user, the role and the tenant together sudo keystone user-role-add --user=admin --tenant=admin --role=admin 3.2.3.5 - Step 5: Link the user, the _member_ role and the tenant together sudo keystone user-role-add --user=admin --role=_member_ --tenant=admin 3.2.3.6 - Step 5: Create a common user (you can create as many user as you want) (within 1 line) sudo keystone user-create --name=Username --pass=StrongPassword -- email=infos@connetwork.com.au 3.2.3.7 - Step 6: Create a normal tenant (one tenant can have many user) (within 1 line) sudo keystone tenant-create --name=NameOfTenant --description="The description of your denant Tenant" 3.2.3.8 - Step 7: Link them together sudo keystone user-role-add --user=Username --role=_member_ --tenant=NameOfTenant 3.2.3.9 - Step 8: Create a tenant call “service” to access OpenStack services sudo keystone tenant-create --name=service --description="Service Tenant"
  • 15. OpenStack – Final report INN694 – Project Page |14/43| 3.2.4 - Service and endpoints 3.2.4.1 - Step 1: Create a service entry for the Identity Service (within 1 line) sudo keystone service-create --name=keystone --type=identity --description="OpenStack Identity" 3.2.4.2 - Step 2: Specify a API endpoint for the Identity service sudo keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') --publicurl=http://IP/HostnameOfController:5000/v2.0 --internalurl=http://IP/HostnameOfController:5000/v2.0 --adminurl=http://IP/HostnameOfController:35357/v2.0 3.3 - Image service: Glance 3.3.1 - Step 1: Install the packages sudo apt-get install glance python-glanceclient 3.3.2 - Step 2: Add the database section in the file glance-api.conf and glance-registry.conf sudo vi /etc/glance/glance-api.conf ... [database] connection = mysql://dbu_glance:MySQL_Glance_Password@IP/HostnameOfController/glance sudo vi /etc/glance/glance-registry.conf ... [database] connection = mysql://dbu_glance:MySQL_Glance_Password@IP/HostnameOfController/glance 3.3.3 - Step 3: Add the information about the message broker in the file glance-api.conf sudo vi /etc/glance/glance-api.conf [DEFAULT] rpc_backend = rabbit rabbit_host = IP/HostnameOfController rabbit_port = 5672 rabbit_use_ssl = false rabbit_userid = YourUserName rabbit_password = StrongPassword rabbit_virtual_host = NameVHost rabbit_notification_exchange = glance rabbit_notification_topic = notifications rabbit_durable_queues = False 3.3.4 - Step 4: Delete the default database sudo rm /var/lib/glance/glance.sqlite 3.3.5 - Step 5: Create the Database and the user using MySQL sudo mysql -u root -p
  • 16. OpenStack – Final report INN694 – Project Page |15/43| mysql> CREATE DATABASE glance; mysql> GRANT ALL PRIVILEGES ON glance.* TO 'dbu_glance'@'localhost' IDENTIFIED BY 'MySQL_Glance_Password'; mysql> GRANT ALL PRIVILEGES ON glance.* TO 'dbu_glance'@'%' IDENTIFIED BY 'MySQL_Glance_Password'; 3.3.6 - Step 6: Create tables su -s /bin/sh -c "glance-manage db_sync" glance 3.3.7 - Step 7: Create the user “glance” in the Identity service sudo keystone user-create --name=glance --pass=Keystone_Glance_Password -- email=infos@connetwork.com.au sudo keystone user-role-add --user=glance --tenant=service --role=admin 3.3.8 - Step 8: Add the authentication for the Identity Service in the file glance-api.conf and glance- registry.conf sudo vi /etc/glance/glance-api.conf [keystone_authtoken] auth_uri = http://IP/HostnameOfController:5000 auth_host =IP/HostnameOfController auth_port = 35357 auth_protocol = http admin_tenant_name = service admin_user = glance admin_password = Keystone_Glance_Password ... [paste_deploy] ... flavor = keystone sudo vi /etc/glance/glance-registry.conf [keystone_authtoken] auth_uri = http://IP/HostnameOfController:5000 auth_host =IP/HostnameOfController auth_port = 35357 auth_protocol = http a dmin_tenant_name = service admin_user = glance admin_password = Keystone_Glance_Password ... [paste_deploy] ... flavor = keystone 3.3.9 - Step 9: Register the Image Service with the Identity service (within 1 line for the first command) sudo keystone service-create --name=glance --type=image --description="OpenStack Image Service" sudo keystone endpoint-create
  • 17. OpenStack – Final report INN694 – Project Page |16/43| --service-id=$(keystone service-list | awk '/ image / {print $2}') --publicurl=http://IP/HostnameOfController:9292 --internalurl=http://IP/HostnameOfController:9292 --adminurl=http://IP/HostnameOfController:9292 3.3.10 - Step 10: Restart the services sudo service glance-registry restart sudo service glance-api restart 3.3.11 - Verify the installation In order to verify the installation of glance, it is necessary to download at leastone virtual machine image into the server using any method such as “wget”, “scp” or other. This example will assume that the server has an internet connection and download a CirrOS image. 3.3.11.1 - Step 1: create a temporary folder mkdir /home/iso 3.3.11.2 - Step 2: change the directory cd /home/iso 3.3.11.3 - Step 3: Download the image wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img 3.3.11.4 - Source OpenStack RC file source /home/NameOfProject-openrc.sh 3.3.11.5 - Add the image into glance sudo glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 --container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img 3.3.11.6 - Check if the image has been successfully added to glance: sudo glance image-list 3.4 - Compute service: Nova 3.4.1 - Service 3.4.1.1 - Step 1: Install the packages (within 1 line) sudo apt-get install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient 3.4.1.2 - Step 2: Add the MySQL connection on the file “nova.conf” as well as setup RabbitMQ sudo vi /etc/nova/nova.conf [DEFAULT] #Use the Identity service (keystone) for authentication auth_strategy = keystone #Set up the message broker rpc_backend = rabbit rabbit_host = IP/HotsnameOfController rabbit_userid = YourUserName rabbit_password = StrongPassword
  • 18. OpenStack – Final report INN694 – Project Page |17/43| rabbit_virtual_host = NameVHost auth_strategy = keystone my_ip = IP/HostnameOfController vncserver_listen = IP/HostnameOfController vncserver_proxyclient_address = IP/HostnameOfController [database] connection = mysql://dbu_nova:MySQL_Nova_Password@IP/HostnameOfController/nova [keystone_authtoken] auth_uri = http://IP/HostnameOfController:5000 auth_host = IP/HostnameOfController auth_port = 35357 auth_protocol = http admin_tenant_name = service admin_user = nova admin_password = Keystone_Nova_Password 3.4.1.3 - Step 3: Create the database and user mysql -u root -p mysql> CREATE DATABASE nova; mysql> GRANT ALL PRIVILEGES ON nova.* TO 'dbu_nova'@'localhost' IDENTIFIED BY 'MySQL_Nova_Password'; mysql> GRANT ALL PRIVILEGES ON nova.* TO 'dbu_nova'@'%' IDENTIFIED BY 'MySQL_Nova_Password'; 3.4.1.4 - Step 4: Create the tables su -s /bin/sh -c "nova-manage db sync" nova 3.4.1.5 - Step 5: Create user on the Identity service sudo keystone user-create --name=nova --pass=Keystone_Nova_Password -- email=infos@connetwork.com.au sudo keystone user-role-add --user=nova --tenant=service --role=admin 3.4.1.6 - Step 6: Create the service and endpoint sudo keystone service-create --name=nova --type=compute --description="OpenStack Compute" sudo keystone endpoint-create --service-id=$(keystone service-list | awk '/ compute / {print $2}') --publicurl=http://IP/HostnameOfController:8774/v2/%(tenant_id)s --internalurl=http://IP/HostnameOfController:8774/v2/%(tenant_id)s --adminurl=http://IP/HostnameOfController:8774/v2/%(tenant_id)s 3.4.1.7 - Step 7: Restart all nova service sudo service nova-api restart sudo service nova-cert restart sudo service nova-consoleauth restart sudo service nova-scheduler restart sudo service nova-conductor restart sudo service nova-novncproxy restart
  • 19. OpenStack – Final report INN694 – Project Page |18/43| 3.4.2 - Compute node The compute node can be install in the same server as the controller. However, it is recommended to install it in another server. 3.4.2.1 - Step 1: Install the packages sudo apt-get install nova-compute-kvm python-guestfs libguestfs-tools qemu-system 3.4.2.2 - Step 2: Make the kernel readable by any hypervisor services such as qemu or libguestfs The kernelis not readableby defaultfor basicuser and for securityreason, but hypervisor servicesneed to read it in order to work better dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r) The previous command makes the kernel readable, yet it is not permanent as when the kernel will be updated it will not be readable anymore. Therefore, you need to create a file to overwrite all upcoming update vi /etc/kernel/postinst.d/statoverride #!/bin/sh version="$1" # passing the kernel version is required [ -z "${version}" ] && exit 0 dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version} 3.4.2.3 - Step 3: Edit the nova.conf vi /etc/nova/nova.conf [DEFAULT] #Use the Identity service (keystone) for authentication auth_strategy = keystone #Set up the message broker rpc_backend = rabbit rabbit_host = IP/HotsnameOfController rabbit_userid = YourUserName rabbit_password = StrongPassword rabbit_virtual_host = NameVHost auth_strategy = keystone #Interface for the console my_ip = IP/HostnameOfController vnc_enabled = True vncserver_listen = 0.0.0.0 vncserver_proxyclient_address = IP/HostnameOfController novncproxy_base_url = http://IP/HostnameOfController:6080/vnc_auto.html #Location of the image service (Glance) glance_host = IP/HostnameOfController [database] connection = mysql://dbu_nova:MySQL_Nova_Password@IP/HostnameOfController/nova [keystone_authtoken] auth_uri = http://IP/HostnameOfController:5000 auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = Keystone_Nova_Password
3.4.2.4 - Step 4: Check whether your system supports hardware acceleration
sudo egrep -c '(vmx|svm)' /proc/cpuinfo
If the result is greater than 0 (1 or more), your CPU supports hardware acceleration and you can skip the following step.
3.4.2.5 - Step 5: Only if the result of the previous command is "0"
Without hardware acceleration, Nova must fall back to plain QEMU instead of KVM:
vi /etc/nova/nova-compute.conf
[libvirt]
virt_type = qemu
3.4.2.6 - Step 6: Restart the service
sudo service nova-compute restart
3.5 - Networking service: Neutron
3.5.1 - Controller node
3.5.1.1 - Step 1: Create the database
mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'dbu_neutron'@'localhost' IDENTIFIED BY 'MySQL_Neutron_Password';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'dbu_neutron'@'%' IDENTIFIED BY 'MySQL_Neutron_Password';
3.5.1.2 - Step 2: Create the user in the Identity service (within 1 line)
sudo keystone user-create --name neutron --pass Keystone_Neutron_Password --email infos@connetwork.com.au
3.5.1.3 - Step 3: Link the user to the service tenant and admin role
sudo keystone user-role-add --user neutron --tenant service --role admin
3.5.1.4 - Step 4: Create the service for Neutron in the Identity service (within 1 line)
sudo keystone service-create --name neutron --type network --description "OpenStack Networking"
3.5.1.5 - Step 5: Create the service endpoint in the Identity service
sudo keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://IP/HostnameOfController:9696 --adminurl http://IP/HostnameOfController:9696 --internalurl http://IP/HostnameOfController:9696
3.5.1.6 - Step 6: Install the networking components (packages)
sudo apt-get install neutron-server neutron-plugin-ml2
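Step 7 below looks up the service tenant identifier that is later pasted into neutron.conf as SERVICE_TENANT_ID. As a convenience, a small sketch that captures it straight into a shell variable instead (assuming the keystone CLI is usable with admin credentials):
# Extract the value of the "id" row from the tenant-get output table
SERVICE_TENANT_ID=$(keystone tenant-get service | awk '/ id / {print $4}')
echo $SERVICE_TENANT_ID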
3.5.1.7 - Step 7: Get the service tenant identifier (SERVICE_TENANT_ID)
sudo keystone tenant-get service
Example:
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | 032ff6f1056a4d82b51a87ff106c8185 |
|     name    |             service              |
+-------------+----------------------------------+
3.5.1.8 - Step 8: Edit the Neutron configuration file (neutron.conf)
sudo vi /etc/neutron/neutron.conf
[DEFAULT]
#Rabbit information
rabbit_host = IP/HostnameOfController
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
rabbit_notification_exchange = neutron
rabbit_notification_topic = notifications
#Type of authentication
auth_strategy = keystone
#Communication with the Nova service
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://IP/HostnameOfController:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = SERVICE_TENANT_ID
nova_admin_password = Keystone_Nova_Password
nova_admin_auth_url = http://IP/HostnameOfController:35357/v2.0
#Configuration of the Modular Layer 2 (ML2) plug-in
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
#Connection to the database
[database]
connection = mysql://dbu_neutron:MySQL_Neutron_Password@IP/HostnameOfController/neutron
[keystone_authtoken]
#Authentication information:
auth_uri = http://IP/HostnameOfController:5000
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = Keystone_Neutron_Password
3.5.1.9 - Step 9: Edit the nova.conf to configure Compute to use Networking
sudo vi /etc/nova/nova.conf
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://IP/HostnameOfController:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = Keystone_Neutron_Password
neutron_admin_auth_url = http://IP/HostnameOfController:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
3.5.1.10 - Step 10: Restart the necessary services
sudo service nova-api restart
sudo service nova-scheduler restart
sudo service nova-conductor restart
sudo service neutron-server restart
3.5.2 - Network node
3.5.2.1 - Pre-step: Enable a few networking functions
sudo vi /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Apply the changes:
sudo sysctl -p
3.5.2.2 - Step 1: Install the networking components (packages) (within 1 line)
sudo apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent openvswitch-datapath-dkms neutron-l3-agent neutron-dhcp-agent
3.5.2.3 - Step 2: Edit the Neutron configuration file (neutron.conf)
sudo vi /etc/neutron/neutron.conf
[DEFAULT]
#Rabbit information
rabbit_host = IP/HostnameOfController
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
rabbit_notification_exchange = neutron
rabbit_notification_topic = notifications
#Type of authentication
auth_strategy = keystone
#Configuration of the Modular Layer 2 (ML2) plug-in
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[keystone_authtoken]
auth_uri = http://IP/HostnameOfController:5000
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = Keystone_Neutron_Password
3.5.2.4 - Step 3: Set up the Layer-3 (L3) agent
vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
3.5.2.5 - Step 4: Set up the DHCP agent
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
3.5.2.6 - Step 5: Set up the metadata agent
vi /etc/neutron/metadata_agent.ini
[DEFAULT]
auth_url = http://IP/HostnameOfController:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = Keystone_Neutron_Password
nova_metadata_ip = IP/HostnameOfController
metadata_proxy_shared_secret = Metadata_Secret_Key
#Uncomment the next line for troubleshooting
#verbose = True
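The Metadata_Secret_Key above is an arbitrary shared secret between Neutron and Nova. One way to generate a strong value, reusing the command listed in section 5.1:
# Generate a random 20-character hexadecimal secret
openssl rand -hex 10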
3.5.2.7 - Step 6: Set up the Nova service to inform it about the metadata proxy
Remark: this part needs to be done on the controller node, and the Metadata_Secret_Key must be the same as in the previous step in the file "metadata_agent.ini".
sudo vi /etc/nova/nova.conf
[DEFAULT]
...
#Metadata proxy information between Neutron and Nova
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = Metadata_Secret_Key
3.5.2.8 - Step 7: Set up the Modular Layer 2 (ML2) plug-in
sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS #the local IP address of the network interface, or a hostname resolving to it, on the instance tunnels network
tunnel_type = gre
enable_tunneling = True
3.5.2.9 - Step 8: Set up the Open vSwitch (OVS) service
The OVS service provides the virtual networking framework for instances. It creates a virtual bridge between the external network (e.g. the Internet) and the internal network (used by the instances). The external bridge (named br-ex in this tutorial) needs to be connected to a physical network interface in order to communicate with the external network.
Restart the service:
sudo service openvswitch-switch restart
Add the integration bridge:
sudo ovs-vsctl add-br br-int
Add the external bridge:
sudo ovs-vsctl add-br br-ex
Add a physical network interface to the external bridge (e.g. eth0, eth1, ...):
sudo ovs-vsctl add-port br-ex INTERFACE_NAME
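Once the bridges are created, the Open vSwitch layout can be verified:
# br-int and br-ex should both be listed, with the physical
# interface attached as a port of br-ex
sudo ovs-vsctl show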
3.5.2.10 - Step 9: Restart the necessary services
sudo service neutron-plugin-openvswitch-agent restart
sudo service neutron-l3-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-metadata-agent restart
3.5.3 - Compute node
3.5.3.1 - Pre-step: Enable a few networking functions
sudo vi /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Apply the changes:
sudo sysctl -p
3.5.3.2 - Step 1: Install the networking components (packages) (within 1 line)
sudo apt-get install neutron-common neutron-plugin-ml2 neutron-plugin-openvswitch-agent openvswitch-datapath-dkms
3.5.3.3 - Step 2: Edit the Neutron configuration file (neutron.conf)
sudo vi /etc/neutron/neutron.conf
[DEFAULT]
#Rabbit information
rabbit_host = IP/HostnameOfController
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
rabbit_notification_exchange = neutron
rabbit_notification_topic = notifications
#Type of authentication
auth_strategy = keystone
#Configuration of the Modular Layer 2 (ML2) plug-in
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[keystone_authtoken]
#Authentication information:
auth_uri = http://IP/HostnameOfController:5000
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = Keystone_Neutron_Password
3.5.3.4 - Step 3: Edit the nova.conf to configure Compute to use Networking
sudo vi /etc/nova/nova.conf
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://IP/HostnameOfController:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = Keystone_Neutron_Password
neutron_admin_auth_url = http://IP/HostnameOfController:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
3.5.3.5 - Step 4: Set up the Modular Layer 2 (ML2) plug-in
sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS #the IP address of the network interface, or a hostname resolving to it, on the instance tunnels network
tunnel_type = gre
enable_tunneling = True
3.5.3.6 - Step 5: Set up the Open vSwitch (OVS) service
sudo service openvswitch-switch restart
sudo ovs-vsctl add-br br-int
3.5.3.7 - Step 6: Restart the necessary services
sudo service nova-compute restart
sudo service neutron-plugin-openvswitch-agent restart
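At this point all Neutron agents are installed and can be checked from the controller. A quick sanity check, assuming admin credentials are loaded:
# The Open vSwitch, L3, DHCP and metadata agents should all report alive = ":-)"
neutron agent-list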
3.5.4 - Create an initial network
3.5.4.1 - Source the OpenStack RC file
source /home/NameOfProject-openrc.sh
3.5.4.2 - Create the external network
neutron net-create NameOfExternalNetwork --shared --router:external=True
3.5.4.3 - Create a subnet for the external network
The allocation pool for the external subnet needs to be chosen carefully: it must not overlap with the range of IP addresses already handed out on the actual external network. For instance, if the network is 10.0.0.0/24 and the DHCP server on the external router allocates 10.0.0.2 to 10.0.0.99, the pool on OpenStack could be 10.0.0.100 to 10.0.0.150.
neutron subnet-create NameOfExternalNetwork --name NameOfExternalSubnet --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END --disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR
For example:
neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.0.0.100,end=10.0.0.150 --disable-dhcp --gateway 10.0.0.1 10.0.0.0/24
3.5.4.4 - Create the internal (tenant) network
neutron net-create NameOfInternalNetwork
3.5.4.5 - Create a subnet for the internal network
neutron subnet-create NameOfInternalNetwork --name NameOfInternalSubnet --gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR
For example:
neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 192.168.1.0/24
3.5.4.6 - Create a virtual router
neutron router-create MyRouter
3.5.4.7 - Attach the router to the internal network (the command takes the subnet name)
neutron router-interface-add MyRouter NameOfInternalSubnet
3.5.4.8 - Attach the router to the external network by specifying it as the gateway
neutron router-gateway-set MyRouter NameOfExternalNetwork
3.6 - Dashboard: Horizon
3.6.1 - Step 1: Install the packages
sudo apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard
When the dashboard is installed from the Ubuntu repository packages, it comes with an Ubuntu theme that changes the dashboard's appearance. To remove it, use the following command:
sudo apt-get remove --purge openstack-dashboard-ubuntu-theme
3.6.2 - Step 2: Change the "LOCATION" value to match the one in the file /etc/memcached.conf
vi /etc/openstack-dashboard/local_settings.py
CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '127.0.0.1:11211'
    }
}
3.6.3 - Step 3: Update ALLOWED_HOSTS to include your computer (only if you wish to restrict dashboard access to a specific list of computers) and set the address of the controller
vi /etc/openstack-dashboard/local_settings.py
ALLOWED_HOSTS = ['localhost', 'Your-computer']
OPENSTACK_HOST = "IP/HostnameOfController"
3.6.4 - Step 4: Restart the services
sudo service apache2 restart
sudo service memcached restart
3.6.5 - Step 5: Access the dashboard with your favourite web browser at "http://IP/HostnameOfController/horizon"
Figure 3 - Dashboard login
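The LOCATION value in step 2 must match the address and port that memcached actually listens on. A quick way to check them on Ubuntu (the -l and -p lines of memcached.conf hold the listen address and port):
# Show the configured listen address (-l) and port (-p)
grep -E '^-l|^-p' /etc/memcached.conf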
3.7 - Block storage: Cinder
3.7.1 - On the controller
3.7.1.1 - Step 1: Install the packages
sudo apt-get install cinder-api cinder-scheduler
3.7.1.2 - Step 2: Set up the connection to the database
vi /etc/cinder/cinder.conf
[database]
connection = mysql://dbu_cinder:MySQL_Cinder_Password@IP/HostnameOfController/cinder
3.7.1.3 - Step 3: Create the database and user
mysql> CREATE DATABASE cinder;
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'dbu_cinder'@'localhost' IDENTIFIED BY 'MySQL_Cinder_Password';
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'dbu_cinder'@'%' IDENTIFIED BY 'MySQL_Cinder_Password';
3.7.1.4 - Step 4: Create the tables
su -s /bin/sh -c "cinder-manage db sync" cinder
3.7.1.5 - Step 5: Create the user on the Identity service
keystone user-create --name=cinder --pass=Keystone_Cinder_Password --email=cinder@connetwork.com.au
keystone user-role-add --user=cinder --tenant=service --role=admin
3.7.1.6 - Step 6: Add information about the Identity service and the message broker
vi /etc/cinder/cinder.conf
[DEFAULT]
#Set up the message broker
rpc_backend = rabbit
rabbit_host = IP/HostnameOfController
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
#Add the permission to connect to the Identity service
[keystone_authtoken]
auth_uri = http://IP/HostnameOfController:5000
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = Keystone_Cinder_Password
3.7.1.7 - Step 7: Create the service and endpoint
keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ volume / {print $2}') --publicurl=http://IP/HostnameOfController:8776/v1/%\(tenant_id\)s --internalurl=http://IP/HostnameOfController:8776/v1/%\(tenant_id\)s --adminurl=http://IP/HostnameOfController:8776/v1/%\(tenant_id\)s
3.7.1.8 - Step 8: Create the service and endpoint for version 2
keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl=http://IP/HostnameOfController:8776/v2/%\(tenant_id\)s --internalurl=http://IP/HostnameOfController:8776/v2/%\(tenant_id\)s --adminurl=http://IP/HostnameOfController:8776/v2/%\(tenant_id\)s
3.7.1.9 - Step 9: Restart all necessary services
service cinder-scheduler restart
service cinder-api restart
3.7.2 - On a storage node (can be done on any machine)
This part assumes that /dev/sda3 is a partition of type LVM.
3.7.2.1 - Step 1: Install the LVM package
apt-get install lvm2
3.7.2.2 - Step 2: Create a physical volume
pvcreate /dev/sda3
3.7.2.3 - Step 3: Create a volume group called "cinder-volumes"
Note: if Cinder is installed on more than one host, the name of the volume group should be different on every host.
vgcreate cinder-volumes /dev/sda3
3.7.2.4 - Step 4: Change the configuration of LVM
vi /etc/lvm/lvm.conf
devices {
...
filter = [ "a/sda1/", "a/sda3/", "r/.*/"]
...
}
3.7.2.5 - Step 5: Test the configuration
pvdisplay
3.7.2.6 - Step 6: Install the packages
apt-get install cinder-volume
3.7.2.7 - Step 7: Add information about the Identity service and the message broker
vi /etc/cinder/cinder.conf
[DEFAULT]
#Set up the message broker
rpc_backend = rabbit
rabbit_host = IP/HostnameOfController
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
#Location of the image service (Glance)
glance_host = IP/HostnameOfController
#Declare the LVM back end
enabled_backends = lvmdriver-NameOfDriver
[lvmdriver-NameOfDriver]
volume_group = NameOfVolumeGroup
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = NameOfBackEnd
#Add the permission to connect to the Identity service
[keystone_authtoken]
auth_uri = http://IP/HostnameOfController:5000
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = Keystone_Cinder_Password
[database]
connection = mysql://dbu_cinder:MySQL_Cinder_Password@IP/HostnameOfController/cinder
3.7.2.8 - Step 8: Restart the Block Storage services
service cinder-volume restart
service tgt restart
3.8 - Orchestration: Heat
3.8.1 - Step 1: Install the packages
apt-get install heat-api heat-api-cfn heat-engine
3.8.2 - Step 2: Set up the connection to the database
vi /etc/heat/heat.conf
[database]
connection = mysql://dbu_heat:MySQL_Heat_Password@IP/HostnameOfController/heat
3.8.3 - Step 3: Create the database and user
mysql -u root -p
mysql> CREATE DATABASE heat;
mysql> GRANT ALL PRIVILEGES ON heat.* TO 'dbu_heat'@'localhost' IDENTIFIED BY 'MySQL_Heat_Password';
mysql> GRANT ALL PRIVILEGES ON heat.* TO 'dbu_heat'@'%' IDENTIFIED BY 'MySQL_Heat_Password';
3.8.4 - Step 4: Create the tables
su -s /bin/sh -c "heat-manage db_sync" heat
3.8.5 - Step 5: Change the configuration file
vi /etc/heat/heat.conf
#Logging
verbose = True
log_dir = /var/log/heat
#Rabbit information
rpc_backend = rabbit
rabbit_host = IP/HostnameOfController
rabbit_userid = YourUserName
rabbit_password = StrongPassword
rabbit_virtual_host = NameVHost
[keystone_authtoken]
auth_host = IP/HostnameOfController
auth_port = 35357
auth_protocol = http
auth_uri = http://IP/HostnameOfController:5000/v2.0
admin_tenant_name = service
admin_user = heat
admin_password = Keystone_Heat_Password
[ec2authtoken]
auth_uri = http://IP/HostnameOfController:5000/v2.0
3.8.6 - Step 6: Create the user on the Identity service
keystone user-create --name=heat --pass=Keystone_Heat_Password --email=heat@connetwork.com.au
keystone user-role-add --user=heat --tenant=service --role=admin
3.8.7 - Step 7: Create the services and endpoints
keystone service-create --name=heat --type=orchestration --description="Orchestration"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ orchestration / {print $2}') --publicurl=http://IP/HostnameOfController:8004/v1/%\(tenant_id\)s --internalurl=http://IP/HostnameOfController:8004/v1/%\(tenant_id\)s --adminurl=http://IP/HostnameOfController:8004/v1/%\(tenant_id\)s
keystone service-create --name=heat-cfn --type=cloudformation --description="Orchestration CloudFormation"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ cloudformation / {print $2}') --publicurl=http://IP/HostnameOfController:8000/v1 --internalurl=http://IP/HostnameOfController:8000/v1
--adminurl=http://IP/HostnameOfController:8000/v1
3.8.8 - Step 8: Create the heat_stack_user role
This role is used as the default role for users created by the Orchestration module.
keystone role-create --name heat_stack_user
3.8.9 - Step 9: Set up the URL of the metadata server
vi /etc/heat/heat.conf
[DEFAULT]
...
# URL of the Heat metadata server (string value)
heat_metadata_server_url = http://IP/HostnameOfController:8000
# URL of the Heat waitcondition server (string value)
heat_waitcondition_server_url = http://IP/HostnameOfController:8000/v1/waitcondition
3.8.10 - Step 10: Restart all necessary services
service heat-api restart
service heat-api-cfn restart
service heat-engine restart
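To confirm that Orchestration works end to end, a minimal stack can be launched. The template below is only a sketch: the image and flavour names are taken from the launch tutorial in section 6.1, and the network name is the internal network created in section 3.5.4, so adjust them to your environment.
vi test-stack.yaml
heat_template_version: 2013-05-23
description: Minimal stack to verify the Heat installation
resources:
  test_server:
    type: OS::Nova::Server
    properties:
      image: cirros-0.3.2-x86_64
      flavor: m1.tiny
      networks:
        - network: NameOfInternalNetwork
Create the stack and check that its status reaches CREATE_COMPLETE:
heat stack-create -f test-stack.yaml TestStack
heat stack-list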
4 - Troubleshooting
When a problem occurs, it is important to remember that all errors are recorded in log files, located in a separate folder under /var/log for each service. For instance, the logs for Nova are located in "/var/log/nova", the logs for Neutron in "/var/log/neutron", and so on. Most of the time, the logs are quite explicit and the problem can be fixed quickly.

Error: Host not found
Possible solution: The host for the requested service cannot be found; check that the hostname of each service is correctly configured and can be resolved.

Error: AMQP server on controller:5672 is unreachable: Socket closed
Possible solution: The RabbitMQ information is not correct; check the following in the configuration file:
- Username
- Password
- Virtual host
- Can the hostname of the controller be resolved?

Error: Cannot ping or SSH to the instance
Possible solution: Check that the security group has a rule allowing ICMP (and TCP port 22 for SSH).

Error: I have installed a web server in the instance; I can ping and SSH to it. When I run nmap against the instance, port 80 seems open, but when I try to access the website it takes a long time and nothing happens.
Possible solution: Check the MTU of the virtual machine and make sure that you follow section "6.4 - Force the MTU of the virtual machine".
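For example, to watch a service log live while reproducing an error, the tail -f command from section 5.1 can be used:
# Follow the Nova API log; new errors appear as they happen
tail -f /var/log/nova/nova-api.log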
5 - Useful commands
5.1 - General commands
openssl rand -hex 10 : Generate a random password
ping -s NUMBER : Test the connectivity; -s specifies the size of the packet (useful for MTU testing)
tail -f : Show the end of a file and any upcoming updates
rabbitmqctl list_users : RabbitMQ: list the users
5.2 - Keystone
user-create (--name: name of the user, --pass: password, --email: email of the user) : Create a user
endpoint-list / endpoint-get NameOfEndpoint : List of endpoints / information about one endpoint
role-list / role-get NameOfRole : List of roles / information about one role
service-list / service-get NameOfService : List of services / information about one service
tenant-list / tenant-get NameOfTenant : List of tenants / information about one tenant
user-list / user-get NameOfUser : List of users / information about one user
5.3 - Glance
The "glance" command is used to manage virtual images from the command line. A list of possible options and arguments is given below:
image-create
--name : name of the image for OpenStack
--disk-format : format of the image file: qcow2, raw, vhd, vmdk, vdi, iso, aki, ari and ami
--container-format : format of the container(1): bare, ovf, aki and ami
--is-public : whether the image is visible to all tenants (True/False)
< LocationOfTheImage : read the image file from the given location
Example:
sudo glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 --container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img
(1) Specify bare to indicate that the image file is not in a file format that contains metadata about the virtual machine. Although this field is currently required, it is not actually used by any of the OpenStack services and has no effect on system behaviour. Because the value is not used anywhere, it is safe to always specify bare as the container format [8].
6 - Tutorial
6.1 - Launch an instance
After starting an instance and assigning it a public IP address, the instance will still not be accessible from the outside network: the security group needs to be set up and the relevant ports opened according to the needs (see section 6.3).
6.1.1 - Command line
6.1.1.1 - Source the OpenStack RC file
source /home/NameOfProject-openrc.sh
6.1.1.2 - Generate the key
ssh-keygen
6.1.1.3 - Add the public key to Nova
nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key
6.1.1.4 - Verify that the key has been added
nova keypair-list
6.1.1.5 - Check the list of flavours
nova flavor-list
6.1.1.6 - Check the list of images
nova image-list
6.1.1.7 - Check the list of networks
neutron net-list
6.1.1.8 - Check the list of security groups
nova secgroup-list
6.1.1.9 - Start the instance according to the information gathered above
nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=IDofNetwork --security-group default --key-name demo-key NameOfInstance
6.1.1.10 - Verify that the instance has started
nova list
6.1.1.11 - Get the URL of the VNC console
nova get-vnc-console NameOfInstance novnc
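The net-id passed to nova boot in step 6.1.1.9 can be copied by hand from the neutron net-list output, or captured directly. A sketch, assuming the internal network is named demo-net as in the example of section 3.5.4:
# Grab the ID column of the row matching the internal network name
NET_ID=$(neutron net-list | awk '/ demo-net / {print $2}')
nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=$NET_ID --security-group default --key-name demo-key NameOfInstance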
6.1.2 - Dashboard
Click on Project → Compute → Instances.
Click on "Launch Instance".
On the "Details" tab:
- Set the availability zone (if there is more than one)
- Write the name of the instance
- Select the flavour (the resources that are given to the instance)
- Set the number of instances
- Select "Boot from image"
- Select your image
On the "Access and Security" tab:
- If the image was made specifically for the cloud, select a key pair; if the image has a default username and password, you do not need to select any key
- If the key does not exist, click on the "+"
- Select the security group
On the "Networking" tab:
- Select the internal network
Click on the "Launch" button.
To see the virtual machine, click on "More" and then "Console". The console does not render well unless it is in full screen; therefore click on "Click here to show only console".
6.2 - Provide public addresses to instances
6.2.1 - Command line
6.2.1.1 - Create a floating IP
neutron floatingip-create NameOfExternalNetwork
6.2.1.2 - Associate the floating IP to the instance
nova floating-ip-associate MyInstance IPAddress
6.2.2 - Dashboard
Assign an external IP address:
- Click on the arrow next to "More"
- Click on "Associate Floating IP"
If no IP is available, click on the "+".
Select the external network and click on "Allocate IP".
Select the floating IP that was allocated in the previous step and the port to which the IP needs to be associated, then click on "Associate".
6.3 - Create a new security group (ACL)
6.3.1 - Command line
6.3.1.1 - Create a security group
nova secgroup-create NameOfSecurityGroup "Description of the security group"
6.3.1.2 - Add a rule to allow ping
nova secgroup-add-rule NameOfSecurityGroup icmp -1 -1 0.0.0.0/0
6.3.1.3 - Add a rule to allow SSH
nova secgroup-add-rule NameOfSecurityGroup tcp 22 22 0.0.0.0/0
6.3.2 - Dashboard
6.3.2.1 - Go to the "Access & Security" tab
Click on Project → Compute → Access & Security.
6.3.2.2 - Click on "Create Security Group"
6.3.2.3 - Specify the name (no spaces) and the description, then click "Create Security Group"
6.3.2.4 - Click on "Manage Rules"
6.3.2.5 - Click on "Add Rule"
6.3.2.6 - Select the right options depending on the service running on the instance
Rule: a list of well-known services such as SSH, HTTP and so on, as well as custom rules where the user can specify the port number.
Direction:
- Ingress: from the outside to the VM
- Egress: from the VM to the outside
Remote: the type of remote source (CIDR or security group).
CIDR: the range of addresses the rule applies to.
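For the web-server scenario mentioned in the troubleshooting section, the instance also needs TCP port 80 open. Using the command-line form from section 6.3.1, for example:
# Allow inbound HTTP from any address
nova secgroup-add-rule NameOfSecurityGroup tcp 80 80 0.0.0.0/0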
6.4 - Force the MTU of the virtual machine
6.4.1 - Set up the DHCP agent:
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
...
#Add the following line
dnsmasq_config_file = /etc/neutron/dnsmasq/dnsmasq-neutron.conf
6.4.2 - Create the dnsmasq-neutron.conf
vi /etc/neutron/dnsmasq/dnsmasq-neutron.conf
dhcp-option-force=26,1400
Restart the DHCP agent to apply the change:
sudo service neutron-dhcp-agent restart
7 - Table of figures
Figure 1 - OpenStack overview [1] ......... 3
Figure 2 - Network topology ......... 6
Figure 3 - Dashboard login ......... 27
8 - Table of tables
Table 1 - Hardware requirement ......... 4
Table 2 - List of passwords ......... 5
9 - References
1. OpenStack. [Online]. Available from: https://www.openstack.org/.
2. OpenStack Foundation. OpenStack documentation - Chapter 5. Scaling. [Online]. Available from: http://docs.openstack.org/openstack-ops/content/scaling.html.
3. rabbitmqctl(1) manual page. [Online]. Available from: http://www.rabbitmq.com/man/rabbitmqctl.1.man.html.
4. Verify the Image Service installation. [Online]. Available from: http://docs.openstack.org/icehouse/install-guide/install/apt/content/glance-verify.html.