2. Controller and Network node installation
In this guide the controller and the network node are collapsed into a single node. It is possible to keep them separated, but that is outside the scope of this guide.
MySQL can be installed on the controller node or on a separate node (as done in this guide), depending on the desired architecture. In the following, replace $MYSQL_IP with the IP address of the node hosting the database.
-
Install MySQL:
# apt-get install python-mysqldb mysql-server
Note: When you install the server package, you are prompted for the root password for the database (which can, and should, be different from the password of the root system user). Choose a strong password and remember it.
-
Edit the /etc/mysql/my.cnf file:
Under the [mysqld] section, set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network:
[mysqld]
...
bind-address = $MYSQL_IP
Under the [mysqld] section, set the following keys to enable InnoDB, UTF-8 character set, and UTF-8 collation by default:
[mysqld]
...
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
-
Restart the MySQL service to apply the changes:
# service mysql restart
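As a quick optional check (not part of the original procedure), you can confirm that mysqld is now listening on the management address, on the default MySQL port 3306:
# netstat -ntlp | grep 3306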
-
You must delete the anonymous users that are created when the database is first started. Otherwise, database connection problems occur when you follow the instructions in this guide. To do this, use the mysql_secure_installation command. Note that if mysql_secure_installation fails you might need to run mysql_install_db first:
# mysql_install_db
# mysql_secure_installation
This command presents a number of options for you to secure your database installation. Respond yes to all prompts unless you have a good reason to do otherwise (there is no need to change the password of the MySQL root user again).
Note that for all OpenStack services the corresponding databases will be created after the configuration of the services themselves.
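As an optional check, you can list the accounts known to MySQL and confirm that the anonymous users are gone; anonymous users would appear as rows with an empty User column:
$ mysql -u root -p -e "SELECT User, Host FROM mysql.user;"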
-
Install RabbitMQ:
# apt-get install rabbitmq-server
-
Replace $RABBIT_PASS with a suitable password:
# rabbitmqctl change_password guest $RABBIT_PASS
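To confirm that the broker is up and running (an optional check), rabbitmqctl can report the node status:
# rabbitmqctl status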
-
Install the OpenStack Identity service on the controller node, together with python-keystoneclient (which is a dependency):
# apt-get install keystone
-
Use the password that you set previously to log in to MySQL as root. Create a database and a user, both called keystone (replace $KEYSTONE_DBPASS with a strong password you choose for the keystone user and database):
$ mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$KEYSTONE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$KEYSTONE_DBPASS';
mysql> exit
-
The Identity service uses this database to store information. Specify the location of the database in the configuration file. In this guide, we use a MySQL database on the controller node with the username keystone.
Edit /etc/keystone/keystone.conf and change the [database] section:
[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://keystone:$KEYSTONE_DBPASS@$MYSQL_IP/keystone
...
-
By default, the Ubuntu packages create a SQLite database. Delete the keystone.db file created in the /var/lib/keystone/ directory so that it does not get used by mistake:
# rm /var/lib/keystone/keystone.db
-
Create the database tables for the Identity service:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
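As an optional check that the sync succeeded, you can list the tables that were created in the keystone database (you will be prompted for $KEYSTONE_DBPASS):
$ mysql -u keystone -p keystone -e "SHOW TABLES;"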
-
Define an authorization token to use as a shared secret between the Identity service and other OpenStack services. Use openssl to generate a random token and store it in the configuration file:
# openssl rand -hex 10
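If you prefer to keep the generated token in a shell variable for the next step (an optional convenience, not part of the original procedure), capture the command output directly:
# ADMIN_TOKEN=$(openssl rand -hex 10)
# echo $ADMIN_TOKEN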
-
Edit /etc/keystone/keystone.conf and change the [DEFAULT] section, replacing $ADMIN_TOKEN with the result of the previous command:
[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = $ADMIN_TOKEN
...
-
Configure the log directory. Edit the /etc/keystone/keystone.conf file and update the [DEFAULT] section:
[DEFAULT]
...
log_dir = /var/log/keystone
-
Restart the Identity service:
# service keystone restart
-
By default, the Identity service stores expired tokens in the database indefinitely. While potentially useful for auditing in production environments, the accumulation of expired tokens considerably increases the database size and consequently degrades service performance, particularly in test environments with limited resources. We recommend configuring a periodic task using cron to purge expired tokens hourly.
Run the following command to purge expired tokens every hour and log the output to /var/log/keystone/keystone-tokenflush.log:
# (crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/crontabs/keystone
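You can verify that the entry was installed (an optional check) by listing the keystone user's crontab:
# crontab -l -u keystone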
Export the admin token and the endpoint of the Identity service, which will be used to bootstrap the first users and tenants (replace $ADMIN_TOKEN with the token defined above):
$ export OS_SERVICE_TOKEN=$ADMIN_TOKEN
$ export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
Follow these steps to create an administrative user (called admin), role (called admin), and tenant (called admin). You will use this account for administrative interaction with the OpenStack cloud.
By default, the Identity service creates a special _member_ role (with the underscores). The OpenStack dashboard automatically grants access to users with this role. You will give the admin user access to this role in addition to the admin role.
[Note] Note Any role that you create must map to roles specified in the policy.json file included with each OpenStack service. The default policy file for most services grants administrative access to the admin role.
-
Create the admin user (replace $ADMIN_PASS with a strong password and replace $ADMIN_EMAIL with an email address to associate with the account):
$ keystone user-create --name=admin --pass=$ADMIN_PASS --email=$ADMIN_EMAIL
-
Create the admin role:
$ keystone role-create --name=admin
-
Create the admin tenant:
$ keystone tenant-create --name=admin --description="Admin Tenant"
-
You must now link the admin user, admin role, and admin tenant together using the user-role-add option:
$ keystone user-role-add --user=admin --tenant=admin --role=admin
-
Link the admin user, _member_ role, and admin tenant:
$ keystone user-role-add --user=admin --role=_member_ --tenant=admin
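As an optional check, you can list the roles granted to the admin user on the admin tenant; both admin and _member_ should appear:
$ keystone user-role-list --user admin --tenant admin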
OpenStack services also require a username, tenant, and role to access other OpenStack services. In a basic installation, OpenStack services typically share a single tenant named service.
You will create additional usernames and roles under this tenant as you install and configure each service.
-
Create the service tenant:
$ keystone tenant-create --name=service --description="Service Tenant"
Keystone, the OpenStack Identity service, must itself be registered as a service:
$ keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
The output should be something like this:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | 68683d6ffd7d49859dd9f7fe2fd12be7 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
Next create the endpoint:
$ keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') --publicurl=http://$CONTROLLER_PUBLIC_IP:5000/v2.0 --internalurl=http://controller:5000/v2.0 --adminurl=http://controller:35357/v2.0
Output example:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://controller:35357/v2.0 |
| id | 0c34c6e6fd5f411a9e349eeca1c9b3db |
| internalurl | http://controller:5000/v2.0 |
| publicurl | http://10.10.10.11:5000/v2.0 |
| region | regionOne |
| service_id | 68683d6ffd7d49859dd9f7fe2fd12be7 |
+-------------+----------------------------------+
-
To verify that the Identity service is installed and configured correctly, clear the values in the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables:
$ unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
These variables, which were used to bootstrap the administrative user and register the Identity service, are no longer needed.
You can now use regular user name-based authentication.
-
Request an authentication token by using the admin user and the password you chose for that user:
$ keystone --os-username=admin --os-password=$ADMIN_PASS --os-auth-url=http://controller:35357/v2.0 token-get
In response, you receive a token paired with your user ID. This verifies that the Identity service is running on the expected endpoint and that your user account is established with the expected credentials.
-
Verify that authorization behaves as expected. To do so, request authorization on a tenant:
$ keystone --os-username=admin --os-password=$ADMIN_PASS --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 token-get
In response, you receive a token that includes the ID of the tenant that you specified. This verifies that your user account has an explicitly defined role on the specified tenant and the tenant exists as expected.
-
You can also set your --os-* variables in your environment to simplify command-line usage. Create an admin-openrc.sh file in the root home directory, with the following content:
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
-
Add the following line to the .bashrc file in your home directory (/root/ in this example) to read in the environment variables at every login:
source /root/admin-openrc.sh
-
Verify that your admin-openrc.sh file is configured correctly. Run the same command without the --os-* arguments:
$ keystone token-get
-
Install the Image service:
# apt-get install glance python-glanceclient
The Image service stores information about images in a database. The examples in this guide use the MySQL database that is used by other OpenStack services.
-
Configure the location of the database. The Image service provides the glance-api and glance-registry services, each with its own configuration file. You must update both configuration files throughout this section. Replace $GLANCE_DBPASS with your Image service database password.
Edit /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf and edit the [database] section of each file:
[database]
connection = mysql://glance:$GLANCE_DBPASS@$MYSQL_IP/glance
-
Configure the Image service to use the message broker. Replace $RABBIT_PASS with the password you have chosen for the guest account in RabbitMQ. Edit the /etc/glance/glance-api.conf file and add the following keys to the [DEFAULT] section:
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = $RABBIT_PASS
-
By default, the Ubuntu packages create an SQLite database. Delete the glance.sqlite file (if it exists) created in the /var/lib/glance/ directory so that it does not get used by mistake:
# rm /var/lib/glance/glance.sqlite
-
Use the password you created to log in as root to the DB and create a database called glance and a user called glance (replace $GLANCE_DBPASS with the password you want to assign to the glance MySQL user and database):
$ mysql -u root -p
mysql> CREATE DATABASE glance;
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS';
-
Create the database tables for the Image service:
# su -s /bin/sh -c "glance-manage db_sync" glance
-
Create a glance user that the Image service can use to authenticate with the Identity service. Choose a password (to replace $GLANCE_PASS in the following command) and specify an email address (to replace $GLANCE_EMAIL in the following command) for the glance user. Use the service tenant and give the user the admin role:
$ keystone user-create --name=glance --pass=$GLANCE_PASS --email=$GLANCE_EMAIL
$ keystone user-role-add --user=glance --tenant=service --role=admin
-
Configure the Image service to use the Identity service for authentication.
Edit the /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf files. Replace $GLANCE_PASS with the password you chose for the glance user in the Identity service.
Add or modify the following keys under the [keystone_authtoken] section (replace $CONTROLLER_PUBLIC_IP with the public IP address of the controller node and $GLANCE_PASS with a suitable password for the Glance service):
[keystone_authtoken]
auth_uri = http://$CONTROLLER_PUBLIC_IP:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = $GLANCE_PASS
-
Modify the following key under the [paste_deploy] section:
[paste_deploy]
...
flavor = keystone
-
Register the Image service with the Identity service so that other OpenStack services can locate it. Register the service and create the endpoint (note that here the controller name, associated with the private IP of the controller node in the /etc/hosts file, is used for the internalurl and adminurl, while the controller public IP $CONTROLLER_PUBLIC_IP is used for the publicurl; similar settings will be used for other services):
$ keystone service-create --name=glance --type=image --description="OpenStack Image Service"
$ keystone endpoint-create --service-id=$(keystone service-list | awk '/ image / {print $2}') --publicurl=http://$CONTROLLER_PUBLIC_IP:9292 --internalurl=http://controller:9292 --adminurl=http://controller:9292
-
Restart the glance services with their new settings:
# service glance-registry restart
# service glance-api restart
To test the Image service installation, download at least one virtual machine image that is known to work with OpenStack. For example, CirrOS is a small test image that is often used for testing OpenStack deployments.
$ wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
$ glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 --container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img
Confirm that the image was uploaded and display its attributes:
$ glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| defcfc7ad-56aa-2341-9553-d855997c1he0 | cirros-0.3.2-x86_64 | qcow2 | bare | 13167616 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
Create the pool 'images':
ceph osd pool create images 128 128
Check the replica size of the new pool 'images' with:
ceph osd dump | grep replica
To be consistent with the previous Ceph configuration, if the size of the new pool is not set to 3 and min_size is not set to 2, run the following commands to set replica 3 for the pool 'images' and min_size (the minimum number of replicas that must be active to allow r/w operations):
ceph osd pool set images size 3
ceph osd pool set images min_size 2
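As an optional check, you can read back the two values just set:
ceph osd pool get images size
ceph osd pool get images min_size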
Install and configure the Ceph client (if you have not done so yet):
# apt-get install python-ceph
Copy the file /etc/ceph/ceph.conf from the ceph node to the controller node where you are installing the Glance server.
Set up the ceph client authentication:
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
chown glance:glance /etc/ceph/ceph.client.glance.keyring
Edit the configuration file /etc/glance/glance-api.conf, setting the following parameters:
rbd_store_user=glance
rbd_store_pool=images
Restart services:
service glance-api restart && service glance-registry restart
Upload an image to the Ceph backend (using the --store rbd option):
glance image-create --name centos6.5 --disk-format qcow2 --container-format bare --copy-from http://wn-recas-uniba-30.ba.infn.it/centos-6.5-20140117.0.x86_64.qcow2 --is-public True --store rbd
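To confirm that the image actually landed on the Ceph backend (an optional check), list the rbd images in the 'images' pool using the glance client key created above; the image ID shown by glance image-list should appear in the listing:
rbd -p images ls --id glance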
-
Install the Compute packages necessary for the controller node:
# apt-get install -y nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
-
Compute stores information in a database. In this guide, we use a MySQL database on the controller node. Configure Compute with the database location and credentials. Replace $NOVA_DBPASS with the password for the database that you will create in a later step.
Edit the [database] section in the /etc/nova/nova.conf file, adding it if necessary, to modify this key:
[database]
connection = mysql://nova:$NOVA_DBPASS@$MYSQL_IP/nova
-
Configure the Compute service to use the RabbitMQ message broker by setting these configuration keys in the [DEFAULT] configuration group of the /etc/nova/nova.conf file:
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = $RABBIT_PASS
-
Set the my_ip, vncserver_listen, and vncserver_proxyclient_address configuration options to the management interface IP address of the controller node.
Edit the /etc/nova/nova.conf file and add these lines to the [DEFAULT] section:
[DEFAULT]
...
my_ip = $CONTROLLER_PUBLIC_IP
vncserver_listen = $CONTROLLER_PUBLIC_IP
vncserver_proxyclient_address = $CONTROLLER_PUBLIC_IP
-
By default, the Ubuntu packages create an SQLite database. Delete the nova.sqlite file created in the /var/lib/nova/ directory so that it does not get used by mistake:
# rm /var/lib/nova/nova.sqlite
-
Use the password you created previously to log in as root. Create the nova database and user:
$ mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS';
-
Create the Compute service tables:
# su -s /bin/sh -c "nova-manage db sync" nova
-
Create a nova user that Compute uses to authenticate with the Identity service. Use the service tenant and give the user the admin role (replace $NOVA_PASS with the password you have chosen for the Compute service Nova and $NOVA_EMAIL with the email address you want to associate with the service):
$ keystone user-create --name=nova --pass=$NOVA_PASS --email=$NOVA_EMAIL
$ keystone user-role-add --user=nova --tenant=service --role=admin
-
Configure Compute to use these credentials with the Identity service running on the controller.
Edit the [DEFAULT] section in the /etc/nova/nova.conf file to add this key:
[DEFAULT]
...
auth_strategy = keystone
-
Add these keys to the [keystone_authtoken] section:
[keystone_authtoken]
...
auth_uri = http://$CONTROLLER_PUBLIC_IP:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = $NOVA_PASS
-
You must register Compute with the Identity service so that other OpenStack services can locate it. Register the service and specify the endpoint:
$ keystone service-create --name=nova --type=compute --description="OpenStack Compute"
$ keystone endpoint-create --service-id=$(keystone service-list | awk '/ compute / {print $2}') --publicurl=http://$CONTROLLER_PUBLIC_IP:8774/v2/%\(tenant_id\)s --internalurl=http://controller:8774/v2/%\(tenant_id\)s --adminurl=http://controller:8774/v2/%\(tenant_id\)s
-
Restart Compute services:
# service nova-api restart
# service nova-cert restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-conductor restart
# service nova-novncproxy restart
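As an optional check, after sourcing admin-openrc.sh you can list the Compute services and verify that their state is up:
$ source /root/admin-openrc.sh
$ nova service-list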
-
To verify your configuration, list available images:
$ nova image-list
The output should look like this:
+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.2-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+
-
Before you configure the OpenStack Networking service (called Neutron), you must create a database and Identity service credentials, including a user and service.
Connect to the database as the root user, create the neutron database, and grant the proper access to it.
Replace $NEUTRON_DBPASS with a suitable password:
$ mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$NEUTRON_DBPASS';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$NEUTRON_DBPASS';
-
Create Identity service credentials for Networking:
Create the neutron user. Replace $NEUTRON_PASS with a suitable password and $NEUTRON_EMAIL with a suitable e-mail address:
$ keystone user-create --name neutron --pass $NEUTRON_PASS --email $NEUTRON_EMAIL
Link the neutron user to the service tenant and admin role:
$ keystone user-role-add --user neutron --tenant service --role admin
Create the neutron service:
$ keystone service-create --name neutron --type network --description "OpenStack Networking"
Create the service endpoint:
$ keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://$CONTROLLER_PUBLIC_IP:9696 --adminurl http://controller:9696 --internalurl http://controller:9696
-
Install the Networking components:
# apt-get install -y neutron-server neutron-plugin-ml2
-
Configure the Networking server component. The Networking server component configuration includes the database, authentication mechanism, message broker, topology change notifier, and plug-in.
Configure Networking to use the database:
Edit the /etc/neutron/neutron.conf file and add the following key to the [database] section. Replace $NEUTRON_DBPASS with the password you chose for the database:
[database]
...
connection = mysql://neutron:$NEUTRON_DBPASS@$MYSQL_IP/neutron
-
Configure Networking to use the Identity service for authentication:
Edit the /etc/neutron/neutron.conf file and add the following key to the [DEFAULT] section:
[DEFAULT]
...
auth_strategy = keystone
-
Add the following keys to the [keystone_authtoken] section. Replace $NEUTRON_PASS with the password you chose for the neutron user in the Identity service:
[keystone_authtoken]
...
auth_uri = http://$CONTROLLER_PUBLIC_IP:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = $NEUTRON_PASS
-
Configure Networking to use the message broker:
Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section. Replace $RABBIT_PASS with the password you chose for the guest account in RabbitMQ:
[DEFAULT]
...
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = $RABBIT_PASS
-
Configure Networking to notify Compute about network topology changes:
Replace $SERVICE_TENANT_ID with the service tenant identifier (id, obtained with the command keystone tenant-list) in the Identity service and $NOVA_PASS with the password you chose for the nova user in the Identity service.
Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section:
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = $SERVICE_TENANT_ID
nova_admin_password = $NOVA_PASS
nova_admin_auth_url = http://controller:35357/v2.0
[Note] Note To obtain the service tenant identifier (id) you can also run:
$ source admin-openrc.sh
$ keystone tenant-get service
which should show an output like this:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Service Tenant |
| enabled | True |
| id | f727b5ec2ceb4d71bad86dfc414449bf |
| name | service |
+-------------+----------------------------------+
-
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section:
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[Note] Note We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
-
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances. However, the controller node does not need the OVS agent or service because it does not handle instance network traffic.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:
Add the following keys to the [ml2] section:
[ml2]
...
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
Add the following key to the [ml2_type_gre] section:
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
Add the [securitygroup] section and the following keys to it:
[securitygroup]
...
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
-
By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
Edit the /etc/nova/nova.conf file and add the following keys to the [DEFAULT] section. Replace $NEUTRON_PASS with the password you chose for the neutron user in the Identity service:
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = $NEUTRON_PASS
neutron_admin_auth_url = http://controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
[Note] Note By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
-
Restart the Compute services:
# service nova-api restart
# service nova-scheduler restart
# service nova-conductor restart
-
Restart the Networking service:
# service neutron-server restart
Edit /etc/sysctl.conf to contain the following:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Implement the changes:
# sysctl -p
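To confirm that the new values are active (an optional check), query them back:
# sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter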
Install the Networking components:
# apt-get install neutron-plugin-openvswitch-agent neutron-l3-agent neutron-dhcp-agent
The Layer-3 (L3) agent provides routing services for instance virtual networks.
Edit the /etc/neutron/l3_agent.ini file and add the following keys to the [DEFAULT] section:
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
[Note] Note We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.
The DHCP agent provides DHCP services for instance virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and add the following keys to the [DEFAULT] section:
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
[Note] Note We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/dhcp_agent.ini to assist with troubleshooting.
The metadata agent provides configuration information such as credentials for remote access to instances.
Edit the /etc/neutron/metadata_agent.ini file and add the following keys to the [DEFAULT] section:
Replace $NEUTRON_PASS with the password you chose for the neutron user in the Identity service. Replace $METADATA_SECRET with a suitable secret for the metadata proxy (for example, you can generate a string with the openssl command, as shown at the beginning of this page).
[DEFAULT]
...
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = $NEUTRON_PASS
nova_metadata_ip = controller
metadata_proxy_shared_secret = $METADATA_SECRET
[Note] Note We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.
Edit the /etc/nova/nova.conf file and add the following keys to the [DEFAULT] section:
Replace $METADATA_SECRET with the secret you chose for the metadata proxy.
[DEFAULT]
...
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = $METADATA_SECRET
On the controller node, restart the Compute API service:
# service nova-api restart
To configure the Modular Layer 2 (ML2) plug-in:
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file.
Add the [ovs] section and the following keys to it:
Replace $INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your network node (usually the private IP; NB: an IP address is expected, not a name).
[ovs]
...
local_ip = $INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
tunnel_type = gre
enable_tunneling = True
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS. The external bridge br-ex handles external instance network traffic within OVS. The external bridge requires a port on the physical external network interface to provide instances with external network access. In essence, this port bridges the virtual and physical external networks in your environment.
Restart the OVS service:
# service openvswitch-switch restart
Add the integration bridge:
# ovs-vsctl add-br br-int
Add the external bridge:
# ovs-vsctl add-br br-ex
Add a port to the external bridge that connects to the physical external network interface:
Replace $INTERFACE_NAME with the actual interface name (in our case eth0):
# ovs-vsctl add-port br-ex $INTERFACE_NAME
Adding the port to the external bridge may cause you to lose connectivity.
Connect to the controller node through the private interface and configure the network as in the interfaces-post-bridges.sample file. Remove the IP from the eth0 interface with the command:
# ifconfig eth0 0.0.0.0
Then restart the network services with the command:
# /etc/init.d/networking restart
or, for any single interface, with:
# ifdown $INTERFACE_NAME && ifup $INTERFACE_NAME
Finally, check with the command ip a that the public IP is now assigned to the br-ex bridge and not to the public interface.
[Note] Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.
To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K $INTERFACE_NAME gro off
Restart the Networking services:
# service neutron-plugin-openvswitch-agent restart
# service neutron-l3-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
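As an optional check, after sourcing admin-openrc.sh you can verify that the agents registered with the Networking service; each agent should be reported as alive:
$ neutron agent-list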
Create the external network by executing the command:
# neutron net-create ext-net --shared --router:external=True
Install the packages:
# apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard
Remove the Ubuntu logo:
# apt-get remove --purge openstack-dashboard-ubuntu-theme
Modify the value of CACHES['default']['LOCATION'] in /etc/openstack-dashboard/local_settings.py to match the one set in /etc/memcached.conf:
CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211'
}
}
Update the ALLOWED_HOSTS in /etc/openstack-dashboard/local_settings.py to include the addresses you wish to access the dashboard from; for example, if you want to access the dashboard only from localhost, from your desktop (my-desktop), and from host1 and host2, insert:
ALLOWED_HOSTS = ['localhost', 'my-desktop', 'host1', 'host2']
Otherwise, you may want to access the dashboard from any host, in which case you should set:
ALLOWED_HOSTS = ['*']
Edit /etc/openstack-dashboard/local_settings.py and change OPENSTACK_HOST to the hostname of your Identity service (in this case the controller node; this can be used to run the dashboard on a separate host):
OPENSTACK_HOST = "controller"
Restart the apache2 and memcached services:
# service apache2 restart
# service memcached restart
You can now access the dashboard at http://$CONTROLLER_PUBLIC_IP/horizon, replacing $CONTROLLER_PUBLIC_IP with the public IP address of the controller node. Log in with credentials for any user that you created with the OpenStack Identity service.