1. General Configuration

florianagiannuzzi edited this page Dec 3, 2014 · 57 revisions

Introduction

In this guide, the step-by-step installation of OpenStack Icehouse is illustrated. Commands after a # must be executed as the root user, while those after a $ can be executed by any user (including root). Values such as IP addresses and passwords are kept generic, with environment variables used in their place (for example, $MYSQL_IP replaces 10.10.10.7).
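For example, the placeholder variables can be exported once per shell session before running the later commands. The values below are hypothetical examples, not required settings:

```shell
# Hypothetical example values -- substitute the addresses and
# passwords of your own testbed.
export MYSQL_IP=10.10.10.7
export KEYSTONE_PASS=keystone_secret
echo "MySQL host: $MYSQL_IP"    # prints "MySQL host: 10.10.10.7"
```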

Testbed Infrastructure Deployment

Each OpenStack installation will have the following nodes, with the indicated configuration:

  • node01 - controller node & network node on which Keystone, Nova, Neutron and Glance services run

  • node02 - compute node 1 on which Nova-Compute and Swift services run

    • sdb: 100 GB for Ceph
    • sdc: 100 GB for Swift
  • node03 - compute node 2 (Nova-Compute), needed to implement the storage cluster

    • sdb: 100 GB for Ceph
    • sdc: 100 GB for Swift
  • node04 - compute node 3 (Nova-Compute), needed to implement the storage cluster

    • sdb: 100 GB for Ceph
    • sdc: 100 GB for Swift
  • node05 - node on which Ceilometer and Heat services run

    • MongoDB data stored in /var/lib/mongodb

Note: in this guide, the sdb and sdc device names are used as examples. Check the names of the corresponding devices on your system.

The architecture to be deployed is sketched in the figure below.

[Figure: prompt_install — testbed architecture]

The installation roadmap

The installation guide is organised as follows:

  • Pre-requirements:

    • Step 1: Network interfaces configuration
    • Step 2: Install Network Time Protocol (NTP)
    • Step 3: Install the MySQL Python library
    • Step 4: Install OpenStack packages on all nodes
    • Step 5: Modify the /etc/hosts file
    • Step 6: Install the distributed filesystem (Ceph)
  • Basic services installation:

    • Controller and Network node installation:
      • Step 1: MySQL installation/configuration
      • Step 2: Install the message broker service (RabbitMQ)
      • Step 3: Install Identity service (Keystone)
      • Step 4: Install Image service (Glance)
      • Step 5: Install Compute service (Nova)
      • Step 6: Install Networking service (Neutron)
      • Step 7: Install the dashboard (Horizon)
    • Compute node installation:
      • Step 1: Create nova-related groups and users with specific IDs
      • Step 2: Install Compute packages
      • Step 3: Install Networking packages
      • Step 4: Configuring Nova with Ceph-FS
      • Step 5: Configure Live Migration
  • Advanced services installation:

    • Swift
    • Cinder
    • Ceilometer
    • Heat

Pre-requirements

Step 1: Network interfaces configuration

Each host has three NICs, attached to the following networks:

  1. PUBLIC network (in this guide 10.10.10.0/24)
  2. PRIVATE network (in this guide 10.10.20.0/24)
  3. second PUBLIC network (in this guide 10.10.30.0/24)
| node   | eth0        | eth1        | eth2        |
| ------ | ----------- | ----------- | ----------- |
| node01 | 10.10.10.11 | 10.10.20.11 | 10.10.30.11 |
| node02 | 10.10.10.12 | 10.10.20.12 | 10.10.30.12 |
| node03 | 10.10.10.13 | 10.10.20.13 | 10.10.30.13 |
| node04 | 10.10.10.14 | 10.10.20.14 | 10.10.30.14 |
| node05 | 10.10.10.15 | 10.10.20.15 | 10.10.30.15 |

For the basic configuration, only the first two interfaces need to be configured. The third is used for the "flat public network" configuration.

The first two NICs must be preliminarily configured with static addresses, as shown in the interfaces-pre-bridges.sample file included.

In Ubuntu 14.04 the configuration for each interface is in a separate file (/etc/network/interfaces.d/ethX.cfg, with X the number of the interface: eth0.cfg, eth1.cfg and so on). The sample files are in the repository (eth0.cfg-pre-bridges.sample, eth1.cfg-pre-bridges.sample and eth2.cfg-pre-bridges.sample).
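As an illustration, a static configuration for eth0 on node01 along the lines of the sample files might read as follows; the gateway address is an assumption and must be adapted to your network:

```
# /etc/network/interfaces.d/eth0.cfg -- sketch for node01, using the
# public address from the table above
auto eth0
iface eth0 inet static
    address 10.10.10.11
    netmask 255.255.255.0
    gateway 10.10.10.1    # assumed gateway, adapt to your network
```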

Restart the network interfaces on every node:

# ifdown eth0; ifup eth0
# ifdown eth1; ifup eth1

(Repeat for all the interfaces you have configured. Note that if you bring down the interface you are connected through, e.g. via SSH, you lose connectivity.)

Step 2: Install Network Time Protocol (NTP)

# apt-get install -y ntp
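Out of the box, the Ubuntu package synchronises against the Ubuntu NTP pool. A common arrangement (an assumption here, not mandated by this guide) is to have the other nodes use the controller as their time source, via /etc/ntp.conf:

```
# /etc/ntp.conf fragment on node02..node05 (sketch): use the
# controller node01 as the upstream time server
server 10.10.10.11 iburst
```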

Step 3: Install the MySQL Python library

On all nodes other than the controller node, install the MySQL Python library:

# apt-get install python-mysqldb

Step 4: Install OpenStack Packages on all nodes

  1. Install the Ubuntu Cloud Archive for Icehouse (not needed for Ubuntu 14.04):

     # apt-get install python-software-properties
     # add-apt-repository cloud-archive:icehouse
    
  2. Update the package database and upgrade your system:

     # apt-get update
     # apt-get dist-upgrade
    
  3. If you intend to use OpenStack Networking with Ubuntu 12.04, you should install a backported Linux kernel to improve the stability of your system. This installation is not needed if you intend to use the legacy networking service.

    Install the Ubuntu 13.10 backported kernel (not needed for Ubuntu 14.04):

     # apt-get install linux-image-generic-lts-saucy linux-headers-generic-lts-saucy
    
  4. Reboot the system for all changes to take effect:

     # reboot
    

Step 5: Modify the /etc/hosts file

Modify the /etc/hosts file on all nodes, as shown in the sample file hosts.sample.
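Following the address table above, a hosts.sample sketch would contain entries like the following (shown for the public addresses; adapt if you resolve names on the private network instead):

```
127.0.0.1   localhost
# testbed nodes -- public addresses from the table above
10.10.10.11 node01
10.10.10.12 node02
10.10.10.13 node03
10.10.10.14 node04
10.10.10.15 node05
```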

Step 6: Install the distributed filesystem (Ceph)

In this guide we make use of a distributed filesystem based on Ceph. Details about the cluster deployment and installation can be found here.

After installing CephFS, create the file /etc/apt/preferences.d/icehouse.pref with the following content:

Package: *
Pin: origin ubuntu-cloud.archive.canonical.com
Pin-Priority: 999

and execute:

# apt-get update
# apt-get dist-upgrade