3. Compute node Installation
In this installation guide, we use the KVM hypervisor.
### Step 1: Create nova-related groups and users with specific IDs
To enable live migration of VMs among different compute nodes, the user ID and group ID of the involved services must be the same on all compute nodes. For this reason, create the following users and groups on all compute nodes, specifying their IDs, associated home directories, and shells:
# groupadd -g 199 nova
# groupadd -g 198 kvm
# groupadd -g 197 libvirtd
# useradd -N -m -d /var/lib/nova -g 199 -u 199 -s /bin/bash nova
# useradd -N -m -d /var/lib/libvirt -g 198 -u 198 -s /bin/bash libvirt-qemu
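Since these IDs must match on every compute node, you may want to verify them after creation, for example:
# id nova
# getent group nova kvm libvirtd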
Install the Compute packages:
# apt-get install nova-compute-kvm python-guestfs
If prompted to create a supermin appliance, respond yes.
For security reasons, the Linux kernel is not readable by normal users, which restricts hypervisor services such as qemu and libguestfs. To make the current kernel readable, run:
# dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-$(uname -r)
To also enable this override for all future kernel updates, create the file /etc/kernel/postinst.d/statoverride containing:
#!/bin/sh
version="$1"
# passing the kernel version is required
[ -z "${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}
Remember to make the file executable:
# chmod +x /etc/kernel/postinst.d/statoverride
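To verify that the override is registered and that the current kernel image is now world-readable, you can run, for example:
# dpkg-statoverride --list | grep vmlinuz
# ls -l /boot/vmlinuz-$(uname -r)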
Edit the /etc/nova/nova.conf configuration file and add these lines to the appropriate sections. Replace $NOVA_DBPASS with the password you chose for the Compute database, $MYSQL_IP with the IP address of the database server, $CONTROLLER_PUBLIC_IP with the public IP address of the controller node, and $NOVA_PASS with the password you chose for the nova user in the Identity service:
[DEFAULT]
...
auth_strategy = keystone
...
[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://nova:$NOVA_DBPASS@$MYSQL_IP/nova
[keystone_authtoken]
auth_uri = http://$CONTROLLER_PUBLIC_IP:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = $NOVA_PASS
Configure the Compute service to use the RabbitMQ message broker by setting these configuration keys in the [DEFAULT] configuration group of the /etc/nova/nova.conf file. Replace $RABBIT_PASS with the password you chose for the guest account in RabbitMQ:
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = $RABBIT_PASS
Configure Compute to provide remote console access to instances.
Edit /etc/nova/nova.conf and add the following keys under the [DEFAULT] section:
[DEFAULT]
...
my_ip = $COMPUTE_NODE_PRIVATE_IP
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $COMPUTE_NODE_PUBLIC_IP
novncproxy_base_url = http://$CONTROLLER_PUBLIC_IP:6080/vnc_auto.html
Specify the host that runs the Image Service. Edit the /etc/nova/nova.conf file and add these lines to the [DEFAULT] section:
[DEFAULT]
...
glance_host = controller
You must determine whether your system's processor and/or hypervisor support hardware acceleration for virtual machines.
Run the following command:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your system supports hardware acceleration, which typically requires no additional configuration.
If this command returns a value of zero, your system does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Edit the [libvirt] section in the /etc/nova/nova-compute.conf file to modify this key:
[libvirt]
...
virt_type = qemu
[Warning]
On Ubuntu 12.04, kernels backported from newer releases may not automatically load the KVM modules for hardware acceleration when the system boots. In this case, launching an instance will fail with the following message in the /var/log/nova/nova-compute.log file:
libvirtError: internal error: no supported architecture for os type 'kvm'
As a workaround for this issue, you must add the appropriate module for your system to the /etc/modules file.
For systems with Intel processors, run the following command:
# echo 'kvm_intel' >> /etc/modules
For systems with AMD processors, run the following command:
# echo 'kvm_amd' >> /etc/modules
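Optionally, on Ubuntu you can double-check that KVM acceleration is actually usable with the kvm-ok utility from the cpu-checker package (assuming the package is available for your release):
# apt-get install cpu-checker
# kvm-ok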
Remove the SQLite database created by the packages:
# rm /var/lib/nova/nova.sqlite
Restart the Compute service:
# service nova-compute restart
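You can verify that the compute service registered itself with the controller by listing the Compute services from the controller node (this assumes the admin credentials are sourced there):
$ nova service-list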
Edit /etc/sysctl.conf to contain the following:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Implement the changes:
# sysctl -p
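A quick sanity check that the new values are active:
# sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter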
To install the Networking components
# apt-get install neutron-common neutron-plugin-ml2 neutron-plugin-openvswitch-agent
To configure the Networking common components
The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.
Configure Networking to use the Identity service for authentication:
Edit the /etc/neutron/neutron.conf file and add the following key to the [DEFAULT] section:
[DEFAULT]
...
auth_strategy = keystone
Add the following keys to the [keystone_authtoken] section:
Replace $NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
[keystone_authtoken]
...
auth_uri = http://$CONTROLLER_PUBLIC_IP:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = $NEUTRON_PASS
Configure Networking to use the message broker:
Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section:
Replace $RABBIT_PASS with the password you chose for the guest account in RabbitMQ.
[DEFAULT]
...
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = $RABBIT_PASS
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section:
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[Note] We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
To configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:
Add the following keys to the [ml2] section:
[ml2]
...
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
Add the following keys to the [ml2_type_gre] section:
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
Add the [ovs] section and the following keys to it:
[ovs]
...
local_ip = $INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
tunnel_type = gre
enable_tunneling = True
Replace $INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your compute node (usually the private IP).
Add the [securitygroup] section and the following keys to it:
[securitygroup]
...
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
To configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS.
Restart the OVS service:
# service openvswitch-switch restart
Add the integration bridge:
# ovs-vsctl add-br br-int
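You can confirm that the bridge exists by listing the OVS configuration, for example:
# ovs-vsctl show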
To configure Compute to use Networking
By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
Edit the /etc/nova/nova.conf file and add the following keys to the [DEFAULT] section:
Replace $NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = $NEUTRON_PASS
neutron_admin_auth_url = http://controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
To finalize the installation
Restart the Compute service:
# service nova-compute restart
Restart the Open vSwitch (OVS) agent:
# service neutron-plugin-openvswitch-agent restart
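To check that the Open vSwitch agent registered with Networking, you can list the agents from the controller node (assuming the admin credentials are sourced there):
$ neutron agent-list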
- To use CephFS as shared storage for the instances directory, install the ceph-fuse package:
# apt-get install ceph-fuse
- Copy the keyring file ceph.client.admin.keyring from the controller node to the /etc/ceph/ directory on the compute node, in order to use cephx for authentication.
- Mount ceph-fs:
# mkdir /ceph-fs
# ceph-fuse -m <mon IP>:6789 /ceph-fs
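You can confirm that the mount succeeded, for example by checking that /ceph-fs is reported as a FUSE filesystem with the expected size:
# df -hT /ceph-fs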
- Stop nova service:
# service nova-compute stop
- Move the instances directory to ceph-fs
# mkdir -p /ceph-fs/nova
# cp -r /var/lib/nova/* /ceph-fs/nova/
# rm -r /var/lib/nova/
# ln -s /ceph-fs/nova/ /var/lib/nova
- Change the owner of the nova dir
# chown -R nova:nova /ceph-fs/nova
- Start nova services
# service nova-compute start
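A quick check that the relocation worked: /var/lib/nova should now be a symlink into CephFS and the service should be running, for example:
# ls -ld /var/lib/nova
# service nova-compute status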
Update the libvirt configuration so that libvirtd accepts unauthenticated TCP connections, which is needed for live migration between compute nodes. Modify /etc/libvirt/libvirtd.conf in the following way:
before : #listen_tls = 0
after : listen_tls = 0
before : #listen_tcp = 1
after : listen_tcp = 1
add: auth_tcp = "none"
Modify /etc/init/libvirt-bin.conf in the following way:
before : env libvirtd_opts="-d"
after : env libvirtd_opts="-d -l"
-l is short for --listen.
Modify /etc/default/libvirt-bin in the following way:
before : libvirtd_opts=" -d"
after : libvirtd_opts=" -d -l"
Restart libvirt. After executing the command, ensure that libvirt is successfully restarted.
# stop libvirt-bin && start libvirt-bin
$ ps -ef | grep libvirt
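To confirm that libvirtd is now listening for TCP connections (the default libvirt TCP port is 16509; this check assumes net-tools is installed), you can run, for example:
# netstat -lntp | grep libvirtd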
Apply the following patch, which fixes a known bug in the nova libvirt driver. Modify the file /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py on the compute nodes:
before: self._caps.host.cpu.parse_str(features)
after: cpu = vconfig.LibvirtConfigCPU()
cpu.parse_str(features)
Edit /etc/nova/nova.conf and add the following flag to the [DEFAULT] section:
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
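Once all compute nodes are configured this way, you can test live migration from the controller node; the instance ID and target host below are placeholders (this assumes admin credentials are sourced):
$ nova live-migration <instance-id> <target-compute-host>
$ nova show <instance-id> | grep hypervisor_hostname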