To run the `pl` command on a Linux machine, execute the following command:

pip install git+https://github.com/PL-PROJECT-20/pl-topo.git

Don't forget to upgrade pip, and install Git beforehand.
To run the tree topology, first create a `pl-tree.yml` file that names the containers and sets their configurations. By default, there are three kinds of container images:
- sw-1, sw-2: the OVS switches. sw-1 is the core switch of the tree topology and has a direct connection to the GATEWAY.
- ctr: the SDN controller, whose image includes "ryu".
- usr-1, usr-2: the hosts, which run a basic Ubuntu image with the nping traffic generator.

You can also generate this "pl-tree.yml" file from the setup.py file.
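The exact schema of `pl-tree.yml` is defined by the tool (generate the real file from setup.py); the fragment below only illustrates the kind of information such a file holds, and every field name in it is an assumption, not the real schema:

```yaml
# Hypothetical illustration only -- these field names are guesses,
# not pl-topo's real schema. Generate the real file from setup.py.
containers:
  ctr:          # SDN controller (image including ryu)
    ...
  sw-1:         # core OVS switch, directly connected to the GATEWAY
    ...
  usr-1:        # host: basic Ubuntu image with nping
    ...
```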
Command to run pl-tree.yml:

$> cd .../All_Topology
$> pl --create PL-tree.yml # no need to add sudo here
$> pl --help
usage: pl [-h] [-d] [--create | --destroy] [-s] [-a] topology

Tool to create docker topologies

positional arguments:
  topology       Topology file

optional arguments:
  -h, --help     show this help message and exit
  -d, --debug    Enable Debug

Actions:
  Create or destroy topology

  --create       Create topology
  --destroy      Destroy topology

Save:
  Save or archive the topology

  -s, --save     Save topology configs
  -a, --archive  Archive topology file and configs
A successful result will look like the following:
To edit the topology, you can connect a new Docker container to the existing topology with the following command, which uses network-namespace configuration with veth links:

$> sh pl_c2c.sh <container_name1> <container_name2> <veth_name_at_container1> <veth_name_at_container2>

The command takes 4 parameters:
- <container_name1>: name of the existing container in the tree topology.
- <container_name2>: name of the new container to be connected.
- <veth_name_at_container1>: name of the new interface in the existing container.
- <veth_name_at_container2>: name of the new interface in the new container.
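As a rough illustration of the namespace/veth plumbing such a script typically performs, the sketch below only builds the command strings (a pure function, so nothing here actually runs `ip` or `docker`). The flags, names, and PID-based netns trick are general Linux practice, not taken from pl_c2c.sh itself:

```python
# Hypothetical sketch: the ip(8)/docker commands a container-to-container
# veth script typically runs. Names are illustrative, not from the repo.
def veth_link_commands(container1, container2, veth1, veth2):
    """Return the shell commands that would link two containers with a
    veth pair (pure function, so the plan can be inspected before running)."""
    return [
        # create the veth pair in the host namespace
        f"sudo ip link add {veth1} type veth peer name {veth2}",
        # find each container's init PID to reach its network namespace
        f"pid1=$(sudo docker inspect -f '{{{{.State.Pid}}}}' {container1})",
        f"pid2=$(sudo docker inspect -f '{{{{.State.Pid}}}}' {container2})",
        # move each end into its container's namespace
        f"sudo ip link set {veth1} netns $pid1",
        f"sudo ip link set {veth2} netns $pid2",
    ]

# Example with hypothetical names: attach a new usr-3 to the existing sw-2.
for cmd in veth_link_commands("sw-2", "usr-3", "veth-sw2", "veth-usr3"):
    print(cmd)
```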
The ovs_br_script.py script adds the eth interfaces to the OVS bridge (foo) and reconfigures the IPs, netmasks, and gateways of the containers, including sw, ctr, and usr. It must be run with sudo, and it does not work with Python 2.7; Python 3 is required.

$> sudo python3 ovs_br_script.py -file .../All_Topology/PL-tree.yml
A successful result will look like the following:
To verify the OVS configuration, you can run the following command in "sw-X" containers.
$> ovs-vsctl show
(In progress)
- With the Ubuntu image, 2.1.0 runs OK, but use 2.1.4 for the latest updates; this is based on the new_sw13 folder.
- With the new_sw13 image; use 2.4.2 for the latest updates. We also changed supervisord.conf; this is based on the new_sw13 folder.
- With the pritom_liz:2.4.5 image, the PATH of the OVS scripts was changed, and every OVS service runs fine. Use the pritom_liz:2.5.1 image for the latest update.
- This is the base file with the Ubuntu image; in case anything happens, use this.
- With CentOS, partially working: OVS still needs to be installed and the image created.
- This is the script for connecting two Docker containers on the same Docker network using veth. (We are not using it currently, but in the future it will help us connect different containers.)
- This is the script behind the "pl" command; it creates the Docker topology. The current version supports only a binary tree topology.
- An example topology that is supported by our "pl" command:
  - usr: the host machines.
  - sw-1: the core switch.
  - ctr: the controller.
- How to run:
- sudo python3 ovs_br_script.py -file "All_Topology/xxx.yml"
- This file should be executed after creating the topology. It adds all the containers to the controller and deletes eth0 so that the containers follow the assigned topology.
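As a minimal sketch of those two actions (registering each switch with the controller, then deleting eth0), assuming the bridge name "foo" mentioned earlier and the classic OpenFlow port 6633 — neither detail is taken from the actual script:

```python
# Hypothetical sketch of the per-container steps described above.
# Bridge "foo" is from the text; port 6633 and the docker-exec wrapping
# are assumptions about the real script, not its actual code.
def controller_attach_commands(switches, ctr_ip):
    """Return the shell commands that point each OVS switch at the SDN
    controller and drop the default docker eth0 link."""
    cmds = []
    for sw in switches:
        # register the controller on the switch's bridge
        cmds.append(f"sudo docker exec {sw} ovs-vsctl set-controller foo tcp:{ctr_ip}:6633")
        # remove the default docker interface so only topology links remain
        cmds.append(f"sudo docker exec {sw} ip link delete eth0")
    return cmds

for cmd in controller_attach_commands(["sw-1", "sw-2"], "172.17.0.2"):
    print(cmd)
```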
- How to run:
- sudo python3 clean_up.py -file "All_Topology/xxx.yml"
- This script can be used anytime once the topology has been created. It cleans the ARP, Flow and Group tables.
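A minimal sketch of the cleanup that description implies, assuming the bridge is the "foo" mentioned earlier; the exact commands clean_up.py runs may differ:

```python
# Hypothetical sketch: per-switch commands to wipe ARP, flow, and group
# state. Bridge "foo" is from the text; everything else is an assumption.
def cleanup_commands(switch_containers, bridge="foo"):
    """Return the shell commands that flush each switch's neighbour (ARP)
    cache and clear its OpenFlow flow and group tables."""
    cmds = []
    for sw in switch_containers:
        cmds.append(f"sudo docker exec {sw} ip -s -s neigh flush all")                    # ARP table
        cmds.append(f"sudo docker exec {sw} ovs-ofctl del-flows {bridge}")                # flow table
        cmds.append(f"sudo docker exec {sw} ovs-ofctl -O OpenFlow13 del-groups {bridge}") # group table
    return cmds

for cmd in cleanup_commands(["sw-1", "sw-2"]):
    print(cmd)
```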
- ryu-manager --observe-links controller.py
sudo docker build -t="pritom_liz:2.1.3" .
sudo docker ps -a
sudo docker run -itd --name=pritom_liz pritom_liz:2.1.3 /usr/bin/supervisord
sudo docker exec -it pritom_liz /bin/bash
sudo docker commit 99583457a8b2 pritom_liz:2.1.4
-pritom_liz:2.2.0 has the ryu controller installed
-ovs-sw:latest this image is with (new_sw13+ryu) just a checkpoint
-pritom_liz:2.2.1 has the ryu controller installed + nping traffic generator
-pritom_liz_centos:1.0.2 CentOS image + ovs (with supervisord)
-usr:1.1.1 user ubuntu image: ifconfig + nping
-usr:1.1.2 user ubuntu image: ifconfig + ping + iperf
-usr:1.1.3 user ubuntu image: ifconfig + ping + iperf + arp table command
-ryu_ovs:2.1.0 has ryu and ovs installed and is a bit optimized, but the ovsdb is not running yet
-covs:1.0.0 make it the default image; it has ovs and the ryu controller and is optimized.
-pritom_liz:2.5.1 has the ryu controller installed; ovs services are running
-tovs:1.1.2 has ovs and supervisord. It came from 1.1.1 and then 1.1.0. Can be used in topology.
-tovs_ryu:1.1.0 has ovs and ryu. No supervisord installed
-tovs_ryu:1.1.2 has been executed. Can be used as ctr in the topology. No supervisord here. Came from tovs_ryu:1.1.1
-tovs_ryu:1.1.3 is the latest, came from tovs_ryu:1.1.2. It adds a qos_simple_switch_13.py in the usr/local/lib/python2.7/dist-packages/ryu/app folder.