I've created an "exact" replica of the lab environment that we're supposed to work with in Emulab, using Docker/Compose.
Note
Almost everything from the lab can be tested, except for the experiments that involve changing the TCP congestion-control mode (Tahoe, Reno, CUBIC, ...) in the kernel. This is a limitation of Docker containers: they share the kernel with their Docker host and must therefore use the same mode. So you sadly can't test the last experiments from the lab, which look at the effect of different TCP modes.
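You can check which congestion-control algorithm is currently in effect (and which ones the host kernel offers) from inside any container, for example:

```sh
# Congestion-control algorithm currently in use (shared with the host kernel)
sysctl net.ipv4.tcp_congestion_control

# Algorithms the host kernel has available
cat /proc/sys/net/ipv4/tcp_available_congestion_control
```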
The topology is a dumbbell topology like the following one:
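A rough ASCII sketch of the dumbbell, assuming the usual layout (senders behind `router1`, receivers behind `router2`, with the `router1`-`router2` link as the bottleneck):

```
sender1 ---+                          +--- receiver1
           |                          |
        router1 --- (bottleneck) --- router2
           |                          |
sender2 ---+                          +--- receiver2

   student: connected directly to every node (control/scripts)
```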
`sender1` and `sender2` can't communicate with `receiver1` and `receiver2` directly, and have to do so through the two routers `router1` and `router2`.
I've also added an extra `student` computer that's connected directly to all the nodes (computers/routers) and contains all the scripts for the first session for you to test out.
- Install Docker and Docker Compose (find the instructions on the internet).
- Inside this folder, build the image:
  ```
  docker build -t rn-lab-sess-1 .
  ```
- Inside this folder, bring up the environment (this will set up all the clients/networks):
  ```
  docker-compose -f docker_compose.yaml up -d
  ```
- When you're done running the scripts and testing stuff, you can stop the environment and clean everything up using:
  ```
  docker-compose -f docker_compose.yaml down
  ```
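If you want to check that everything came up, listing the services is enough (standard Docker Compose commands, nothing specific to this lab):

```sh
docker-compose -f docker_compose.yaml ps
```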
The username and password for all the machines are "root" and "password".
- SSH to the `student` machine using this command (same password):
  ```
  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null [email protected] -p 2222
  ```
  Now you are on the `student` machine (inside the Docker virtual environment); change into the scripts folder:
  ```
  cd scripts
  ```
- Now you can run any of the scripts in this folder and the output will be written to the folder `output` ON YOUR HOST, which means you can access the files directly from your computer without having to copy them back and forth between Docker and the host.
- Now you can also SSH to any machine (from the `student` VM); the password is always "password":
  ```
  ssh root@sender1 # router1, router2, receiver1...
  ```
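For one-off commands you don't need an interactive shell; running a command over SSH from the `student` machine works just as well (the node name here is only an example):

```sh
ssh root@router1 ifconfig
```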
`./get_nodes_infos.sh` will:
- run `ifconfig` on all the nodes in the network
- from every node to each other node (NxN), run `ping` to get the RTT
- from every node to each other node (NxN), run `traceroute` to get the hops along the way
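A minimal sketch of what such an NxN sweep could look like if you wanted to script it yourself from the `student` machine (the node list, file names, and SSH setup are illustrative, not the script's actual contents):

```sh
#!/bin/bash
# Illustrative only: loop over every ordered pair of nodes and collect RTTs and hops.
nodes="sender1 sender2 router1 router2 receiver1 receiver2"

for src in $nodes; do
  ssh root@"$src" ifconfig > "output/${src}_ifconfig.txt"
  for dst in $nodes; do
    [ "$src" = "$dst" ] && continue
    ssh root@"$src" "ping -c 4 $dst"  > "output/${src}_to_${dst}_ping.txt"
    ssh root@"$src" "traceroute $dst" > "output/${src}_to_${dst}_traceroute.txt"
  done
done
```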
`./udp_stress_test.sh` will:
- run a UDP stress test using iperf between `sender1` -> `receiver1` and `sender2` -> `receiver2` at the "same" time (minuscule difference in time)
- retrieve the logs from `receiver1` and `receiver2` and save them under `output` as CSV files with headers
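If you want to reproduce a single UDP flow by hand, the underlying iperf invocation looks roughly like this (a sketch; the script's actual ports, rates, and durations may differ, and it presumably adds the CSV headers itself):

```sh
# On receiver1: UDP server writing CSV-style reports ("-y C")
iperf -s -u -y C > /tmp/udp_receiver1.csv &

# On sender1: a 10 s UDP stream towards receiver1 at 10 Mbit/s
iperf -c receiver1 -u -b 10M -t 10
```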
`./tcp_stress_test.sh` will:
- run a TCP stress test using iperf between `sender1` -> `receiver1` with 1, 2, and 4 parallel streams
- retrieve the logs for each test from `receiver1` and save them under `output` as CSV files with headers
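The parallel-stream part maps onto iperf's `-P` flag; run by hand it would look something like this (illustrative, not the script's exact command):

```sh
# On sender1: TCP stress test towards receiver1 with 4 parallel streams
iperf -c receiver1 -P 4 -t 10
```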
`./plot_stress_test.py` will:
- plot the stress-test results from the CSV logs saved under `output`
`./run_disable_tso_on_targets.sh` will:
- copy `./disable_segmentation_offload.sh` to all the nodes and run it, to disable TSO on all nodes at the same time
- delete the file after it has run
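Disabling TSO itself is usually a one-liner with ethtool; roughly what `disable_segmentation_offload.sh` could be doing on each node (a sketch, where the interface name `eth0` and the extra `gso` flag are assumptions):

```sh
# Turn off TCP segmentation offload (and generic segmentation offload) on eth0
ethtool -K eth0 tso off gso off
```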
`./tcp_stress_test_tso_off.sh` will:
- start an `iperf` server on `receiver1`
- start `tcpdump` (with filters) on `router1` (see the sketch below)
- start the TCP stress test from `sender1`
- retrieve the dump file from `router1`; you can access the PCAP file, like I said, directly from your computer. The file can be huge (5 GB).
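For reference, a filtered capture on the router could be started roughly like this (illustrative; the script's actual interface, filter, and file name may differ):

```sh
# On router1: capture only TCP traffic involving sender1 and write it to a pcap file
# (the interface eth1 and the filter are assumptions)
tcpdump -i eth1 -w /tmp/tso_off.pcap 'tcp and host sender1'
```

If the file size is a problem, tcpdump's `-s` option limits how many bytes of each packet are kept.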
`./set_link_capacity.sh` will:
- limit the capacity to 20 Mbit with a 10 ms delay (on the specified interface)
- set the buffer size to 10 packets (on the specified interface)
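Shaping like this is typically done with `tc`; one way the same constraints could be expressed with netem (a sketch assuming netem is used, with `eth1` as a placeholder interface; the real script might combine other qdiscs instead):

```sh
# 20 Mbit/s rate, 10 ms added delay, queue limited to 10 packets on eth1
tc qdisc replace dev eth1 root netem delay 10ms rate 20mbit limit 10
```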
`./test_buffer_size.sh` will:
- disable TSO on all nodes
- set the link capacity on `router1` and `router2` to 10
- run an iperf TCP stress test between `sender1` and `receiver1`
- copy the results back and save them in `output`
- set the link capacity on `router1` and `router2` to 100
- run an iperf TCP stress test between `sender1` and `receiver1`
- copy the results back and save them in `output`
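The two passes could be scripted as a simple loop; a sketch assuming `set_link_capacity.sh` is run on each router with the value as an argument (the argument convention, interface, and paths are assumptions):

```sh
# Illustrative only: sweep the two settings and run the TCP test after each one
for value in 10 100; do
  ssh root@router1 "/root/set_link_capacity.sh eth1 $value"   # interface/path assumed
  ssh root@router2 "/root/set_link_capacity.sh eth1 $value"
  # ... run the iperf TCP test between sender1 and receiver1 and copy the logs to output/ ...
done
```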
`./tcp_stress_test_tcp_and_udp.sh` will:
- send TCP traffic using iperf from `sender1` -> `receiver1` for 120 seconds
- after 30 seconds, send 10 Mbps UDP traffic from `sender2` -> `receiver2`
- after another 30 seconds, send 30 Mbps UDP traffic from `sender2` -> `receiver2`
- after another 30 seconds, send 50 Mbps UDP traffic from `sender2` -> `receiver2`
- retrieve the TCP throughput from `receiver1` and `receiver2`
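The staged UDP load can be reproduced with iperf's bandwidth flag plus a short delay; a rough sketch run from the `student` machine (timings and flags are illustrative):

```sh
# Long-running TCP flow from sender1 towards receiver1 (120 s), in the background
ssh root@sender1 "iperf -c receiver1 -t 120" &

# Staged UDP load from sender2 towards receiver2, stepping up every 30 s
sleep 30
for rate in 10M 30M 50M; do
  ssh root@sender2 "iperf -c receiver2 -u -b $rate -t 30"
done
wait
```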