diff --git a/experiments/fig_13_nginx-perf/README.md b/experiments/fig_13_nginx-perf/README.md
index 8a8bce33..18e403eb 100644
--- a/experiments/fig_13_nginx-perf/README.md
+++ b/experiments/fig_13_nginx-perf/README.md
@@ -1,18 +1,50 @@
-# NGINX Throughput baseline
+# NGINX throughput comparison
-This experiment provides data for Fig. 13. We evaluate the performance
-of NGINX with wrk (1 minute, 14 threads, 30 conns, static 612B HTML
-page).
+
-## Usage
+We measure the throughput of [NGINX](https://nginx.org/) on a wide range of
+systems, including:
+
+ * [HermiTux](https://ssrg-vt.github.io/hermitux/) on [uHyve](https://github.com/hermitcore/uhyve);
+ * [Lupine](https://github.com/hckuo/Lupine-Linux) on [Firecracker](https://firecracker-microvm.github.io/);
+ * Lupine on KVM;
+ * Linux on Firecracker;
+ * Linux on KVM;
+ * Linux as a userspace binary;
+ * [OSv](https://github.com/cloudius-systems/osv) on KVM;
+ * [Rumprun](https://github.com/rumpkernel/rumprun) on KVM;
+ * Docker; and,
+ * Unikraft on KVM.
-Run instructions:
+We also compare [MirageOS](https://mirage.io) on Solo5. MirageOS does not
+support running NGINX, as it is a domain-specific-language unikernel library
+operating system; instead, we use its [template TCP HTTP server](https://github.com/mirage/mirage-skeleton/tree/master/applications/static_website_tls),
+which serves static content over HTTP, and measure it with the same tools and
+payload.
-```
-cd experiments/15_nginx-perf
-./genimages.sh
-./benchmark.sh
-```
+We evaluate the performance with [`wrk`](https://github.com/wg/wrk) for 1 minute
+using 14 threads, 30 connections, and a static 612B HTML page.
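+
+This corresponds to a `wrk` invocation along the following lines (the target
+URL is a placeholder for the measured instance):
+
+```
+wrk -t14 -c30 -d1m http://<instance-ip>/index.html
+```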
+
+## Usage
-- `./genimages.sh` takes about 4 minutes in average.
-- `./benchmark.sh` takes about 40-45 minutes in average.
+ * `./genimages.sh` downloads and builds the tested images and takes about
+   4 minutes on average;
+ * `./benchmark.sh` runs the experiment and takes about 40-45 minutes on
+   average; and,
+ * `./plot.py` generates the figure (see the example run after this list).
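+
+A full run then looks like (from the repository root):
+
+```
+cd experiments/fig_13_nginx-perf
+./genimages.sh
+./benchmark.sh
+./plot.py
+```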
diff --git a/experiments/fig_14_unikraft-nginx-alloc-boot/README.md b/experiments/fig_14_unikraft-nginx-alloc-boot/README.md
index a6a0f23d..b48e1011 100644
--- a/experiments/fig_14_unikraft-nginx-alloc-boot/README.md
+++ b/experiments/fig_14_unikraft-nginx-alloc-boot/README.md
@@ -1,6 +1,6 @@
# Unikraft NGINX boot time with varying allocators
-This experiment provides data for Fig. 14.
+
We measure the guest boot time (not including VMM overhead) and
provide a per-component breakdown to highlight the impact of memory
diff --git a/experiments/fig_19_compare-dpdk/server/unikraft/uk_test_suite/run_vhost_net.sh b/experiments/fig_19_compare-dpdk/server/unikraft/uk_test_suite/run_vhost_net.sh
index f2c19b78..7833c464 100755
--- a/experiments/fig_19_compare-dpdk/server/unikraft/uk_test_suite/run_vhost_net.sh
+++ b/experiments/fig_19_compare-dpdk/server/unikraft/uk_test_suite/run_vhost_net.sh
@@ -35,6 +35,8 @@ qemu-system-x86_64 \
-device virtio-net-pci,netdev=testtap0,addr=0x4,ioeventfd=on,guest_csum=off,gso=off \
-kernel build/uk_test_suite_kvm-x86_64 \
| tee results.txt
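+# save the pipeline's exit status before the teardown commands below overwrite $?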
+RET=$?
# destroy network setup
ip link set dev $BRNAME down
@@ -42,4 +44,4 @@ ip link set dev $TAPNAME down
ip tuntap del dev $TAPNAME mode tap
ip link del $BRNAME
-exit $?
+exit $RET