
# Reproducibility

## Artifact identification

This document describes how to reproduce the results of the paper:

A. Detti, L. Funari, L. Petrucci, "µBench: an open-source factory of benchmark microservice applications", IEEE Transactions on Parallel and Distributed Systems

A. Detti, L. Funari, and L. Petrucci are with the Department of Electronic Engineering, University of Rome Tor Vergata.

The paper presents µBench and a set of performance evaluations aimed at comparing the advantages and disadvantages of microservice architectures versus monolithic ones, and at analyzing the performance impact of key architectural choices, such as the service graph/mesh topology and the use of replication. For this analysis, µBench was used to generate several microservice applications with different properties; two of them are derived from a real cloud dataset.

## Artifact Dependencies and Requirements

The measurements in the paper were made using a Kubernetes cluster with 10 worker nodes. The cluster uses Azure virtual machines running in the West Europe region, each with 4 CPUs at 2.3 GHz (without Hyper-Threading), 16 GiB of RAM, and a Gigabit Ethernet interface. All the VMs run Ubuntu 18.04.4 LTS on the 64-bit x86 instruction set architecture (ISA). The throughput of internal communications is 1 Gbit/s, and the VM-to-VM RTT is less than 2 ms. The request stream is generated by a separate Azure VM with 8 CPUs.

## Artifact Installation and Deployment Process

The µBench software has been deployed on the Kubernetes master node of the cluster, as described in the manual. The workmodel files of the applications considered are in:

  • the folder `examples` for Fig. 6 and Graphs C and D of Fig. 7.
  • the zip file `examples/Alibaba/traces-mbench.zip` for Topologies A and B of Figs. 7, 8, and 9. Once unzipped, the Graph A and B files are in `traces-mbench/seq/app18` and `traces-mbench/seq/app22`, respectively. These folders also contain the traces used to load the applications and the `workmodel.json` file we used to deploy them.
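Before deploying, it can be useful to inspect a `workmodel.json` to see which services it defines and which downstream services each one calls. The sketch below assumes a simplified excerpt of the schema (service names mapped to `external_services` groups); the actual schema and field names are documented in the µBench manual, so treat the structure here as illustrative.

```python
import json

# Hypothetical, simplified workmodel excerpt: three services, where s0
# calls s1 and s2, and s2 calls s1. Field names follow the µBench
# "external_services" convention but are assumptions for illustration.
workmodel_json = """
{
  "s0": {"external_services": [{"seq_len": 1, "services": ["s1", "s2"]}]},
  "s1": {"external_services": []},
  "s2": {"external_services": [{"seq_len": 1, "services": ["s1"]}]}
}
"""

def downstream_calls(workmodel):
    """Map each service name to the sorted list of services it calls."""
    calls = {}
    for svc, desc in workmodel.items():
        targets = set()
        for group in desc.get("external_services", []):
            targets.update(group.get("services", []))
        calls[svc] = sorted(targets)
    return calls

workmodel = json.loads(workmodel_json)
print(downstream_calls(workmodel))
# {'s0': ['s1', 's2'], 's1': [], 's2': ['s1']}
```

Pointing `json.loads` at the unzipped `traces-mbench/seq/app18/workmodel.json` (or `app22`) instead of the inline string gives a quick summary of the service graph before it is handed to the deployer.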

## Reproducibility of Experiments

For all tests, we used Apache JMeter to generate the load on the applications. The `.jmx` files used by JMeter are in the `examples/jmeter` folder.
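A JMeter test plan is typically replayed from the command line in non-GUI mode. The command fragment below is a sketch; `load-test.jmx` is a placeholder name, and the actual plan files to use are the ones shipped in `examples/jmeter`.

```shell
# Run a test plan in non-GUI mode: -n (non-GUI), -t (test plan),
# -l (results log). "load-test.jmx" is a placeholder; substitute
# one of the .jmx files from examples/jmeter.
jmeter -n -t examples/jmeter/load-test.jmx -l results.jtl

# Optionally generate an HTML dashboard report from the results log.
jmeter -g results.jtl -o report/
```

Running in non-GUI mode avoids the overhead of the JMeter UI on the load-generator VM, which matters when the generator itself must not become the bottleneck.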