This benchmark measures the latency of a web service that stores a large (250K-item) hash table, implemented in multiple languages.
Garbage-collected languages are of primary interest; the Swift version is mostly used as a reference for what the ideal would look like.
- Each HTTP request will add one new item (and remove one if count > 250K)
- Each item will be a new 1KB buffer with every byte initialized to a value.
- Programs will listen on port 8080 and respond with status 200, body "OK"
- All program directories are listed in programs.txt
- Every directory should contain `build.sh` and `run.sh`
- wrk2 must be installed and linked in $PATH as `wrk2`
- Warmup: 9K req/s are sent for 60s (initial hash table content)
- Test: 9K req/s are sent by 99 clients concurrently for 180s (an example invocation is given after this list)
- All reports are in the `reports` directory
- Charts from both tests can be generated by using hdrhistogram
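For reference, the test phase corresponds to a wrk2 invocation along the lines of `wrk2 -c99 -d180s -R9000 http://localhost:8080/`, assuming wrk2's usual `-c` (connections), `-d` (duration) and `-R` (rate) flags; the exact command used by the scripts in this repo may differ.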
- go - Go version is go1.6.2 darwin/amd64; uses fasthttp. Using the built-in http library results in a lot more work for the GC, and a chart similar to the one seen for Haskell. (A hedged sketch of the handler logic follows this list.)
- ocaml-reason - This is built with OCaml 4.03+flambda (ReasonML is currently not compatible with OCaml 4.03, so the ML version was generated from it using `refmt` first). Follow the install instructions here: https://ocaml.org/docs/install.html
- node - Requires node 6.x (for the `Buffer.alloc` API)
- haskell - Should build easily if you have stack
- swift-zewo - You should set up swiftenv to the right snapshot
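The core per-request logic described above (allocate a fresh, fully initialized 1KB buffer, insert it into the table, evict the oldest entry once the table exceeds 250K items, reply "OK") might look roughly like this in Go with fasthttp. This is a minimal sketch, not the code from the go directory; the `table`, `mu`, and `nextKey` names are made up here, and a real handler might avoid the single global mutex.

```go
package main

import (
	"log"
	"sync"

	"github.com/valyala/fasthttp"
)

const (
	maxItems = 250000 // target number of live items in the table
	itemSize = 1024   // 1KB per item
)

var (
	mu      sync.Mutex
	table   = make(map[int][]byte)
	nextKey int
)

// handler adds one freshly allocated, fully initialized buffer per request,
// evicts the oldest entry once the table holds more than maxItems items,
// and replies with status 200 and body "OK".
func handler(ctx *fasthttp.RequestCtx) {
	buf := make([]byte, itemSize)
	for i := range buf {
		buf[i] = 42 // touch every byte
	}

	mu.Lock()
	table[nextKey] = buf
	nextKey++
	if len(table) > maxItems {
		delete(table, nextKey-maxItems-1) // oldest remaining key
	}
	mu.Unlock()

	ctx.SetStatusCode(fasthttp.StatusOK)
	ctx.SetBodyString("OK")
}

func main() {
	log.Fatal(fasthttp.ListenAndServe(":8080", handler))
}
```

The same logic with the built-in net/http would differ only in the handler signature; as noted above, that version puts noticeably more pressure on the GC.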
Want to build low-latency, memory-intensive services in a GCed language? Use OCaml (or Reason if you prefer the syntax).
Other things that were attempted to make the latency worse for OCaml:
- random buffer sizes
- longer running time
- simulate extra GC work on a percentage of requests (i.e. creating T temporary objects of size S)
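As an illustration of the last point, extra GC work on a fraction of requests could be simulated with something like the following Go sketch; the names (`simulateGCWork`, `pct`, `t`, `s`) are made up here and this is not the code that was actually used.

```go
package main

import (
	"fmt"
	"math/rand"
)

// simulateGCWork allocates t short-lived buffers of s bytes each on roughly
// pct percent of calls, giving the collector extra garbage to deal with on
// a fraction of requests.
func simulateGCWork(pct, t, s int) {
	if rand.Intn(100) >= pct {
		return
	}
	for i := 0; i < t; i++ {
		tmp := make([]byte, s)
		tmp[0] = 1 // touch the buffer so the allocation is not optimized away
	}
}

func main() {
	for i := 0; i < 1000; i++ {
		simulateGCWork(10, 1000, 4096) // 10% of calls allocate 1000 temporary 4KB objects
	}
	fmt.Println("done")
}
```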