Running benchmarks #2421
juanpicado started this conversation in Show and tell · Replies: 0 comments
For a few months now I've been running benchmarks to track performance improvements over time and to compare against previous releases.
Where it runs
It runs benchmarks for specific common steps, such as `npm info` and fetching the tarball, for all major versions. The test runs every day: https://github.com/verdaccio/verdaccio/blob/master/.github/workflows/benckmark.yml
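A daily GitHub Actions workflow like the one linked above is typically driven by a cron schedule. A minimal sketch of that mechanism follows; the job name and the benchmark script path are assumptions for illustration, not the contents of the real workflow file:

```yaml
# Hypothetical sketch of a daily benchmark workflow (not the real file)
name: benchmark
on:
  schedule:
    - cron: '0 2 * * *'   # run once a day at 02:00 UTC
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run benchmarks
        run: ./scripts/benchmark.sh   # hypothetical script name
```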
How it works
We run hyperfine and autocannon on every test and version; we collect that information and feed it to an API, which in turn feeds a database.

hyperfine

The script performs several runs using npm, executing info and install.

autocannon

The script hits a specific endpoint instead of running a package manager directly, which gives a more complete outcome.
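hyperfine can export its measurements as JSON via its `--export-json` flag. A minimal Python sketch of how such a result might be collected and shaped into a payload for the API follows; the payload field names and the version string are assumptions, not the project's actual schema:

```python
import json

# Sample of hyperfine's --export-json output shape (one benchmarked command)
raw = """
{
  "results": [
    {
      "command": "npm info verdaccio",
      "mean": 1.234,
      "stddev": 0.056,
      "min": 1.101,
      "max": 1.402
    }
  ]
}
"""

def to_payload(report: dict, version: str) -> list:
    """Shape hyperfine results into records for a (hypothetical) metrics API."""
    return [
        {
            "command": r["command"],
            "mean_s": r["mean"],
            "stddev_s": r["stddev"],
            "version": version,
        }
        for r in report["results"]
    ]

payload = to_payload(json.loads(raw), version="5.0.0")
print(payload[0]["command"])  # -> npm info verdaccio
```

Each record keeps the version alongside the timing so runs can later be compared release-to-release.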
Data
The current data goes to DynamoDB (free tier 😊, we have a tight budget), where it is stored until it can be processed in the next steps.
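Items written to DynamoDB through its low-level API use an attribute-value encoding (strings tagged `"S"`, numbers sent as strings tagged `"N"`). A small sketch of turning one benchmark record into that shape; the attribute names here are made up for illustration:

```python
def to_dynamo_item(record: dict) -> dict:
    """Encode a benchmark record in DynamoDB's low-level attribute-value
    format: strings are tagged "S", numbers are sent as strings tagged "N"."""
    return {
        "run_id": {"S": record["run_id"]},
        "command": {"S": record["command"]},
        "mean_s": {"N": str(record["mean_s"])},
        "version": {"S": record["version"]},
    }

item = to_dynamo_item({
    "run_id": "2021-09-01T02:00:00Z",
    "command": "npm info verdaccio",
    "mean_s": 1.234,
    "version": "5.0.0",
})
print(item["mean_s"])  # -> {'N': '1.234'}
```

With boto3, a dict like this could be passed to `put_item(TableName=..., Item=item)`.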
Next steps
What's missing so far is processing that data and creating a nice dashboard that represents the changes over time.
If you want to contribute to benchmarking Verdaccio, or have any idea that could improve this process, feel free to drop your thoughts here.
The raw data for each run can be found here.