
Track library performance over time #163

Open

rlouf opened this issue Feb 1, 2022 · 1 comment
Labels
documentation · help wanted · important

Comments

rlouf (Member) commented Feb 1, 2022

Benchmarks should serve three functions. First, prevent performance regressions when changes are made to the library's core. Second, give potential users an idea of what to expect when they apply the library to their own problem. Finally, inform users about the trade-offs of each algorithm.

We should thus select a few problems that are notoriously hard to sample from and set up a few benchmarks. The selection should include problems where one (family of) algorithm(s) clearly outperforms all others to fulfill goal (3), as well as common problems to fulfill goal (2). We should measure raw speed, but also other metrics that assess sampling quality, such as the effective sample size per second (ESS/s).
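As a rough illustration of the ESS/s metric (not code from the repo): a small helper that times a sampler run and divides the effective sample size, computed here with arviz, by wall-clock time. The `run_sampler` callable is a placeholder for whatever benchmark model/algorithm pairing we end up running.

```python
import time

import arviz as az
import numpy as np


def ess_per_second(run_sampler, num_chains=4, num_samples=1000):
    """Time a sampler run and report effective sample size per second.

    `run_sampler` is a placeholder callable returning samples with
    shape (num_chains, num_samples, ...).
    """
    start = time.perf_counter()
    samples = run_sampler(num_chains, num_samples)
    elapsed = time.perf_counter() - start

    # arviz expects arrays shaped (chain, draw, *event_shape)
    ess = az.ess(np.asarray(samples))
    return float(ess.to_array().mean()) / elapsed
```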

The repo already uses pytest-benchmark to run a few benchmarks, but this is of little use since the results are not stored. To keep track of history and display the results we can use this GitHub action. The benchmark branch of this repo has an example of model implementation and data fetching in the /benchmark directory. We will need to add a data structure that specifies which algorithms can be run on this example and with which parameter values.
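That data structure could be as simple as a list of records pairing a model with an algorithm and its parameters. A minimal sketch, where all model/algorithm names and parameter values are hypothetical placeholders rather than an existing API:

```python
from typing import Any, Dict, NamedTuple


class BenchmarkCase(NamedTuple):
    """One (model, algorithm, parameters) pairing to benchmark."""

    model_name: str             # a model defined in /benchmark, e.g. "logistic_regression"
    algorithm: str              # e.g. "nuts", "hmc", "rmh"
    parameters: Dict[str, Any]  # algorithm-specific settings


# Hypothetical registry the benchmark runner would iterate over.
BENCHMARK_CASES = [
    BenchmarkCase("logistic_regression", "nuts", {"step_size": 1e-3}),
    BenchmarkCase("logistic_regression", "rmh", {"sigma": 0.5}),
]
```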

The results should be computed at every merge on main and displayed in the documentation (and not be erased by a new build).

In particular, the documentation build should not erase previous benchmarks, and benchmark publishing should not erase the docs.
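One way to keep history across builds, sketched here under the assumption that pytest-benchmark's JSON output is published alongside the docs (all paths are hypothetical): append each run to a history file instead of overwriting it.

```python
import json
from pathlib import Path

# Hypothetical locations: a history file kept with the published docs,
# and the JSON file produced by the latest benchmark run.
HISTORY = Path("docs/_static/benchmarks/history.json")
NEW_RUN = Path("benchmark_results.json")


def append_run(history_path: Path = HISTORY, new_run_path: Path = NEW_RUN) -> None:
    """Append the latest benchmark results to the existing history."""
    history = json.loads(history_path.read_text()) if history_path.exists() else []
    history.append(json.loads(new_run_path.read_text()))
    history_path.parent.mkdir(parents=True, exist_ok=True)
    history_path.write_text(json.dumps(history, indent=2))


if __name__ == "__main__":
    append_run()
```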

rlouf added the documentation and help wanted labels on Sep 19, 2022
rlouf pinned this issue on Sep 19, 2022
junpenglao (Member) commented
For v1 release, here are some items to close out:
