Fix support for running tlc benchmarks #8
@shonfeder Is TLC support still of interest? Also, how reliable are the results obtained by running the benchmarks on shared GitHub Actions instances?
Hi!
Afaik, we have no pressing need for this, but we want to track it for the long term. IMO, I'd not integrate it into the current framework, but would instead fold this functionality into the longer-term plan to rewrite the benchmarking framework.
Probably not very. I don't think we'd use these results for a scientific publication, but they are useful as a sanity check in case we introduce a change that causes a huge regression in performance.
FWIW: The now-defunct performance/scalability test infrastructure for TLC used sponsored, bare-metal instances from https://metal.equinix.com
Thanks!
FYI: TLC now has this with https://github.com/tlaplus/tlaplus/actions/workflows/perf.yml, which runs a number of benchmarks upon request on a bare-metal instance hooked into GitHub Actions via a self-hosted runner.
At some point we lost support for running TLC. We should restore it by fixing the TODO in the `tool_cmd` function in https://github.com/informalsystems/apalache-tests/tree/master/scripts/mk-run.py
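For reference, here is a minimal sketch of what fixing that TODO might look like, assuming `tool_cmd` maps a tool name plus a spec and config file to the shell command for a single benchmark run. The signature, parameter names, and the Apalache invocation shown below are assumptions for illustration, not taken from `mk-run.py`; only the TLC command line (`java -cp tla2tools.jar tlc2.TLC ...`) is the standard way to invoke TLC.

```python
# Hypothetical sketch only: the real tool_cmd in scripts/mk-run.py may take
# different arguments; the names (tool, spec, cfg) are illustrative.
import shlex


def tool_cmd(tool: str, spec: str, cfg: str) -> str:
    """Build the shell command that runs one benchmark case with the given tool."""
    if tool == "apalache":
        # Assumed form of the existing Apalache invocation.
        return f"apalache-mc check --config={shlex.quote(cfg)} {shlex.quote(spec)}"
    elif tool == "tlc":
        # Restored TLC support: invoke the TLC model checker from tla2tools.jar.
        return (
            "java -XX:+UseParallelGC -cp tla2tools.jar tlc2.TLC "
            f"-config {shlex.quote(cfg)} {shlex.quote(spec)}"
        )
    else:
        raise ValueError(f"unsupported tool: {tool}")
```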