
Refactor benchmarking scripts and update generated plots #274

Open
5 tasks
denisalevi opened this issue Mar 1, 2022 · 0 comments
Make the benchmark scripts easier to use and update the automatically generated plots, e.g. add the plots from the paper.

  • Get rid of benchmark-related things in the Brian2 patch. Either add them to Brian2 or move them all into the Brian2CUDA repository.
  • Refactor run_benchmarks.py so that the config / benchmark / size configuration is separated (clean and tidy) from the actual run script. Maybe just a config file for that, plus a parameter to the bash script that selects the config file?
  • Add the plotting code from the paper to the Brian2CUDA repository and adapt the automatically generated plots. Should the plots also be chosen in the config file, or should everything always be plotted?
  • Add an option to compare e.g. only state updaters across configurations with a simple bar plot? Or are the profiled bar plots from the paper already good enough?
  • Change DynamicConfigCreator to generate the config name automatically, to avoid different names for the same config in the future...
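The config-file idea from the second task could look something like the following sketch. The file layout and all names here (`configs`, `benchmarks`, `sizes`, `load_benchmark_config`) are hypothetical, not the current run_benchmarks.py interface:

```python
import argparse
import json


def load_benchmark_config(path):
    """Load a benchmark configuration file (hypothetical JSON format).

    Expected top-level keys:
      "configs"    - list of device / preference configurations to benchmark
      "benchmarks" - list of benchmark names to run
      "sizes"      - list of network sizes to run each benchmark at
    """
    with open(path) as f:
        cfg = json.load(f)
    # Fail early with a clear message if the file is incomplete
    for key in ("configs", "benchmarks", "sizes"):
        if key not in cfg:
            raise KeyError(f"config file is missing required key: {key!r}")
    return cfg


def main(argv=None):
    # The bash wrapper would just forward the chosen config file path here
    parser = argparse.ArgumentParser(
        description="Run benchmarks defined in a config file"
    )
    parser.add_argument("config", help="path to JSON benchmark config")
    args = parser.parse_args(argv)
    cfg = load_benchmark_config(args.config)
    print(
        f"Running {len(cfg['benchmarks'])} benchmarks at sizes "
        f"{cfg['sizes']} for {len(cfg['configs'])} configurations"
    )
```

From the bash script, choosing a configuration would then reduce to passing a path, e.g. `python run_benchmarks.py paper_plots.json`.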
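For the last task, one way to make DynamicConfigCreator name configs deterministically is to hash a normalized serialization of the configuration itself, so identical configs always map to the same name. `make_config_name` and its arguments are a hypothetical sketch, not the existing API:

```python
import hashlib
import json


def make_config_name(prefix, config):
    """Derive a deterministic config name from the configuration contents.

    Serializing with sort_keys=True normalizes dict key order, so two
    dicts with the same contents always produce the same short name,
    regardless of how they were constructed.
    """
    blob = json.dumps(config, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha1(blob.encode("utf-8")).hexdigest()[:8]
    return f"{prefix}-{digest}"
```

With this, two DynamicConfigCreator instances built from the same preference dict would get the same name, while any change to a preference value changes the hash and therefore the name.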