Add automatic benchmarking to FLORIS #992
base: develop
Conversation
This is great!! On my end, I would love to see performance on, let's say, AEP calculation for a 100-turbine farm. Would also be nice to see this for different wake model set-ups, in case modifications are made to specific submodels. Benchmarking parallel floris would be a nice plus.
Sounds good @Bartdoekemeijer! Added a short todo-list above to track the intention.
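For concreteness, a rough sketch of what such an AEP benchmark could look like with pytest-benchmark. The input file path, layout, and FlorisModel calls are illustrative assumptions, not the test actually added in this PR:

```python
import numpy as np
from floris import FlorisModel


def test_aep_100_turbine_benchmark(benchmark):
    # Hypothetical setup: the input file and API calls are assumptions and may
    # need adjusting to the current FLORIS interface.
    fmodel = FlorisModel("inputs/gch.yaml")

    # 10 x 10 grid of turbines with 1 km spacing (100 turbines)
    x, y = np.meshgrid(np.arange(10) * 1000.0, np.arange(10) * 1000.0)

    # A small set of wind conditions with uniform frequency
    wind_directions = np.arange(0.0, 360.0, 45.0)
    n_findex = len(wind_directions)
    fmodel.set(
        layout_x=x.flatten(),
        layout_y=y.flatten(),
        wind_directions=wind_directions,
        wind_speeds=np.full(n_findex, 8.0),
        turbulence_intensities=np.full(n_findex, 0.06),
    )

    def compute_aep():
        # Run the wake calculation and compute AEP from the results
        fmodel.run()
        return fmodel.get_farm_AEP(freq=np.full(n_findex, 1.0 / n_findex))

    # pytest-benchmark times repeated calls to the wrapped function
    benchmark(compute_aep)
```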
I know this pull request is still in progress, but just wanted to leave some comments for future reference. Good stuff @paulf81
name: Floris Benchmark
on:
  schedule:
    - cron: '0 3 * * *' # Runs daily at 3am UTC
I suggest tying this to a commit rather than a date since any given day might contain multiple commits and many will contain no commits.
Yeah, that makes sense, maybe commits to develop?
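If that direction is taken, a minimal sketch of the trigger (branch name assumed) could be:

```yaml
on:
  push:
    branches:
      - develop        # run the benchmark on every commit pushed to develop
  workflow_dispatch:   # keep manual triggering available
```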
on:
  schedule:
    - cron: '0 3 * * *' # Runs daily at 3am UTC
  workflow_dispatch: # Allows manual triggering of the workflow
That's neat
permissions:
  contents: write
  deployments: write
What's the need for access to deployment rights?
I would avoid adding yet another input file (that will need to be maintained) just for this
that's a good point, this can be switched to one of the existing inputs
@@ -45,6 +45,7 @@
    },
    "develop": {
        "pytest",
        "pytest-benchmark"
Probably not needed yet, but if the list of dependencies only used in automated systems grows, you could add a new target to capture those and separate them from the ones we'll all install for development
that makes sense
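As a sketch of that idea, the extras could eventually be split along these lines (the variable name and entries here are illustrative, not the actual setup.py contents):

```python
# Illustrative only: separate developer tooling from dependencies that are
# needed solely by automated workflows such as the benchmark job.
EXTRAS_REQUIRE = {
    "develop": {
        "pytest",            # installed by every developer
    },
    "benchmark": {
        "pytest-benchmark",  # installed only by the benchmarking workflow
    },
}
```

The workflow could then install with something like `pip install -e ".[benchmark]"`.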
It would be good to add a scaling test, as well. This should be a test of something like 10, 100, 1000 turbines across a meaningful number of conditions (~1000).
Thank you for your comments @rafmudaf! I hope to take another pass at this soon and incorporate your suggestions.
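A rough sketch of what a parametrized scaling benchmark could look like (input file, layout, and API calls are assumptions; the largest case may need to be trimmed to keep CI runtimes reasonable):

```python
import numpy as np
import pytest
from floris import FlorisModel

N_CONDITIONS = 1000  # roughly 1000 wind direction/speed combinations


@pytest.mark.parametrize("n_turbines", [10, 100, 1000])
def test_scaling_benchmark(benchmark, n_turbines):
    # Hypothetical setup: the input file and API calls are assumptions.
    fmodel = FlorisModel("inputs/gch.yaml")

    # Square-ish grid with 1 km spacing, truncated to n_turbines
    n_side = int(np.ceil(np.sqrt(n_turbines)))
    x, y = np.meshgrid(np.arange(n_side) * 1000.0, np.arange(n_side) * 1000.0)

    wind_directions = np.linspace(0.0, 360.0, N_CONDITIONS, endpoint=False)
    fmodel.set(
        layout_x=x.flatten()[:n_turbines],
        layout_y=y.flatten()[:n_turbines],
        wind_directions=wind_directions,
        wind_speeds=np.full(N_CONDITIONS, 8.0),
        turbulence_intensities=np.full(N_CONDITIONS, 0.06),
    )

    # Time the core wake calculation for each farm size
    benchmark(fmodel.run)
```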
Add automatic benchmarking to FLORIS
This draft PR is meant to add automatic code benchmarking to FLORIS. The proposed solution is to use pytest-benchmark to implement a set of timing tests:
https://pytest-benchmark.readthedocs.io/en/latest/
https://github.com/ionelmc/pytest-benchmark
The plan is then to schedule roughly daily execution of these tests, with performance results logged so changes can be tracked over time. For this, the focus is on:
https://github.com/benchmark-action/github-action-benchmark
To this end, I added a first test with benchmarking to the tests/ folder and confirmed that running `pytest floris_benchmark_test.py` from the command line produces a benchmark result. At this point I'd like to open this up for discussion and further research.
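For the tracking piece, the workflow could write the pytest-benchmark results to JSON and hand them to github-action-benchmark, roughly along these lines (step names, paths, and options are assumptions based on that action's documentation):

```yaml
      - name: Run benchmarks
        run: pytest tests/floris_benchmark_test.py --benchmark-json=benchmark_output.json

      - name: Store benchmark result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'pytest'
          output-file-path: benchmark_output.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true   # publish results to the gh-pages branch for trend charts
```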
To include: