Add automatic benchmarking to FLORIS #992

Open
paulf81 wants to merge 5 commits into develop

Conversation

@paulf81 (Collaborator) commented Oct 4, 2024

Add automatic benchmarking to FLORIS

This draft PR is meant to add automatic code benchmarking to FLORIS. The proposed solution is to use pytest-benchmark to implement a set of timing tests:

https://pytest-benchmark.readthedocs.io/en/latest/

https://github.com/ionelmc/pytest-benchmark

The plan is then to schedule regular (roughly daily) execution of these tests with logged performance results so we can track changes over time, focusing here on:

https://github.com/benchmark-action/github-action-benchmark
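
For reference, here is a minimal sketch of how the two tools could be wired together in a single workflow job. The job layout, file names, Python version, and test path are illustrative assumptions, not part of this PR:

name: floris-benchmark-sketch

on:
  workflow_dispatch:

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install FLORIS with development dependencies
        run: pip install -e ".[develop]"
      - name: Run benchmarks
        # --benchmark-json writes machine-readable results for the action below
        run: pytest tests/floris_benchmark_test.py --benchmark-json=benchmark_output.json
      - name: Store and compare results
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: "pytest"          # parse pytest-benchmark JSON output
          output-file-path: benchmark_output.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true         # publish the result history to gh-pages
          alert-threshold: "150%" # flag runs 1.5x slower than the previous one
          comment-on-alert: true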

To this end, I added a first benchmarking test to the tests/ folder and confirmed that the command line:

pytest floris_benchmark_test.py

produces a benchmark result. At this point I'd like to open this up for discussion and further research on a few points:

  1. Do these benchmarks belong in tests/? Won't that cause them to run as part of the normal CI process? (See the note after this list.)
  2. What should we test?
  3. We should set up a test results page.
  4. Should we test parallel execution?
  5. We want to track changes coming from our own work as well as those coming from improvements to Python itself; should we make sure the CI checks against the most recent Python version?
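
On the first question, one possible approach (an assumption, not something settled in this PR) is to keep the benchmark tests in tests/ but have the regular CI skip them using pytest-benchmark's built-in flags, e.g.

pytest tests/ --benchmark-skip

for the normal test run, and

pytest tests/ --benchmark-only

in the scheduled benchmarking job.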

To include:

  • AEP for 100-turbine farm
  • Analysis for different wake models
  • Parallel FLORIS
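
As a rough illustration of the first item above, a pytest-benchmark timing test might look like the sketch below. The input file path, turbine spacing, and wind conditions are assumptions for illustration (based on the FLORIS v4 FlorisModel interface), not necessarily what this PR implements:

# Sketch of a pytest-benchmark timing test for a 100-turbine farm.
import numpy as np
from floris import FlorisModel


def test_100_turbine_farm(benchmark):
    # Assumed example input file; any existing FLORIS input could be used.
    fmodel = FlorisModel("examples/inputs/gch.yaml")

    # 10 x 10 grid of turbines spaced ~5 rotor diameters apart (D ~ 126 m)
    spacing = 5.0 * 126.0
    x, y = np.meshgrid(np.arange(10) * spacing, np.arange(10) * spacing)

    n_findex = 72  # one wind direction every 5 degrees at a single wind speed
    fmodel.set(
        layout_x=x.flatten(),
        layout_y=y.flatten(),
        wind_directions=np.arange(0.0, 360.0, 5.0),
        wind_speeds=8.0 * np.ones(n_findex),
        turbulence_intensities=0.06 * np.ones(n_findex),
    )

    # The benchmark fixture calls and times fmodel.run() repeatedly
    benchmark(fmodel.run)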

@paulf81 added the "enhancement" label (An improvement of an existing feature) on Oct 4, 2024
@paulf81 paulf81 requested a review from misi9170 October 4, 2024 21:42
@paulf81 paulf81 self-assigned this Oct 4, 2024
@Bartdoekemeijer (Collaborator)

This is great!! On my end, I would love to see performance on, let's say, AEP calculation for a 100-turbine farm. Would also be nice to see this for different wake model set-ups, in case modifications are made to specific submodels. Benchmarking parallel floris would be a nice plus.

@paulf81 (Collaborator Author) commented Oct 7, 2024

> This is great!! On my end, I would love to see performance on, let's say, AEP calculation for a 100-turbine farm. Would also be nice to see this for different wake model set-ups, in case modifications are made to specific submodels. Benchmarking parallel floris would be a nice plus.

Sounds good, @Bartdoekemeijer! I added a short to-do list above to track these suggestions.

@rafmudaf (Collaborator) left a comment

I know this pull request is still in progress, but just wanted to leave some comments for future reference. Good stuff @paulf81

name: Floris Benchmark
on:
  schedule:
    - cron: '0 3 * * *' # Runs daily at 3am UTC
Collaborator

I suggest tying this to a commit rather than a date since any given day might contain multiple commits and many will contain no commits.

Collaborator Author

Yeah, that makes sense. Maybe trigger on commits to develop?
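
A sketch of what that trigger could look like, reusing the workflow snippet above (the branch name matches this PR's base; everything else is unchanged):

on:
  push:
    branches:
      - develop        # run the benchmark on every commit to develop
  workflow_dispatch:   # keep manual triggering available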

on:
  schedule:
    - cron: '0 3 * * *' # Runs daily at 3am UTC
  workflow_dispatch: # Allows manual triggering of the workflow
Collaborator

That's neat


permissions:
  contents: write
  deployments: write
Collaborator

What's the need for access to deployment rights?

Collaborator

I would avoid adding yet another input file (that will need to be maintained) just for this

Collaborator Author

That's a good point; this can be switched to one of the existing input files.

@@ -45,6 +45,7 @@
    },
    "develop": {
        "pytest",
        "pytest-benchmark"
Collaborator

Probably not needed yet, but if the list of dependencies only used in automated systems grows, you could add a new target to capture those and separate them from the ones we'll all install for development

Collaborator Author

that makes sense
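
If that separation is ever needed, the extras in setup.py could be split along these lines (the variable name and the "benchmark" target are assumptions for illustration):

EXTRAS = {
    "develop": {
        "pytest",
        # ... existing development dependencies ...
    },
    # Hypothetical target holding dependencies used only by automated jobs
    "benchmark": {
        "pytest-benchmark",
    },
}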

@rafmudaf (Collaborator)

It would be good to add a scaling test, as well. This should be a test of something like 10, 100, 1000 turbines across a meaningful number of conditions (~1000).
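
A parametrized sketch of such a scaling benchmark, assuming the same FLORIS v4 FlorisModel interface and example input file as the sketch in the PR description (the turbine counts and ~1000 conditions follow the comment above):

# Scaling benchmark sketch: 10 -> 100 -> 1000 turbines, ~1000 wind conditions each.
import numpy as np
import pytest
from floris import FlorisModel


@pytest.mark.parametrize("n_turbines", [10, 100, 1000])
def test_scaling(benchmark, n_turbines):
    fmodel = FlorisModel("examples/inputs/gch.yaml")  # assumed example input

    # Near-square grid, turbines ~5 rotor diameters apart (D ~ 126 m)
    n_side = int(np.ceil(np.sqrt(n_turbines)))
    spacing = 5.0 * 126.0
    x, y = np.meshgrid(np.arange(n_side) * spacing, np.arange(n_side) * spacing)

    # 72 directions x 14 speeds = 1008 conditions
    wd, ws = np.meshgrid(np.arange(0.0, 360.0, 5.0), np.arange(4.0, 18.0, 1.0))
    wd, ws = wd.flatten(), ws.flatten()

    fmodel.set(
        layout_x=x.flatten()[:n_turbines],
        layout_y=y.flatten()[:n_turbines],
        wind_directions=wd,
        wind_speeds=ws,
        turbulence_intensities=0.06 * np.ones_like(ws),
    )

    benchmark(fmodel.run)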

@paulf81 (Collaborator Author) commented Oct 25, 2024

Thank you for your comments, @rafmudaf! I hope to take another pass at this soon and incorporate your suggestions.
