Add automatic benchmarking to FLORIS #992

Status: Open. Wants to merge 5 commits into base branch develop.
61 changes: 61 additions & 0 deletions .github/workflows/benchmark.yaml
@@ -0,0 +1,61 @@
name: Floris Benchmark
on:
schedule:
- cron: '0 3 * * *' # Runs daily at 3am UTC
Collaborator:
I suggest tying this to a commit rather than a date since any given day might contain multiple commits and many will contain no commits.

Collaborator (Author):
Yeah, that makes sense — maybe tie it to commits to develop?
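
A minimal sketch of that alternative trigger, in the spirit of a suggested change (branch name taken from this PR's base; adjust as needed):

on:
  push:
    branches:
      - develop          # benchmark every commit that lands on develop
  workflow_dispatch:     # keep manual triggering available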

workflow_dispatch: # Allows manual triggering of the workflow
Collaborator:
That's neat


permissions:
contents: write
deployments: write
Collaborator:
What's the need for access to deployment rights?
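
A sketch of a narrower grant, if pushing benchmark data to gh-pages turns out to be the only write access the action needs (an assumption to verify against github-action-benchmark's documentation):

permissions:
  contents: write   # enough for the action's auto-push to the gh-pages branch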


jobs:
benchmark:
name: Run FLORIS benchmarks
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11"] # Or whichever versions you support (quoted so YAML doesn't parse 3.10 as 3.1)
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
  uses: actions/setup-python@v4
  with:
    python-version: ${{ matrix.python-version }}
- name: Install project
run: |
python -m pip install --upgrade pip
pip install -e ".[develop]"
- name: Run benchmark
run: |
cd benchmarks
pytest bench.py --benchmark-json output.json

- name: Store benchmark result
uses: benchmark-action/github-action-benchmark@v1
with:
name: Python Benchmark with pytest-benchmark
tool: 'pytest'
output-file-path: benchmarks/output.json
# Use personal access token instead of GITHUB_TOKEN due to https://github.community/t/github-action-not-triggering-gh-pages-upon-push/16096
github-token: ${{ secrets.GITHUB_TOKEN }}
auto-push: true
# Show alert with commit comment on detecting possible performance regression
# alert-threshold: '200%'
# comment-on-alert: true
# fail-on-alert: true
# alert-comment-cc-users: '@ktrz'

- name: Store benchmark result - separate results repo
uses: benchmark-action/github-action-benchmark@v1
with:
name: Python Benchmark with pytest-benchmark
tool: 'pytest'
output-file-path: benchmarks/output.json
# Use personal access token instead of GITHUB_TOKEN due to https://github.community/t/github-action-not-triggering-gh-pages-upon-push/16096
github-token: ${{ secrets.BENCHMARK_ACTION_BOT_TOKEN }}
auto-push: true
# Show alert with commit comment on detecting possible performance regression
# alert-threshold: '200%'
# comment-on-alert: true
# fail-on-alert: true
# alert-comment-cc-users: '@ktrz'
# gh-repository: 'github.com/benchmark-action/github-action-benchmark-results'
44 changes: 44 additions & 0 deletions benchmarks/bench.py
@@ -0,0 +1,44 @@

from pathlib import Path

import numpy as np
import pytest

from floris import (
FlorisModel,
)
from floris.core.turbine.operation_models import POWER_SETPOINT_DEFAULT


TEST_DATA = Path(__file__).resolve().parent / "data"
YAML_INPUT = TEST_DATA / "input_full.yaml"

N = 100

def test_benchmark_set(benchmark):
fmodel = FlorisModel(configuration=YAML_INPUT)
wind_directions = np.linspace(0, 360, N)
wind_speeds = np.ones(N) * 8
turbulence_intensities = np.ones(N) * 0.06

benchmark(
fmodel.set,
wind_directions=wind_directions,
wind_speeds=wind_speeds,
turbulence_intensities=turbulence_intensities,
)


def test_benchmark_run(benchmark):
fmodel = FlorisModel(configuration=YAML_INPUT)
wind_directions = np.linspace(0, 360, N)
wind_speeds = np.ones(N) * 8
turbulence_intensities = np.ones(N) * 0.06

fmodel.set(
wind_directions=wind_directions,
wind_speeds=wind_speeds,
turbulence_intensities=turbulence_intensities,
)

benchmark(fmodel.run)
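
For reference, the benchmarks can be reproduced locally with the same commands the workflow runs (assuming the develop extras from setup.py are installed):

pip install -e ".[develop]"
cd benchmarks
pytest bench.py --benchmark-json output.json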
90 changes: 90 additions & 0 deletions benchmarks/data/input_full.yaml
Collaborator:
I would avoid adding yet another input file (that will need to be maintained) just for this

Collaborator (Author):
That's a good point; this can be switched to one of the existing inputs.
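
A minimal sketch of that switch in benchmarks/bench.py (the tests/data location is a hypothetical path; point it at whichever existing input is chosen):

from pathlib import Path

# Reuse an existing shared input instead of a benchmark-only copy.
# Hypothetical path: adjust to the repository's actual test-data location.
REPO_ROOT = Path(__file__).resolve().parents[1]
YAML_INPUT = REPO_ROOT / "tests" / "data" / "input_full.yaml"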

@@ -0,0 +1,90 @@

name: test_input
description: Single turbine for testing
floris_version: v4

logging:
console:
enable: false
level: WARNING
file:
enable: false
level: WARNING

solver:
type: turbine_grid
turbine_grid_points: 3

farm:
layout_x:
- 0.0
layout_y:
- 0.0
turbine_type:
- nrel_5MW

flow_field:
air_density: 1.225
reference_wind_height: 90.0
turbulence_intensities:
- 0.06
wind_directions:
- 270.0
wind_shear: 0.12
wind_speeds:
- 8.0
wind_veer: 0.0

wake:
model_strings:
combination_model: sosfs
deflection_model: gauss
turbulence_model: crespo_hernandez
velocity_model: gauss

enable_secondary_steering: true
enable_yaw_added_recovery: true
enable_active_wake_mixing: true
enable_transverse_velocities: true

wake_deflection_parameters:
gauss:
ad: 0.0
alpha: 0.58
bd: 0.0
beta: 0.077
dm: 1.0
ka: 0.38
kb: 0.004
jimenez:
ad: 0.0
bd: 0.0
kd: 0.05

wake_velocity_parameters:
cc:
a_s: 0.179367259
b_s: 0.0118889215
c_s1: 0.0563691592
c_s2: 0.13290157
a_f: 3.11
b_f: -0.68
c_f: 2.41
alpha_mod: 1.0
gauss:
alpha: 0.58
beta: 0.077
ka: 0.38
kb: 0.004
jensen:
we: 0.05
turboparkgauss:
A: 0.04
include_mirror_wake: true

wake_turbulence_parameters:
crespo_hernandez:
initial: 0.01
constant: 0.9
ai: 0.83
downstream: -0.25
1 change: 1 addition & 0 deletions setup.py
@@ -45,6 +45,7 @@
},
"develop": {
"pytest",
"pytest-benchmark"
Collaborator:
Probably not needed yet, but if the list of dependencies used only in automated systems grows, you could add a new target to capture those and separate them from the ones we'll all install for development.

Collaborator (Author):
That makes sense.
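
A minimal sketch of that split, mirroring setup.py's existing extras style (the "benchmark" target name is illustrative, not a decided layout):

"develop": {
    "pytest",
    "pre-commit",
    "ruff",
    "isort",
},
"benchmark": {
    "pytest-benchmark",  # used only by the automated benchmarking job
},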

"pre-commit",
"ruff",
"isort",