
Add e2e parameterization using file #119

Merged: 14 commits into iree-org:main, Sep 13, 2024

Conversation

erman-gurses
Member

This PR, together with future related PRs, will evolve into a performance CI that benchmarks different operators with various input parameters. For now we are starting with GEMMs; convolution, flash attention, and other operators will be added as needed. The ultimate goal is to generate daily performance plots for these operators so we can track their performance over time.

A perf.yaml file is added to the repo to separate performance runs from regular CI operations.
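The contents of perf.yaml are not shown in this thread. As a rough illustration only, a workflow-style file that separates scheduled performance runs from regular CI might look like the following sketch; every name, trigger, and path below is an assumption, not the PR's actual file:

```yaml
# Hypothetical sketch only: the real perf.yaml in this PR may differ.
name: Performance
on:
  schedule:
    - cron: "0 6 * * *"   # run daily to feed the performance plots
jobs:
  benchmark-gemm:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run GEMM e2e tests with custom shapes
        env:
          TEST_PARAMS_PATH: tests/params/gemm.json  # hypothetical path
        run: pytest tests
```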

@erman-gurses erman-gurses changed the title Add e2e parametrization using file Add e2e parameterization using file Sep 6, 2024
@erman-gurses erman-gurses marked this pull request as ready for review September 6, 2024 09:43
@erman-gurses erman-gurses marked this pull request as draft September 8, 2024 06:39
@erman-gurses erman-gurses force-pushed the add-perf-yaml branch 2 times, most recently from 7324565 to 94358d4 Compare September 9, 2024 07:50
@erman-gurses erman-gurses marked this pull request as ready for review September 9, 2024 08:02
@Hardcode84 (Contributor) left a comment

LGTM

Signed-off-by: erman-gurses <[email protected]>
@erman-gurses erman-gurses merged commit bae8e10 into iree-org:main Sep 13, 2024
8 checks passed
harsh-nod added a commit to harsh-nod/iree-turbine that referenced this pull request Sep 16, 2024
Comment on lines +14 to +18
default_test_shapes = [(1, 128), (256, 64), (256, 128), (256, 256), (256, 1024)]

user_specified_test_shapes = ""

test_params_path = os.environ.get("TEST_PARAMS_PATH", None)
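The snippet above shows the pattern under discussion: shapes default to a hard-coded list unless an environment variable points at a parameter file. A self-contained sketch of how the rest of that pattern might work is below; the JSON schema (a list of `[M, N]` pairs) and the `load_test_shapes` helper are assumptions for illustration, not code from the PR:

```python
# Sketch of the env-var-driven parameterization pattern discussed in this PR.
# The JSON schema and the helper name are illustrative assumptions.
import json
import os

default_test_shapes = [(1, 128), (256, 64), (256, 128), (256, 256), (256, 1024)]


def load_test_shapes():
    """Return shapes from the file named by TEST_PARAMS_PATH, else defaults."""
    test_params_path = os.environ.get("TEST_PARAMS_PATH", None)
    if test_params_path is None:
        return default_test_shapes
    with open(test_params_path) as f:
        user_specified = json.load(f)
    # Assume the file holds a list of [M, N] pairs, e.g. [[128, 128], [512, 512]].
    return [tuple(shape) for shape in user_specified]
```

A CI workflow would export `TEST_PARAMS_PATH` before invoking pytest, while local runs without the variable fall back to the defaults.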
Member


(Saw this after a comment on #163 (comment))

Did you consider using pytest marks instead of environment variables and json files?

If all you want to enable is running different sets of tests in different workflows, pytest marks may be a better tool for the job than this configuration coming from files and environment variables.

With marks, you can define all the tests in the source file and then developers and automated workflows can choose to skip tests with a given mark (e.g. "large", "large_shapes", "benchmark", etc.).

This environment variable approach gives you more flexibility to customize the shapes without modifying the test source, which can help when running the tests from outside of this repository, but it doesn't look like that is a case you want to support at this moment. I think the ergonomics of setting an environment variable to a specific file path are quite poor for local development, especially if you plan on adding more of these files (one per test? each with its own environment variable?)
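The marks-based alternative described in this comment might look like the following sketch. The mark name `large_shapes`, the shape lists, and the trivial test bodies are all illustrative assumptions, not code from this repository:

```python
# Sketch of the pytest-marks alternative suggested in the review comment.
# The mark name "large_shapes" and these shapes are illustrative assumptions.
import pytest

default_test_shapes = [(1, 128), (256, 64), (256, 128)]
large_test_shapes = [(256, 1024), (1024, 1024)]


@pytest.mark.parametrize("shape", default_test_shapes)
def test_gemm_default(shape):
    # Placeholder body standing in for a real GEMM end-to-end check.
    assert all(dim > 0 for dim in shape)


@pytest.mark.large_shapes
@pytest.mark.parametrize("shape", large_test_shapes)
def test_gemm_large(shape):
    assert all(dim > 0 for dim in shape)
```

With this layout, `pytest -m "not large_shapes"` skips the expensive cases locally while a benchmark workflow runs `pytest -m large_shapes`; registering the mark under `markers` in pytest.ini avoids unknown-mark warnings.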

4 participants