
Separate verification from performance tests #119

Open
Nanoseb opened this issue Aug 16, 2024 · 1 comment
Labels
test To do with the unit test and CI

Comments

@Nanoseb (Collaborator) commented Aug 16, 2024

Right now, tests check both the performance, by computing bandwidth (BW) and timings while running the same test n_iter times, and the accuracy of the solution, by comparing the results to a known analytical solution.

Even if the implementation of the tests is similar, the two types of test require different input parameters: there is no need to run the same computation 1000 times to check accuracy, while performance tests require larger meshes, for example.

Several options could be used:

  • Writing distinct tests depending on the goal, with a CMake option to switch between one set or the other.
  • For each feature, having both tests in a single file and changing the inputs based on:
    • an environment variable (see the sketch after this list)
    • a command line parameter
    • a compile-time variable
    • ...?
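To make the environment-variable idea concrete, here is a minimal sketch (in C++ purely for illustration; the variable name X3D2_TEST_MODE, the TestConfig struct, and the parameter values are all hypothetical, not something that exists in the codebase):

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Hypothetical knobs that differ between the two kinds of run.
struct TestConfig {
    int n_iter;    // how many times to repeat the computation (for timing)
    int mesh_size; // points per dimension
};

// Pick the parameters based on an (assumed) environment variable.
TestConfig configure_from_env() {
    const char* mode = std::getenv("X3D2_TEST_MODE"); // hypothetical name
    if (mode != nullptr && std::strcmp(mode, "performance") == 0) {
        // Performance run: large mesh, many iterations for stable timings.
        return {1000, 512};
    }
    // Default is verification: one pass on a small mesh is enough to
    // compare against the analytical solution.
    return {1, 64};
}

int main() {
    TestConfig cfg = configure_from_env();
    std::printf("n_iter = %d, mesh = %d^3\n", cfg.n_iter, cfg.mesh_size);
    // ... run the kernel cfg.n_iter times on a cfg.mesh_size^3 mesh,
    // then report either timings/BW or the error norm ...
    return 0;
}
```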

Any other option to consider?
@pbartholomew08 @semi-h

@Nanoseb added the test (To do with the unit test and CI) label Aug 16, 2024
@Nanoseb (Collaborator, Author) commented Aug 16, 2024

(I think I'd vote for an environment variable as that's the easiest to handle and use in this kind of scenario)
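For what it's worth, an environment variable lets the two modes be selected without rebuilding: a plain ctest run for verification, and something like X3D2_TEST_MODE=performance ctest for the timing runs (variable name hypothetical, as in the sketch above). CTest can also set it per test through the ENVIRONMENT test property via set_tests_properties.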
