
Converted tests to use BaseRegTest #52

Closed · wants to merge 9 commits

Conversation

@sseraj (Collaborator) commented Jul 25, 2021

Purpose

I converted the tests to use BaseRegTest instead of calling subprocess on the examples and parsing the output. I converted all the example scripts into test scripts but kept the BWB script so that it can still be included in the tutorial.

The test coverage now also includes the boundary conditions, which is one of the additions mentioned in #35. mdolab/cgnsutilities#43 needs to be merged before this PR.

Type of change

What type of change is it?
Select the appropriate type(s) that describe this PR:

  • Bugfix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (non-backwards-compatible fix or feature)
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes, no API changes)
  • Documentation update
  • Maintenance update
  • Other (please describe)

Testing

Run the tests as before.

Checklist

Put an x in the boxes that apply.

  • I have run flake8 and black to make sure the code adheres to PEP-8 and is consistently formatted
  • I have run unit and regression tests which pass locally with my changes
  • I have added new tests that prove my fix is effective or that my feature works
  • I have added necessary documentation

@marcomangano (Contributor) left a comment

This looks like a long-needed refactoring! Just a doubt: is it alright/consistent to have the same class name TestExtrusion for all the different test files?

@sseraj (Collaborator, Author) commented Jul 26, 2021

> This looks like a long-needed refactoring! Just a doubt: is it alright/consistent to have the same class name TestExtrusion for all the different test files?

testflo seems to be fine with it, but I'm not sure what the best practice is.

@eirikurj (Contributor) left a comment

I have mixed feelings about the examples being gone, as I think they are useful to run standalone. In some sense they are there, but now wrapped as tests.

That aside, for regression testing, I am not convinced the JSON data structure is an appropriate one, since eventually we want to compare grid coordinates, which will blow up the file size. In general, I would rather compare the binary output directly. This means we would need to store the CGNS reference files, but by doing so we retain all the information we could ever need for comparison, and the reference files are smaller than the JSON files.

For instance, I think we could probably use the existing cgnsdiff tool with appropriate flags to check the final CGNS file against a reference CGNS file.
Alternatively, we could read the CGNS file using cgns_utils and compare the block sizes of the two directly.
Let me know what you think.
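A minimal sketch of the block-size comparison suggested above. The block dimensions here are supplied as plain lists; in practice they would be extracted from the two CGNS files via whatever grid-reading API cgns_utils exposes (that step is assumed, not shown):

```python
def block_sizes_match(dims_a, dims_b):
    """Compare two lists of block dimensions, e.g. [(33, 33, 17), ...].

    Each list would in practice come from reading a CGNS file with
    cgns_utils; here we only compare the extracted dimensions.
    """
    if len(dims_a) != len(dims_b):
        return False
    # Sort so the comparison is insensitive to block ordering
    return sorted(dims_a) == sorted(dims_b)
```

This catches blocking regressions cheaply, but unlike cgnsdiff it says nothing about the node coordinates themselves.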

@ewu63 (Collaborator) commented Jul 29, 2021

> I have mixed feelings about the examples being gone, as I think they are useful to run standalone. In some sense they are there, but now wrapped as tests.

Right, we have some ways of wrapping examples as tests while keeping the examples intact. See for example the pytacs tests. We can talk offline, but either way it is important to test any examples we keep, because otherwise they may be broken in the future without us knowing.
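The wrap-an-example-as-a-test pattern mentioned above can be as simple as executing the standalone script and inspecting the names it defines; a sketch using the standard library (the example path and variable names are hypothetical, not the actual repo layout):

```python
import runpy


def run_example(path):
    """Execute a standalone example script and return its global namespace.

    Running it with run_name="__main__" makes the script behave exactly
    as it does when launched by hand, so the example stays usable on its
    own while a test can assert on the values it produced.
    """
    return runpy.run_path(path, run_name="__main__")
```

A test would then call, say, `run_example("examples/extrude_wing.py")` (hypothetical path) and assert on entries of the returned dict.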

> That aside, for regression testing, I am not convinced the JSON data structure is an appropriate one, since eventually we want to compare grid coordinates, which will blow up the file size. In general, I would rather compare the binary output directly. This means we would need to store the CGNS reference files, but by doing so we retain all the information we could ever need for comparison, and the reference files are smaller than the JSON files.
>
> For instance, I think we could probably use the existing cgnsdiff tool with appropriate flags to check the final CGNS file against a reference CGNS file.
> Alternatively, we could read the CGNS file using cgns_utils and compare the block sizes of the two directly.
> Let me know what you think.

What about using some sort of reduction on the grid coordinates, such as summing coordinates in each direction? I think this type of thing is done in ADflow and we could do something similar here, without having to store binary files.
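The reduction described above can be as simple as summing the coordinates in each direction and comparing those three scalars against stored reference values; a sketch of the idea, not ADflow's actual implementation:

```python
def coordinate_sums(coords):
    """Reduce a list of (x, y, z) grid points to one sum per direction.

    Comparing these three scalars against stored reference values
    catches most grid regressions without storing the full coordinate
    arrays in the reference file.
    """
    sx = sum(p[0] for p in coords)
    sy = sum(p[1] for p in coords)
    sz = sum(p[2] for p in coords)
    return (sx, sy, sz)
```

The trade-off is that a sum can miss compensating errors (two points moving in opposite directions), which a direct file comparison would catch.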

@sseraj (Collaborator, Author) commented Jul 29, 2021

I'll take a look at the pytacs tests and see if we can keep the examples.

As for the references, we could store the reference CGNS files on AFS like we do for inputs in other repos. I'll take a look at cgnsdiff and see if it will do the job. I also like the reduction approach, but I'm more inclined to use cgnsdiff if it works out.

@sseraj (Collaborator, Author) commented Jul 29, 2021

cgnsdiff -d works nicely for comparing node values, blocking, and BCs. The only problem I see is that there is no way to specify a relative tolerance, only an absolute tolerance via the -t flag. We could still make it work, but it would be less intuitive and also mesh-size dependent.
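One way to make the absolute-only -t flag track the mesh size would be to derive the tolerance from the grid itself, e.g. scale a relative tolerance by the largest coordinate magnitude. This is a hypothetical workaround, not something cgnsdiff provides:

```python
def absolute_tolerance(coords, rel_tol=1e-10):
    """Turn a relative tolerance into an absolute one for cgnsdiff -t.

    Scaling by the largest coordinate magnitude makes the absolute
    tolerance follow the overall mesh scale, which is exactly the
    mesh-size dependence noted above.
    """
    scale = max(abs(c) for p in coords for c in p)
    return rel_tol * scale
```

The computed value would then be passed to cgnsdiff via its -t flag.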

@sseraj (Collaborator, Author) commented Aug 3, 2021

Another issue is that we would have to compile CGNS with cgnstools to use cgnsdiff in the tests.
@eirikurj What do you think? Should we use cgnsdiff?

@ewu63 (Collaborator) commented Aug 3, 2021

> Another issue is that we would have to compile CGNS with cgnstools to use cgnsdiff in the tests.
> @eirikurj What do you think? Should we use cgnsdiff?

The Docker builds do not have cgnstools enabled, so there's some extra work to be done there.
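For context, enabling cgnstools is a switch in the CGNS CMake build; a rough sketch of what the Docker build change might involve (paths hypothetical, and the cgnstools GUI parts typically also pull in Tcl/Tk):

```shell
# Hypothetical fragment of a CGNS build step with cgnstools enabled.
# CGNS_BUILD_CGNSTOOLS builds cgnsdiff and the other tools alongside
# the library itself.
cmake -S CGNS -B build \
    -DCGNS_ENABLE_HDF5=ON \
    -DCGNS_BUILD_CGNSTOOLS=ON
cmake --build build --target install
```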

@eirikurj (Contributor) commented Aug 8, 2021

@sseraj I would be inclined to use cgnsdiff, as I have used it in the past and found it useful. As suggested, storing the reference files on AFS (as we do for other repos) would work fine. I made a PR at https://github.com/mdolab/docker/pull/150 to make this feature available in the images.
I don't think the absolute tolerance is an issue, since with anything relative we would run into issues with zeros (unless we implement our own relative comparison).
Alternatively, we could look into improving cgnsutils to do this type of comparison, but that is longer term.
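The zeros problem with a purely relative test is commonly handled by mixing both tolerances, in the style of numpy.allclose; a sketch of what such a custom comparison could look like:

```python
def values_close(a, b, rel_tol=1e-10, abs_tol=1e-12):
    """Compare two values with a mixed relative/absolute tolerance.

    The abs_tol term keeps the check meaningful when the reference
    value b is zero, where a purely relative test would always fail
    for any nonzero a.
    """
    return abs(a - b) <= abs_tol + rel_tol * abs(b)
```

For CGNS coordinate arrays this would be applied element-wise against the reference grid.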

@sseraj (Collaborator, Author) commented Aug 8, 2021

Thanks for the Docker update. I am going to close this PR and open one that:

  1. Uses cgnsdiff
  2. Preserves the examples

@sseraj closed this Aug 8, 2021
@sseraj mentioned this pull request Aug 23, 2021
@sseraj deleted the convert-tests branch March 16, 2022 14:21