143 configyaml input validation #185
Conversation
…allation part of #151.
Codecov Report
@@ Coverage Diff @@
## main #185 +/- ##
==========================================
- Coverage 88.37% 86.74% -1.63%
==========================================
Files 25 27 +2
Lines 1591 1396 -195
==========================================
- Hits 1406 1211 -195
Misses 185 185
Nice 😃
I've added a couple of corrections on the schema, but I may have missed some things. Can you check that the schema is consistent with the latest documentation on the config options?
I've also added a few comments on the code in there.
It looks good, but I'm worried we are losing information in the error messages. Could we have an example of what the validation error message would look like?
I'll check it out and confirm.
I'll get you an example of failed validation. Stand by.
OK, the validation throws an exception with a list of the offending entries, like so. I think it would be useful to have a CLI routine to validate a config prior to attempting to run benchcab. I'll raise an issue for that.
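For illustration only, here is a minimal sketch of the pattern described above: collect every offending entry and raise a single exception, rather than failing on the first error. The key names and schema below are hypothetical, not benchcab's actual config schema.

```python
# Hypothetical sketch: validate a config dict against a tiny schema and
# raise one exception listing every offending entry at once.
# Key names here are illustrative, not benchcab's real schema.

def validate_config(config: dict) -> None:
    errors = []
    # required keys and their expected types (hypothetical schema)
    schema = {"project": str, "experiment": str, "realisations": list}
    for key, expected_type in schema.items():
        if key not in config:
            errors.append(f"missing required entry: {key}")
        elif not isinstance(config[key], expected_type):
            errors.append(
                f"{key}: expected {expected_type.__name__}, "
                f"got {type(config[key]).__name__}"
            )
    if errors:
        # surface all problems in one pass so the user can fix them together
        raise ValueError("config validation failed:\n" + "\n".join(errors))

# Example of a failing validation:
try:
    validate_config({"project": 123, "realisations": []})
except ValueError as exc:
    print(exc)
```

Reporting all entries in one exception means the user fixes the whole config in a single edit, instead of replaying the run once per error.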
@ccarouge, @SeanBryan51. OK, I think I have implemented all of your changes. I note the coverage has gone down by virtue of simplified config tests. Re-requesting reviews. Thanks for the feedback!
Just a small comment regarding constants in internal.py.
What sort of integration tests are you doing? E.g. I usually cook up a small bash script that verifies things are working (see #182 (comment))
It's good for me once @SeanBryan51's comment is resolved.
OK, I will take a look at your integration test example. It might be worthwhile coding this up as something that can be triggered by a developer, perhaps including a canned example in the data/tests directory? Regarding the internal.py example: it is a touch out of scope for this issue, and I'm not yet familiar enough with the system to remove things wholesale. Perhaps we could raise issue(s) to clean up some of these items along the way. I'll get back to you with those integration tests.
Updating with main changes.
Integration tests have now been added to the data/tests directory. It is just a pre-canned script based on @SeanBryan51's example. PBS output is attached to verify correct execution.
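As a rough illustration of the pre-canned smoke-test idea (the key names and config contents below are hypothetical, and this does not invoke the real benchcab CLI), such a script might sanity-check a config before a full run:

```shell
#!/usr/bin/env bash
# Hypothetical smoke-test sketch: write a throwaway config and check that
# required top-level keys are present before launching a real run.
# Key names are illustrative, not benchcab's actual schema.
set -euo pipefail

config="$(mktemp)"
cat > "$config" <<'EOF'
project: example_project
realisations: []
EOF

# fail fast if any required top-level key is missing
for key in project realisations; do
    grep -q "^${key}:" "$config" || { echo "missing key: ${key}" >&2; exit 1; }
done

echo "config keys OK"
rm -f "$config"
```

A developer could run a script like this by hand (or from CI) to catch config problems before spending a PBS allocation on a doomed run.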
Nice. I'll try to run the integration test on Gadi. I suspect I'd have to replace
@@ -0,0 +1,38 @@
#!/bin/bash
Should we set the file to be executable? Prior to running the script, I run chmod +x benchcab/data/test/integration.sh.
This isn't a huge issue, so I'm happy either way (i.e. regular file vs executable).
Co-authored-by: Sean Bryan <[email protected]>
Merged PR for input validation. It appears that the merge has added all of the recent changes as "changes" on this branch; that being said, GitHub is smart enough to put only the files I've touched into the PR. Yay!
I've put you both in as reviewers as it is quite the structural change for configuration loading/validation...
Please pay particular attention to the config validation - I think I've covered all of the tests using the schema validation, but fresh eyes would really help in case I've missed anything.
Given the adjustment to the conda meta.yml, we may consider an action or local test to validate the conda install (if we don't have it already?) - I'll open an issue for this.
Any questions, please plop them in the discussion.