Setup for runs - example WarpX on Summit #30
Comments
Might also be useful, though not urgent, to have an option so that when the templater is run with existing run dirs present, it only copies a file if there would be a diff. The reason I'm thinking this is that if I re-run the templater, it copies the input files into runs that have already been done.
Perhaps we should separate this into another issue, but we should also think about checking of results. The compare-npy scripts are limited and do not really work for the WarpX tests. We could look at some produced ensemble value, but as these tests are mostly about libEnsemble infrastructure, we might want to do things like check the row count and 'Completed' status in libE_stats.txt, grep for 'Failed' in ensemble.log, or look for non-empty std error files. If we wanted to go into more detail, we could use the json files to specify expected run lines. Currently I check all of these things manually, but as we move towards CI we will want to consider more automation.
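As a rough sketch (not an existing script), the kind of automated checks described above might look something like this; the expected row count and the stderr file pattern are placeholder assumptions about the run-directory layout:

```bash
#!/bin/bash
# Sketch of post-run checks for one test directory (assumed layout:
# libE_stats.txt, ensemble.log, and per-run *.err files under ensemble/).
EXPECTED_ROWS=8   # placeholder: expected number of completed sim/gen lines

# Row count / 'Completed' status in libE_stats.txt
nrows=$(grep -c 'Completed' libE_stats.txt)
[ "$nrows" -eq "$EXPECTED_ROWS" ] || echo "FAIL: expected $EXPECTED_ROWS 'Completed' rows, got $nrows"

# Any failures reported in ensemble.log
grep -q 'Failed' ensemble.log && echo "FAIL: 'Failed' found in ensemble.log"

# Non-empty standard error files from the app runs
find ensemble -name '*.err' -size +0c | grep -q . && echo "FAIL: non-empty stderr files found"
```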
@shuds13, for re-running the templater, you're imagining a scenario where it detects whether any to-be-output files differ from what's already produced, and replaces those files if so? In any case, a better solution than my current method, where I delete the output between runs, needs to be found. For the second issue, I've already worked on some results-checking for forces (see forces_support.py, either here or on libEnsemble/develop); those checks mostly verify that the ensemble directories have the correct structure, or that the log files have the correct contents in the event of an error. I still need to add the row-count and 'Completed' checks. Checking expected run lines is also a good idea. I'll look into checking solutions for WarpX based on the above and previous work.
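For illustration only (this is not the forces_support.py implementation), a structure check in that spirit could be as simple as counting per-simulation directories; the directory naming pattern and expected count are assumptions:

```bash
# Placeholder layout: per-simulation dirs named sim* under ensemble/.
EXPECTED_SIMS=8
ndirs=$(find ensemble -maxdepth 1 -type d -name 'sim*' | wc -l)
[ "$ndirs" -eq "$EXPECTED_SIMS" ] || echo "FAIL: expected $EXPECTED_SIMS sim dirs, got $ndirs"
```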
Some timestamp check might be enough - for straight copies it might just be like a `cp -u`. If I do this now and just re-run the templater, I get a new set of inputs for each test after my outputs, and you can't tell whether those outputs match those inputs. So my idea would be that you could tell it to only produce input for modified tests. I've not really thought about how awkward that is to do. An alternative might be to specify the test(s) to regenerate, but I think I'd prefer the former. I'll take another look at the forces checks, thanks!
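For reference, a minimal sketch of the timestamp-based idea above, assuming templated inputs live under a templates/<test>/ directory and run dirs sit under runs/ (both paths are placeholders):

```bash
# cp -u only copies when the source is newer than the destination (or the
# destination is missing), so re-running leaves already-generated tests alone.
for test in templates/*/; do
    name=$(basename "$test")
    mkdir -p "runs/$name"
    cp -u "$test"* "runs/$name/"
done
```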
How can we streamline setup for runs on different platforms?
Instructions (in the top-level and/or platform directories?) and automation where possible.
E.g., to run the WarpX tests on Summit I need to:
Prep:
Make sure WarpX is built (with a matching exe path/name in the machine_specs template).
Need to have set up a conda environment (with a matching name in the template).
To run:
Clone templater
module load python
Copy run_all.sh from the forces dir - can this be made universal?
Run tests.
Compare/check the results (a rough sketch of these run steps as commands follows).
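As a sketch only, the run steps above might translate into something like the following on Summit; the repo URL and the relative path to the forces dir are placeholders:

```bash
module load python                      # Summit Python module
git clone <templater-repo-url> templater && cd templater
cp path/to/forces/run_all.sh .          # placeholder path; ideally this script becomes universal
./run_all.sh                            # launch the WarpX tests
# then compare/check results (see the checking discussion above)
```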
Note: I'm inclined towards getting rid of the machine_specs file and using templating in its place.