Testing Procedures
This page provides links to documentation describing the testing procedures for each component / repository that makes up CESM (or in some cases this page may provide this documentation directly). The audience is CESM software engineers who are familiar with CESM / CIME system testing in general, but who need to run testing on a component that they do not regularly work with.
Many components have different testing procedures that are run depending on what has changed. This page is not meant to be exhaustive; rather, it captures the most common testing procedures – i.e., those that should cover about 90% of cases.
Components should aim to include at least the following information. Ideally, the documentation's organization will allow readers to quickly find these pieces of information. The goal is to make it relatively easy for CESM software engineers to run the testing for a different component when the need arises; needing to spend an hour reading through documentation defeats that goal.
1. Overview of the testing, merging, and tagging process
Give a brief overview of the testing, merging, and tagging process for this component. Some questions to cover are:
- Is it expected that an issue is opened for every planned change?
- How should you check out the code for testing? Does this component support a standalone checkout for testing, or does it need to be checked out in the context of CESM or some other "umbrella" repository? If the latter, describe how to determine a reasonable version of the umbrella repository to use for testing – and in particular, how to determine which versions will have baselines available for system tests of the component.
- Does full testing need to be run on each set of changes? Or is it common to combine multiple changes / PRs together and have a component SE run full testing on the batch of changes? If the latter, is there some minimum expected / recommended testing on the individual changes?
- Does a branch need to be up-to-date with the latest main branch before final testing? If so, there will typically be some process for tag ordering, so briefly describe that process.
- Is there a ChangeLog entry for each merge to the main branch?
- If this component has its own externals (e.g., CTSM's FATES, or CAM's CARMA, CLUBB, PUMAS, etc.): provide links to any documentation on procedures for changing those externals.
2. Details of full system testing
Give details of the exact test commands to run to achieve full system testing, and on what machine(s). This does not need to exhaustively cover all possible testing scenarios; rather, it is meant to cover the typical cases – say, the common 90% of scenarios. The testing commands should be detailed enough that someone can essentially copy and paste them into a terminal (replacing some placeholder text, e.g., the baseline tag name to be used for comparison). Different components have different procedures for how to kick off their test suites – for example, whether a separate `create_test` invocation is made for each compiler in the test suite – so please be very explicit in these instructions; an illustrative sketch follows the list below.
These instructions should include:
- The exact test commands to run and on what machine(s)
- Location of baseline directories for each test machine
- If there is anything non-standard in how to check the test results, document that as well
- How expected failures can be determined for any given tag
- Rough estimate of test turnaround time and core-hour cost
- Any other important notes about the testing process, including references to additional relevant documentation
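As a concrete illustration of the level of detail intended here, a component's documentation might spell out something like the following. This is only a sketch: the test category, machine, compiler, baseline tag, and test id below are placeholders, not any real component's configuration.
```
# Illustrative sketch only: all names below are placeholders.
# Run one test suite for one compiler on one machine, comparing against an
# existing baseline tag; repeat per compiler if the component's suite is
# organized that way.
cd cime/scripts
./create_test --xml-category aux_mycomponent \
              --xml-machine cheyenne \
              --xml-compiler intel \
              --compare <baseline_tag_to_compare_against> \
              --test-id <unique_test_id>
# Results can typically be summarized afterwards with the cs.status script
# that create_test writes into the test output directory.
```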
3. Testing procedures for each component
3.1 CESM integration testing (prealpha, prebeta)
3.2 CIME
Run `scripts_regression_tests` on your branch. To do this, you need to check out the externals that CIME needs by running `checkout_externals` from the top level of your CIME clone. (Note that `manage_externals/checkout_externals` does NOT come bundled with CIME: you will need a copy of `checkout_externals` available for this, e.g., obtained by cloning the manage_externals repository.)
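A minimal sketch of that sequence is below; the manage_externals clone location and the path to `scripts_regression_tests` within CIME are assumptions that vary between CIME versions, so check your checkout before copying.
```
# Sketch only; URLs and paths are assumptions to verify for your CIME version.
# 1. Get a standalone copy of checkout_externals (it is not bundled with CIME).
git clone https://github.com/ESMCI/manage_externals.git

# 2. From the top level of your CIME clone (with your branch checked out),
#    fetch the externals that CIME needs.
cd cime
../manage_externals/checkout_externals

# 3. Run the regression tests (historically located under scripts/tests/).
cd scripts/tests
./scripts_regression_tests.py
```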
See CIME's documentation in the CIME User's Guide and in the CIME wiki for more details.
3.3 CMEPS
The typical CMEPS testing process is to run full CESM prealpha testing on cheyenne. Check out your CMEPS branch in the context of the latest CESM prealpha tag and run prealpha testing with comparisons against the relevant prealpha baselines.
3.4 CDEPS
The typical CDEPS testing process is to run full CESM prealpha testing on cheyenne. Check out your CDEPS branch in the context of the latest CESM prealpha tag and run prealpha testing with comparisons against the relevant prealpha baselines.
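For both CMEPS and CDEPS this amounts to swapping your branch into a CESM checkout at the latest prealpha tag and then launching the prealpha suite. The sketch below assumes the standard CESM layout (CMEPS under components/cmeps, CDEPS under components/cdeps); the tag and baseline names are placeholders.
```
# Sketch only; tag and baseline names are placeholders.
git clone https://github.com/ESCOMP/CESM.git cesm_prealpha
cd cesm_prealpha
git checkout <latest_prealpha_tag>
./manage_externals/checkout_externals

# Swap in your component branch (use components/cdeps for CDEPS); add your
# fork as a remote first if the branch lives there.
cd components/cmeps
git checkout <your_branch>
cd ../..

# Launch the prealpha suite on cheyenne, comparing against the matching
# prealpha baselines (optionally restrict with --xml-compiler).
cd cime/scripts
./create_test --xml-category prealpha \
              --xml-machine cheyenne \
              --compare <prealpha_baseline_tag>
```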
3.5 FMS
3.6 CAM
The process for requesting changes to be incorporated into the CAM code can be found at <https://github.com/ESCOMP/CAM/wiki/CAM-Development-Workflow-in-GitHub#how-to-submit-code-changes-to-be-included-in-escompcam>
The specific testing procedures are at <https://github.com/ESCOMP/CAM/wiki/CAM-SE-Workflows#run-cam-tests>
3.7 CTSM/CLM
CTSM encourages collaborators to open issues early in their development process, though this is not strictly required.
CTSM supports a standalone checkout for testing, and all baselines are generated using tags from this standalone checkout.
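A standalone checkout suitable for testing typically looks something like the sketch below; the repository URL and the externals step reflect the manage_externals-era CTSM workflow and should be verified against the current instructions in the CTSM repository.
```
# Sketch only; verify against CTSM's own checkout instructions.
git clone https://github.com/ESCOMP/CTSM.git
cd CTSM
git checkout <your_branch_or_tag>
./manage_externals/checkout_externals
```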
In general, full system testing is required for every branch that changes Fortran source code (exceptions need to be approved by CTSM SEs). However, for small, bit-for-bit changes (or changes that only affect a few diagnostic fields), we often combine multiple changes together and then a CTSM SE will run full testing on the batch of changes. In this case, we recommend running the `clm_short` test list on cheyenne, or some other small, targeted set of tests, for each individual branch. (This can be done using `create_test` or the `run_sys_tests` wrapper described below. Baselines for the `clm_short` test list should exist for all tags, since it is a subset of the main `aux_clm` test list.)
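For example, a minimal pre-merge check on an individual branch might look like the sketch below (run from a CTSM checkout on cheyenne; the compiler and baseline tag are placeholders).
```
# Sketch only: run the small clm_short test list against existing baselines.
cd cime/scripts
./create_test --xml-category clm_short \
              --xml-machine cheyenne \
              --xml-compiler intel \
              --compare <previous_ctsm_tag>
```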
Branches need to be up-to-date with the latest `master` branch before running final testing for the merge back to `master`. Tag ordering is discussed during weekly CTSM-software meetings and tracked via the "In progress - master" column in this project board. However, we encourage running at least a few tests (e.g., those in the `clm_short` test list, or a small set of tests targeting your changes) earlier in the process.
In general, each merge to master is tagged. A ChangeLog entry is required for each tag.
For some additional details on this process, see the checklist in the CTSM repository.
For changes involving the FATES external, see this wiki page.
See the "Notes for integrators" section of CTSM's System Testing Guide.
Note that CTSM provides a `run_sys_tests` command that wraps `create_test`; the location of baseline directories, integration of expected failures, etc. are handled via this wrapper. More information about `run_sys_tests` can be found here. If you prefer to use `create_test` directly, you can first run `run_sys_tests` with the arguments `--dry-run --verbose`: this will show you the `create_test` commands that would be run without actually running them.
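As a rough illustration, a full `aux_clm` run via the wrapper might look like the following; apart from `--dry-run` and `--verbose`, the flag spellings are assumptions based on recent CTSM and may differ in your checkout, so consult `./run_sys_tests --help` first.
```
# Sketch only: run the main aux_clm suite from the top of a CTSM checkout,
# comparing against an existing baseline tag and generating new baselines.
./run_sys_tests --suite-name aux_clm \
                --compare <previous_ctsm_tag> \
                --generate <new_baseline_name>

# Same invocation, but only print the underlying create_test command(s)
# without actually submitting anything:
./run_sys_tests --suite-name aux_clm \
                --compare <previous_ctsm_tag> \
                --generate <new_baseline_name> \
                --dry-run --verbose
```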