Meeting Notes 2020 Software
- Bill: GitHub discussions
- Negin/Sam: Toolchain update
- Erik: Note this cime bug regarding coszen forcing: https://github.com/ESMCI/cime/issues/3802
- Erik: Do we need a particular test level for FATES tags that come to main dev?
- Erik: Do we want BHS to be allowed for `clm4_5` and `clm5_0`? Will need fields added to their paramsfiles if so.
Advantages I see of GitHub's discussions are:
- Everything will be in one place, rather than having a separate site for discussions vs. the model repo
- GitHub's discussions offer some nice features like up-voting and marking a question as answered, as well as a clean interface
- Avoids having CTSM's support tied in with CESM
Disadvantages of GitHub's discussions are:
- The separation from CESM forums may be a disadvantage as well as an advantage
- One particular disadvantage is the separation from the infrastructure (cime) support forum, which is a more appropriate place for questions relating to porting and other cime-related issues.
- At least for now, it seems that watching a repo is all-or-nothing: I
don't think you can just watch discussions or just watch issues, but
instead would need to watch everything (though you can still watch
individual threads that you're interested in). However, it is possible
to filter on discussions vs. issues in your GitHub notification center
if you get web-based notifications https://github.com/notifications,
or via email filters if you get email-based notifications. So, for
example, you can filter to see all discussion items then mark them all
as read if you're not interested in discussions.
- Update (2020-12-19) This is wrong: you can choose to watch just discussions, just issues, PRs, etc., or some combination
What would / wouldn't go there?
- I would envision that internal discussions, for example, "how should we implement X", would still be done via issues. For example, I have used issues for discussions around water isotope design. It's helpful to have an issue so that it can be prioritized in a project, assigned, marked as resolved, etc.
- I would also envision that science discussions that are aimed towards new developments would still be done via issues, so we can track these planned enhancements, assign them, mark them as resolved, assign them to milestones, etc.
- So I guess that leaves these discussions mainly about user support. Perhaps also a venue for preliminary discussion of ideas if people are more comfortable raising these in something that isn't labeled an "issue".
Keith answers a few questions a day in the CESM forums. An advantage of having it there is that questions can be transferred back and forth between CTSM and other CESM components.
The counterargument to that is: as we start to get questions from WRF users, where will they go?
Fei: WRF support relies on the WRF help desk. Land-related questions would get directed to Fei or Mike Barlage... maybe on the order of 10 questions a week.
Dave points out that this might require discussion at the CESM, WRF, SIMA, etc. levels. Fei suggests Wei Wang as most appropriate to weigh in on WRF support.
Alper found https://github.com/ESMCI/cime/issues/3802
Note that this differs from https://github.com/ESMCI/cime/issues/3380
There is a fates test list that Ryan definitely wants to maintain: they use that for internal FATES testing.
We may want to rework the set of FATES tests included in aux_clm: less emphasis on testing different grids, more on testing different options.
In terms of what should be tested when bringing a FATES update into CTSM: Bill feels we should generally run aux_clm testing, with the possible exception of when the ONLY thing that has changed is the version of the FATES external. We could possibly run more testing when bringing a new version of FATES in (adding the fates tests as well).
No need to allow this for `clm4_5` or `clm5_0`.
Note that Sean's changes also include a change in the stocking density.
- Erik/Will/Ryan: veg-modes and FATES: https://docs.google.com/spreadsheets/d/1wwI_RuQ87FNk4PdrSLtlplisEMTRqDCxFeqG6Wbxt_c/edit#gid=0
- Erik/Will: We thought that ctsm5.2 series should be defined by changes to the surface datasets. Does this sound right to everyone else?
- Erik/Will/Ryan: We came up with a document to describe veg-modes and FATES. Ryan is out this week, so I propose that we talk about it more next week.
- Erik: Have some subtle issues I found in cn-matrix by painstakingly going through all of the "use_matrixcn" if statements in CNPhenology and making sure they matched the non-matrix version. I'd like to show these and make sure we agree the non-matrix version is correct. I need to do this with all the code, then I'll ask for help again from Keith/Chris. I had hoped this would solve the current Nitrogen balance problem, but it's still there.
- Erik: I sent email on this as well. But, it seems like the next priority for mizuRoute is getting irrigation demand working. Does that sound correct?
- Erik: FYI. I need to make a MOSART/RTM tag for pio2.
- Erik: I'm going to need to spend a majority of time on the Delft project for rest of 2020.
- Bill: For ChangeLog, should we actually remove various lines rather than saying 'none'? I realized that I may often miss things in the ChangeLog because in my quick skims, I could miss information embedded amongst the 'none' responses.
There are various updates that require new surface datasets, such as:
- Gross Unrepresented Landuse
- etc.
Erik & Will's feeling is that we will make changes that don't require changes to the surface dataset first, and include those in 5.1. Then we'll make the changes that require changes to the surface dataset later - tentatively in 5.2 rather than 5.1.
Will's rationale: PPE and the parameter changes that come from it will essentially become 5.1. (5.1 would also include FATES and FAN changes, but those don't impact the standard configuration.)
We agree with this as a tentative plan (which we may revisit later).
- Erik: How should fates-sp mode be handled? And the internal logical flags for it. #1182
- Erik: Danica ran the matrix version of TRENDY. So we should let people know about it.
- Erik: Have some people stay on to look at some CN-Matrix code with me.
- Erik: Contact Chris regarding the trouble I'm having?
- Dave: dlnd questions
Charlie points out that some of the nomenclature doesn't make sense anymore: e.g., FATES controls some of the biogeophysics. CN is also weird in that it controls both soil and veg.
We'll probably come up with a short-term plan for PR 1182 (maybe something that we know won't be kept long-term), but maybe informed by a longer-term plan.
Ryan has suggested a `-veg-model` flag (bigleaf vs. fates), and a `-veg-structure` flag (data vs. prognostic). Charlie notes that there is also a static-stand-structure option; also, he's working on a fates bigleaf mode, so we may need a different name for the legacy veg model.
What is (in)compatible?
- If you have veg-structure prognostic, you need some sort of belowground BGC (with or without nitrogen)
- If you have fates, are there any additional constraints on below-ground?
- Probably, with carbon-only, fates doesn't care about what's happening in the soil in terms of BGC: you probably could run without any BGC in the soil
- If you have data veg, can you use prognostic belowground?
- Maybe this could be done with something like Will's soil testbed (forcing soil with NPP)
- Note that the veg model doesn't conserve carbon mass in data mode, which could make having a soil BGC model challenging....
In terms of namelist variables vs xml flags:
- Bill proposes getting rid of `BLDNML_OPTS` and replacing those with individual xml variables.
- Namelist vs. xml variables is a bit subjective
- xml variable needed when you want to set it via the compset
- xml variable feels appropriate when you set multiple namelist defaults based on one setting
See table: https://docs.google.com/presentation/d/1HerZtaoO4LWFeriHzfcoUVoZeR2rOKcFwlhb7kOVWpE/edit#slide=id.p
Would "other FATES modes" be xml variables or namelist flags? One question is whether they would imply different initial conditions.
Rosie's PR includes both SP mode and fixed biogeography
What about crop? That is currently a separate flag. We could keep a separate flag, or maybe combine it with the biogeochemistry flag, given that you need C & N for crop.
How does this interact with the modes for NWP? There are currently two xml variables, `structure` (fast vs. standard) and `configuration`. To some extent that's separate from this discussion, but there may be some interaction if (for example) we want fast structure to imply something about the FATES modes.
Back to Rosie's PR: The way she's doing things now is with some extra namelist flags (no xml variables). Erik raised the question of whether this makes the current bgc flag and the 'use_cn' more confusing. Suggestion of bringing in her PR essentially as-is, then move towards our long-term vision as a follow-up.
Next steps for the long-term: Starting with the table above:
- Determine which of these are xml variables vs. just namelist flags
- How the xml variables map to namelist flags
- How other namelist defaults depend on these xml variables
- Which of these are (in)compatible
Erik & Will & maybe Ryan will work together on the above
Then would be the (possibly challenging) task of going through the Fortran code and replacing uses of `use_cn` with the appropriate new flag(s).
Charlie: Should we be thinking about the hillslope model and the CPT work, with their different ways of coupling the model?
- CPT probably doesn't interact with this, and is a ways off
- Hillslope
- Currently controlled by what fields are on surface dataset (we think)
- Other than incompatibility with transient landuse, it should be compatible with anything on this table
Timeline for this: It's not blocking anything, but it is something we'd like before too long to avoid user confusion.
Should we try to coordinate this with ELM to keep things consistent between the two? It may be worth at least having a conversation with ELM folks.
C balance errors. One of the problems Erik is having relates to the dribbling... this is related to the xsmr pool (which is handled differently for the matrix code).
Erik has gotten the CN matrix code to give same answers with matrix off. Now he's having tons of balance issues with matrix on.
The TRENDY branch has a version where Erik thinks matrix is working correctly. Since then, there have been changes to CTSM master.
Suggestion of bringing Chris and Keith in to help.
- Erik: Mike Levy gave a presentation on python CESM diagnostics package. Want to make sure CTSM scientists know about it.
- Erik: Is ctsm5.2.0 a "we tag it when it has what's required" tag, or one we need to make by a certain date? Or a mix of the two?
- Erik: I moved the current ctsm5.1.0 milestone issues/PR's to ctsm5.2.0. This means ctsm5.1.0 is complete. Is my ctsm5.2.0 list correct? Is FAN required? Is the FATES work required as part of ctsm5.2.0? Should we be thinking about both ctsm5.2.0 AND ctsm5.3.0?
- Erik: Suggestion to keep the -fast-maps option in the surface dataset prototype just to make testing faster/easier. See my comment here. https://github.com/ESCOMP/CTSM/issues/644#issuecomment-725896674.
Idea is that we'd tag the first ctsm5.2.0 dev tag when 5.1 is "done".
There are a few things we want for 5.1.
Will's medium-range goal is to have a version with FATES that can be containerized - e.g., in a CESM2.2 version.
More generally, Will suggests: a good milestone for 5.1 would be a CTSM-FATES version that can be run within CESM.
In terms of GitHub milestones: For things we want before 5.1 is "finalized", what is this called? We'll use a milestone like "5.1" for things that we want sometime in the 5.1 dev series - i.e., before 5.1 is "finalized".
FAN? Not absolutely required, but want to get this finished before too long.
There's an option both in mkmapdata and in mksurfdata that skips the 1km mapping. Erik suggests keeping this in the new tool chain to keep testing faster.
Negin and Sam would like to get a working version then get feedback on it.
Suggestion of opening fairly frequent PRs that can be brought to master, even when things aren't yet working fully.
Suggestion of putting python code in CTSM's python directory.
- Sunniva: Could we also talk about merging criteria from your (NCAR's) side? We are discussing how to organize our GitHub here. I think we need some input from your side.
- Will: FATES `main_api` & CTSM5_1 main? Merge for CESM2.2 simulation & containerization?
- Erik: I put a FATES main_api tag naming convention on the wiki. Make sure it's right.
- Dave/Danica: List of variables for CLM5. CLM4.5 list - http://www.cesm.ucar.edu/models/cesm1.2/clm/models/lnd/clm/doc/UsersGuide/history_fields_table_45.xhtml
- Will: Newsletter, including dumb questions about images https://github.com/ESCOMP/CTSM/wiki/CTSM5.1-developments
- Erik: I don't feel confident in merging CN-matrix into the PPE branch, and am still having trouble. I'm thinking the way forward right now would be to keep them separate. So I'd create a PPEcnmatrix branch that has matrix and PPE, and it would be used for spinup, while PPE would be used for normal simulations. Once I can fix the CN Matrix issues so answers are identical, I'd then feel confident in merging it into the PPE branch at a later date.
- Erik: I'd like to meet with Bill and Negin (and whoever else willing to) to talk about the CN matrix issues.
- Erik: I did create a SMYLE tag, I feel like it needs more testing though.
- Bill: working on land ice stuff for the next few months
Erik: In theory, we want to periodically merge back and forth to keep the two in sync. We haven't done it as often as we should lately.
We would need `fates_main_api` to be updated to the latest version of master first, then we could merge it back into master. That first step is something Greg may be able to do in the next couple of weeks.
Will: one goal is to have a version of the latest FATES included in Brian's containerized version.
- Greg: it isn't absolutely necessary to have a version officially included in order for someone to use it, but it would be nice.
Greg's view is that, after the v1 release, it might make sense to keep FATES development closer to the main (master) CTSM development branch.
Ryan: One reason for having the `fates_main_api` branch is that it helps deal with the complexity of needing to remain compatible with both E3SM and CTSM. But he wonders if the net value is worth it, given the time required to maintain this separate branch.
Ryan: maybe give a shot to doing direct integration (without `fates_main_api`) and see if it feels easier or harder.
Feeling is that we'll go with the approach of doing this based on runtime outputs, such as via the netcdf header.
To do this, we'd probably want a namelist flag that turns on all of the inactive fields.
With this approach, we'd need to pick a particular configuration to document. We'd probably use our standard configuration(s) for this. In addition, we may want to make it easy for users to run this method themselves in order to get the list for their current configuration.
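A minimal sketch of pulling the field list from a history file header (the file name is hypothetical; assumes the netCDF4 Python package):

```python
# Minimal sketch: list the history fields present in a CTSM history file
# header. The file name is hypothetical; assumes the netCDF4 package.
import netCDF4

HIST_FILE = "case.clm2.h0.2000-01.nc"  # hypothetical history file

with netCDF4.Dataset(HIST_FILE) as nc:
    for name, var in nc.variables.items():
        long_name = getattr(var, "long_name", None)
        if long_name is None:
            continue  # skip coordinate/bookkeeping variables
        units = getattr(var, "units", "")
        print(f"{name:24s} {units:16s} {long_name}")
```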
Erik hasn't been able to fully resolve the issue of answers changing on the CN-Matrix branch when cn matrix is off. Suggests using a different branch for the spinup (with CN Matrix) and the main case (which would use a branch without the CN Matrix code).
Note that you do expect different answers with matrix on vs. off (but just roundoff-level). But you do not expect different answers when matrix is off (but you have the CN matrix code).
Dave supports Erik's plan.
- Dave: User's guide
- Erik: Single point discussion. I didn't understand that we resolved anything. The singlept script is being used for the PLUMBER project. It makes some sense because it's fast, as it just grabs a cell from an existing surface dataset. Do we want that to be the way that people do all single-point work? Should the new tool chain process also create single point surface datasets -- or will single point work always be a separate process? I think the main concern with PTCLM is the time and machine requirements to create mapping files. If the new tool chain is reasonably fast and easy, is it the way to always create surface datasets? Do WRF users do single point tower sites?
- Erik/Greg/Ryan: Proposal for fates tags: `ctsm1.0.dev105_fates_api14.0.0.n01`?
- Bill: Documentation build
Will isn't sure if many people use PTCLM. Question of whether to keep that around or deprecate it and just have people use Sean's scripts.
Also, better engineered process for running the PLUMBER sites and other similar workflows.
Erik: PTCLM is built on the toolchain, meaning that you need to make the mapping files. If we improve the toolchain, do we want to build PTCLM on top of that, or use a separate process that doesn't use the toolchain?
Will: Maybe worth considering different use cases:
- Pulling out a grid cell from a global dataset
- Setting up a run at a point like a flux tower site
For the latter, there could be some theoretical benefit from using the high res datasets, but in practice, it probably doesn't matter that much.
Dave would vote for deprecating PTCLM to avoid its maintenance cost, and spending that energy on other things like making it easy to run existing tower sites.
Path forward:
- Deprecate PTCLM
- In the toolchain work: we will not implement the single point override capability in the new scripts for now
- Once we have the new toolchain, we can evaluate: does it make sense to try to use this for single point (adding in the override capability), or stick with Sean's scripts? This will depend on factors like: how much work would this be to implement? how time consuming is the tool chain? how much do we want the processes to be consistent?
We need to add something to the user's guide describing how to use Sean's and/or Keith's scripts.
Erik suggests then moving stuff out of contrib, and adding some testing to it.
Will: Sean's script also needs to be updated to work with python3.
We'll need to come up with a different mechanism to make the smallville dataset for testing. Can probably do something hacky for that. We also have single-point urban datasets; not sure if we still need those (probably not really needed for testing; will check with Keith on whether he still uses those).
Suggested: `ctsm1.0.dev105_fates_api14.0.0.n01`
The `n01` would be incremented if you do a second tag with the same CTSM and FATES API version. May get rid of that (since it will be uncommon), instead using a letter if you have a second tag.
Separate "directory" or at top level? Greg feels it would be helpful to have these at the top level to see where fates tags fall relative to the ctsm tags.
No other strong opinions, so let's go with that.
- More on tool-chain?
- Greg/Erik: Tags/releases for `fates_main_api` discussion. The Nordic ESM team is looking to integrate CTSM-FATES into their Galaxy project tools. Currently their toolchain downloads source tarballs from tagged/released versions of their fork of CTSM. If we tag/release `fates_main_api`:
- What should the testing requirements be for fates branch tags? (The branch naming convention would be: `branch_tags/fates_main_api.n02_ctsm1.0.dev105`.) We might also want the fates version in there as well.
- What should the naming convention be?
- What are the documentation requirements?
- Alternatively, what about using the NGEET/CTSM fork for initial tooling integration?
- Relatedly, coordination with other containerization workflow efforts at NCAR?
- Dave: Anomaly forcing documentation
- Will: Forcing data for CESM-lab and dockerized CTSM simulations
- Will: SMYLE user-mods-directory
Would it make sense to coordinate this with CESMLab (Brian Dobbins' containerization efforts)?
Tagging `fates_main_api`: fine to have tags of this, but question of whether an alternative would be to more frequently sync `fates_main_api` to CTSM's main branch, allowing tags to just exist on the main branch.
For now, at least, we'll have tags of `fates_main_api`... but there's also agreement that, as a general thing, we should try to keep things relatively well synced between our main branch and `fates_main_api`, so we don't have a divergence between those branches / development communities.
Documentation: in user's guide, special cases.
For training purposes, could:
- Use Qian forcing
- Use just a few years of data
But then there's a question of how to handle this for actual production simulations. If we're going to use a particular cloud provider, like Amazon, then we could store the data via Amazon S3.
- Tool-chain follow-up
- Bill: Big file / module rename: Do we want the `ctsm_` prefix on files as well as modules?
- It is needed on modules to avoid namespace collisions
- I have mixed feelings about having this on files
- See https://github.com/ESCOMP/CTSM/pull/1143#issuecomment-707891792
- What do people think, maybe based on looking quickly at this?
- Erik: Should we have a `clm5_1` physics check box for changes to `clm5_1` while it's being built? I left this off with the idea that this only matters when `clm5_1` is more or less frozen with simulations that support it.
- Erik: Adding a milestone for ctsm5.2.dev001 that will represent when `clm5_1` physics is "frozen".
- Erik: Some lessons learned about the ctsm5.1.dev001 tag. Our baseline comparison testing for physics versions catches a ton of issues. I skipped some testing for `clm5_1` because I didn't have baselines. I should've documented my testing for `clm5_1`, as well as validated the refactor I did for `clm5_1` with my own baselines. When I went back and did this, I caught most of the critical issues that Bill found. It was also good that Bill did his review. Another lesson might be to be more careful about the introduction of a new physics option (because it will have a lack of baselines). We should make sure more people look at it, for example.
- Erik: We are going against something that Fang asked for in her set of changes. We think it's right though. How should we handle this?
- Erik: Should Matrix be considered another BGC mode (similar to SP, CN, BGC, FATES, but its own mode, called cn-matrix maybe)? See my note in the cn-matrix PR (#640). Since it's a fundamental part of the model, it needs more comprehensive testing to make sure it's working well.
- Erik: Should CRU-JRA 2019 be added to cime and ctsm? And maybe cruncep deprecated?
- Erik: Should SMYLE use-cases go into CESM2.1.x tags?
- Erik: maxpft removal in FATES? -- https://github.com/NGEET/fates/issues/427#issuecomment-706449078
- Will: tracking and documenting answer changing tags e.g. CTSM5.1dev007 -- https://github.com/ESCOMP/CTSM/wiki/Answer-changing-tags
Mariana points out: We could do away with domain files entirely. She has thought about this for NUOPC. Bill points out that we could do something already with LILAC, because we already ask the atmosphere to send the land mask for consistency checking: instead, we would use this land mask to actually set the mask within CTSM.
We'll start a new numbering at the inception of a new physics version. For example, ctsm5.1.dev001 was at the inception of 5.1 physics, and ctsm5.2.dev001 will be at the inception of 5.2 physics.
We'll keep in the check box in the ChangeLog to track tags that significantly change answers for 5.1, even if that ends up being a lot of tags for now.
Should we somehow denote the time when 5.1 is stabilized, which may be before the inception of 5.2? We could do this via a 5.1 release (documenting this on the web page). We can figure this out when we get to it.
For documenting the answer-changing tags (https://github.com/ESCOMP/CTSM/wiki/Answer-changing-tags): we'll stick with testing 5.0 physics periodically, since 5.1 is still rapidly evolving.
Feeling is: the BGC modes really give different physics / biology. The matrix solution doesn't fall into that category. Also, matrix can be combined with the other BGC modes, so there's a combinatoric issue.
Dave & Will agree this should be added, and cruncep should be removed or at least deprecated.
Discussion about the piece of the tool chain that generates the surface dataset (after first generating the necessary mapping files).
We're gelling around this idea:
- We use the functionality of PR #823: mapping is done with no masking
- For a given raw dataset resolution, we have a single definitive file describing the mesh
- Then each raw dataset has a piece of global metadata that points to the associated grid file
Then, for a user bringing in a new raw dataset, they would add this global metadata (see the sketch after this list). It could either:
- Point to one of our existing grids
- Point to a new grid file of their own creation
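A minimal sketch of that metadata idea, assuming a hypothetical global attribute name (`mesh_file`) and hypothetical file names; this is not an agreed convention:

```python
# Minimal sketch: attach a global attribute to a raw dataset pointing at
# the definitive grid/mesh file for its resolution. The attribute name
# and file names are hypothetical.
import netCDF4

RAW_DATASET = "mksrf_soiltex.nc"        # hypothetical raw dataset
MESH_FILE = "mesh_0.25deg_nomask.nc"    # hypothetical definitive grid file

with netCDF4.Dataset(RAW_DATASET, "a") as nc:
    nc.mesh_file = MESH_FILE  # the toolchain would read this to pick maps
```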
One of the arguments to the tool will be a path to a directory that will contain the outputs - chiefly, the various mapping files. Then, if you rerun it and point to the same directory, it will be able to determine which mapping files already exist and just recreate the ones that do not yet exist. So if they have all been created, it would skip the creation of mapping files and go right on to `mksurfdata_map`.
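A minimal sketch of that caching behavior, with hypothetical directory and file-naming conventions:

```python
# Minimal sketch: determine which mapping files still need to be created
# in the output directory. Names and naming scheme are hypothetical.
from pathlib import Path

def missing_mapping_files(map_dir: Path, raw_grids: list[str], dst_grid: str) -> list[Path]:
    """Return the mapping files that do not yet exist in map_dir."""
    wanted = [map_dir / f"map_{src}_to_{dst_grid}.nc" for src in raw_grids]
    return [p for p in wanted if not p.exists()]

# If nothing is missing, skip map generation and go straight to mksurfdata_map.
todo = missing_mapping_files(Path("maps"), ["0.25deg", "3minx3min"], "4x5")
for p in todo:
    print(f"would create {p}")  # placeholder for the actual regridding step
```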
We had some discussion about how input should be provided. Most of us feel that there should be some high-level inputs (like year) to a tool that creates the default namelist for you. You could override this using either a `user_nl`-like mechanism, or there could be a separate first step of creating the default namelist, and then you modify this default namelist directly. We're leaning towards the latter.
When should the user run the `mksurfdata_map` tool-chain relative to the LILAC `build_ctsm` script? Often they would want to create the surface dataset before they're ready to build & run the model. But there are some advantages to fitting this into the `build_ctsm` process - running the `mksurfdata_map` piece after at least the first stages of `build_ctsm`. The `build_ctsm` script does things like:
- Sets up a location for files from the inputdata repo. (Side note: need to think about how to leverage `check_input_data` in the `mksurfdata_map` toolchain.)
toolchain.) - Specifies build options for a user-defined machine
- etc.
It could be useful to leverage some of this in the `mksurfdata_map` toolchain.
- Tool chain deep dive: go through the current tools and their data flow, start thinking about what a unified tool could look like
Ben agrees that using a container could make sense - though it does introduce a new dependency on Docker (or whatever).
Negin asks: could OCGIS be bundled with ESMPy, since that is already available on a lot of systems.
- The issue is that the GIS dependencies are a limitation
- Regarding dependencies: They have talked about removing the big dependency, gdal. That isn't needed for our purposes.
Sam points out that using OCGIS basically means a single line of code change in mkmapdata. So we could use it now and not be too tied to it if it isn't working out long-term.
- NetCDF inputs and outputs from each tool
- User input for each tool
- How important is it that we make it possible / easy to run individual pieces of the tool chain?
- Possible reasons this is important: If you only want to run one or two pieces; if you want to intervene to make some changes or set some options in between different pieces.
- It seems important to keep `mksurfdata_map` separately runnable, for example.
- If we want to keep the different pieces separate, then this could be done by (see the sketch after this list):
- Keeping individual tools that can each be run separately, and then having a wrapper script that calls each of them.
- Integrating the tools into a single utility, but having options to just run parts of this overall utility. In this case, you may need to provide some additional file names to the tool - for example, if you just want to run step (2), then you need to tell it where to find the outputs from step (1).
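A minimal sketch of the second option (a single utility with flags to run only parts of the chain); tool structure, step names, and flags are hypothetical, not the actual CTSM toolchain:

```python
# Minimal sketch: one utility with options to run only parts of the chain.
# Step names and flags are hypothetical.
import argparse

def make_mapping_files(map_dir: str) -> None:
    print(f"creating mapping files in {map_dir}...")

def make_surface_dataset(map_dir: str) -> None:
    # When run standalone, the user must say where step 1's outputs live.
    print(f"running mksurfdata_map using maps from {map_dir}")

def main() -> None:
    parser = argparse.ArgumentParser(description="unified toolchain (sketch)")
    parser.add_argument("--step", choices=["all", "maps", "surfdat"], default="all")
    parser.add_argument("--map-dir", default="maps",
                        help="directory where mapping files are written/read")
    args = parser.parse_args()

    if args.step in ("all", "maps"):
        make_mapping_files(args.map_dir)
    if args.step in ("all", "surfdat"):
        make_surface_dataset(args.map_dir)

if __name__ == "__main__":
    main()
```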
- Would it work well for the user to provide input in a single place for the entire tool-chain?
- Is the user input limited enough that command-line flags would be sufficient, at least for the vast majority of cases? Or is it complex enough that we would want to use something like a config file? (Actually, this probably isn't too important to resolve now.)
It feels like we should have two separate tools: one to create the domain file, and another to create the surface dataset.
For creating the domain file, a lot of the complication arises from how to specify the ocean / land masks. There are currently a few ways to do this, and it's not clear exactly how we want to support this moving forward.
For creating the surface dataset: We probably want a single tool that both creates the necessary mapping files and runs mksurfdata_map. The mapping files could be stored in a temporary location so that they don't need to be recreated if you need to rerun it - or so that (for example) you can reuse the same mapping files when creating two different years of surface dataset for the same resolution. However, there is general agreement that we should probably move away from storing these mapping files in our xml database.
There are some complications relating to how to allow the user to override the out-of-the-box namelist options. This mainly comes up in terms of pointing to custom raw datasets.
- Currently, we try to save time in the mkmapdata step by reusing a mapping file for multiple raw datasets if they use the same grid & mask (or, with https://github.com/ESCOMP/CTSM/pull/823, just the same grid).
- If a user wants to point to their own raw data set, we could let them point to their own scrip grid file that is associated with it, but that wouldn't allow us to leverage the above point. Having them specify an existing grid and mask could be trickier.
Currently, users sometimes run mksurfdata.pl to generate a default namelist, but then modify it by hand. We'd probably want a process where the user can provide all of their desired overrides up-front, to avoid this extra manual intervention step. We could possibly use something like the mechanism for setting CTSM runtime namelists (with a `user_nl` file).
- Bill: Dynamic lake `mksurfdata_map` work:
- Looking back at the outstanding todos, a lot of them have to do with mksurfdata.pl, or interfacing between mksurfdata.pl and `mksurfdata_map`. I feel like this is outside the scope of what we typically (or ever) ask outside collaborators to do. Inne may be able to handle it, but I wonder if we should just offer to do these final pieces ourselves.
- Who and when?
- Erik: Chris got the Trendy-2019 branch working. Should we have Danica continue her simulation to after where it died?
- Erik: What are views on a CESM2.2.1 release versus going directly to CESM2.3.0? Maintaining both main-dev and release-clm5.0 is expensive as it is, adding the release-cesm2.2 is even more so. I already have something on it that needs to come to main-dev. If I understand Mariana's view the reason for needing CESM2.2.1 is to prevent people who need CESM2.2 from having to get the extra stuff that will come in CESM2.3 (such as NUOPC). If we really need to maintain three branches -- so be it -- but let's make sure we don't do so needlessly.
- Bill: https://github.com/ESCOMP/CTSM/issues/1166 follow-up
- Bill: https://github.com/ESCOMP/CTSM/issues/1167 follow-up
- Probably a couple of days of work, for analysis, backwards compatibility, testing
- My intuition is that it isn't very important, though if we want it done, it would be at least slightly faster for me to do it now, since my mind is in it.
- Bill: Should I fix fire issue https://github.com/ESCOMP/CTSM/issues/1170 ? It will change answers.
- Erik: My priorities for tags to bring to main-dev outside of PPE work?
Agreement that we should do these last pieces ourselves. These last pieces aren't too urgent.
Eventually, we may have mizuRoute predict lake area changes (similar to glacier), but that's relatively far out.
For now, we'll get the new lake areas for present day, as well as her transient data set, then we'll take it from there.
Dave: At this point, we're already working on next year's trendy. She should just do the run that she did for this year's trendy, not the 2019 one.
Erik's hope is that we don't need to put too much time into maintaining the 2.2 branch.
One thing we could do is to avoid updating our 2.2 branch unless it's really causing major problems in a coupled context.
Do we need to fix the dribble thing in the 2.2 branch? Probably not.
- Erik: What name should we give the PPE branch? I think it should be pushed to ESCOMP
- Erik: Looks like the release-cesm2.2.0 tag was just made. The announcement will likely be done when web-pages are done.
- Sam's priorities
- Upcoming tags
- PPE Project
Seems to be working without restarts.
Restart issue: the indexing of columns, etc., is changed with hillslopes. Ryan is making progress towards fixing this.
Erik suggests pushing this to ESCOMP. Bill agrees; leave it there as long as it's still being used, then just leave tags in place.
Name: ppe
Replacing parts of the mkmapdata toolchain with Ben Koziol's ocgis tool.
Latest status: not flexible with configurations that involve more than 8 CPUs.
Main concern is making sure that this works on a range of systems with different memory availability - e.g., cheyenne's regular nodes (not large memory), izumi and/or other cgd systems.
Ease of installation: Sam has gotten this installed on izumi... there were some initial snags but they got past them, and then were able to get it installed the same way on another CGD machine. (See https://github.com/ESCOMP/CTSM/issues/645#issuecomment-573371962)
Also need to test to make sure that surface datasets created with these new mapping files are the same as old within roundoff.
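One way to check that, sketched under the assumption that "same within roundoff" means a tiny maximum relative difference over all floating-point fields (file names and the expected magnitude are illustrative):

```python
# Minimal sketch: compare two surface datasets field-by-field and report
# the worst relative difference. File names are illustrative.
import numpy as np
import netCDF4

def max_rel_diff(file_a: str, file_b: str) -> float:
    worst = 0.0
    with netCDF4.Dataset(file_a) as a, netCDF4.Dataset(file_b) as b:
        for name, var in a.variables.items():
            if name not in b.variables or var.ndim == 0:
                continue
            if getattr(var.dtype, "kind", "") not in "fd":
                continue  # only compare floating-point fields
            x = np.asarray(var[:], dtype=float)
            y = np.asarray(b.variables[name][:], dtype=float)
            denom = np.maximum(np.abs(x), np.abs(y))
            rel = np.where(denom > 0, np.abs(x - y) / denom, 0.0)
            worst = max(worst, float(rel.max()))
    return worst

# Roundoff-level agreement should show up as something like 1e-15 or smaller.
print(max_rel_diff("surfdata_old.nc", "surfdata_new.nc"))
```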
https://github.com/ESCOMP/CTSM/pull/1086
Ready to go
https://github.com/ESCOMP/CTSM/issues/201
Sam started this a bit
Workflow / toolchain: replacing the series of tools with (ideally) a single, streamlined tool.
CSU SAM model... unclear whether this will be Sam or Negin primarily, but in any case will be a few weeks off.
CN critically negative when changing the time step of the model to 20 min, for some carbon pool. Recollection is that it runs for a few days before dying.
Based on line where it's dying: it looks like this is leafc.
Unsure whether this is a crop point... could determine that (by turning off the crop model or adding a write statement).
Could this be related to xsmr? If so, the bug fix related to xsmr could fix it. This came in in dev093, but is off by default.
Actually, no: still ran into this error with crop off, and the xsmr fix is only for crop.
May want to rule out bad forcing from CAM... we may want to start by seeing if an I compset crashes with a 20-minute time step.
- Erik: Note labelling suggestion in FATES for loops/if statements. Should we do the same? https://github.com/NGEET/fates/issues/691
- Erik: I do spend time getting things to align in columns in CTSM, since this is an established standard (commas in use statements for example). This does require effort, should we abandon this effort for ongoing changes?
- Erik/Naoki: Good news -- we have HDMA grid working for regional for just over the Amazon. Took more than an hour to read for a very small grid. Running was quick, but the setup was really long.
- Erik/Naoki: We have two other grids, CONUS, and higher resolution (MERIT-Hydro 10x more polygons) to try. Naoki is going to work with Ben about these two. We think MERIT-Hydro might have fewer vertices per gridcell and that might actually work better.
- Erik/Naoki: Much of the problems we are having is due to the difference between WGS84-ellipsoid and spherical coordinates. When converting to ESMF_Mesh OCGIS is having to convert the coordinates. Since, these are very intricate gridcells the normal tricks of expanding by a small amount aren't working reliably. One option is to have Naoki create mapping files that we use in place of ESMF regridding. The Python script he has to do this is OK (it is part of the National Water Model). They don't do great user-support and can't be shared. We think getting this to work in ESMF would be better long term. We will have to figure out how to convert the file format to one that can be read by NUOPC.
- Erik/Naoki: Tried running a case for an offline regridding case that worked (but has fractions > 1, and degenerate points) (and before we tried some new things in OCGIS, that now aren't regridding anymore). This case took nearly 6 hours of wallclock (90 tasks, on 30 large memory nodes), just to do some of the initialization. It died with a subscript overflow in mizuRoute.
- Erik/Naoki: We think a workable workflow would be to have a few maps created offline, and then change the NUOPC cap to read in a location stream rather than a mesh. So then the mesh read wouldn't take so long. Right now the mesh read takes too long, and we have to use the large memory nodes. We probably need to bring in Mariana and the ESMF team to go over that approach. And Mariana should be brought up to speed on the problems we are having.
- Erik: CESM2.1 release testdb https://csegweb.cgd.ucar.edu/testdb/cgi-bin/viewReleaseTag.cgi?tag_id=506
Motivation: Rosie has said that she spends a lot of time trying to figure out where loops and if statements begin and end, so suggests labeling them.
Discussion: Some general agreement that it could be helpful to label loops when there are very long or deeply nested loops. Some editor support can help with this, but it can be helpful to not require this editor support.
Conclusion: we'll try to do this where it makes sense, but not make it a high priority / enforced standard.
Bill feels: do it if it doesn't take too much time, but don't keep it as a required standard, or feel like we always need to do it. Erik agrees.
(See agenda above for a bunch of notes.)
Acceptability of using pre-made maps? Probably fine as long as mizuRoute is not the standard way of running. But we will likely need to resolve some of these issues long-term: if we want mizuRoute to be the standard way of running, then we probably need to be able to generate mapping files online - but this should be discussed with Mariana.
- Erik/Naoki: Meeting with ESMF people tomorrow. Trying some of the smaller grids, but having trouble.
- Erik: Should we continue support for mkprocdata_map? Keith used it because it was faster. Adam and Patrick don't have a robust replacement for it.
- Erik: Current version of cime for `fates_main_api` doesn't run on izumi; need cime5.8.24 or after. Should we update the ctsm version or cime version?
- Erik: Note, I'm keeping `user_nl_clm` in ctsm5.1. We could have it change based on version, or move to `user_nl_ctsm` always. But `user_nl_clm` is hardcoded into some cime code. I'm also allowing compname to be: clm2, clm4, clm5, or ctsm5. Some scripts assume clm2, but could be more general. Some would be harder to do, especially for ctsm5; is this OK?
- Erik/Bill: Note, Ryan Johnson volunteered to help us with the latest CTSM user's-guide. I'm hopeful about this, and hope there might be people like this that could help with certain tasks.
- Erik: CESM2.2.0 release should be two weeks away. There is an alpha/beta tag to do, then a release tag, and two CAM release tags. I think we are done now. cesm2 release tags aren't automatically going out. I'm going to leave it like that unless we start doing more of them.
- Bill: Switching to stub glc for now rather than CISM2%NOEVOLVE.
- Plan is to switch to a data glc within the next year. But this will speed our testing in the meantime.
- Any objections?
- Plan to change existing noevolve compsets to use sglc, NOT adding `Gs` to the compset aliases
- Bill: okay for me to spend time on reworking test list?
- Negin: rename PR #1143 questions and discussions. Potential timeline to avoid any conflicts.
- Erik: Fire changes modify btran2. For backwards compatibility could add a new variable or a switch. If switch -- should probably have another switch besides fire_method and coordinate it with the fire switch.
Sean ran into a problem with single precision fields on the surface dataset: leads to garbage values.
- Bill: this is a known issue with pio1. We're planning to switch to pio2 relatively soon, so are leaving this issue unresolved for now.
Sean also asks why we use double precision on our surface datasets. We could consider moving at least some fields to single precision, though Erik remembers running into problems with having single precision variables a while ago.
Sean: merged `fates_main_api` with hillslope and ran the two together. It ran! Jackie will look at FATES stuff.
However, it died trying to write the restart. Ryan can help with that.
Also, issues with `init_interp`.
- Bill: this is known not to work with FATES
Keith is using it. We'll keep it as long as people are using it.
`user_nl_ctsm` vs. `user_nl_clm`: Feeling is: let's wait to change this until we're ready to change everything at once (all "clm" to "ctsm").
History file names: Again, let's wait to change this until we change everything at once.
Overall feeling: let's back out the compset naming of 'CTSM', going back to 'CLM51', then think about how & when we want to rip the band-aid off (i.e., changing all names at once).
- Erik: We are planning on removing the fully coupled CLM4.5 compsets: BC5L45BGC, B1850L45BGCR, B1850C5L45BGC, BRCP85L45BGCR, BRCP85C5L45BGC, BC5L45BGCR, B1850C4L45BGCRBPRP, B1850C4L45BGCBPRP, B1850L45BGCRBPRP, E1850L45TEST. Let us know if NOT OK.
- Erik/Greg: Note, we noticed with Jackie that you can't out of the box read a NetCDF-4 file on cheyenne. The easy workaround is to "nccopy -k 5 ". This was created with a subsetting script from Sean, so others might run into this.
- Erik/Naoki: Ben is looking into "degenerate cells" in the HDMA mesh. We are trying some things without that.
- Erik: Bill inspired me to take a second look at ctsm5_1 without a second directory, and I got it working.
- Erik: I have one bug to fix for the CESM2.2.0 release. That will go into a release tag, but also into main-line development. There is still one cesm alpha tag and two CAM tags to go, as well as a cime tag.
- Bill: dynamic lakes
- I'm working on avoiding changes to TWS (due to #658)
This just means that there won't be an alias to make it easy to run fully-coupled CLM45 compsets; it will still be possible via the long name. It won't be tested in the fully-coupled sense.
Long-term support plan for CLM45: Probably not indefinitely, but unclear how long.
Probably reasonable NOT to have CLM45 tests with FATES (in an effort to reduce test turnaround time).
It's just an issue with the initial conditions file.
- Greg/Erik/Bill: Validation of `fates_main_api` and discussion of next steps (porting to escomp github repo, merging into ctsm master)
and discussion of next steps (porting to escomp github repo, merging into ctsm master) - Erik/Naoki: Ben is back, and helping. Naoki has a few things to try. Will try to possibly meet with Ben/Bob next week. We now have 57 problem elements, down from 114. One problem is likely the shared line for elements that go across the date-line. It likely recognizes that both elements share the same line, so one way to handle it is to shift one of the polygons a tiny amount. Naoki is looking into this.
- Erik/Naoki: I've been thinking about how to eliminate needing to read in the mesh. I think using "Location streams" for the component is the way to go. There would be a separate file that just has the center coordinates for each polygon. You'd have the option to read either one, but location streams couldn't do the mapping. We would need to extend capability in CMEPS to allow this. And it would need to recognize that it couldn't compute maps for it. We plan to meet with people to see if this makes sense and is the best direction. Ben got back this week. Bob is busy with another project and has PTO rest of this week. So possibly in the next week or two.
- Erik/Bill: Should we use stub ROF and stub GLC in more of our tests?
- Erik: Kate needs some work on the release branch for a simulation. How do we prioritize and who works on what?
- Erik: FYI CESM2.2.0. There is one alpha tag in test, and one more alpha tag to do. My once again "last" CTSM tag is done. At this point there are only two component tags to finish up (one cime, one CAM).
- Erik: I may need to have a `ctsm_cime_config` directory separate from `cime_config` for CTSM as a component. On the release branch I seemed to be able to get this to work without that. It actually might be an advantage to have a new directory though. What are people's thoughts on that? For example, I can probably pretty easily change the XML variables to all start with `CTSM_` with no penalty. CLM45/CLM50 will still have variables with `CLM_`. The one I'd likely leave alone is `CLM_USRDAT_NAME`.
. - Bill: issue 1001 on mksurfdata_map wetlands:
- The biggest issue is that this doesn't do what we said we wanted, which was: if using no_inlandwet, then wetlands should only be 0 or 100%
- What do we want in this case? Renormalize the existing special landunits to be 100%??
- What is the priority of this?
- A secondary issue is the possibility of some edge cases right now, particularly if we have glacier (or lake??) area > 0 but less than 0.5%.
Regression testing looks good (except for one build-namelist issue related to recent lightning changes). Answer changes, but that's expected. FATES scientists will look at and sign off on diffs.
We're fine with Greg pushing the branch to escomp himself whenever he's ready.
We'll get the new FATES API into 5.1, but after the PPE stuff is done.
Is there anything past dev 093 that FATES folks want?
- They are interested in the hillslope hydrology, but that hasn't come to master yet.
- Some of the other things are irrelevant with FATES (LUNA bug fix, bioenergy crops)
- Would be nice to get cime updates, though
Should fates next api branch update cime by itself? General feeling is to just update to more recent ctsm that brings in a more recent cime, unless there's a specific need to get a more recent cime.
Dave suggests just having them do a namelist change if they need this within the next couple of weeks.
Bill: Hesitant to have duplication that may live for a relatively long time. Let's plan to use "clm51" for now. Wait to do the rename until we're ready to do it in basically one fell swoop - within, say, a couple-week period. It's okay to have this duplication live for a short time (to give a week or two to update fully coupled and CAM configurations).
Issue #1001: feeling is, let's not worry about this for now.
- Bill/Will: Talk about component information under CESM2.2.0 release: http://www.cesm.ucar.edu/models/cesm2/land/
- Erik/Naoki: FYI: Fairly stuck. Identified all 114 polygons with problems. Not quite sure what the problem is. Have a conversation on slack about it. Ben and Bob from ESMF will start helping next week. Also plan to meet with Jim/Mariana sometime about not needing to read in the mesh file (or read a grid or location stream in its place). I think this might require us getting a change into ESMF, so that on `ESMF_MeshCreate` it doesn't read the vertices. We do need to provide some description of the grid for history files and for describing the parallel distribution. The one thing we are working on is the `5x5_Amazon_HDMAmz` grid. Also need to bring in NUOPC updates and mizuRoute updates into components.
grid. Also need to bring in NUOPC updates and mizuRoute updates into components. - Erik: New bug in MOSART (likely RTM) with a cime update it reports
problems in
SMS_Ly3_Mmpi-serial.1x1_numaIA.I2000Clm50BgcCropQianGs.cheyenne_intel.clm-clm50dynroots
as it reports that the direction file is just a directory ininput_data_list
, and cime now flags this as an error. So we could either change to a compset that has a stub ROF. And/or we could correct MOSART/RTM for this. Plan to fix MOSART/RTM. - Erik: Should we change all our tests from gx1v6 mask to gx1v7? Single point tests to stub ROF?
- Erik: Should I try to ensure that the cime update that allows CTSM51 in compsets is NOT in CESM2.2.0? It shouldn't harm anything, but it also wouldn't work and might be confusing.
- Erik: FYI: Problems with default PE layouts for the new CAM grids. Some don't even allow us to run, others are specific to cheyenne, and should be labeled that way, with another set for any machine.
- Erik: FYI: In CSEG we decided we needed reasonable default PE layouts for the new SE/FV3 grids that were created for the CESM2.2.0 release. This will require additional CTSM, CESM, and CAM tags. I'm doing the I compsets. Brian Dobbins will create the PE layouts for F and B, and he'll create the CAM tag. I'll create the CESM PR.
- Erik: Initialization of previous year for LUNA (see #1060)
- Erik: Note changing default last year for GSWP3 data will have an apparent change in answers (for some compsets), and requires bringing in a new cime tag.
- Erik: FYI: GSWP3v1.1 2014 data update. Softlinks vs. xml overrides (see cime #3653)
- Erik: I want to change the waccmx_offline test in my bit-for-bit tag, is this OK? I'll just be changing the start date, which should help with #1101.
- Erik: What do we need to do in #1001 again?
- Erik: FYI plan for PPE work is to branch off of ctsm5.1.dev001 and create a branch with three things on it: CN-matrix, hardcodep2, Arctic with Kattge. Those changes will come in as tags to mainline development. There will be a parameter file for ctsm5.1 that Keith manages that we will bring in possibly as the starting version, and then certainly as the final version.
- Erik: We should schedule to meet with Katie.
- Erik: Bill and I had an interesting discussion about taking advantage of people's strengths (like Bill's sometime magical ability to find problems). We should keep this kind of thing in mind. But, doing it requires that we know something about each other's strengths/weaknesses. Something to be thinking about.
- Bill: Dynamic lakes
- In the initial development, I had been helping with the Fortran source code changes and (I think) Erik had been helping on the mksurfdata_map / dataset side. Should we keep this division of labor for the final pieces, or should I take over all pieces at this point?
- Dave: do you know if Inne has a new lake dataset that we should be using?
- Proposed plan: have dynamic lakes on all landuse timeseries files moving forward, but have transient lakes off by default. Sound good?
- We'll need to get out-of-the-box transient lake data for this
- I realized a possible gotcha: if the available years differ for transient lakes vs. transient PFTs. (See https://github.com/ESCOMP/CTSM/pull/1073#issuecomment-673216033)
- (For Erik) All file names in one text file vs. separate text file for lake files? (Moot point if we want to allow different years for lakes.)
- Erik: I've been doing an experiment with the fire changes on doing a rebase vs. using patch. The two methods are inherently very different, I think there might be times when one or the other is preferred. Doing both side by side is helping me to understand each. The rebase is more predictable and maybe a better standard way to do it.
- Bill: remaining needs for LILAC-WRF (diagnostic outputs)... who?
Erik's thought is that we could provide mapping files for common resolution combinations, then at runtime just read in a mapping file and a grid rather than a mesh. (If someone wants to use a different resolution combination, they'd still need to create a mapping file, which would take a very long time.)
Naoki may also look into simplifying polygons.
New version of cime adds some checks....
Feeling is that it would be okay to switch to using SROF for single point moving forward, so Erik will do that rather than changing MOSART & RTM.
We'll stick with the workaround of skipping the last year for some time, then come back to this.
Dave: we do want to update the present-day lake dataset to what Inne has been using. We'll do that in a tag with other surface dataset updates.
We will have transient lake area on all transient datasets moving forward; we'll get that from Inne, check the format, import to svn, etc.
Plan for now: separate the Fortran changes from the tools changes; Bill will bring the Fortran changes in soon, and we'll probably defer the tools changes for a bit.
- Greg: Rebase of Charlie's land use PR and Sam's lightning PR into `fates_next_api` is complete. New branch is located here: https://github.com/glemieux/ctsm/tree/fates_main_api. Testing is underway, with some tests coming back with RUN failures in both the fates and clm_aux suites.
rebase is complete. New branch is located here: https://github.com/glemieux/ctsm/tree/fates_main_api. Testing is underway with some tests coming back with RUN failures in both the fates and clm_aux suites. - Erik/Naoki: Checking the HDMA is problematic. Gone through about half in 6-hours wallclock. Naoki working with ESMF folks to finish it. Also has to figure out how to fix it. Might just be that some vertices are duplicated. Looks like there will be around a hundred problematic points. The GIS system might have an option to fix it?
The checker completed and found hundreds of issues (e.g., duplicate node points). Naoki isn't sure how to fix these issues.
At least for the point they spot-checked yesterday: It looks like the points are truly an exact duplicate.
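A minimal sketch of how one might confirm exact duplicate node points; the variable name `nodeCoords` follows the common ESMF unstructured-mesh convention, and the file name is hypothetical, so both should be checked against the actual file:

```python
# Minimal sketch: find exactly-duplicated node coordinates in a mesh file.
# "nodeCoords" follows the ESMF unstructured-mesh convention; verify the
# actual variable name in the file. File name is hypothetical.
import numpy as np
import netCDF4

with netCDF4.Dataset("hdma_mesh.nc") as nc:
    coords = np.asarray(nc.variables["nodeCoords"][:])  # (nodeCount, 2)

unique_coords, counts = np.unique(coords, axis=0, return_counts=True)
dups = unique_coords[counts > 1]
print(f"{len(dups)} node locations appear more than once")
```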
- Ryan: sparse/non-continuous grid setup
- Erik: Ran CESM regridding offline for f19 to HDMAmz grid, it ran for 2 hours before finding a problem. I had to mess with the processors to get enough memory for it to even work.
- Erik/Naoki: HDMAmz grid has some "holes" in it. We are working with ESMF to find and fix these. Ben has created a script to find them. ESMF will make this part of their process. Once we have one that's verified we'll try again.
- Erik/Naoki: HDMAmz grid will likely take substantial time to regrid. So using mapping files will be a normal part of the operation. I think Mariana said the process will make a mapping file that you can use later.
- Erik/Naoki: The HDMAmz mesh is 5.7GB and hence just reading the mesh will take a lot of time and memory. The PE Layout will have to be adjusted to compensate for this. We might need to use large memory nodes for example.
- Erik/Naoki: Naoki is setting up a HDMA mesh for just over the `5x5_amazon` region. We'll call this `5x5_amazon_HDMAmz`. We can work on this while we are getting the global grid verified.
. We can work on this while we are getting the global grid to be verified. - Bill: land ice project
Ryan raises this question for the sake of FATES at individual points.
Will: Similar needs for NEON.
Negin: ESMF has a notion of location streams (alternative to grids and meshes).
Ryan is running into gsmap errors when using more than 1 point.
Dave suggests contacting Keith, because he's been doing this. He thinks there are scripts to do this.
Will also points to the tools/contrib folder for two python scripts that Sean made for single-point runs (though that doesn't concatenate things together for multiple points). It sounds like Keith's workflow / scripts are really what Ryan should look at.
Will asks if Sean is doing something similar for Fluxnet. Dave: no, this is harder for Fluxnet, because each site has a different set of years. (And it's not clear at this point how much performance benefit you gain from doing a bunch of points in one run vs. doing multiple runs like this.)
More on Keith's workflow: people think that he has just masked out certain points, rather than making a new "grid".
Note that, for typical I compsets, datm runs on the same grid as CTSM, and does some bilinear mapping on the fly between the forcing data and the land grid. But Erik says there are some restrictions (in terms of what the grid can look like).
mizuRoute's global grid leads to a 5.7GB mesh. This is because catchments can have ~ 5000 vertices for each polygon.
It feels like the large mesh could be a roadblock to more general use. We wonder if it would be feasible to come up with a mesh with a reduced number of vertices in polygons.
The current plan is to test out the infrastructure with 5x5 amazon.
Is it possible that CESM could avoid reading the mesh if you already had a mapping file, given that (it sounds like) the mapping files are significantly smaller than the mesh description? (This might get around the issue of the 5.7 GB mesh.)
Erik points out that, for CAM's 2010 F compsets, we've been using a 2000 surface dataset (which is close enough to 2010 for their purposes), but a 2010 initial conditions file. That 2010 initial conditions file ends up being interpolated to 2000. So he proposes changing the logic so that it picks up an initial conditions file based on the year of the surface dataset rather than the simulation year - so it will end up picking up a year-2000 initial conditions file in this case. The exception would be a transient case, in which case we'd still use simulation year to choose the initial conditions file.
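A minimal sketch of the proposed selection rule (function and argument names are illustrative, not actual build-namelist code):

```python
# Minimal sketch of the proposed initial-conditions selection rule;
# names are illustrative, not actual build-namelist code.
def pick_finidat_year(sim_year: int, surfdat_year: int, transient: bool) -> int:
    """Choose which year's initial conditions file to look for."""
    if transient:
        return sim_year  # transient cases keep the current behavior
    # Otherwise match the surface dataset: a 2010 F compset on a year-2000
    # surface dataset would pick up year-2000 initial conditions.
    return surfdat_year

assert pick_finidat_year(2010, 2000, transient=False) == 2000
assert pick_finidat_year(2010, 2000, transient=True) == 2010
```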
- Erik/Naoki: Trying out coupling with HDMA grid for mizuRoute. Having trouble. Seems to have trouble creating the mesh. Will likely need to bring in Mariana/Jim and ESMF folks.
- Erik: One thing we should try for the HDMA grid is to create a mapping file offline. That could isolate issues to ESMF. We can try using the mapping file if it works.
- Naoki: For lakes we will need to send precip and evaporation to mizuRoute from CTSM. This will require changes in CTSM (from Inne?), the nuopc coupler, and mizuRoute
- Erik/Mariana: Mariana would like me to move the mizuRoute and nuopc updated branches to CTSM mainline development. Figure on doing that as part of the upcoming work. I had to update CMEPS and CDEPS just to get the nuopc test to work already.
- Erik/Naoki: As part of that we'll make a branch tag for mizuRoute, and we figure we'll have a CTSM tag that includes mizuRoute as a possible component. We can test it with the MOSART half degree grid, the amazon 5x5 grid, and the HDMA grid.
- Erik: FYI, FATES people are excited about the NLDAS2 grid and data. This is an example of something where work on the NWP side is helping the CLM side. That's good to know about.
For lakes we will need to send precip and evaporation to mizuRoute from CTSM. This will require changes in CTSM (from Inne?), the nuopc coupler, and mizuRoute.
This is not something that Inne is working on. Martyn & Naoki will work on this, but will probably need help hooking things up.
It sounds like it should be fine to just send net P-E.
We already have column-level P (needed for downscaling over glacier columns). So in the lake code, need to create a net P-E using that and column-level evaporation.
Erik will create an issue. Would ideally like this on the month-ish timeframe, but maybe not critical on quite that timeframe.
Dave: Need ability to have lakes track water volume. Naoki: that's in place.
For now, we aren't going to worry about lake areas changing due to lake water volume changing (conceptually, lakes have vertical sides).
- Erik: Updating the ending ndep year from 2005 to 2015 changes answers at startup, so we need an extra tag in CESM2.2.0 with this one small change.
- Erik: Any feedback on proposed branch tag names for mizuRoute? cesm-coupling.n001_v1.1
- Erik: I met with Simone, Louisa, Julio, Adam H., and Francis to talk about IC files for CESM2.2.0 when coupled to CAM. They have specific files for 2013-CONUS, 1979-ARCTIC/ARCTICGRIS, and then high resolution to match ne120 2000, and low resolution to match the older 1-degree 2010 file or the 2-degree 2000 file. For 2013 CONUS they will also have REFCASES for each month.
- Bill: hybrid fun
- Aleah's issue
- `init_interp` plans
Updating the ndep file changes answers. Suspicion is that it's due to the 0th time step, and the resulting interpolation now being done from 2015 rather than 2005.
We should look into whether this is possible. May require conversations with CAM folks.
Biggest cost is confusion... but could be a bigger issue with coupling to regional atmosphere models via LILAC.
Erik points out that it could be useful to have capability to output an initial history file that gives the starting condition of the run - so one could see the 0th time step history file as a feature rather than a bug. Bill suggests adding a new capability to allow this, where you define an initial history file that can have some instantaneous values of requested states.
Bill asks if the REFCASEs are needed in the development code (which could be more work to maintain) or just release code. Erik will check.
This has been progressing. There are a few works-in-progress on the old branch (Sam's fire/lightning; Charlie's landuse integration; maybe some stuff from Rosie). Those need to be resolved and rebased before they can finish the rebase.
They will probably rename the branch. Will make a tag of the old branch and delete it.
- Tracking todos in a PR - and code reviews in general
  - Button to show all conversations
  - Filter - like tagging / labeling / flagging a comment so you can just see those
- Syncing default branch between forks (allow in repository settings for your fork)
- Erik: Mariana's recommendation is that we end mizuRoute grid names with "mz" to designate that they're for mizuRoute. So we'll have an r05mz grid, for example, that doesn't have ocean.
- Everyone seems to be OK with this nomenclature. So we'll do the same thing for the HDMA grid (it'll be called HDMAmz in CESM/cime).
- Erik: Good news. With mizuRoute we found out that the wrong mapping files were being used by default (we thought the default was to calculate one on the fly). Naoki showed that either of these works correctly now.
- Dave asked what we need to do so that mizuRoute can couple to MOM?
- Erik thinks this will be handled OK if we pass fluxes to river mouth. The ocean model picks up river output close to river mouths.
- What happens in closed basins? We probably need to do the same as what goes on in MOSART.
- Maybe Sean can help with ocean coupling, since he did this with MOSART.
- CESM3: we shouldn't wait too long before deciding whether to bring mizuRoute in for CESM3.
- Dave suggested that mizuRoute would need to offer new features (e.g., lakes).
- MizuRoute needs new capabilities for CESM too (e.g., passing ice to oceans, and tracers [esp. water isotopes]).
- Dave suggested that lakes should be the first priority, as hydrologists are more interested in that capability & CESM coupling will be a heavier lift.
- Erik: Naoki's plan is to work on the HDMAmz grid now. We also have some cleanup work to do, and PRs for CMEPS/CDEPS/cime. What other work with mizuRoute in CESM needs to be done coming up?
- This mesh is on a -180 to 180 degree grid (rather than 0 to 360), which requires some testing. ESMF is supposed to handle this and has tests for it.
- Erik/Naoki: The ctsm-coupling branch is off of Naoki's fork, and we need
to get this on the NCAR/mizuRoute main version (with a tag for CESM
to use it). This likely needs some work with stakeholders (Martyn, Andy...) to
decide on process and testing. How should we move forward on getting this
to happen?
- Erik recommended getting everything together on NCAR/mizuRoute with a branch
- So we'll move the ctsm-coupling branch from Naoki's private fork to the NCAR fork.
- And we'll do branch tags on the ctsm-coupling branch.
- We'll need to come up with a naming convention for these branch tags.
- Erik: What's the plan forward for landuse change for FATES? Stream
files or both mass/area on landuse.timeseries?
- Dave summarized our discussion with Charlie on LULCC in FATES
- Current CTSM harvest information is pre-processed from LUH2
- FATES really just wants regridded information from LUH2; can we input this with a streams file?
- With mapping files we can do this with MCT (the current coupler) (we just have to create them in advance)
- Conservative regridding on the fly should be possible with NUOPC, meaning the host land model would be responsible for providing this data (since ELM isn't using NUOPC).
- Ryan agreed with using the raw LUH2 data, with regridding, if possible.
- Ryan asked if we should do something similar with lightning data.
- Erik noted that this is similar (via streams) but with bilinear, non-conservative interpolation (see the sketch after this list).
- Charlie's PR will come in as-is for now for testing. Down the road we'll generate better input data.
- Dave will follow up on NUOPC and look into raw LUH data (Peter's already downloaded).
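As a sketch of what the two interpolation modes could look like offline (using xESMF purely for illustration; the file names are placeholders, `primf` is a LUH2 state variable used as an example, and conservative regridding additionally needs cell-corner information on both grids):

```python
import xarray as xr
import xesmf as xe

luh2 = xr.open_dataset("luh2_states.nc")    # hypothetical raw LUH2 file
target = xr.open_dataset("ctsm_grid.nc")    # hypothetical model-grid description

# Conservative weights for area-fraction fields (landuse states);
# bilinear for smooth point fields like lightning frequency.
regrid_cons = xe.Regridder(luh2, target, method="conservative")
regrid_blin = xe.Regridder(luh2, target, method="bilinear")

primf_on_model_grid = regrid_cons(luh2["primf"])
```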
- Erik: I talked with Keith O. about getting his urban changes in, and as a result did some updates to the upcoming tags pages. We need to plan on doing an update of surface datasets. Are we going to use the new datasets for `clm4_5`/`clm5_0` as well as `ctsm5.1`?
- These include: changes to urban (minor changes), gross land use changes (can be turned off), and pft/crop distributions (which seems like a bug).
- Will and Dave don't think it's necessary. For now let's not maintain different surface datasets.
- Erik: Resorption PR is showing unexpected differences (#1063).
- Erik will track this down and make sure CLM4.5 answers don't change
- Erik: Arctic changes should be a CTSM5.1 feature right? So we need
to have them replicate clm5_0 answers and only change for ctsm5_1.
- Yes, it's a CTSM5.1 feature
- Will thinks this should be straightforward in CNPhenology for Arctic stress-deciduous PFTs
PPE branch. It might be OK for the PPE branch to be hardcoded to assume CTSM5_1, but this will make it harder to come to main development. So we'd prefer having CTSM5.1 on main development and the PPE branch pointing to it. There are a bunch of little details that are needed for the PPE branch.
There are two tags required by CESM2.2.0 for us. I want to get this done before the cheyenne power down. Hopefully we can work on branches on izumi during the week; we just can't make main development tags.
There are then two tags after that for PPE development work. That would be a good point to be at for the PPE branch.
- Erik/Naoki: Center coords are required for nuopc history output. We have several issues in CMEPS/CDEPS. Slack has helped to have faster communication/ response. The regional amazon grid works correctly if you change it so that mizuRoute runs after CTSM. The global case for the MOSART grid remaps incorrectly. Unfortunately we have a cime branch for mizuRoute again. We did reach out to ESMF to ask about the remapping problem. We are also working on a regional grid that has some ocean in it.
- Erik: nuopcdev slack page... https://app.slack.com/client/THARDLWKY/C0150BMDSVC/details/info
- Erik: We are running into filename length issues with mizuRoute cases. It looks like NetCDF has a hardcoded 256-character limit. We are using ".mizuRoute." in the filenames; should we shorten it to ".mz."? What else can we do? (See the sketch below.)
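A trivial sketch of the kind of guard we could add, using the limit and the ".mizuRoute." to ".mz." shortening mentioned above (the function name is hypothetical):

```python
NETCDF_NAME_LIMIT = 256  # limit reported in the discussion above

def check_history_filename(fname):
    """Fail early when a history filename would exceed the reported limit."""
    if len(fname) > NETCDF_NAME_LIMIT:
        suggestion = fname.replace(".mizuRoute.", ".mz.")
        raise ValueError(
            f"filename is {len(fname)} chars (> {NETCDF_NAME_LIMIT}); "
            f"a shortened form would be: {suggestion}")
```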
- Erik: Go over PPE branch project board, make sure we have everything covered (https://github.com/ESCOMP/CTSM/projects/30)
- Erik: CESM2.2.0 release freeze deadline has been extended in order to do the long list of tags required. CAM has a bunch. We have three. MOM has one, POP has a few. There are two CESM alpha tags to make for this.
- Greg: Update on `fates_next_api` rebase progress and next steps
- Erik/Will: #1065 about the wiki
- Erik: Can I have Bill and Negin look at some perl code I'm struggling with? There's a change I've done that I think shouldn't change results -- and yet it does. Maybe you'll see something I'm missing.
Toby Ault wants to make some contributions. This could be a good motivation for pulling out some things into routines that could be shared between FATES and big-leaf CTSM.
Phenology could be a good initial target. (This would be less involved than photosynthesis.)
Where would we want this? In CTSM? In FATES? In a 3rd repo?
- It can't live in CTSM, because we don't want FATES to depend on any CTSM code
- Bill's feelings: if the management (gatekeepers, etc.) would be about the same for this shared library as for FATES, then it would probably be simplest to keep this in FATES rather than introducing a new repository.
- Ryan: Feels it would be okay to have a 3rd repo.
- Overall, it seems like making a subdirectory in FATES for shared code is probably the best way to go.
A complicated piece of this is how to write a routine that isn't dependent on the data structures of either FATES or CTSM. Bill notes that, for performance reasons, we probably want to keep the looping inside this shared routine. This is complicated by the fact that FATES loops over linked lists, whereas CTSM uses a vector with filters. We'd need to come up with a universal way to handle this, which could involve copying in/out of temporary arrays. (Though for stuff that just operates on a daily time step, performance may not be an issue.)
- Bill's thought afterwards: May want to look at what we set up on the AgSys branch: I think this involved some copies in/out of data structures (though just on a daily time step).
Plan is to talk to some FATES folks about the priority of this.
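To make the copy-in/copy-out idea above concrete, here is a hypothetical sketch (thresholds and all names invented) of a shared routine that sees only flat arrays, with a CTSM-style filter-based caller; a FATES-style host would instead walk its cohort linked list, pack values into temporary arrays, call the same routine, and unpack the result:

```python
import numpy as np

def shared_phenology_step(gdd, daylength):
    """Pure function on flat arrays; knows nothing about host data structures.
    Threshold values here are purely illustrative."""
    return (gdd > 50.0) & (daylength > 39300.0)

def ctsm_host_call(gdd_vec, dayl_vec, filter_idx):
    """CTSM-style host: copy-in from the patch vector via a filter,
    call the shared routine, copy-out into a full-size result."""
    onset = np.zeros(gdd_vec.shape, dtype=bool)
    onset[filter_idx] = shared_phenology_step(gdd_vec[filter_idx],
                                              dayl_vec[filter_idx])
    return onset
```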
Greg has finished the rebase. He ran the `aux_clm` tests on cheyenne. There were a bunch of diffs in the testing, but all within FATES stuff.
Will try to get Charlie's changes in soon; will rebase his changes onto this rebased branch.
They'll ask FATES users (like Jackie) to run any important cases and make sure things look good.
Erik suggests comparing the result of the rebase with the result of the earlier merge attempt.
No decision yet about having a new branch name or overwriting the old. Greg leans towards a new name. Erik suggests something like `fates_dev`. Ryan suggests something like `fates_main_api` (noting that this branch has an API that is compatible with the main branch in FATES).
At some point we should delete the old `fates_next_api`, but we should create a tag of the end point in case we want to go back to it.
- Erik: Note the list of supported grids in the SE grid PR
- Erik: Talk about having FATES have its own landunit, and the possibility of running non-FATES natural veg independent of FATES.
- Erik: Any takeaways from meeting we need to work on? document? etc?
- Erik: I looked for an issue about isotopes in FATES and didn't see it. Should I ask Charlie to create an issue, and also create an issue on CTSM?
- Bill: Out next week, and likely for part of one or both of the following two weeks
Currently, if you run FATES, you can't turn on `use_cn`. But this makes it impossible to run with crop.
Should we have FATES run on its own landunit? No objections to this, but some question about how much this would take. Erik will look into this. At the same time, will look into introducing some higher-level flags like is_natveg.
A somewhat related idea is introducing a pasture landunit. Might want to consider that at the same time. Pasture is a bit of a hybrid between crop and natural veg. The hardest part of introducing new vegetated landunits may be the need to introduce higher-level flags rather than checking specific landunit types... so once we've done that (which would have other benefits), then introducing new vegetated landunits may not be so bad.
Tied in with this, Erik thinks that some of the work along these lines could also enable you to run SP for natural veg but also crop. Dave: there could be some value in running SP-crop (with generic crop) but BGC for natural veg. If we had flags like "this is a landunit with BGC active", rather than explicitly checking things like `istsoil` or `istcrop`, then we could mix and match things more easily.
An issue with this mix is that it wouldn't work well for transient cases (in terms of what to do about belowground C & N).
Land use / land cover change: Would this require crops? Not necessarily, though it would certainly be good to have.
We should ask Rosie which configurations / combinations are important.
It sounds like getting correct time values for time-average fields could be a high priority (Bill will be in a follow-up discussion on this tomorrow). One possibility is that we may require instantaneous fields to be on separate streams from time-average fields. We're fine with this; the question is who will do it and when - because of competing priorities.
The LUNA bug-fix branch is ready to go. This is important for the PPE branch. (Fire changes and Leah's other changes would be nice, but not order-one important.)
- Bill: From Wim Thiery, model output requests: (i) outputting subgrid-scale variables as gridded netcdf output next to the current vector-based files, (ii) outputting crop yields (grid-scale and per CFT) directly by the model
- Erik: cism-wrapper needs the nuopc update branch to go to master. This would allow me to test with compsets with cism. Should I do that or get Kate involved?
- Erik: FYI: MOSART and RTM also need a new nuopc update to come to master.
- Erik: Have we run test suite with nuopc for all tests?
- Erik: One of the changes on Mariana's branch has to do with CNFire, changing the stream data type to a pointer. Bill, is that one of the things you think isn't required?
- Erik: FYI, release freeze delayed until July 2nd. I still don't have a PR on SE grid updates. So I think we should modify the order of tags a bit.
- PPE tag. Need to make sure each branch is on the same version. What is the branch name for Keith's branch (hardcodep)? Does Daniel bring anything in here?
- Bill: quick lilac update
When we have some changes that are answer changing (e.g., relaxing a cap for LUNA): let's bring this separately to master, so that Keith's parameter branch can remain bfb with master.
- Actually, it sounds like that particular change may be bfb with the current default parameter set.
There will likely be some changes to default parameters. We'll update the parameter file for ctsm5.1 physics for this sake.
As much as possible, it's best if we bring things to master when they're ready-ish. Keith's branch is probably ready to bring to master, for example.
Will: Danica has a nearly-complete script for taking vector output and putting it into a gridded format (see the sketch below). That's probably a better approach than having the model output gridded output with an extra dimension, because that would result in very large files. She could probably use some help with the final steps of this: converting it to a user-friendly script, maybe some performance work.
- We also need to do better about letting people know about tools like this - via User's Guide, emails, etc.
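A rough sketch of what such a vector-to-gridded conversion involves, assuming the `pfts1d_ixy`/`pfts1d_jxy`/`pfts1d_itype_veg` index variables that CTSM vector history files carry (names from memory; verify on your file, and note this is not Danica's actual script):

```python
import numpy as np
import xarray as xr

ds = xr.open_dataset("case.clm2.h1.nc")  # hypothetical vector (1d) history file

# 1-based lon/lat indices of each PFT point on the 2D grid.
ixy = ds["pfts1d_ixy"].values.astype(int) - 1
jxy = ds["pfts1d_jxy"].values.astype(int) - 1
veg = ds["pfts1d_itype_veg"].values

def grid_one_pft(vector_values, pft_type, nlat, nlon):
    """Scatter one PFT's vector values onto a (lat, lon) grid; NaN elsewhere."""
    out = np.full((nlat, nlon), np.nan)
    sel = veg == pft_type
    out[jxy[sel], ixy[sel]] = vector_values[sel]
    return out
```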
Outputting crop yields: Danica has scripts for this.
- Note, though, that crop yields aren't output by default. We may want to consider making that the default.
- Erik had some good ideas for how to do this, but we haven't had time to implement it
- Another option is to just instruct people how to set the --usermods-dir option to get this extra output. We could add this to the user's guide.
We have a lot of mailing lists
- ctsm-software: the small list
- ctsm-core: started as NCAR people & friends; a science-focused list for ctsm
- ctsm-dev: auto tag emails; feels like a more "insider" list, in that the emails won't really make sense to people who don't already know what's going on
- lmwg
- bgcwg
We should consider having an additional mailing list for people interested in using CTSM for NWP / regional applications.
- Erik: Should we convert more PR's to "draft"?
- Erik: mizuRoute. We met with Mariana/Ufuk; they think our use of a partial grid (just over land) should be OK. They've helped with some tools and such to figure out what's going on. Mariana pointed me to her CESM `nuopc_dev` branch, but I haven't been successful in using it. I suspect there are missing changes that haven't been committed or pushed. Mariana will look at my failing cases today to figure it out. I've also set up a PR to add mizuRoute to the `nuopc_dev` branch of CESM, which should make it easy for Mariana/Jim to replicate our mizuRoute cases. And we should be able to more easily replicate what they are doing. Once this is in place we'll be able to submit issues and have them work on it. They also thought that newer versions of cime/CMEPS should solve some of the problems we are seeing.
- Erik: Note, there are two tags that I think are required for CESM2.2.0. One is for SE grids and 1979 ICs. I'm waiting for Chris and Adam on that. The other is a bit-for-bit one that just includes some miscellaneous updates that I'm thinking should be in CESM2.2.0 (see the cesm2.2.0 milestone in issues for the list). Our progress on CTSM5.1 can be outside of those two.
- Erik: izumi update seems to require a cime update for both master and release?
- Erik: PIO2 problems?
- Erik: #1010 WACCMX problem and memory leak. I'm confident in being able to fix the memory leak, but the simplest fixes to that are still having the problem, so I wonder if it's beyond the memory leak? Should we properly fix the memory leak completely first?
- Bill: status of WRF integration / next steps
MizuRoute is important so that Inne can make progress. But short term priorities are in this order:
- CESM2.2 (don't want to hold up the release)
- Perturbed parameter
- MizuRoute
Let's plan to have things done by the end of next week. Then we could make a release branch. Let's not get the LUNA fixes into cesm2.2. We could get the fire changes in for cesm2.2, as well as biomass heat storage (but neither of those are critical).
- Erik: mizuRoute. The 5x5_amazon grid is now working and gives correct answers for a small number of processors. I'm going to create a similar grid with a small amount of ocean so we can see how the mapping will handle removing ocean for mizuRoute. And Naoki is creating a mesh with proper HRUs for the amazon. We are meeting with Mariana/Jim this afternoon to share progress and problems we are finding with CMEPS for PE layouts. Some mizuRoute grids are on -180 to 180 and can't easily change, but we've been told ESMF handles this.
- Erik/Will: Sean's PR -- who will work on?
- Erik: Current status of TRENDY/matrix work. Have to limit the amount of GRU. I don't quite have the limitation right and get a Carbon balance error. Chris has isotope spinup in place. He has concerns with it. The limitation I put into place for GRU is time-consuming and error-prone. Chris suggests he has a better way that sounds promising.
- Erik: Status of LUNA bugs work tuning?
- Erik: mizuRoute status. It's running now with proper area. The mapping is obviously incorrect though. When we compare CTSM history fields for runoff versus what mizuRoute is running with -- it's obviously wrong. The new thing we are going to try is a tiny grid for the 5x5_amazon regional grid. We can run everything on a single processor, and we should be able to see what's wrong with the mapping.
- Erik: FYI. Note from CSEG. Alper is doing some interesting work with run-time parameters. There may be an additional option to handling run-time parameters in CIME/CESM as a result of this. This is likely months in the future, but it's promising and likely could help us.
- Erik: Small glitch with the TRENDY/matrix branch. I haven't been able to reproduce Danica's issue with a slightly modified case. It looks like I'm going to have to exactly reproduce her case, including using the ndep and other boundary files she is using. I was hopeful the error would just show up at that year; the problem is the specific behavior of harvesting and shifting-cultivation. It's surprising to me that other files like ndep would matter.
- Erik: Let's talk about the general problem with the current matrix code. We can look at the TRENDY branch to see it.
- Erik: Can we look at the PR list? Will added a new one. What do we plan with #970? Does #834 go away now?
We want a branch soon that integrates the following:
- Fire changes
  - Hopefully on master
- Will's recent PR
  - Hopefully on master
- Some or all of Leah's changes
  - Hopefully on master
- CN Matrix
- Keith's changes (extracting parameters)
We want these integrated in the next month. Ideally, the top 3 will be on master by then, then we can update the last two to latest master, then we can make a one-off branch that merges CN Matrix with Keith's changes.
- Erik/Keith: Constants branch status?
- Erik: Bug fix/arctic-fix branches status?
- Erik: Note, we noticed a problem with cime in the version Danica was using where the number of days wasn't correct. It seems to be fixed in a later version of cime, so my inclination is to ignore it. It is useful to document bugs in various versions of cime; it's just that doing the documentation would take time, and if it's already fixed it doesn't seem worth it.
- Erik: MizuRoute. We had problems from assigning area to zero. Naoki is doing some simulations over a few days to make sure things are working. We also want to verify that files with longitude from -180 to 180 work as well as 0 to 360 ones. There is a mystery error we see from ESMF that doesn't show up for the MOSART case.
- Erik: Will give Naoki access to cheyenne inputdata/rof/mizuRoute via ACL. Won't give him svn inputdata or general cseg access. Told him about making sure to have creation dates on files, never overwrite, make sure have good metadata and filenames, and use NetCDF-5 or NetCDF-3 rather than NetCDF-4.
- Erik: Finally got the release to master tag done. Yay.
- Erik: Design for FATES Fire data and a possible refactor have been put in issues. Note, had Sam add an if statement rather than setting an array to a single value (#982, #983)
- Erik: Note CAM is adding half a TByte for new CLM grids. There are also FV3 grids at 1.2 TBytes (most of that is high resolution landuse-timeseries)
- Erik: `CLM_USRDAT` question from Mariana. `CLM_USRDAT` provides a resolution for an arbitrary CLM grid. Having that is useful. But it was also set up years ago for users to do their own datasets. There are better mechanisms now. We could do some simplification of this system both in CTSM and cime. I also wonder about moving some of the CLM-specific grids out of cime, using the same capability that has been added in cime for `config_grids` for variable mesh grids.
Agreement that it could be good to make things consistent with newer mechanisms for the sake of consistency from a user perspective. But this doesn't seem high priority.
- Erik: we have mizuRoute roughly working, with some issues. We have to run mizuRoute with an r05 mesh without ocean. I think we need to make a new grid for this (r05no?)
- Erik: I solved the Nitrogen balance error by setting up a reproducible case for Chris (rubber duck debugging). It turns out it was a bug that we already fixed on the CNMatrix branch, but not the TRENDY branch (this is a traditional development problem). I'm adding some tests to prevent this type of problem in the future (will put on TRENDY branch and cn-matrix)
- Erik: What are the things we should do on the CNMatrix branch for sure and which can wait?
- Erik: CESM2.2 planning, current plan is for June, CAM folks don't think that's realistic.
- Erik: Tag ordering, still haven't made it back to my tag...
- Erik: LUNA problems make me wonder about changes to our process to detect these problems sooner. Note the cost of fixing a problem has been shown to increase exponentially with time. Early on the costs are low. But very late (such as in this case) it's very high (and think about the cost if we had decided we needed to redo simulations). We do seem to have students that dig into parameterizations and find issues later on, so I suppose that's a good aspect of the wide community.
- Bill: tool for finding outstanding todo items in a PR
- Bill: tag planning: cime update, etc.
Long-term we probably will need MizuRoute to handle inputs everywhere, doing whatever RTM and MOSART do over ocean points. But that can possibly be deferred - maybe until we decide if we're adopting MizuRoute as our default for CESM3.
At what point should we consider whether we're going to switch to having MizuRoute as the default? Let's get it working first, then discuss what capabilities need to be added to make it viable as the CESM3 default (ice runoff, irrigation, etc.). It could be that we stick with MOSART as the default until MizuRoute has all of the features it needs plus additional capabilities (like reservoirs). But this needs further discussion.
If / when we start considering the possibility of switching to MizuRoute, we should bring some people in the ocean group into the loop.
Dave's inclination is to bring it to master if it's passing tests. Then fix things like isotopes.
Want to improve run sequencing of spinup, but that's not critical: we could have a script that facilitates this if needed.
- We still need to do some experimentation to learn some details of what spinup you need to do, etc.
Erik would like to clean up some of the namelist names. Can defer things like performance analysis.
- Will: Gokhan's list for CTSM5.1/CESM2.2
- Erik: "Badges" on main CTSM page
- Erik: Include totvegcthresh change with Luna bug fixes?
- Bill: documentation
These have a pretty large impact, based on an 1850 run. Want to do a full evaluation, with a full spinup and a transient run.
Should we bring these in in a way that preserves old (buggy) behavior for CLM5.0, or let this differ? It may depend on whether this improves - or at least, doesn't degrade - overall behavior.
We may also consider the initialization of vcmax to be a bug that we'd fix at the same time (i.e., also in a non-backwards-compatible way).
At what point do we start pointing to master? We're already pointing to master in the beta tags for CESM2.2, so we'd have to change course if we wanted to use the clm5.0 release branch in CESM2.2.
There's some question of whether CESM2.2 will be used for scientifically-validated coupled configurations. We should double-check with Gokhan, Mariana and others whether people are happy with (possibly significant) answer changes in CESM2.2.
One thing we might want to target is having hillslopes in sooner rather than later. (Side-note: Sean has gotten atmospheric downscaling working with hillslopes.)
We still need to figure out what 5.1 is going to be. A working plan could be the outstanding PRs plus hillslopes and biomass heat storage.
We can also include the totvegcthresh change with the Luna bug fixes, in the same or different tags.
Another is the cime change for off-by-a-timestep solar forcing. We should also bring that in.
- Bill: Erik, do you want to review the documentation PR any further before I merge it?
- Bill: Dave, do you want us to increase the priority of getting restarts working in WRF-CTSM?
Let's introduce a new parameter, `is_cold_pft`, on the parameter file. For now that will be used to trigger the new phenology logic. We might use this later to trigger Leuning vs. Kattge.
We could apply `is_cold_pft` to all cold PFTs, but the phenology change would only apply to PFTs that are both deciduous and cold.
- Erik: Should we check for an identically zero array of nhx and noy from the coupler and abort? It looks like that's going to be the best signal from CAM (both short and long term) of whether the ndep being sent from CAM is valid or not.
- Bill: I don't like the idea of having code that checks whether all values on a given processor are zero (if that's what you're suggesting here): this leads to a dependence of the check on processor count. e.g., at very high processor counts, might it be that all of the grid cells on one processor have 0 values by chance, because there are only a few grid cells on that processor?
- Erik: That is true; actually I think that zero ndep is bad even for a single point.
- Erik: The other idea (that I don't recommend) is: if an ndep file isn't found and CAM is being run, build-namelist sets the ndep file to `do_not_read_this_file.nc`, and if that's the file the model tries to read in, it aborts at initialization.
  - Bill: Is your objection to this that it dies at runtime rather than `preview_namelists` time? My feeling is that it's good to catch problems at `preview_namelists` time when doing so is relatively easy, but that if we need to add significant extra complexity to do that, then dying at runtime is just fine.
  - Erik: Yes, runtime rather than `preview_namelists` is what I see as a problem. I think I'm OK with this dummy-file approach if we check that CAM is running, so it would only happen in a case where ndep might be passed. I don't like doing this for cases with datm, where we know ndep will never be passed.
- Erik: We should update manage_externals on master.
- Erik: Should the 2100-2300 extension change go to master, or does it just need to be on the CESM2.1 release branch? Since the datasets are created, you could still run there even if there aren't compsets for it.
- Erik: We've made progress on mizuRoute, but are currently stuck with it dying in esmf_mesh create (for the HDMA grid). Mariana had the idea to use the MOSART half degree grid; that gets a little further, but still dies. I do have the MOSART nuopc case to compare to, and using the same grid helps to figure things out.
- Erik: CN matrix spinup? Is the new procedure the same, but with AD spinup at step zero with matrix off for 20 years rather than 5? Are we going to try other resolutions or configurations?
- Bill: Leah Birch PR. Some big questions / needs:
- Note that I'm asking for quite a lot of changes, including a significant rework of the parameters governing offset
- Should at least some of these changes be done in a backwards-compatible way, controlled by one or more namelist flags and pulling parameters out to file? If so, should we do those changes or ask her to?
- What should be done together, and what should be separated into
separate PRs? This question is connected to the first: We generally
try to separate answer-changing modifications from those that are
introducing a new option but are answer-preserving for existing
configurations. So, if we're going to make some of this
non-answer-changing (via namelist flags), then that should be kept
separate from any answer changes.
- Especially note the one-line bug fix that seems unrelated to the rest of this PR
- Use of 3rd soil layer rather than a depth-based criterion
- Should have a full scientific review
- Bill - documentation source to master; new images repo
There was agreement that, long-term, atmosphere should always provide ndep. Note that we had the same problem with prescribed aerosols; it's really the same solution. However, this could take a while.
Could check if CAM is providing 0s for ndep. But feeling is that it could be reasonable for ndep to be 0 sometimes. For example, Negin points out that someone could be doing an experiment where they 0 out ndep.
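If we did decide we wanted such a check, a processor-count-independent version of the test Bill cautions about above would reduce across all MPI tasks before deciding. A minimal sketch (not CTSM code; mpi4py used purely for illustration):

```python
from mpi4py import MPI
import numpy as np

def ndep_all_zero_globally(local_ndep, comm=MPI.COMM_WORLD):
    """True only if ndep is zero on *every* task, so the answer does not
    depend on how grid cells happen to be distributed across processors."""
    local_max = float(np.max(np.abs(local_ndep))) if local_ndep.size else 0.0
    global_max = comm.allreduce(local_max, op=MPI.MAX)
    return global_max == 0.0
```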
Regarding putting in place additional checks at `preview_namelists` time: feeling is that it's probably okay to leave things as is for now, unless this is shown to be a common vulnerability.
Still doing some scientific evaluation of this.
Dave's feeling is that all these changes should probably come in independently of each other (as opposed to having some high-level flags).
- Erik: Note, this change somehow got left off of the fates update on release (https://github.com/ESCOMP/CTSM/pull/820/files). I'm not sure why. I'd like to figure it out so it doesn't accidentally happen again.
- Erik: I get a Nitrogen balance error when I test the TRENDY branch starting at 1700 using Danica's 1700-01-01 startup file, and shifting cultivation on, when it goes to 1701 (whether I start at 1700-12-31 or 1700-01-01). This is with isotopes and harvesting off. I have also found the bug that Chris found was completely my fault, and it wasn't in place elsewhere. I am trying to look into why I checked in something that I saw as an obvious bug, when Chris pointed it out.
- Erik: Note, I got permission from Will, Dave, and Danica to remove 360x720 as a supported grid (I'll replace its use in testing with f09_g17, unless it's already done). I'm getting Adam and John T. to get their new grids to have SCRIP grid files, mapping files, as well as fsurdat files in their PRs for their support (and they'll have 1850, 2000, 1850-2015, and 1850-2100 for SSP5-8.5). I'm OK with having SCRIP grid and mapping files in place, but nervous about supporting lots of experimental grids and having to create all of them anytime we update surface datasets.
- Erik: I've created a start at how to update BGC or soil-BGC with matrix, here: https://github.com/ESCOMP/CTSM/wiki/How-to-add-new-biogeochem-or-soilbiogeochem-changes-to-the-CN-Matrix-solution
- Erik: Note I found an issue with the use of a very small denominator 1.e-30 in the Matrix code. I added a note to the CN Matrix PR. Should we run with my change (likely small change to answers), or do something different, or don't worry about it until we run into it again?
- Bill: For bioenergy PR, I plan to introduce a single-point dataset for testing that has miscanthus and switchgrass, and run a multi-year cold-start ciso test with that dataset.
- Bill: bioenergy will require CN Matrix change
- Negin: For WRF-CTSM runs, I would like to create a Jupyter Notebook which includes all the steps necessary for creating surface datasets from WRF geogrid files. This will let all future users know exactly the steps necessary and give them the chance to explore the dataset, etc. As we move forward with OCGIS and the new toolchain, we will update this Jupyter notebook to reflect the latest developments.
- Negin: Virtual coffee time or maybe virtual lunch gatherings for the land group.
Bill: surprised that a division by 1e-30 would cause a crash, because double precision should allow representation of numbers up to ~10^308; but using 1e-15 seems as good as 1e-30 here, so our feeling is to go ahead with the change to 1e-15.
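A quick check of that reasoning with 64-bit floats:

```python
import sys

print(sys.float_info.max)   # ~1.8e308: the representable ceiling
print(1.0 / 1e-30)          # 1e+30 -- no problem for typical numerators
print(1e280 / 1e-30)        # inf  -- overflow needs a huge numerator
print(1.0 / 1e-15)          # 1e+15 -- the proposed floor behaves the same
```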
The standard thing we'll do is just create a year-2000 dataset. We'll only create others (1850, landuse_timeseries) if specifically requested for a given grid.
We'll do this on the release branch. Erik will create a tag that includes the raw datasets, then another with the new `landuse_timeseries` and compset.
Meeting with Adam Herrington, Chris Fischer, Peter Lauritzen, Dave Lawrence, Erik Kluzek, Mariana Vertenstein, Bill Sacks
Feeling is that we're generally happy to bring in new grids that CAM needs, but for disk space reasons, we're more hesitant to support transient cases via landuse_timeseries files. Adam feels that's reasonable for at least the high-resolution grids: he's just been running F2000 cases; Peter agrees.
For new grids that are becoming supported (examples now are: ne30pg3, ne120pg3, ARCTIC, ARCTICGRIS, CONUS, FV3 (C96)), we are happy to bring them in as fully-supported in CTSM, including that we'll regenerate the surface datasets whenever needed.
However, for grids that are truly still experimental (only used by a couple of people, not yet hardened in terms of grid definition, unsure if they'll become supported, etc.), feeling from CTSM group is that it would be best to use the new (not-yet-implemented) cime feature that allows using a user_mods directory for this purpose.
We also asked for some way to know what grids are no longer supported. The list of grids we currently consider supported is: 360x720cru, 48x96, 0.9x1.25, 1.9x2.5, 10x15, 4x5, and ne30np4, plus nldas2 for NMP, ne16np4 for testing, f05 for full chemistry, T42 for SCAM, and ne120np4 for high-resolution SE. And we have an extensive list of single point and regional grids for use in CLM. Erik would like to remove T31, for example.
- Erik: nuopc wasn't working with ROF model (#940), which was going to be a problem for mizuRoute. Looks like Mariana figured out we need to use PIO1 rather than PIO2 (I suppose I'll have to hardwire these compsets to use PIO1 rather than PIO2). We do have the mizuRoute compset building/linking now. It died in initialization. So hopefully we can try with PIO1 and see if we can get further.
- Erik: I'm having problems with the release-to-master branch outputting 2d fates variables. It's assuming the wrong size for the history pointers. I don't see why this is different than for the release-clm5.0 branch.
- Erik: Ryan wants to make FATES have the ability to use the soil BGC spinup. One problem is that fates and `use_cn` were mutually exclusive, but `use_cn` is used in soil-BGC to designate Carbon-only, and it probably shouldn't be used that way. It seems worthwhile to clarify what `use_cn` actually means, and possibly have a different switch for soil-BGC.
- Bill: bringing nuopc changes to master
- Bill: zoom?
- Erik/Will: LMWG side meetings. Let's assign the list of people
asking for help out to specific people. Those assigned should then
schedule with them over email. If they want to come to every second of
the LMWG meeting and aren't available afterwards, Thursday would be
the only time for discussion. Most people will be able to schedule
while something else is going on.
- List of people:
- Katie Murenbeeld (FATES LOGGING) (Ryan and Erik)
- Pei-Ling (soil change over time) (Peter)
- Yanyan (Porting problem to CONSTANCE for transient landuse) (?)
- wenFu (Fire in CAM) (?) (mainly needs to meet with scientists, but I will note that there's a good chunk of SE involved here as well)
- Grace (CLM driven with CAM) (?)
- Mingjie (SIF branch) (?)
- Marysa (SLIM) (?)
- Bill: See https://docs.google.com/spreadsheets/d/1OVSvjzcVRG5WJm9OUWiy4j0sH2h1cdthvXStyBY702w/edit#gid=0
- Erik/Will: Fan PR and biofuels PR. How can we encourage these to be completed?
- Bill: Following up about coupling CLM to GFS
- Erik: Have a branch for CMIP5 surface datasets. Peter had to recreate raw datasets for this. I assume this doesn't need to come in, in any form. There will be a few datasets that could be used for simulations though.
- Erik: Marius is having a Least Valid Date problem with datm for some single point cases he's trying to run with FATES (`fates_next_api`). I've looked over his files and don't see a problem. He doesn't get the problem when using CTSM-master, only `fates_next_api`. And it worked previously with an older `fates_next_api` version. See issue #931.
Will and Dave suggest that we bring this in in its semi-working form, with caveats. We'll add a test for it, but not try to get it compatible with things like hillslope hydrology (multiple vegetated columns) for now. Then it will at least be available to people who want to use it. We can then reevaluate after some time to see if there has been interest in it; if not, then we could pull it.
We need to look at what she has done to address our points.
e.g., for https://github.com/ESCOMP/CTSM/issues/935
Feeling is: if the fix is fairly trivial, go ahead and do it.
https://github.com/ESCOMP/CTSM/issues/934
If implementation ends up being simple to bring to master, do so. Otherwise, maybe not.
- Bill: helping during LMWG: If people get in touch ahead of time, I'd be happy to schedule times to help individuals during some of the talks.
- Bill: replying regarding coupling CTSM to GFS
- Whether LILAC would be the right choice depends on their architecture: is this a hub-and-spoke system, or are they wanting to call CTSM directly from the atmosphere? At first I assumed the latter, but given that they also have an ocean component, that may not be right.
- Erik: TRENDY matrix simulation. `do_grossunrep=T` causes Carbon balance errors with matrix on, even with `do_harvest=F` and `use_soil_matrixcn=F`. I also added the checks that Chris suggested, which were mostly about issues when grossunrep and harvest are on at the same time.
- Erik: mizuRoute: we have the build working. Naoki is figuring out more details in the high-level cpl code.
Mike thinks that land model is probably called from within GFS (not hub and spoke).
Bill will follow up with them about this.
Erik's guess is that the problem with `do_grossunrep` may be due to the lack of calling `CNPrecisionControl` at intermediate times.
https://github.com/ESCOMP/CTSM/issues/928
Erik asks if this is useful for other purposes, so whether it's worth the time to put in more general applications. We can see theoretically interesting things you could do with this, though no specific near-term needs come to mind.
Dave suggests implementing this specifically for now, then we can generalize it later if that would be useful.
This is getting painful because master has diverged enough from the release branch. Moving forward, it could be best to merge things to both places right up-front rather than saving things for a big merge like this.
Bill & Dave feel that, long-term, it would be good to have consistency in compset long names: always using "CTSM" rather than "CLM", and also (as Dave points out) explicitly having something like "CTSM51%CLM-SP" (rather than having CLM implied and only being explicit about NWP).
But we could either get there incrementally or all at once.
- Bill: canceling next Monday's meeting? (I probably can't make it)
- Erik: concern over the demonstrated difficulty in maintaining both non-matrix and matrix version of BGC code. Currently requires expert help. I'm more spun up on it now. The fundamental issue is in the reason that the state updates are separated. We need a robust solution in the matrix version to prevent negative pools as processes are added into the system. We could develop an automatic check/change that happened when a new flux was added to the matrix solution.
- DML: From Maria Val Martin: I have created a surface file at 0.23x0.31 with 78 PFTs for 2000 and am happy to share it. It is about 29 GB. I also have the user_datm.streams.txt.Anomaly.Forcing.* files for the ssp-rcp runs
- DML: Priorities for Sam
- Erik: We had some discussion about SLIM over email. Bill asks this question "One particular thing I'd want to understand is if there might be the desire to do a hybrid approach where you use SLIM for some things but not everything, which would argue strongly for incorporating SLIM in CTSM, but would also push us in certain directions with the design."
- Erik: Cubed sphere fv dycore files from John Truesdale.
We'll just stick with having meetings Thursdays. Usually it will be CLM software, but sometimes we'll use it for bigger picture CTSM things. Mike will just attend for the latter (we'll let him know ahead of time).
Erik is the only one here who has spent enough time getting up to speed with this that he could maintain it moving forward.
Dave feels that we should at least continue somewhat pushing forward with this, partly because of the benefits it could bring, and partly because the long-term vision of hydrology is to do a matrix solve, so it would be nice if BGC had that capability, too. (Even though we're not sure if the hydrology will ever reach the hoped-for point in this respect.)
One desire would be to have some checks that fail if you forget to make a necessary change in the matrix solution.
Plan is to provide out-of-the-box capability for this resolution for all of the SSPs.
Want this on both master and the release branch. (People will probably use it more from the release branch.)
Is there any need for a hybrid approach where you use CTSM for some things and SLIM for others?
- Dave's sense is probably not
Dave's desire is that it should be relatively seamless to turn on SLIM, so either having it in the CTSM repo or having it be an external of CTSM could work.
Dave's feeling: Let John maintain it until it becomes an official CESM resolution.
So we won't have tests for this ourselves; if something breaks, it's on the CAM group to fix it.
- Bill: upcoming tasks / priorities for WRF-CTSM coupling and NWP work
- DONE: Things I have done in the last couple of days:
  - Send roughness lengths
  - Working on rework of sending stop time (nearly done)
- Bill & Mike: there are still some things I want to go through with you in the WRF implementation
- Bill: Add automated test for LILAC
- Bill: Slightly improve ease of namelist generation
- Bill: Bring lilac_cap branch to CTSM master
- Bill: LILAC build system
- Bill: a bit more software validation (manual examination of coupling fields)
- Mike & Sam: Scientific validation & improvement
- ???: Some other smallish LILAC needs (see https://github.com/ESCOMP/CTSM/projects/23)
- ???: More work on CTSM performance
- ???: Restart capabilities in WRF-CTSM
- ???: Improved data model capabilities in LILAC (probably not needed for WRF, but may be needed for other atmosphere models)
- ???: Initialization method
- Coupling aerosols + dust from WRF-Chem to CTSM. [long-term?]
- Sending emissions from CTSM to WRF-Chem. [long-term?]
- LILAC for coupling river to WRF-CTSM. The capability is already there.
- WRF vs. CTSM timesteps. Do we want to run CTSM on every WRF timestep?
- What is the forcings timestep?
- What is the timestep CTSM cycles through?
- What timestep do we want to write CTSM output? (potential for speedup)
- Mike: There are some other fields that we may need to send, depending on the boundary layer scheme being used.
  - Some schemes want a surface temperature and exchange coefficient
  - One possibility is to calculate things like bulk Richardson number consistent with the bulk fluxes (see the sketch below).
  - One problem is fluxes that cancel out despite having an average temperature difference between the atmosphere and land surface.
Mike would like to talk to some people to see what is reasonable here. Mike doesn't feel we can handle every possible WRF scheme. There are probably 2-3 schemes that are used by nearly everyone (though unfortunately the most popular 2 schemes have different requirements).
One possibility would be to take the average fluxes coming out of CTSM and use a surface layer scheme to determine the fields needed by the boundary layer scheme.
- Feeling is that this is probably a reasonable approach
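For reference, a sketch of the bulk Richardson number calculation mentioned above. Exact forms and sign conventions vary between boundary layer schemes, so treat this as illustrative only:

```python
def bulk_richardson(theta_air, theta_sfc, wind_speed, z_ref, g=9.81):
    """Ri_b = g * z * (theta_air - theta_sfc) / (theta_air * U**2).

    theta_air/theta_sfc: (virtual) potential temperatures (K) of the air
    at reference height z_ref (m) and of the surface; wind_speed in m/s.
    Positive values indicate stable stratification.
    """
    return g * z_ref * (theta_air - theta_sfc) / (theta_air * wind_speed**2)

# Stable example: air 1 K warmer than the surface at 10 m with a 5 m/s wind.
print(bulk_richardson(theta_air=290.0, theta_sfc=289.0, wind_speed=5.0, z_ref=10.0))
```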
Mike: Would it make sense to have a small subset of fields that come out more frequently (e.g., hourly)?
- We could do this in the elegant, long-term way (which Erik proposed a while ago)
- But in the short-term, we can use the `user_nl` mechanism for this
Mike and Sam will work on this.
Mike suggests looking at two configurations
- The current "regional climate" configuration (1 month)
- A more NWP-ish case (~ 3 days)
- Initial conditions are very important for this
Dave: one approach would be to start by focusing on the 1 month (or even a full year) to work out basic issues, like whether we are passing some fields wrong - without needing to worry about initial conditions. Then once you figure that out, we can work on initial conditions.
- Mike: Hasn't yet looked at whether it's a growing bias or if it starts from day 1
Negin: should we also look at different regions?
- Mike: eventually, yes, but for now let's focus on CONUS. The community could help once we have a good CONUS simulation.
In Noah-MP, soil moisture & soil temperature taken from some source of atmosphere & land initial conditions. WRF preprocessor (WPS) interpolates source values to the model grid.
This leads to a model inconsistency. The preferred thing is to do a spinup with your actual land model, but people don't do that very often.
Mike would suggest that we do something like this for CTSM. For WRF and MPAS we can try to use the existing system.
Dave: Need to think separately about:
- What do we want to do for WRF?
- What do we want to do for other models in general?
- We could start by just providing a set of initial conditions from an offline run with observed atmospheric forcings.
WRF passes fields through to the land model in the first time step (or maybe could be in initialization): soil moisture and temperature fields, and some snow fields.
- We could think about doing this in CTSM.
- We could think about reusing some of the init_interp vertical interpolation for this purpose.
Mike notes that a lot of what we're talking about is to make things easy for an average WRF user. For more rigorous applications, you'd probably do an initial offline spinup.
What is the priority / time frame of this?
- Dave: maybe start by doing a brute force / manual thing where we get CTSM initial conditions to match Noah-MP's (from standard WRF) for our initial validation.
- In parallel - or maybe starting in a few months: work on passing information from WRF preprocessor into CTSM at initialization and overwrite information in CTSM based on that.
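A minimal sketch of the vertical-interpolation piece of that overwrite step, with placeholder layer depths (this is not the actual init_interp code):

```python
import numpy as np

# Source (WPS/Noah-style) soil layers: depths (m) and temperatures (K).
src_depth = np.array([0.05, 0.25, 0.70, 1.50])
src_tsoi = np.array([272.0, 273.5, 275.0, 276.0])

# Target CTSM layer node depths (m) -- values here are illustrative only.
ctsm_depth = np.array([0.01, 0.04, 0.09, 0.16, 0.26, 0.40,
                       0.58, 0.80, 1.06, 1.36])

# Linear interpolation; endpoint values are held constant beyond the
# source range (np.interp clamps rather than extrapolating).
tsoi_init = np.interp(ctsm_depth, src_depth, src_tsoi)
```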
It might be a good idea to talk to another modeling group, like SAM, before embarking on this.
Lower priority than the scientific issues
Should we investigate the land model coupling frequency? Possibly.
One first thing to try is turning off output to see how much the hourly output costs.
Performance probably more important for operational community than research community.
Sean: Would it be possible to buffer output so that you wait until you have a bunch of output time steps before actually writing output?
Is there a significant cost in reading the aerosols?
- Erik: since the data are monthly, it shouldn't have a huge cost.
This is something we'll need at some point, but the question is the urgency.
Probably the current state is okay - that you need to manually set the initial conditions upon the restart.
Probably not high priority for WRF runs
Erik: Currently, we can't create a regional river file. But Sean points out that you can use a global grid for it.
- Bill: Do we still want to encourage people to use issues for support requests? Now that the forums are better, I'm somewhat inclined to point people there for support requests (e.g., https://github.com/ESCOMP/CTSM/issues/890)
- Negin: I've noticed many CTSM issues are related to LILAC. But currently, there is not a separate label for LILAC issues. Adding a separate label such as `tag: LILAC` (or something similar) will make finding issues related to LILAC much easier. This can be a temporary label for when we are actively working on LILAC.
- Erik: Sean identified a potential problem with the cosine(sol-zenith) angle forcing. I think there is an inequality that is off (something like should be >, but is >=, or vice-versa).
- Erik: Katie pointed out a problem she had with linking clm5.0.000 on cheyenne. I finally had her update the cime version, which we think should give identical answers. Should we point this out as a problem with the solution?
- Erik - Bin pointed out an inconsistency in directory names. Should I make an issue on this (as a WONTFIX) to document it for others who notice the same thing?
- Erik -- We are close on mizuRoute. I can now create a case with it. Still more work to get it to compile and run. Our first version may need to run with CTSM, so I don't have to modify things to get standalone DLND working for ROF components. I will also need to put some grids in inputdata/rof/mizuRoute/ancil_dir (is it OK to store this in CESM inputdata and svn?). I have a branch of cime to get it started; will need to also add some grids that CESM recognizes.
- Erik - I figured out the matrix changes needed for shifting cultivation. There are a few detailed questions I may need verified from Chris and Peter L. (wood product gain, xsmrpool). And I have a question for Chris on one issue that only applies to harvesting for fine root Carbon. But, the same issue doesn't apply to shifting cultivation so it shouldn't affect the code changes I'm working on.
People agree: point users to forums for this.
Bill: We have the LILAC project; feels it gets cumbersome to have both a project and an issue.
We'll think about introducing additional columns in the LILAC project board to facilitate understanding the issues in there: e.g., a discussion column, and a longer term column.
Sean found an issue with this forcing. It seems like forcing is off by one time step.
This affects all land-only simulations.
Where should we fix this? Dave suggests fixing it both on the release branch and master. But don't need to get it into cesm2.1.2. (In fact, we should do a simulation before putting this change in.)
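A toy illustration (not the actual datm code) of how a >= vs. > boundary test shifts which forcing record gets used by one interval:

```python
# Start times (hours) of four 3-hourly forcing records.
forcing_start = [0.0, 3.0, 6.0, 9.0]

def record_ge(t):
    """Last record with t >= start: boundary times begin the new record."""
    return max(i for i, t0 in enumerate(forcing_start) if t >= t0)

def record_gt(t):
    """Last record with t > start: boundary times reuse the old record."""
    return max((i for i, t0 in enumerate(forcing_start) if t > t0), default=0)

for t in (3.0, 4.5, 6.0):
    print(t, record_ge(t), record_gt(t))
# Exactly at t = 3.0 and 6.0 the two rules disagree, so data at every
# record boundary lags by one interval: forcing shifted by one time step.
```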
Review of https://github.com/ESCOMP/CTSM/pull/884
Some possible things needed:
- Create new variable distinguishing from food
- Need their new param file
Dave and Will would like an easier way to find what new features have been added to the model since X time.
We currently have wiki pages like https://github.com/ESCOMP/CTSM/wiki/Answer-changing-tags and we could add another for new features, but that doesn't feel ideal.
We could also use the ChangeSum and/or ChangeLog for this. Some ideas are:
- Come up with a short code for type of tag, like [F] for new feature. This would go in the one-line summary; then you could search the ChangeSum.
- Add a new section of the ChangeLog that asks about this - e.g., "Does this tag add a fully-functional, significant new feature", with a checkbox like we have for the major science changes in the ChangeLog, so you can search the changelog for checked boxes with some given text.
Bill asks Will and Dave to come up with a list of the kinds of things they'd like to query, then we can come up with a solution that lets them (and others) find this information more easily.
- Bill - Do we want SLIM coupled to WRF through LILAC? You made it sound like we might. If this is a desire, this would require a significant rethinking in either the LILAC design (decoupling it from CTSM) or the intended SLIM design (e.g., incorporating it as a top-level option in CTSM rather than as its own repository).
- Bill - What are our high-level priorities for the WRF-CTSM
coupling and the NWP configuration in general? And what is the
timeframe on which we want to deliver these things? Some major things
I can think of are:
- WRF scientific validation and improvement
- Improved build process
- Computational performance
- Restart capabilities in WRF-CTSM
- Improved data model capabilities in LILAC
- Improved UI for setting CTSM inputs
- Streamlined toolchain
- Initialization method
We ended up not having a full meeting, but here are some brief notes from a discussion between Bill S, Dave L and Mariana V on the above agenda items.
Not worth rethinking LILAC design at this point for the possibility of bringing SLIM into WRF rather than CTSM.
A more likely path forward, that Dave would like for a number of reasons, would be making LILAC part of the CTSM repository rather than a completely separate component.
Bill feels that would be doable, though probably more difficult than having it be a totally separate component. The biggest challenges in doing this in a usable and maintainable way would likely be on the scripting side.
From Bill's list above:
- WRF scientific validation and improvement
- Mike together with Sam?
- Improved build process
- Bill will work on this at high priority
- Computational performance
- Unclear when this will be worked on further
- Restart capabilities in WRF-CTSM
- Unclear when this will be worked on further
- Improved data model capabilities in LILAC
- Unclear when this will be worked on further (probably not needed for WRF, but may be needed for other atmosphere models)
- Improved UI for setting CTSM inputs
- Bill will work on this at high priority, at least to get some small improvements in place
- Streamlined toolchain
- We'll continue to try to make gradual progress on this
- Initialization method
- We need to have more discussion on this
- Erik: FYI: Keith had some examples of pictures and references in the User's Guide (UG). I added most of the pictures (all but one) and the references to them. I still need to add more references and the last picture (which also includes discussion of mksurfdata_map).
- Erik: mizuRoute notes: The first version will not handle frozen streams. The first version will pass zeros for the mizuRoute export state. In the first version, the import and export states are on the same grid, but in general this is NOT true for mizuRoute. mizuRoute is going to initially expect ice, direct-to-ocean runoff, and irrigation fields from the coupler, but will ignore them.
- Erik: The initial version of the "namelist" control file handling will be to read in the control file in its native format, convert it into a python dictionary, and then possibly change the values according to CESM XML settings (see the sketch after this list). The first version will ignore `user_nl_mizuRoute`. The next version could read it, but only in the mizuRoute control file format; it would allow you to make changes to the control file from `user_nl_mizuRoute`. A future version could change it so that `user_nl_mizuRoute` expects namelist format like other `user_nl_*` files.
- Erik: In the nuopc driver we should turn off ice, direct-to-ocean and irrigation sent to mizuRoute if they are turned off. Also, flooding will be off and ignored in mizuRoute. Note that the nuopc driver for dlnd only handles cism; it will need to be extended to have a ROF standalone setup. The MCT driver is more general in this respect.
- Erik: We need a cime branch for mizuRoute (that will eventually come to cime master). It needs some changes to get mizuRoute to work as a component, and to have standalone ROF. We also need to check that mizuRoute grids are correct by looking at the IDs rather than grid centers (mizuRoute grids don't have grid centers, and with thousands of vertices it doesn't make sense to calculate them). I plan to put this branch on my fork.
- Erik: mizuRoute runs with a river network file that's different from the catchment file that's used for input. We can translate it to the input grid, but some catchments don't have rivers, so runoff will pass directly through in these areas.
- Erik: We need to add some new paramdata fields for Fang's fire changes. I think this should be handled by checking whether they exist and, if not, setting their values to NaN; then, if they are used, the code will choke. We could also handle it by checking the version of the file or some such thing, or by putting them on all files.
- Erik: I'm adding a `ctsm5_1` physics option on Fang's branch, and compsets for it. One thing is that, for archiving, if the compset has `_CTSM5_1%BGC-CROP_` in the name, it seems to expect filenames with ".ctsm" in them. I haven't tracked this down in the cime code, but it does seem to work this way, and the change isn't too hard. Is this OK? I'm also assuming we want plain ".ctsm" and not a version number like ".ctsm5". It looks like adding version numbers works without trouble.
- Erik: Fang's branch also has a change to btran2 (which is just for her fire model). I'm handling that with a hydrology namelist item. The fire changes are done with a new module, CNFireLi2021Mod.F90, and a new fire_method='li2021gswpfrc'.
- Erik: Working on tag for Friday's release.
- Bill: Changed "closed: invalid" to "closed: non-issue". Any better suggestions?
- Negin: The naming conventions have a few exceptions that we need to discuss, laid out in #869. What do you think about them?
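As promised above, here is a minimal sketch of the control-file-to-dictionary step Erik describes. The `<key> value ! comment` line format and the override mechanics are assumptions for illustration, not mizuRoute's actual parser:

```python
# Minimal sketch of reading a mizuRoute-style control file into a python
# dictionary and then overriding values from CESM XML-derived settings.
# The "<key>  value  ! comment" line format is an assumption.
def read_control_file(path):
    """Parse a control file into a dict of {key: value} strings."""
    settings = {}
    with open(path) as f:
        for raw in f:
            line = raw.split("!", 1)[0].strip()  # drop trailing comments
            if line.startswith("<") and ">" in line:
                key, _, value = line.partition(">")
                settings[key.lstrip("<").strip()] = value.strip()
    return settings

def apply_xml_overrides(settings, xml_settings):
    """Overwrite control-file defaults with CESM XML-derived values."""
    merged = dict(settings)
    merged.update(xml_settings)
    return merged
```

In this sketch, whatever comes from the XML-derived dictionary simply wins over the control-file default; the real precedence rules (including any future `user_nl_mizuRoute` layer) would need to be decided.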
MOSART has a bunch of different options for handling frozen runoff. Do we want to support all of them in mizuRoute?
Dave: We probably need to go through it a bit to think about this; his guess is that the current MOSART default is fine, but he would like to think it through some more.
There may be catchments with no river network segments. We may need to handle this by sending runoff that lands there directly to the ocean.
This probably won't change the way we do mapping (relative to how we do it for, say, MOSART): We generate mapping from all polygons to the ocean. But it just means that there will be some more interior cells that actually generate a non-zero runoff.
Dave: We'll need to be careful to think hard about conservation once this is coupled.
Bill: this may be designed to handle negative runoff, to avoid river volume going negative.
Let's just do the simple thing of adding parameters to all three parameter files (clm45, clm50, clm51).
Also include https://github.com/ESCOMP/CTSM/issues/206
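A minimal sketch of what "adding parameters to all three parameter files" could look like with python/netCDF4; the field name, value, and file names below are placeholders, not real settings:

```python
# Hedged sketch: append a new scalar parameter to each CTSM paramsfile.
# Field name, value, and file names are placeholders for illustration.
from netCDF4 import Dataset

def add_scalar_param(params_path, name, value, units=""):
    """Add a scalar double parameter if the file doesn't already have it."""
    with Dataset(params_path, "a") as nc:
        if name not in nc.variables:
            var = nc.createVariable(name, "f8")  # no dimensions: scalar
            var.units = units
            var.assignValue(value)

for path in ["clm45_params.nc", "clm50_params.nc", "clm51_params.nc"]:
    add_scalar_param(path, "placeholder_fire_param", 0.5)
```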
For now, we should stick with _CLM51
- No underscore between 5 and 1
- Use CLM for now, since cime expects this in compset matches in some places
cime also expects the model name to be in the file name.
So at some point we need to go through and completely change references to `clm` to `ctsm` throughout ctsm itself, cime, and cesm.
Dave suggests adding a little in the user's guide about this.
Erik thinks the temporal frequency can be whatever you want.
Can you use this to prescribe soil moisture from some other source? In principle, yes, but it would probably not make sense. It's really meant to cut off a feedback by prescribing soil moisture from some other CTSM run.
How should we stay on top of ongoing big developments?
We have a label, "enh: major new science". Maybe the thing to do is to get better about opening a new issue for things like this as the developments start.
We should at least get better about doing this for things we're working on in-house.
Let's change the label to just "enh: new science". This will apply to major and minor stuff, if it adds new science capabilities (basically, the sorts of things that Dave and Will would want to stay on top of and announce to the working group).
For files starting with `clm_`, replace with `ctsm_`:
- ctsm_NcdIoUtils
- ctsm_PftCon
- ctsm_Atm2LndType
- ctsm_CNFun
- ctsm_NutrientCompetitionPhys45
- ctsm_Methane
- ctsm_Ozone...
- ctsm_LilacCap
- ctsm_LilacAtmCap
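A minimal sketch of how the file renames above could be scripted; the `src` root, the `*.F90` pattern, and the dry-run default are assumptions, not an agreed-on tool. Note that this only handles file names; references inside ctsm, cime, and cesm would still need to be updated separately:

```python
# Hedged sketch: plan (and optionally perform) clm_ -> ctsm_ file renames.
# The src root and the *.F90 pattern are assumptions for illustration.
from pathlib import Path

def rename_clm_files(src_root="src", dry_run=True):
    """Print planned clm_* -> ctsm_* renames; apply them if not dry_run."""
    for path in sorted(Path(src_root).rglob("clm_*.F90")):
        target = path.with_name("ctsm_" + path.name[len("clm_"):])
        print(f"{path} -> {target}")
        if not dry_run:
            path.rename(target)

if __name__ == "__main__":
    rename_clm_files()  # dry run by default; review before applying
```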