GSL atmosphere_model will not reproduce NCAR atmosphere_model #66
There are 5 seaice points in the domain, so I'm exploring the diffs starting with seaice.
Does anyone know why we are deallocating these four variables, compared to the NCAR code? It seems that would mean we aren't carrying the sfclayer-calculated values to the downstream calls. Is that intentional? It's causing my current test code to crash (I believe).
I think it is a bug. These variables are not sent back to MPAS in sfclayer_to_MPAS, so they will be undefined when they are needed in mpas_atmphys_driver_seaice.F90.
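For illustration, a minimal self-contained sketch of the suspected lifetime bug (names mirror the discussion; this is not the actual MPAS source):

```fortran
program lifetime_bug_sketch
  ! Minimal sketch of the suspected bug: a scratch array is deallocated by
  ! the sfclayer stage before the seaice stage reads it. Names are
  ! illustrative, not the actual MPAS code.
  implicit none
  real, allocatable :: flhc_sea(:)

  call sfclayer_driver()
  deallocate(flhc_sea)   ! suspected bug: freed before the seaice driver runs
  call seaice_driver()

contains
  subroutine sfclayer_driver()
    allocate(flhc_sea(5))
    flhc_sea = 1.0       ! stands in for the surface-layer calculation
  end subroutine sfclayer_driver

  subroutine seaice_driver()
    if (.not. allocated(flhc_sea)) then
      print *, 'flhc_sea is gone: sfclayer values never reach seaice'
    end if
  end subroutine seaice_driver
end program lifetime_bug_sketch
```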
@tanyasmirnova yes, those are the four that stand out; I haven't looked too closely to see if there are others. I will make some changes to not deallocate these. This may or may not change the hrrrv5 baseline.
Tanya, is it an oversight that they are NOT sent back to MPAS in sfclayer_to_MPAS? Maybe the seaice coupling hasn't been completed. In any case, I think you're on the right track to figuring out this non-reproducibility issue.

Also, in the NCAR code, these lines seem like a bug, right?

```fortran
if(allocated(flhc_p) ) deallocate(flhc_sea )
if(allocated(flqc_p) ) deallocate(flqc_sea )
```

Is there a reason why the variables are different in each line? The logic is different from all the other lines around it.
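Presumably (my assumption; the intended fix is not confirmed in this thread) each guard should test the same array it deallocates:

```fortran
! Likely intended form: guard and free the same array.
if (allocated(flhc_sea)) deallocate(flhc_sea)
if (allocated(flqc_sea)) deallocate(flqc_sea)
```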
Although it has been a bit tedious, diagnosing these things has generally been a positive exercise for me, since I think I understand the code a little better each time. I couldn't understand how some of these variables were being used across schemes. Now I see that pretty much all of this memory passing is done through mpas_atmphys_vars.
@joeolson42 Joe, I think it is confusing that some variables are sent back to the MPAS state and others are not. When Laura moved the 2-m diagnostics to the seaice driver, I think it would have been cleaner to get all the variables used there from the MPAS grid. I guess she had some reasoning behind not doing it for these four *_sea variables, but I have no idea what it was.
I think I understand the logic. Any vars that you want to persist forever are sent to the MPAS state variables, while any vars you only want to persist for the physics timestep are put into these _p and _sea variables. Phys-timestep vars that only need to persist for your scheme's call are allocated before the scheme and deallocated after. Those that are going to be used downstream (but still within the physics timestep) should not be deallocated after the scheme, but after the last scheme that uses them. That's why several of these surface layer variables are not deallocated after the sfclayer driver, but after either the lsm driver or the seaice driver. Overall, I think the purpose is to optimize memory management.
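A compact sketch of the three lifetimes described above (illustrative names, not the actual MPAS fields):

```fortran
program lifetime_rules_sketch
  implicit none
  real, allocatable :: br_p(:)     ! scheme-local: lives for one call
  real, allocatable :: flhc_p(:)   ! downstream: lives for the whole timestep
  real :: hfx_state(5)             ! stands in for a permanent MPAS state field

  ! Scheme-local scratch: allocate before the scheme, deallocate right after.
  allocate(br_p(5))
  br_p = 0.5
  deallocate(br_p)

  ! Persist forever: copy the physics result into the MPAS state.
  hfx_state = 2.0

  ! Downstream scratch: allocated in the sfclayer driver, read by later
  ! drivers, and freed only after the last scheme that uses it.
  allocate(flhc_p(5))
  flhc_p = 3.0
  call seaice_driver()
  deallocate(flhc_p)               ! last user has run; safe to free

contains
  subroutine seaice_driver()
    print *, 'seaice still sees flhc_p =', flhc_p(1)
  end subroutine seaice_driver
end program lifetime_rules_sketch
```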
My understanding is similar to yours, but I am a bit confused as to why some of those *_p variables persist from driver to driver when they could be sent "back to MPAS" by setting their permanently allocated counterparts equal to the *_p variables. Then it would be straightforward to have each of the *_p variables deallocated before leaving the driver. I think this is how it's done for most variables held in memory, even if they aren't really "state" variables. There just happen to be a few exceptions, and I can't help but wonder if they may be unintentional bugs. I think it would be cleaner to send any variable that needs to be kept in memory to the non-*_p counterpart. Does that make sense?

We are all trying to learn MPAS by reading the current code, which may be buggy. I think a lot of us are confused about how it is actually meant to be coded.
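A sketch of that alternative (hypothetical routine and names): copy each *_p field into its permanently allocated counterpart before leaving the driver, so every *_p can be freed locally:

```fortran
! Hypothetical sketch: "send back to MPAS" inside each driver, then free
! the scratch copy before leaving, so no *_p outlives its own driver.
subroutine sfclayer_to_mpas_sketch(flhc_state, flhc_p)
  implicit none
  real, intent(inout) :: flhc_state(:)
  real, allocatable, intent(inout) :: flhc_p(:)

  flhc_state = flhc_p    ! the permanent counterpart now holds the values
  deallocate(flhc_p)     ! downstream drivers read flhc_state instead
end subroutine sfclayer_to_mpas_sketch
```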
It certainly makes the current deallocation process messy, since it's not clear what has been deallocated after a scheme driver call and what hasn't. It seems like it would be better to check/deallocate everything in mpas_atmphys_vars at the end of the physics driver calls, but I guess that would have a larger memory footprint. What you're saying definitely makes sense for the vars that have both state and _p counterparts, though; otherwise, what does the state contain?
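A self-contained sketch of that "check/deallocate everything at the end" idea (hypothetical module; the real mpas_atmphys_vars holds many more fields):

```fortran
module atmphys_scratch_sketch
  ! Hypothetical stand-in for mpas_atmphys_vars with just two fields.
  implicit none
  real, allocatable :: flhc_p(:,:), flqc_p(:,:)
contains
  subroutine deallocate_all_scratch()
    ! Called once after the last physics driver instead of per-driver
    ! bookkeeping; the allocated() guards make it safe no matter which
    ! drivers actually ran.
    if (allocated(flhc_p)) deallocate(flhc_p)
    if (allocated(flqc_p)) deallocate(flqc_p)
  end subroutine deallocate_all_scratch
end module atmphys_scratch_sketch
```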
I originally thought that all _p variables had state counterparts and that the _p variables were for translating into WRF-expected dimensions. If there are some that don't have state counterparts, that's news to me.

I also thought that, if our schemes (or drivers) are no longer (i,k,j), then we arguably don't even need the _p variables. Not sure if that is true or not. Maybe it's a question for Laura/Michael.
It would be nice to not need the vector-to-2D translation. That could be taken care of pretty easily at the scheme driver interface level, since most of the physics are column. There are a few *_p examples in the noah scheme that don't have state counterparts, but I don't see very many elsewhere in a quick check. That's not to say they shouldn't be there; i.e., maybe those outliers are bugs.
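A sketch of what dropping the translation could look like (illustrative only; MPAS stores fields as (nVertLevels, nCells), and column_scheme is a made-up stand-in for a column physics routine):

```fortran
program column_call_sketch
  implicit none
  integer, parameter :: nVertLevels = 3, nCells = 4
  real :: theta(nVertLevels, nCells)     ! MPAS layout: vertical index first
  integer :: iCell

  theta = 300.0
  do iCell = 1, nCells
    call column_scheme(theta(:, iCell))  ! hand each column over directly;
  end do                                 ! no (i,k,j) *_p scratch copy needed

contains
  subroutine column_scheme(t)
    real, intent(inout) :: t(:)
    t = t + 0.1                          ! stands in for real column physics
  end subroutine column_scheme
end program column_call_sketch
```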
One step closer on reproducibility. After a little reorg, the model in one test reproduces for four timesteps. In another test, with double the radiation frequency (to see whether it was something in the radiation), the model goes five timesteps.

The positive thing is that there is one grid cell in each test that has different surface fields like qfx/lh. They are not the same cell, but both are vegtyp=15, i.e., permanent snow/ice. That is suspicious, so hopefully it will be easy to track down.
For out-of-the-box mesoscale_reference and convection_permitting suites, the GSL code does not reproduce the NCAR code. For example, the CONUS mesoscale_reference testcase (using code fixing #65) shows differences after 1 hour (5 timesteps), and even after one timestep.