
Accelerated RFC to place PlanetiQ RO data in monitor mode #2951

Closed · 10 tasks done
RussTreadon-NOAA opened this issue Sep 24, 2024 · 37 comments
Labels: production update (Processing update in production)

@RussTreadon-NOAA (Contributor) commented Sep 24, 2024

Description

PlanetiQ data has added noise. This issue is opened in case we decide to place this data in monitor mode until we can examine it more fully.

Target version

v16.3.19

Expected workflow changes

The only file changed is global_convinfo.txt. This requires a new GSI tag and a new GFS tag.

Tasks

  • Create release branch
  • Make workflow changes for upgrade in release branch (add additional checklist items as needed)
  • Create release notes
  • Cut hand-off tag for CDF - EMC-v16.3.19
  • Submit CDF to NCO
  • Implementation into operations complete
  • Merge release branch into operational branch
  • Cut version tag from operational branch
  • Release new version tag
  • Announce to users
@RussTreadon-NOAA added the production update (Processing update in production) and triage labels Sep 24, 2024
@KateFriedman-NOAA removed the triage label Sep 24, 2024
@KateFriedman-NOAA (Member)

Shuffled other current release branches around and recut release/gfs.v16.3.19 from dev/gfs.v16 for this ARFC.

@KateFriedman-NOAA (Member)

@RussTreadon-NOAA sounds like this may or may not happen? Let me know the status once a decision is made.

Also, let me know the new GSI tag name when/if created. I will prep release notes and be ready to update the GSI tag in the release branch when/if available. I'll ask you to review a PR into the release branch from my fork once ready.

FYI, we will be handing off a new GFS version that is a companion to the obsproc/v1.3 update that is being handed off in early October. Hopefully this issue can be decided on before we need to hand off for that.

@KateFriedman-NOAA (Member)

Additional update from @RussTreadon-NOAA in #2591 (#2591 (comment)):

DA team members are meeting this afternoon to make a final decision as to what course of action to take. 
Submitting an ARFC to place PlanetiQ data in monitor mode is one option. Another option is to let the 
PlanetiQ data flow into and be assimilated by the GFS. This latter option requires no EIB action.

This issue is opened as a "heads up" to EIB in case the DA team goes with the first option.

Thanks for the update @RussTreadon-NOAA ! Will see what the DA team decides and go from there.

@RussTreadon-NOAA (Contributor, Author)

DA team decided to move forward with the ARFC to place PlanetiQ gpsro data in monitor mode. This is a one-line change to global_convinfo.txt:

diff -r /lfs/h1/ops/prod/packages/gfs.v16.3.18/fix/fix_gsi/global_convinfo.txt fix/global_convinfo.txt
260c260
<  gps      267    0    1     3.0      0      0      0  10.0  10.0   1.0  10.0  0.000000     0    0.     0.      0    0.     0.    0    0
---
>  gps      267    0   -1     3.0      0      0      0  10.0  10.0   1.0  10.0  0.000000     0    0.     0.      0    0.     0.    0    0
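
In the convinfo format, the fourth field on a line is the iuse flag: 1 means assimilate, -1 means monitor (data are read and fit but not used in the analysis). A quick sanity check of the change (a sketch; the file path is taken from the diff above):

# confirm the iuse flag for gps type 267 is now -1 (monitor)
awk '$1 == "gps" && $2 == 267 {print $4}' fix/global_convinfo.txt    # expect: -1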

fix submodule updated in GSI branch release/gfsda.v16. Done at 7aabf37

@RussTreadon-NOAA (Contributor, Author)

GSI branch release/gfsda.v16 at 7aabf37 tagged as gfsda.v16.3.19

@RussTreadon-NOAA (Contributor, Author)

@KateFriedman-NOAA , let me know what you need from me and I'll work to get it to you as quickly as possible.

KateFriedman-NOAA added a commit to KateFriedman-NOAA/global-workflow that referenced this issue Sep 24, 2024
@KateFriedman-NOAA (Member)

@KateFriedman-NOAA , let me know what you need from me and I'll work to get it to you as quickly as possible.

@RussTreadon-NOAA please review PR #2952. Once the release branch in the auth repo is ready I can cut a hand-off tag. Who is submitting the CDF?

@RussTreadon-NOAA (Contributor, Author)

I'm fine with submitting the ARFC but don't know what has been current practice. How have we been handling the CDF for recent implementations?

KateFriedman-NOAA added a commit to KateFriedman-NOAA/global-workflow that referenced this issue Sep 24, 2024
KateFriedman-NOAA added a commit that referenced this issue Sep 24, 2024
… ARFC (#2952)

ARFC for PlanetiQ data going into monitor mode

New GSI tag with updated global_convinfo.txt.

Refs #2951
@KateFriedman-NOAA (Member)

@RussTreadon-NOAA Tag cut for CDF: EMC-v16.3.19

I'm fine with submitting the ARFC but don't know what has been current practice. How have we been handling the CDF for recent implementations?

The last few upgrades were sudden or upstream, so we didn't do CDFs and NCO manually made the new version installs with local edits. The upgrades before that had CDFs, submitted either by me or by the component POC (e.g., Andrew for a few). I'd do a regular CDF since we're initiating it and choose the "Accelerated change" option. Let me know if I can assist with the CDF.

@RussTreadon-NOAA (Contributor, Author)

@KateFriedman-NOAA , I'm touching base with Daryl to see how he would like to proceed since we are in CWD.

@RussTreadon-NOAA (Contributor, Author)

As I wait for his reply (he is on travel), I am stepping through the Implementation Instructions in Release_Notes.md in my directory on Cactus. It's been a long time since I built a gfs.v16 package.

@KateFriedman-NOAA (Member)

Sounds good @RussTreadon-NOAA, thanks! I have to step away from my desk now but will check back periodically. Let me know if you need anything else from me for this stage. I'll take care of the last six checklist items above once this goes into ops.

@RussTreadon-NOAA (Contributor, Author)

Thank you @KateFriedman-NOAA. I appreciate your quick action this afternoon. The ball is in my court now. I'll keep you in the loop as things progress.

@RussTreadon-NOAA (Contributor, Author)

@KateFriedman-NOAA: tag EMC-v16.3.19 has been installed on Cactus in /lfs/h2/emc/da/noscrub/russ.treadon/git/global-workflow/gfs.v16.3.19. A diff of directories in gfs.v16.3.19 and /lfs/h1/ops/prod/packages/gfs.v16.3.18 finds unexpected differences.

The unexpected diffs are related to WAFS.

ecf/

russ.treadon@clogin08:/lfs/h2/emc/da/noscrub/russ.treadon/git/global-workflow/gfs.v16.3.19> diff -r /lfs/h1/ops/prod/packages/gfs.v16.3.18/ecf ecf |head
Only in /lfs/h1/ops/prod/packages/gfs.v16.3.18/ecf/scripts/enkfgdas/post: jenkfgdas_post_f003.ecf
Only in /lfs/h1/ops/prod/packages/gfs.v16.3.18/ecf/scripts/enkfgdas/post: jenkfgdas_post_f004.ecf
Only in /lfs/h1/ops/prod/packages/gfs.v16.3.18/ecf/scripts/enkfgdas/post: jenkfgdas_post_f005.ecf
Only in /lfs/h1/ops/prod/packages/gfs.v16.3.18/ecf/scripts/enkfgdas/post: jenkfgdas_post_f006.ecf

jobs/

diff -r /lfs/h1/ops/prod/packages/gfs.v16.3.18/jobs/JGFS_ATMOS_WAFS_BLENDING_0P25 jobs/JGFS_ATMOS_WAFS_BLENDING_0P25
97c97
< export SLEEP_TIME=1500  
---
> export SLEEP_TIME=1500

The diff in JGFS_ATMOS_WAFS_BLENDING_0P25 is extra whitespace at the end of the SLEEP_TIME line in the operational gfs.v16.3.18 file.
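
One way to keep whitespace-only differences like this out of the comparison is GNU diff's -Z (--ignore-trailing-space) option, e.g.:

# re-run the jobs/ comparison while ignoring trailing whitespace
diff -r -Z /lfs/h1/ops/prod/packages/gfs.v16.3.18/jobs jobs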

scripts/

diff -r /lfs/h1/ops/prod/packages/gfs.v16.3.18/scripts/exgfs_atmos_wafs_blending_0p25.sh scripts/exgfs_atmos_wafs_blending_0p25.sh
73c73
<           echo "UK WAFS GRIB2 file  $COMINuk/EGRR_WAFS_0p25_*_unblended_${PDY}_${cyc}z_t${ffhr}.grib2  not found" 
---
>           echo "UK WAFS GRIB2 file  $COMINuk/EGRR_WAFS_0p25_*_unblended_${PDY}_${cyc}z_t${ffhr}.grib2 not found"
311c311,313
<   cat $COMOUT/${RUN}.t${cyc}z.wafs_blend_0p25_usonly.emailbody | mail.py -s "$subject" $maillist -v
---
>   cat $COMOUT/${RUN}.t${cyc}z.wafs_blend_0p25_usonly.emailbody | mail.py -s "$subject" $maillis
> t -v
> 

Does tag EMC-v16.3.19 contain changes from g-w issue #2591? Do we want WAFS changes in tag EMC-v16.3.19?

@RussTreadon-NOAA (Contributor, Author)

I also see a difference in prepobs_run_ver when comparing operational gfs.v16.3.18 and tag EMC-v16.3.19:

russ.treadon@clogin08:/lfs/h2/emc/da/noscrub/russ.treadon/git/global-workflow/gfs.v16.3.19> diff /lfs/h1/ops/prod/packages/gfs.v16.3.18/versions/ versions/
diff /lfs/h1/ops/prod/packages/gfs.v16.3.18/versions/hera.ver versions/hera.ver
6c6
< export prepobs_run_ver=1.0.1
---
> export prepobs_run_ver=1.1.0
diff /lfs/h1/ops/prod/packages/gfs.v16.3.18/versions/orion.ver versions/orion.ver
6c6
< export prepobs_run_ver=1.0.1
---
> export prepobs_run_ver=1.1.0
diff /lfs/h1/ops/prod/packages/gfs.v16.3.18/versions/run.ver versions/run.ver
1,2c1,2
< export version=v16.3.16
< export gfs_ver=v16.3.16
---
> export version=v16.3.19
> export gfs_ver=v16.3.19
Only in /lfs/h1/ops/prod/packages/gfs.v16.3.18/versions/: run.ver.para
diff /lfs/h1/ops/prod/packages/gfs.v16.3.18/versions/wcoss2.ver versions/wcoss2.ver
6c6
< export prepobs_run_ver=1.0.1
---
> export prepobs_run_ver=1.1.0

EMC-v16.3.19 sets prepobs_run_ver to 1.1.0 whereas operations has 1.0.1.

@KateFriedman-NOAA (Member)

@RussTreadon-NOAA those are all expected differences due to how NCO made those installs with manual local updates.

Does tag EMC-v16.3.19 contain changes from g-w issue #2591? Do we want WAFS changes in tag EMC-v16.3.19?

No, you're seeing the sudden update in ops early last week to adjust the wait time for UKMet data and update an email message in WAFS within the GFS package, resulting in v16.3.18. See https://github.com/NOAA-EMC/global-workflow/releases/tag/gfs.v16.3.18 and NOAA-EMC/WAFS@gfs_wafs.v6.3.2...gfs_wafs.v6.3.3

Good catch: it looks like we have an extra line break in the EMC WAFS copy that shouldn't be there, but NCO has it correct, which is what matters. We are going to remove WAFS from the GFS soon, but I will fix that extra line break and recut the WAFS tag so a full install by NCO would be okay.

EMC-v16.3.19 sets prepobs_run_ver to 1.1.0 whereas operations has 1.0.1.

Our prepobs_run_ver is only used in dev mode; obsproc sets and uses prepobs internally. You can see that we updated the version for our use, but NCO didn't since the OPS GFS doesn't use that variable.

@RussTreadon-NOAA (Contributor, Author)

Thank you @KateFriedman-NOAA. Good to know that the EMC-v16.3.19 differences noted above with respect to /lfs/h1/ops/prod/packages/gfs.v16.3.18 can be ignored.

@KateFriedman-NOAA (Member)

@YaliMao-NOAA FYI, I have fixed the email line that Russ noticed during a compare and recut the latest WAFS tag. I did it quickly so the WAFS tag is good in case this ARFC or a subsequent pre-WAFS-removal implementation does a full fresh install in ops. Apologies if I overstepped in the WAFS repo. I re-released the latest tag with the fix: https://github.com/NOAA-EMC/WAFS/releases/tag/gfs_wafs.v6.3.3

@RussTreadon-NOAA (Contributor, Author)

WCOSS2 (Cactus) tests

Use operational gfs.v16.3.18 and EMC-v16.3.19 in a stand-alone GSI run script. Run the script four times (a schematic driver follows the list):

  1. use gfs.v16.3.18 for gdas 20240923 12Z with operational gpsro dump
  2. use EMC-v16.3.19 for gdas 20240923 12Z with operational gpsro dump
  3. use gfs.v16.3.18 for gdas 20240917 00Z with test gpsro dump containing PlanetiQ along with other operational gpsro data
  4. use EMC-v16.3.19 for gdas 20240917 00Z with test gpsro dump containing PlanetiQ along with other operational gpsro data
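
Schematically (the run-script name and its arguments below are hypothetical placeholders, not the actual script):

# hypothetical driver for the four runs above
for pkg in gfs.v16.3.18 EMC-v16.3.19; do
  ./run_gsi.sh "$pkg" 2024092312 ops_dump     # operational gpsro dump (runs 1 and 2)
  ./run_gsi.sh "$pkg" 2024091700 test_dump    # test dump with PlanetiQ data (runs 3 and 4)
done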

The analysis results from 1 and 2 are bitwise identical. This is expected because PlanetiQ gpsro data is not in the operational gpsro dump file.

The analysis results from 3 and 4 differ. This is expected because PlanetiQ gpsro data is in the test gpsro dump file. gfs.v16.3.18 assimilates PlanetiQ (type 267) gpsro data.

 o-g 01     gps asm 267 0000 count         24        779       1388       4965       7367       4774       3172       3807       4543       5821       8821      84281
 o-g 01     gps asm 267 0000  bias  0.133E+01  0.235E+00 -0.354E+00 -0.328E+00 -0.227E+00 -0.150E+00 -0.254E+00 -0.167E+00 -0.539E-01 -0.999E-01  0.337E-01 -0.833E-01
 o-g 01     gps asm 267 0000   rms  0.519E+01  0.471E+01  0.470E+01  0.465E+01  0.294E+01  0.149E+01  0.136E+01  0.138E+01  0.139E+01  0.155E+01  0.170E+01  0.220E+01
 o-g 01     gps asm 267 0000  cpen  0.714E+00  0.759E+00  0.855E+00  0.115E+01  0.126E+01  0.911E+00  0.837E+00  0.820E+00  0.816E+00  0.845E+00  0.884E+00  0.631E+00
 o-g 01     gps asm 267 0000 qcpen  0.714E+00  0.759E+00  0.855E+00  0.115E+01  0.126E+01  0.911E+00  0.837E+00  0.820E+00  0.816E+00  0.845E+00  0.884E+00  0.631E+00

whereas EMC-v16.3.19 monitors PlanetiQ (type 267) gpsro data

 o-g 01     gps mon 267 0000 count         24        779       1388       4965       7367       4774       3172       3807       4543       5821       8821      84281
 o-g 01     gps mon 267 0000  bias  0.133E+01  0.235E+00 -0.354E+00 -0.328E+00 -0.227E+00 -0.150E+00 -0.254E+00 -0.167E+00 -0.539E-01 -0.999E-01  0.337E-01 -0.833E-01
 o-g 01     gps mon 267 0000   rms  0.519E+01  0.471E+01  0.470E+01  0.465E+01  0.294E+01  0.149E+01  0.136E+01  0.138E+01  0.139E+01  0.155E+01  0.170E+01  0.220E+01
 o-g 01     gps mon 267 0000  cpen  0.714E+00  0.759E+00  0.855E+00  0.115E+01  0.126E+01  0.911E+00  0.837E+00  0.820E+00  0.816E+00  0.845E+00  0.884E+00  0.631E+00
 o-g 01     gps mon 267 0000 qcpen  0.714E+00  0.759E+00  0.855E+00  0.115E+01  0.126E+01  0.911E+00  0.837E+00  0.820E+00  0.816E+00  0.845E+00  0.884E+00  0.631E+00

EMC-v16.3.19 is the behavior we want in operations.

Tagging @XuanliLi-NOAA, @HaixiaLiu-NOAA , and @dtkleist for awareness.

@KateFriedman-NOAA (Member)

@RussTreadon-NOAA checking in, has the CDF for this ARFC been submitted yet?

@RussTreadon-NOAA (Contributor, Author)

Sorry for not notifying you, @KateFriedman-NOAA. Yes, the ARFC was submitted Tuesday afternoon, 9/24/2024. NCO reached out to Daryl and me for more information. The ARFC has been assigned to Wei Wei. An implementation date has not yet been set, but it could be implemented as early as Monday, 9/30/2024.

@KateFriedman-NOAA (Member)

Sounds good, no worries, thanks @RussTreadon-NOAA! Please let me know if the implementation date changes, thanks!

@RussTreadon-NOAA (Contributor, Author)

@KateFriedman-NOAA , NCO implemented the ARFC for gfs.v16.3.19 just prior to the start of the 20240930 12Z gfs cycle.

@KateFriedman-NOAA (Member)

Thanks for letting me know @RussTreadon-NOAA ! I will work on the post-implementation tasks to wrap up this issue. Will ask you to review the merge of the release branch for this into the dev/gfs.v16 branch.

@RussTreadon-NOAA (Contributor, Author)

FYI @KateFriedman-NOAA , NCO implemented gfs.v16.3.19 as a copy of gfs.v16.3.18 with changes to fix/fix_gsi/global_convinfo.txt and ecf/scripts/enkfgdas/analysis/create/jenkfgdas_update.ecf.

The wall time for enkf.x was increased by 5 minutes from 30 to 35 in jenkfgdas_update.ecf. This change is in response to recent instances of enkfgdas_update running long. For example,

2024/09/18 1840Z nkg ======================================================
aborted: /prod/primary/12/gfs/v16.3/enkfgdas/analysis/create/jenkfgdas_update
log; /lfs/h1/ops/prod/output/20240918/enkfgdas_update_12.o189867755
Wall clock exceeded - Reran successfully
delayed rap job.
Which caused downstream failure:
aborted: /prod/primary/18/lmp/v2.6/20z/00/jlmp_prep
which was waiting for rap.t19z.wrfprsf00.grib - Rerun successful

and

2024/09/16 1850 GM ========================================================

LOG:[18:40:32 16.9.2024]  aborted: /prod/primary/12/gfs/v16.3/enkfgdas/analysis/create/jenkfgdas_update
log: /lfs/h1/ops/prod/output/20240916/enkfgdas_update_12.o154935001
workdir: /lfs/f1/ops/prod/tmp/enkfgdas_update_12.154935001.cbqs01

nodes used for job= (35)
nid001118 nid001119 nid001126 nid001127 nid001128 nid001129 nid001131 nid001134 nid001135 nid001138
nid001144 nid001151 nid001161 nid001167 nid001169 nid001170 nid001174 nid001198 nid001201 nid001202
nid001203 nid001211 nid001214 nid001218 nid001223 nid001224 nid001226 nid001229 nid001231 nid001234
nid001235 nid001236 nid001237 nid001239 nid001240

4121:=>> PBS: job killed: walltime 1812 exceeded limit 1800
didn't find any of the nodes used for the job "offline" or "down" in pbsnodes
rerun completed in 27mins.

@XuanliLi-NOAA

@RussTreadon-NOAA: For my PIQ experiment, is it better to git clone and build v16.3.19 once the data flow begins, or is it okay to use v16.3.18?
Also, I encountered an error when trying to build ufs_utils for chgres_cube. Does it work on WCOSS2? I only have emcsfc_ice_blend, emcsfc_snow2mdl, and global_cycle in the exec directory.

@RussTreadon-NOAA (Contributor, Author)

@XuanliLi-NOAA, gfs.v16.3.18 and gfs.v16.3.19 differ in only three files:

diff -r /lfs/h1/ops/prod/packages/gfs.v16.3.18/ecf/scripts/enkfgdas/analysis/create/jenkfgdas_update.ecf /lfs/h1/ops/prod/packages/gfs.v16.3.19/ecf/scripts/enkfgdas/analysis/create/jenkfgdas_update.ecf
6c6
< #PBS -l walltime=00:30:00
---
> #PBS -l walltime=00:35:00
diff -r /lfs/h1/ops/prod/packages/gfs.v16.3.18/fix/fix_gsi/global_convinfo.txt /lfs/h1/ops/prod/packages/gfs.v16.3.19/fix/fix_gsi/global_convinfo.txt
260c260
<  gps      267    0    1     3.0      0      0      0  10.0  10.0   1.0  10.0  0.000000     0    0.     0.      0    0.     0.    0    0
---
>  gps      267    0   -1     3.0      0      0      0  10.0  10.0   1.0  10.0  0.000000     0    0.     0.      0    0.     0.    0    0
diff -r /lfs/h1/ops/prod/packages/gfs.v16.3.18/versions/run.ver /lfs/h1/ops/prod/packages/gfs.v16.3.19/versions/run.ver
1,2c1,2
< export version=v16.3.16
< export gfs_ver=v16.3.16
---
> export version=v16.3.19
> export gfs_ver=v16.3.19

That said, if I am running a test with operations as the control, I like to use exactly what operations is using apart from the changes I am testing. I would use gfs.v16.3.19.
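
If you only want the names of the files that differ, rather than the full diffs, GNU diff's -q (brief) mode does that recursively, e.g.:

# list only which files differ between the two package installs
diff -rq /lfs/h1/ops/prod/packages/gfs.v16.3.18 /lfs/h1/ops/prod/packages/gfs.v16.3.19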

Regarding chgres_cube, I see what you are saying. It took a bit of poking around, but I think I see what's going on. File gfs.v16.3.19/sorc/ufs_utils.fd/sorc/ufs_build contains:

#
# ***** configuration of fv3gfs build *****

 Building chgres (chgres) .............................. no
 Building chgres_cube (chgres_cube) .................... no
 Building nst_tf_chg (nst_tf_chg) ...................... no
 Building orog (orog) .................................. no
 Building cycle (cycle) ................................ yes
 Building emcsfc (emcsfc) .............................. yes
 Building fre-nctools (nctools) ........................ no
 Building sfc_climo_gen (sfc_climo_gen) ................ no

# -- END --

Only the cycle and emcsfc builds are turned on. This agrees with what I see in gfs.v16.3.19/sorc/logs/build_ufs_utils.log:

+ cd ufs_utils.fd/sorc
+ ./build_all_ufs_utils.sh
Creating logs folder
Creating ../exec folder
 .... Building cycle ....
 .... Building emcsfc ....

 .... Build system finished ....
+ exit
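
In principle one could flip those flags to build more utilities. A sketch, assuming build_all_ufs_utils.sh honors the yes/no settings in that cfg file (and, as the comments below show, this still fails on WCOSS2 for lack of modulefiles):

# hypothetical: switch the chgres_cube build from "no" to "yes", then rebuild
# (paths relative to the package sorc/ directory)
sed -i '/chgres_cube (chgres_cube)/ s/ no$/ yes/' ufs_utils.fd/sorc/ufs_build
cd ufs_utils.fd/sorc && ./build_all_ufs_utils.sh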

KateFriedman-NOAA added a commit to KateFriedman-NOAA/global-workflow that referenced this issue Sep 30, 2024
NCO increased the walltime for the enkfgdas_update
job by 5mins (from 30mins to 35mins)

Refs NOAA-EMC#2951
@XuanliLi-NOAA

@RussTreadon-NOAA Thank you for answering my questions. I tried to turn on chgres in the cfg file, but it's looking for ../modulefiles/fv3gfs/global_chgres.wcoss2, which is missing. The available wcoss2 module files are emcsfc_ice_blend, emcsfc_snow2mdl, and global_cycle.

KateFriedman-NOAA added a commit that referenced this issue Sep 30, 2024
NCO increased the walltime for the `enkfgdas_update` job by 5mins (from
30mins to 35mins) ahead of implementation today.

Refs #2951
@RussTreadon-NOAA (Contributor, Author)

@XuanliLi-NOAA , we need to contact @GeorgeGayno-NOAA .

As you note, sorc/ufs_utils.fd/modulefiles does not contain wcoss2 modulefiles for several ufs_utils programs. wcoss2 modulefiles are only available for:

modulefile.global_emcsfc_ice_blend.wcoss2.lua
modulefile.global_emcsfc_snow2mdl.wcoss2.lua

gfs.v16.3.19 uses ops-gfsv16.3.0. This branch is quite old. A newer branch may be able to build chgres_cube.

Alternatively, you can build chgres_cube as part of ufs_utils.fd in g-w develop. g-w develop points at ufs_utils.fd @ 06eec5b. I checked a recent build of g-w on WCOSS2. I see chgres_cube in sorc/ufs_utils.fd/exec. Note that you won't see chgres_cube in $HOMEgfs/exec because sorc/link_workflow.sh does not include chgres_cube in the list of ufs_utils executables to link.
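
A rough sketch of that route (the --recursive submodule pull and the build_all.sh entry point are assumptions based on the current g-w and UFS_UTILS layouts, not verified against the pinned hash):

# hypothetical: build chgres_cube from the UFS_UTILS copy pinned by g-w develop
git clone --recursive https://github.com/NOAA-EMC/global-workflow gw
cd gw/sorc/ufs_utils.fd
./build_all.sh           # assumed top-level UFS_UTILS build script
ls exec/chgres_cube      # executable lands here, not in $HOMEgfs/exec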

@XuanliLi-NOAA

@RussTreadon-NOAA, thank you so much! I'll give it a try with the g-w develop branch.

@GeorgeGayno-NOAA (Contributor)

@XuanliLi-NOAA , we need to contact @GeorgeGayno-NOAA .

As you note, sorc/ufs_utils.fd/modulefiles does not contain wcoss2 modulefiles for several ufs_utils programs. wcoss2 modulefiles are only available for:

modulefile.global_emcsfc_ice_blend.wcoss2.lua
modulefile.global_emcsfc_snow2mdl.wcoss2.lua

gfs.v16.3.19 uses ops-gfsv16.3.0. This branch is quite old. A newer branch may be able to build chgres_cube.

The ops-gfsv16.3.0 branch was created only to support the OPS GFS. Since the GFS does not use chgres_cube, the build for that program was not updated for WCOSS2.

Alternatively, you can build chgres_cube as part of ufs_utils.fd in g-w develop. g-w develop points at ufs_utils.fd @ 06eec5b. I checked a recent build of g-w on WCOSS2. I see chgres_cube in sorc/ufs_utils.fd/exec. Note that you won't see chgres_cube in $HOMEgfs/exec because sorc/link_workflow.sh does not include chgres_cube in the list of ufs_utils executables to link.

Yes, use the latest version of ufs_utils as used by the g-w develop branch.

@XuanliLi-NOAA

@GeorgeGayno-NOAA, thanks for the information.

@RussTreadon-NOAA (Contributor, Author)

This makes sense @GeorgeGayno-NOAA . Implementation packages should only build applications to be used in operations.

KateFriedman-NOAA added a commit that referenced this issue Oct 1, 2024
An ARFC in NCEP operations placed PlanetiQ RO data in monitor mode ahead
of the 12z cycle on September 30th. A new GSI tag (`gfsda.v16.3.19`)
with an updated `global_convinfo.txt` file was provided. The walltime for
the `enkfgdas_update` job was also increased from 30 mins to 35 mins.

Refs #2951
@KateFriedman-NOAA (Member)

Release branch merged into dev/gfs.v16 branch, released, and announced to users. Closing as complete.

@RussTreadon-NOAA (Contributor, Author)

FYI: A check of operational gdas gsistat files shows PlanetiQ RO data flowing through the system in monitor mode effective 20241002 12Z:

russ.treadon@clogin02:/lfs/h1/ops/prod/com/gfs/v16.3> grep "gps mon 267 0000 count" gdas.2024*/*/atmos/g*s.t*z.gsistat  |grep "o-g 01"                                                                              
gdas.20241002/12/atmos/gdas.t12z.gsistat: o-g 01     gps mon 267 0000 count          6        223        408       1507       2194       1475        952       1118       1384       1722       2666      25321
gdas.20241002/18/atmos/gdas.t18z.gsistat: o-g 01     gps mon 267 0000 count         44        830       1316       4851       7192       4801       3086       3662       4455       5659       8717      82130
gdas.20241003/00/atmos/gdas.t00z.gsistat: o-g 01     gps mon 267 0000 count         26        805       1560       5237       7536       4953       3279       3858       4743       5943       9159      87634
gdas.20241003/06/atmos/gdas.t06z.gsistat: o-g 01     gps mon 267 0000 count         21        761       1395       4740       6715       4572       2941       3499       4273       5473       8344      79029

@XuanliLi-NOAA

Thanks @RussTreadon-NOAA for confirming that.

@KateFriedman-NOAA (Member)

From the October 4th RFC memo:

RFC 13303 - On WCOSS2, upgraded the GFS to v16.3.19. The reason for this
change is the following: provider PlanetiQ offers two versions of their data: basic
and premium. Premium is high quality data. Basic is a degraded version of the
premium data. EMC learned last week that NESDIS purchased the basic
(degraded) data. This ARFC prevents the GFS from assimilating the degraded
PlanetiQ GPS Radio Occultation (GPS-RO) data when it flows into the GFS. This
update also increased the walltime limit for the jenkfgdas_update job from 30 min
to 40 min to address runtime variation. Implemented on September 30 from
1330Z to 1615Z.
