JEDI-T2O for gdas-validation is missing the satbias files from the previous cycle #98

Open
emilyhcliu opened this issue Nov 16, 2023 · 15 comments

Comments

@emilyhcliu

emilyhcliu commented Nov 16, 2023

@CoryMartin-NOAA and @RussTreadon-NOAA
When running the 2021080100 gdasatmanlinit job, the processing looks for the following satbias files from the previous cycle in 20210731/18/atmos/analysis/atmos:
gdas.t18z.atms_npp.satbias.nc4 (stores the bias coefficients, variable bias_coefficients)
gdas.t18z.atms_npp.satbias_cov.nc4 (stores the background errors for the bias coefficients, variable bias_coeff_errors)
gdas.t18z.atms_npp.tlapse.txt

The input satbias files we used in the UFO evaluation (2021080100) are:
atms_npp_tlapmean_2021073118.txt
atms_npp_satbias_2021073118.nc4

We do not have atms_npp_satbias_cov_2021073118.nc4.
However, we have both bias_coefficients and bias_coeff_errors (the values from satbias_pc) in atms_npp_satbias_2021073118.nc4:

netcdf atms_npp_satbias_2021073118 {
dimensions:
        nchannels = 22 ;
        npredictors = 12 ;
variables:
        float bias_coeff_errors(npredictors, nchannels) ;
                bias_coeff_errors:_FillValue = -3.368795e+38f ;
        float bias_coefficients(npredictors, nchannels) ;
                bias_coefficients:_FillValue = -3.368795e+38f ;
        int channels(nchannels) ;
                channels:_FillValue = -2147483647 ;
        int nchannels(nchannels) ;
                nchannels:suggested_chunk_dim = 22LL ;
        int npredictors(npredictors) ;
                npredictors:suggested_chunk_dim = 12LL ;
        float number_obs_assimilated(nchannels) ;
                number_obs_assimilated:_FillValue = -3.368795e+38f ;
        string predictors(npredictors) ;
                string predictors:_FillValue = "" ;

Do we want bias_coefficients and bias_coeff_errors in the same file or separate files?

Our radiance YAML is configured to have bias_coefficients and bias_coeff_errors in separate files.
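
(For reference, a hypothetical split of the combined file into the two files the radiance YAML expects might look like the sketch below. It assumes the separate satbias and satbias_cov readers accept the same variable names as in the combined GSI-converter output, which is not confirmed here; variable names are copied from the ncdump above and the output filenames are illustrative.)

import netCDF4 as nc

def copy_subset(src_path, dst_path, keep_vars):
    # Copy the listed variables (plus all dimensions) from src_path into dst_path.
    with nc.Dataset(src_path) as src, nc.Dataset(dst_path, "w") as dst:
        for name, dim in src.dimensions.items():
            dst.createDimension(name, len(dim))
        for name, var in src.variables.items():
            if name not in keep_vars:
                continue
            # VLEN string variables (e.g. predictors) cannot take a fill value
            fill = None if var.dtype == str else getattr(var, "_FillValue", None)
            out = dst.createVariable(name, var.dtype, var.dimensions, fill_value=fill)
            out[:] = var[:]

# Metadata shared by both output files; names copied from the ncdump above
common = ["channels", "nchannels", "npredictors", "predictors", "number_obs_assimilated"]
src = "atms_npp_satbias_2021073118.nc4"
copy_subset(src, "atms_npp_satbias_coeff_2021073118.nc4", common + ["bias_coefficients"])
copy_subset(src, "atms_npp_satbias_cov_2021073118.nc4", common + ["bias_coeff_errors"])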

@CoryMartin-NOAA
Contributor

@emilyhcliu this is a quirk of the GSI converters... The current converter writes everything to one file, but JEDI will write it to 2 separate files. Does it work if we just symlink the _cov file to the satbias file?
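
(For reference, a minimal staging sketch of that symlink idea, with placeholder directory paths; whether JEDI accepts the combined file for both the satbias and satbias_cov reads is exactly what is being tested here.)

from pathlib import Path
import shutil

src_dir = Path("/path/to/UFO_eval/bc")                              # placeholder source directory
dst_dir = Path("/path/to/comrot/gdas.20210731/18/atmos/analysis")   # placeholder target directory

dst_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(src_dir / "atms_npp_satbias_2021073118.nc4", dst_dir / "gdas.t18z.atms_npp.satbias.nc4")
shutil.copy(src_dir / "atms_npp_tlapmean_2021073118.txt", dst_dir / "gdas.t18z.atms_npp.tlapse.txt")

# Point the expected _cov name at the combined file, which already holds bias_coeff_errors
cov = dst_dir / "gdas.t18z.atms_npp.satbias_cov.nc4"
if not cov.exists():
    cov.symlink_to(dst_dir / "gdas.t18z.atms_npp.satbias.nc4")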

@RussTreadon-NOAA
Contributor

My preference is to combine the information in

  • gdas.t18z.atms_npp.satbias.nc4
  • gdas.t18z.atms_npp.satbias_cov.nc4
  • gdas.t18z.atms_npp.tlapse.txt

into a single netcdf file. It's much easier to keep track of one file than three.

What's the history behind the three-file separation in JEDI?

@CoryMartin-NOAA
Contributor

There are separate read and write routines in UFO for the satbias and satbias_cov files, even though the formats are very similar.

I will note that, as part of the generalization of VarBC for aircraft, these file formats and the YAMLs for obs bias will change "soon", so we probably shouldn't put too much effort into engineering a solution for these files as they are now.

@CoryMartin-NOAA
Contributor

Do we have these files staged somewhere that we can manually copy for use, such as /work2/noaa/da/eliu/UFO_eval/data/gsi_geovals_l127/nofgat_aug2021/20231009/bc/*2021080100*?

@RussTreadon-NOAA
Contributor

atms_n20 with bias correction seems to work in fv3jedi_var.x. I used the same satbias.nc file as the input file in both the obs bias and obs bias covariance sections of the input YAML. As @CoryMartin-NOAA notes, we should probably hold off on tinkering with radiance bias correction I/O given the pending changes.

(My previous remark to you, @CoryMartin-NOAA, about strange increments was due to goes-16/17 amv & metop-a/b scatwnd. One or more of these produced unreasonable uv wind increments).

@CoryMartin-NOAA
Contributor

@RussTreadon-NOAA that is probably the linear obs operator issue that @emilyhcliu and I discovered earlier in the week.

@RussTreadon-NOAA
Contributor

Thanks for the reminder, @CoryMartin-NOAA .

Additional fv3jedi_var.x runs with different observation types present yield reasonable uv increments when processing ascatw_ascat_metop-a and ascatw_ascat_metop-b. Unreasonable uv increments occur when satwind_goes-16 and satwind_goes-17 are processed.

@CoryMartin-NOAA
Contributor

@RussTreadon-NOAA do your YAMLs have the linear obs operator section as shown in this PR? https://github.com/NOAA-EMC/GDASApp/pull/724/files If so, that's odd, since @emilyhcliu found reasonable increments.

@RussTreadon-NOAA
Contributor

Pretty sure I have Emily's change. It was merged into feature/gdas-validation and I did a git pull in my working copy. git status does not show any local modifications to satwind_goes-16.yaml or satwind_goes-17.yaml. Let me back up to prepatmiodaobs and run through atmanlrun.

@emilyhcliu
Author

@RussTreadon-NOAA @CoryMartin-NOAA I will repeat the satwind test and then add satwind + scatwind together.

Also, after manually adding the satbias, satbias_cov, and tlapse files for ATMS, the end-to-end ATMS run completed. I will investigate the details.

@RussTreadon-NOAA
Contributor

Reran prepatmiodaobs, atmanlinit, and atmanlrun. The only obs types processed by fv3jedi_var.x were satwind_goes-16 and satwind_goes-17. The initial CostJo looks OK:

 0: CostJo   : Nonlinear Jo(satwind_goes-16) = 7861.85, nobs = 329392, Jo/n = 0.0238678, err = 11.3996
  0: CostJo   : Nonlinear Jo(satwind_goes-17) = 10214.1, nobs = 429322, Jo/n = 0.0237912, err = 11.5809

The increment looks unrealistic

  0: Increment print | number of fields = 8 | cube sphere face size: C768
  0: eastward_wind                                | Min:-6.371890e+38 Max:+2.030937e-01 RMS:+1.687255e+35
  0: northward_wind                               | Min:-6.371890e+38 Max:+1.663539e-01 RMS:+1.687255e+35

@CoryMartin-NOAA
Contributor

can winds not move to the north and west at several orders of magnitude faster than the speed of light?????? :-)

@RussTreadon-NOAA
Contributor

RussTreadon-NOAA commented Nov 16, 2023 via email

@RussTreadon-NOAA
Contributor

satwind_goes-17.yaml contains

obs linear operator:
   name: VertInterp

obs and linear are in the wrong order. It should read

linear obs operator:
   name: VertInterp

satwind_goes-16.yaml already uses linear obs operator:, so only satwind_goes-17.yaml needed fixing. I made the above change to a working copy of satwind_goes-17.yaml.
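
(A quick, hypothetical way to check the remaining obs YAMLs for the same misordered key; the search directory is a guess at where the obs YAMLs live.)

from pathlib import Path

obs_dir = Path("parm/atm/obs")            # guess at the obs YAML location
for path in sorted(obs_dir.rglob("*.yaml")):
    text = path.read_text()
    if "obs linear operator:" in text:
        print(f"misordered key in {path}")
        # Uncomment to rename the key in place:
        # path.write_text(text.replace("obs linear operator:", "linear obs operator:"))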

Reran atmanlinit and atmanlrun with both goes-16 and goes-17 satwind processed. Observation stats before the solver look good (as they did before):

  0: Jo Observations Errors:
  0: Diagonal observation error covariance
  0: satwind_goes-16 nobs= 329392 Min=7.6, Max=14, RMS=11.3996
  0:
  0: Diagonal observation error covariance
  0: satwind_goes-17 nobs= 429322 Min=7.6, Max=14, RMS=11.5809
  0:
  0: End Jo Observations Errors
  0: CostJo   : Nonlinear Jo(satwind_goes-16) = 7861.85, nobs = 329392, Jo/n = 0.0238678, err = 11.3996
  0: CostJo   : Nonlinear Jo(satwind_goes-17) = 10214.1, nobs = 429322, Jo/n = 0.0237912, err = 11.5809
  0: CostJo   : Nonlinear Jo = 18075.9

Now the increments also look reasonable (single iteration with identity B)

  0: Increment print | number of fields = 8 | cube sphere face size: C768
  0: eastward_wind                                | Min:-2.016947e-01 Max:+1.931298e-01 RMS:+4.262270e-04
  0: northward_wind                               | Min:-1.652080e-01 Max:+2.280027e-01 RMS:+4.151106e-04

Interesting tidbit. The final Increment print table includes non-zero increments for cloud_liquid_ice and cloud_liquid_water

  0: Increment print | number of fields = 8 | cube sphere face size: C768
  0: eastward_wind                                | Min:-2.016947e-01 Max:+1.931298e-01 RMS:+4.262270e-04
  0: northward_wind                               | Min:-1.652080e-01 Max:+2.280027e-01 RMS:+4.151106e-04
  0: air_temperature                              | Min:+0.000000e+00 Max:+0.000000e+00 RMS:+0.000000e+00
  0: surface_pressure                             | Min:+0.000000e+00 Max:+0.000000e+00 RMS:+0.000000e+00
  0: specific_humidity                            | Min:+0.000000e+00 Max:+0.000000e+00 RMS:+0.000000e+00
  0: cloud_liquid_ice                             | Min:+0.000000e+00 Max:+1.618770e-20 RMS:+1.293217e-23
  0: cloud_liquid_water                           | Min:+0.000000e+00 Max:+1.474788e-19 RMS:+2.167418e-22
  0: ozone_mass_mixing_ratio                      | Min:+0.000000e+00 Max:+0.000000e+00 RMS:+0.000000e+00

The cloud increments are extremely small. Where do these non-zero increment values come from: a variable transform, a change in numerical precision, something else?

@RussTreadon-NOAA
Contributor

@emilyhcliu, I found it necessary to change fieldOfViewNumber in parm/ioda/bufr2ioda/bufr2ioda_atms.yaml from type: float to type: int. Without this change, fv3jedi_var.x aborted with:

  6: Exception:         source_column:  0
  6:    source_filename:        /work2/noaa/da/rtreadon/gdas-validation/global-workflow/sorc/gdas.cd/ioda/src/engines/ioda/include/ioda/Variables/Variable.h
  6:    source_function:        Variable_Implementation ioda::detail::Variable_Base<Variable_Implementation>::read(gsl::span<DataType>, const ioda::Selection &, const ioda::Selection &) const [with DataType = int; Marshaller = ioda::detail::Object_Accessor_Regular<int, int>; TypeWrapper = ioda::Types::GetType_Wrapper<int, 0>; Variable_Implementation = ioda::Variable]
  6:    source_line:    539
  6:
  6: Exception: oops::Variational<FV3JEDI, UFO and IODA observations> terminating...

After changing fieldOfViewNumber and sensorScanPosition in the ioda-format ATMS dump file, fv3jedi_var.x ran to completion.
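
(A small, hypothetical check of how those variables are stored in an existing ioda ATMS file; the MetaData group name and the filename are assumptions.)

import netCDF4 as nc

# Print the storage types of the two variables that triggered the ioda read exception
with nc.Dataset("gdas.t00z.atms_npp.nc4") as ds:      # hypothetical ioda obs file
    meta = ds.groups["MetaData"]
    for name in ("fieldOfViewNumber", "sensorScanPosition"):
        if name in meta.variables:
            print(name, meta.variables[name].dtype)
        else:
            print(name, "not found in MetaData")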
