
Icb into v2.5 #495

Open · wants to merge 25 commits into base: wiso-fesom2.5
Conversation

@ackerlar (Collaborator) commented Sep 7, 2023

No description provided.

@patrickscholz (Contributor)

Hey @ackerlar, can you check again why the standard test cases do not work with your changes (all checks have failed)? It is probably a problem with the GNU compiler, which is much pickier when something is not 100% conformant!

@JanStreffing (Collaborator)

Note that this was already the case for https://github.com/FESOM/fesom2/commits/wiso-fesom2.5 before the PR.

@JanStreffing (Collaborator)

The tests are looking for OASIS libraries. I think this can happen if you set the default value of the preprocessor flag for coupled FESOM to true. It should always default to false; esm_tools then sets it to true automatically when we install FESOM2 as part of a coupled model.
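For illustration, a minimal sketch (not the actual FESOM2 source) of how such a preprocessor guard typically works: the coupling code only references the OASIS3-MCT library when the macro is defined, so a stand-alone build never looks for it. The macro name __oasis and the subroutine name cpl_example_put are assumptions here.

#if defined(__oasis)
subroutine cpl_example_put()
    use mod_oasis            ! OASIS3-MCT module, only needed in the coupled build
    implicit none
    ! ... coupling-field exchange would go here ...
end subroutine cpl_example_put
#else
subroutine cpl_example_put()
    implicit none
    ! stand-alone build: no coupling, nothing to do
end subroutine cpl_example_put
#endif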

@JanStreffing (Collaborator)

It's getting further, but crashing on:

[ 61%] Building Fortran object src/CMakeFiles/fesom.dir/icb_dyn.F90.o
/__w/fesom2/fesom2/src/icb_dyn.F90:527:19:

  527 |    write(icbID,'(2I,12e15.7)') mype, istep,  &
      |                   1
Error: Nonnegative width required in format string at (1)
/__w/fesom2/fesom2/src/icb_dyn.F90:540:19:

  540 |    write(icbID,'(2I,12e15.7)') mype,istep,  &
      |                   1
Error: Nonnegative width required in format string at (1)
/__w/fesom2/fesom2/src/icb_dyn.F90:79:29:

   79 |  call iceberg_average_andkeel(mesh, partit, dynamics, uo_dz,vo_dz, uo_keel,vo_keel, T_dz,S_dz, T_keel,S_keel, depth_ib,iceberg_elem, ib)
      |                             1
Error: Explicit interface required for 'iceberg_average_andkeel' at (1): target argument
/__w/fesom2/fesom2/src/icb_dyn.F90:109:26:

  109 |  call iceberg_acceleration( mesh, partit, dynamics, ib, au_ib, av_ib, Ao, Aa, Ai, Ad,   &
      |                          1
Error: Explicit interface required for 'iceberg_acceleration' at (1): target argument
make[2]: *** [src/CMakeFiles/fesom.dir/build.make:772: src/CMakeFiles/fesom.dir/icb_dyn.F90.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:139: src/CMakeFiles/fesom.dir/all] Error 2
make: *** [Makefile:130: all] Error 2
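The first two errors come from the bare I edit descriptor: standard Fortran requires an explicit field width (Iw), and gfortran enforces this where some other compilers accept the extension. A minimal, self-contained sketch of the kind of fix (the width 8 is just an example, not the value used in the PR):

program format_width_demo
    implicit none
    integer      :: mype, istep
    real(kind=8) :: vals(12)
    mype  = 0
    istep = 1
    vals  = 0.0d0
    ! write(*,'(2I,12e15.7)')  mype, istep, vals   ! rejected by gfortran:
    !                                              ! "Nonnegative width required"
    write(*,'(2I8,12e15.7)') mype, istep, vals     ! explicit width is accepted everywhere
end program format_width_demo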

@patrickscholz (Contributor)

I think it is asking for an interface here, similar to what we use in many parts of the code, e.g.

module visc_filt_bcksct_interface
  interface
    subroutine visc_filt_bcksct(dynamics, partit, mesh)
      use mod_mesh
      USE MOD_PARTIT
      USE MOD_PARSUP
      USE MOD_DYN
      type(t_dyn)   , intent(inout), target :: dynamics
      type(t_partit), intent(inout), target :: partit
      type(t_mesh)  , intent(in)   , target :: mesh    
    end subroutine
  end interface
end module

I think what might happen is that when the compiler reaches these lines it does not yet know what subroutine iceberg_acceleration(...) is. Sometimes Fortran (I think the picky GNU compiler is the problem here) behaves very strangely in this respect. So you could create a module interface for that routine, which, via use that_module, tells the compiler explicitly what this subroutine looks like. Or you could try to put the calling routine subroutine iceberg_dyn(...) at the end of that file (option 1 should be the safer one).
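Following that pattern, a schematic sketch of what an interface module for iceberg_acceleration could look like. The intents, kinds, and the truncated argument list are assumptions here; only the derived-type arguments visible in the call above are spelled out, the remaining arguments would be declared the same way.

module iceberg_acceleration_interface
  interface
    subroutine iceberg_acceleration(mesh, partit, dynamics, ib, au_ib, av_ib)
      use mod_mesh
      USE MOD_PARTIT
      USE MOD_PARSUP
      USE MOD_DYN
      ! target attributes are what the "target argument" error asks for
      type(t_mesh)  , intent(in)   , target :: mesh
      type(t_partit), intent(inout), target :: partit
      type(t_dyn)   , intent(inout), target :: dynamics
      integer       , intent(in)            :: ib
      real(kind=8)  , intent(inout)         :: au_ib, av_ib
      ! ... Ao, Aa, Ai, Ad and the further arguments go here ...
    end subroutine
  end interface
end module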

Also, looking at the calling structure ...

subroutine iceberg_dyn(mesh, partit, ice, dynamics, ib, new_u_ib, new_v_ib, u_ib, v_ib, lon, lat, depth_ib, &
                       height_ib, length_ib, width_ib, iceberg_elem, &
                       mass_ib, Ci, Ca, Co, Cda_skin, Cdo_skin, &
                       rho_ice, rho_air, rho_h2o, P_sill, conc_sill, frozen_in, &
                       file1, file2, P_ib, conci_ib, dt_ib, lastsubstep, &
                       f_u_ib_old, f_v_ib_old, l_semiimplicit, &
                       semiimplicit_coeff, AB_coeff, file3, rho_icb)

... would it make sense to create a dedicated derived type for the iceberg, in which its variables are bundled together? Also to keep up the principle of refactoring and possible dwarf-irisation? (@dsidoren ?)
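For illustration, a minimal sketch of such a derived type; the name t_iceberg and the selection of members are assumptions read off the argument list above, not existing code. A call could then shrink to something like call iceberg_dyn(mesh, partit, ice, dynamics, berg, ...) with berg of type t_iceberg.

module mod_iceberg_sketch
  implicit none
  ! bundles per-iceberg state that is currently passed as a long list of scalars
  type t_iceberg
      real(kind=8) :: u_ib, v_ib              ! drift velocity
      real(kind=8) :: lon, lat                ! position
      real(kind=8) :: depth_ib, height_ib     ! geometry
      real(kind=8) :: length_ib, width_ib
      real(kind=8) :: mass_ib
      integer      :: iceberg_elem            ! host element in the mesh
      logical      :: frozen_in
  end type t_iceberg
end module mod_iceberg_sketch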

@trackow (Contributor) commented Sep 7, 2023

About the calling structure: I have the feeling this might be easier to do once it is part of the refactoring, so that we can check that results are bit-identical. It would require larger changes, I think, and the refactoring might diverge again over that time...

@ackerlar (Collaborator, Author) commented Sep 8, 2023

I put icb_elem, icb_dyn, and icb_step into modules and made some smaller changes. It now compiles with the GNU compiler for me.

@JanStreffing (Collaborator)

Good progress. I had a glance at the latest version, and I think the current error might occur because you start using OMP PARALLEL DO without opening the OMP block. In other subroutines there is something like:

!$OMP PARALLEL DEFAULT(SHARED) PRIVATE(n, nz, nzmax, nzmin, a, b, c, tr, cp, tp, m, zinv, dz, &
!$OMP                                         rsss, Ty, Ty1, c1, zinv1, zinv2, v_adv, zbar_n, z_n)
!$OMP DO

At the start.
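For reference, a self-contained sketch of the full structure (the array and loop bounds are placeholders, not FESOM variables): the !$OMP DO must sit inside an enclosing !$OMP PARALLEL region, and the loop indices go into the PRIVATE clause.

program omp_region_sketch
    implicit none
    integer      :: n, nz
    real(kind=8) :: field(10, 100)
    field = 0.0d0
!$OMP PARALLEL DEFAULT(SHARED) PRIVATE(n, nz)
!$OMP DO
    do n = 1, size(field, 2)
       do nz = 1, size(field, 1)
          field(nz, n) = field(nz, n) + 1.0d0   ! work on "node" n, "level" nz
       end do
    end do
!$OMP END DO
!$OMP END PARALLEL
    print *, 'sum =', sum(field)
end program omp_region_sketch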

@JanStreffing (Collaborator) commented Sep 11, 2023

The other thing that looked strange to me was the !$OMP BARRIER outside of any OMP DO:

https://github.com/ackerlar/fesom2/blob/caa21b76787f19c008f5f54198b21f8894771dc2/src/oce_ale_tracer.F90#L236C1-L236C14

@patrickscholz (Contributor)

@JanStreffing: The !$OMP BARRIER here is related to finalizing the halo exchange for all OMP threads, which has to happen outside of/after an !$OMP PARALLEL DO element or vertex loop.
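A schematic, self-contained sketch of that pattern (routine and variable names are placeholders, not the actual oce_ale_tracer.F90 code): with NOWAIT on the worksharing loop there is no implicit barrier, so the explicit !$OMP BARRIER guarantees every thread has finished its share before the halo exchange is completed.

program omp_barrier_sketch
    implicit none
    integer      :: n
    real(kind=8) :: field(1000)
    field = 0.0d0
!$OMP PARALLEL DEFAULT(SHARED) PRIVATE(n)
!$OMP DO SCHEDULE(STATIC)
    do n = 1, size(field)
       field(n) = real(n, kind=8)       ! each thread fills its share
    end do
!$OMP END DO NOWAIT
!$OMP BARRIER                           ! wait for all threads ...
!$OMP MASTER
    call fake_halo_exchange(field)      ! ... before one thread completes the "exchange"
!$OMP END MASTER
!$OMP END PARALLEL
    print *, field(1)
contains
    subroutine fake_halo_exchange(f)    ! stand-in for the real halo-exchange call
        real(kind=8), intent(inout) :: f(:)
        f(1) = f(size(f))               ! dummy "exchange"
    end subroutine fake_halo_exchange
end program omp_barrier_sketch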
