More tests for MHD #557
Probably @feathern already mentioned this, but in my experience the Jones (2011) MHD benchmark takes a very long time to run (several hours on 2,048 cores), so it might not be feasible to make a test out of it. The hydro case runs faster, but I think it also takes some time.
I agree that it is impossible to run the full benchmark as a test, but the purpose of the tests is different from that of the benchmarks. The tests document what kind of result Rayleigh produces after a short runtime (say 100 time steps at low/medium resolution), and we can add that result as a reference result to the test. It will not be a good benchmark result, but that is ok. The main purpose is that this result will change if someone changes something in Rayleigh's interior (like the PSI formulation or the reference states) and accidentally also changes existing functionality. The tests act as warning lights that something changed, and then we can decide whether that is acceptable (if we expected the change to affect the MHD cases) or unacceptable (if the change should be independent of the MHD cases). So I think they would still make good test cases; I just need to clean up the PR a bit and make sure the result is checked correctly.
@gassmoeller Ah I see, that makes sense. Would there be any issue, though, with the random initial condition? After digging through Jones (2011), he writes: "A number of different initial conditions were tried which resulted ..." But maybe by 100 time steps the initial state will be wiped out.
Oh, I didn't realize these cases included randomness. How are the random numbers in Rayleigh implemented? Are they reproducible? For ASPECT we had to switch to a different random number generator (namely https://cplusplus.com/reference/random/mt19937/) to reproducibly generate the same random numbers when starting from the same seed.
@gassmoeller I believe it just uses the Fortran "random_seed" and "random_number" intrinsics; see lines 386--388 of src/Physics/Initial_Conditions.F90. @feathern knows more, but my understanding is that for (e.g.) the random thermal field, Rayleigh generates random amplitudes and then weights them by a Gaussian in l-space---the field's spectral coefficients get assigned this weighted random amplitude, along with a random phase (see line 448). I feel like we could either tweak the benchmark to start with a different init type (say, the m=7 field we know will grow) or wait for the decaying modes to wash out. They might wash out pretty quickly, so we still might not need to run the full benchmark.
There is no randomness. Rayleigh has five primary benchmarks, as well as four additional benchmarks related to compositional convection that come from Breuer et al. 2010. Of the five primary benchmarks, which come from Christensen et al. 2001 (Boussinesq) and Jones et al. 2011 (anelastic), only one uses a random initial condition -- the unsteady dynamo described in Jones et al. 2011. The other benchmarks initialize the temperature or entropy fields with a combination of low-order, pure spherical harmonics multiplied by a specified radial function f(r) and typically superimposed onto a conductive temperature or entropy profile. At the moment, we use case 0 from Christensen et al. 2001, run for a short amount of time, to check whether the benchmark measurements (e.g., volume-averaged KE and drift frequency) yield the same values as they have for previous versions of the code run for a similarly short amount of time. We are now looking at adding the steady hydro and steady MHD cases of Jones et al. 2011 into the mix. None of these three use random initial conditions. When the case 0 benchmark is run and benchmark measurements are made at time X, well before equilibration, the resulting values are (remarkably) the same at the same time X, owing to how the initial conditions are specified.
@feathern @gassmoeller OK, I stand corrected! I must have been exclusively reading Jones's description of the unsteady dynamo benchmark. |
Actually, I'm confused! What I said is true for all EXCEPT the Jones 2011 steady MHD benchmark. We're fine for adding the hydro benchmark, but the steady MHD does indeed use init_type = 7 (random perturbations) for both the thermal field and the magnetic flux functions. We should probably use Christensen et al.'s case 1 as our MHD test, and then test both case 0 from that paper and the steady hydro case from Jones 2011. If we want to test the anelastic/MHD combination, we may need to devise a new initial condition that isn't random.
We should make tests out of the input examples:
input_examples/j2011_steady_hydro_minimal
input_examples/j2011_steady_mhd_minimal