
no burning at lowT, lower the default jina rate floor. #645

Open
Debraheem opened this issue May 13, 2024 · 19 comments · May be fixed by #647
Labels
rates-net Rates and net modules

Comments

Debraheem commented May 13, 2024

See: https://lists.mesastar.org/pipermail/mesa-users/2024-May/015169.html

See also https://docs.mesastar.org/en/release-r24.03.1/reference/star_job.html#jina-reaclib-min-t9

"set jina reaclib rates to zero for T9 <= this. if this control is <= 0,
then use the standard default from rates. need <= 3d-3 for pre-ms li7
burning if change this, must remove old cached rates from
data/rates_data/cache"

We currently have reaclib_min_T9 = 1d-2 (i.e. no reactions below 10 MK) as a MESA default, which misses be9 burning as well as li7 burning on the pre-ms, so these rates are effectively set to 0. This seems like an issue. Why don't we lower this to 1d-4 or something similar and see if it breaks anything? I think we should find a way to lower the floor if possible, or communicate more clearly to users when it should and shouldn't be lowered. I was not aware of this until Masanobu raised it. It directly influences pre-ms modeling, but lowering it could have unforeseen consequences we are not yet aware of.
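For reference, lowering the floor from an inlist would look roughly like the sketch below (the control name and the cache caveat come from the star_job docs quoted above; the value 1d-4 is just the trial value proposed here):

   &star_job
      ! lower the JINA reaclib rate floor from the rates default (T9 = 1d-2, i.e. 10 MK)
      ! to T9 = 1d-4; per the docs, the old cached rates in data/rates_data/cache
      ! must be removed after changing this
      jina_reaclib_min_T9 = 1d-4
   / ! end of star_job namelist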

Debraheem added the rates-net (Rates and net modules) label May 13, 2024
fxt44 commented May 13, 2024

let's do the experiment of lowering to at least 1d-3.

Debraheem linked a pull request May 13, 2024 that will close this issue
evbauer commented May 14, 2024

Does anybody know what the original motivation was for having this control in the first place?

fxt44 commented May 14, 2024

On May 16, 2015, at 3:49 PM, Bill Paxton [email protected] wrote:

I'm leaving this one to Frank to fix someday.
Meantime, I'm just providing a not-very-satisfactory workaround for the pre-ms and ms case.

As you can see from this lovely figure, we need to get down to 2e6 for Li and 4e5 for Deuterium.
But if we drop the lower logT limit for reaclib below 3e6 we run into very bad convergence problems during post ms burning.

Perhaps there's a nice general solution, but I don't see it now.

So the ad hoc nasty solution is to provide a control that lets you change the lower logT bound from the inlist.

when you change the bound, you need to clear out the rates cache (delete all from data/rates_data/cache).

I'm leaving the default unchanged (1e7). For pre-ms and ms evolution where you want to use a large net and also include Li and Deut burning, you'll have to explicitly reduce the limit by setting the new control 'jina_reaclib_min_T9' in &star_job. Note that the value is T9. So you'll want something like jina_reaclib_min_T9 = 4d-4 to get down to logT ~ 5.6.

This new control is in 7574. I'm running the test suite at the moment.

-Bill

fxt44 commented May 14, 2024

it's possible the problem no longer exists in 2024 ....

@Debraheem

Perhaps. Do you know what Bill meant by "very bad convergence problems"? Was there a specific test case where he demonstrated this?

fxt44 commented May 14, 2024

i know very well. at the time, models with nets that included the light isotopes and reaclib rates could struggle mightily to get to he burning and/or beyond. yes, there was a specific mesa-users case (choi's) being examined as the test vehicle, but that "test case" is long gone - in part because bill solved the immediate user's issue of returning the light isotopes in pre-ms to ms evolution, while simultaneously not perturbing the growing number of test suite case results for the tams and beyond. hence the opening punt by bill to fix it properly someday.

i suggest reducing reaclib_min_T9 to 1d-3, then 1d-4, then 1d-5 and seeing what the current test suite complains about at each reduction. offline, one could/should also test a few masses, say 1, 2, 5, 15, 25 msun with larger nets and a smaller reaclib_min_T9 to check if going from the pre-ms to the final fate presents any issues.

@Debraheem

Okay, the first round with reaclib_min_T9 = 1d-4 already passed on the whole test_suite, so that's good. I will test this locally with big nets across the masses.

Debraheem commented May 14, 2024

Testing general big networks locally, it's hard to even get onto the main sequence with reaclib_min_T9 = 1d-4: both the 20 Msun and 1 Msun models fail, using mesa_52.net and mesa_206.net respectively. The 20 Msun has a slightly easier time getting off the ground, so I relaxed the limit to reaclib_min_T9 = 1d-3, with which the 20 Msun appears to run but the 1 Msun still crashes. With reaclib_min_T9 = 1d-2, there is no issue.

My first thought here is that reaclib_min_T9 = 1d-2 is still necessary, and that this comes from the rates being noisy, or perhaps their derivatives being noisy at low T. I would imagine the solution will come from implementing something like auto_diff for rho-T in rates and net.

Debraheem linked a pull request May 14, 2024 that will close this issue
fxt44 commented May 14, 2024

so the issue is still with us. wow, ok. convergence issues are almost always an incompatibility between a function value saying "this way" and the derivative saying "that way".

this comes from the rates being noisy

let's check this. for the mesa_52.net with, say, reaclib_min_T9 = 1d-3, plot all of the reaction rates in the cache directory from log(T) = 6 to log(T) = 8. we're looking for any non-monotonic behavior.
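A minimal sketch of the kind of scan this suggests, assuming each cached rate has first been dumped to a plain-text two-column (logT, rate) table; this is standalone Fortran, not MESA code, and 'rate_table.txt' is a placeholder filename:

   program check_rate_monotonicity
      ! scan a two-column (logT, rate) table for wiggles over 6 <= logT <= 8
      implicit none
      integer, parameter :: nmax = 100000
      real(8) :: logT(nmax), r(nmax), dprev, dnext
      integer :: n, i, ios, nflags
      open(unit=10, file='rate_table.txt', status='old', action='read')
      n = 0
      do
         read(10, *, iostat=ios) logT(n+1), r(n+1)
         if (ios /= 0) exit
         n = n + 1
         if (n == nmax) exit
      end do
      close(10)
      nflags = 0
      do i = 2, n-1
         if (logT(i-1) < 6d0 .or. logT(i+1) > 8d0) cycle
         dprev = r(i) - r(i-1)
         dnext = r(i+1) - r(i)
         ! a sign change in the local slope marks a non-monotonic wiggle
         if (dprev*dnext < 0d0) then
            nflags = nflags + 1
            write(*,'(a,f8.4,a,es12.4)') 'non-monotonic near logT = ', logT(i), ', rate = ', r(i)
         end if
      end do
      if (nflags == 0) write(*,*) 'rate is monotonic over 6 <= logT <= 8'
   end program check_rate_monotonicity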

@Debraheem

One immediate point of reference: for the 1 Msun, look at the noise in enuc and eneu (see image) in the first couple of timesteps before the model crashes. logTc ~ 6.6. I'm using the 1Msun_pms_to_WD test_suite case here with no modifications.
[attached image: noise in enuc and eneu over the first few timesteps]

fxt44 commented May 14, 2024

ok, good. eps_nuc and eps_nu are quite low. the model is also fully convective, with fresh cold fuel being mixed into hotter regions. on the mass fractions, maybe limit the max to 1e-5 and drop the lower limit to maybe 1e-10 to see what is happening with the light isotopes.

@Debraheem

hmm, it seems there is no pgstar control for the y-axis limits /facepalm https://docs.mesastar.org/en/release-r24.03.1/reference/pgstar.html#summary-burn-window

fxt44 commented May 14, 2024

yea, unfortunate; try the abundance window.
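For example, something roughly like this in the pgstar inlist; a sketch only, and the Abundance window control names here are recalled from pgstar.defaults rather than taken from this thread, so double-check them against the docs:

   &pgstar
      Abundance_win_flag = .true.
      ! clamp the y-axis to the trace species: max ~1e-5, min ~1e-10 as suggested above
      Abundance_log_mass_frac_max = -5d0
      Abundance_log_mass_frac_min = -10d0
   / ! end of pgstar namelist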

fxt44 commented May 14, 2024

on the 1msun, 52 iso, what does the terminal say is limiting the timestep in what cell?

update - handled offline.

@Debraheem

https://lists.mesastar.org/pipermail/mesa-users/2024-May/015184.html

A likely candidate source of issues at low T. I have been meaning to improve the rate table interpolation scheme in MESA, as I was aware of this, but never realized how bad it could be... Thanks to Kaili we have a plot showing the rate interpolation at low T! Yikes.
[attached plot: rate table interpolation at low T]

@warrickball

Something similar was flagged in another issue (almost two years ago). As you mentioned in the thread, MESA is interpolating the linear rate in terms of log temperature, rather than log rate vs log temperature.

@Debraheem

See:

call interp_m3q(logttab, nrattab, f1, mp_work_size, work1, &
'rates do_make_rate_tables', operr)

My first question is, should we be using interp_pm instead of interp_m3q, like we do in the opacity interpolation? @evbauer

Other than this, it seems like it should be possible to just take the log of the rate value and interpolate that instead?
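Since taking the log before interpolating is on the table, here is a small self-contained sketch (not MESA code, and not using interp_m3q) of why that choice matters so much at low T. The rate shape exp(-a/T9**(1/3)) is a generic nonresonant charged-particle form, a = 40 is an arbitrary constant, and the two "table points" are 0.3 dex apart, far coarser than MESA's real tables; all of this is chosen purely to make the effect visible:

   program loglog_vs_loglinear
      implicit none
      real(8), parameter :: a = 40d0   ! arbitrary illustrative constant
      real(8) :: t9lo, t9hi, t9mid, rlo, rhi, r_true, r_lin, r_log
      t9lo = 2d-3                      ! 2 MK
      t9hi = 4d-3                      ! 4 MK
      t9mid = sqrt(t9lo*t9hi)          ! midpoint in logT
      rlo = rate(t9lo)
      rhi = rate(t9hi)
      r_true = rate(t9mid)
      ! interpolate the raw rate halfway in logT (a crude stand-in for any
      ! polynomial interpolant of the un-logged rate): high by ~13 decades here
      r_lin = 0.5d0*(rlo + rhi)
      ! interpolate log(rate) halfway in logT, then exponentiate:
      ! within an order of magnitude of the true value
      r_log = exp(0.5d0*(log(rlo) + log(rhi)))
      write(*,'(a,es12.4)') 'true rate at logT midpoint : ', r_true
      write(*,'(a,es12.4)') 'interpolating the raw rate : ', r_lin
      write(*,'(a,es12.4)') 'interpolating log(rate)    : ', r_log
   contains
      real(8) function rate(t9)
         real(8), intent(in) :: t9
         rate = exp(-a/t9**(1d0/3d0))
      end function rate
   end program loglog_vs_loglinear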

evbauer commented May 23, 2024

My first question is, should we be using interp_pm instead of interp_m3q, like we do in the opacity interpolation? @evbauer

At first glance, I think probably not. interp_pm is even stricter than interp_m3q, and that extra strictness might not be necessary. Here's the comment in interp_1d_pm.f90:

      ! the following produce piecewise monotonic interpolants
      ! rather than monotonicity preserving
      ! this stricter limit never introduces interpolated values exceeding the given values,
      ! even in places where the given values are not monotonic.
      ! the downside is reduced accuracy on smooth data compared to the mp routines.

As for interpolating in log-log space instead of log-linear, that seems reasonable to me. Have we ever tried? Perhaps it's time to give it a shot and see how testing goes.
