Fix and extension to the way that PolyChord treats priors #233
Conversation
Thanks! It was indeed somewhere on the to-do list, as suggested by @lukashergt, so we'll merge it after some work. As for the "extra" priors, I am sure we will find a way to keep them semantically as priors in the input, while having the PolyChord interface deal with the difference. As said elsewhere, I'll come back to this soon!
Codecov Report

```diff
@@            Coverage Diff            @@
##           master     #233     +/-  ##
=========================================
- Coverage   87.88%   87.87%   -0.02%
=========================================
  Files          92       92
  Lines        8335     8335
=========================================
- Hits         7325     7324       -1
- Misses       1010     1011       +1
```

Continue to review the full report at Codecov.
I did have a bit of a think about how to automate this, but the best way for PolyChord to discover the manual prior density structure wasn't immediately obvious to me. Any hints would be much appreciated, as I agree that the most user-friendly solution would be for yaml-specified prior densities to be treated as likelihoods from PolyChord's perspective (although the current way is more explicit).
This PR is basically an update of #104. It would be good to copy the LaTeX correction from there: it applies to the old line 263 (new line 251). It should be
Regarding multidimensional priors, it is worth noting that excluding part of the parameter space still works at the prior level. For example, restricting one parameter's prior still works:

```yaml
exclude_prior: "lambda x, y: np.log(x > y)"
```

But if you want to add some finite
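A quick standalone check (my own sketch, not code from this PR) of why the `np.log(x > y)` trick works: the logarithm of the boolean condition is `0` where the point is allowed and `-inf` where it is excluded, so adding it to the log-prior vetoes the region `x <= y` without altering densities elsewhere.

```python
import numpy as np

# Same lambda as in the yaml snippet above, written out as a function.
exclude_prior = lambda x, y: np.log(x > y)

# np.log(False) emits a divide-by-zero warning before returning -inf,
# so silence it for the demonstration.
with np.errstate(divide="ignore"):
    print(exclude_prior(2.0, 1.0))  # 0.0   -> point kept
    print(exclude_prior(1.0, 2.0))  # -inf  -> point excluded
```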
Many thanks @lukashergt for pointing out these links. This therefore fixes #101, fixes #103, and closes #104.
In contrast to #232 and #231, this is a slightly more complicated/controversial PR.
The way priors are treated in nested sampling is quite subtle, in particular the clear separation it draws between prior and likelihood. This was first covered by Chen, Feroz & Hobson, and is something I'm exploring with @appetrosyan as part of his supernest package and in a soon-to-be-released paper. This has been discussed in part in #77.
In implementing a prior, one can either incorporate it as an additional density term added to the loglikelihood, or as a transformation from the unit hypercube. MCMC approaches see these as equivalent, but nested sampling views them quite differently. Both recover the same posterior and evidence, but they yield different KL divergences and different runtimes/accuracies (as a rule of thumb, the more information you can put into the prior, the lower the KL divergence, the faster the compression from prior to posterior, and the more accurate the nested sampling).
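To make the distinction concrete, here is a minimal standalone sketch (mine, not code from this PR) of the two routes for a single Gaussian prior; `mu`, `sigma`, and the function names are illustrative, not PolyChord's API.

```python
import math
from statistics import NormalDist

mu, sigma = 0.0, 1.0
prior = NormalDist(mu, sigma)

# Route 1: a transformation from the unit hypercube. A uniform draw
# u ~ U(0, 1) is mapped through the inverse CDF, so live points are
# born distributed according to the prior itself.
def prior_transform(u):
    return prior.inv_cdf(u)

# Route 2: an additional log-density term added to the loglikelihood,
# while the sampler draws the parameter from a broad reference prior.
def log_prior_density(x):
    return math.log(prior.pdf(x))
```

Both routes define the same posterior; route 1 is what lets nested sampling start its compression from the informative prior.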
This PR implements how I would establish this in cobaya, although discussion would be useful to see if I've missed any subtleties. It:
The only thing this therefore fails at is the treatment of the SZ prior for the plik TT|TTTEEE likelihood. In this framework you have to add it manually as a likelihood rather than as a prior density, in exactly the same way as you do for priors, e.g.
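The example itself is not quoted in this thread; as a hedged illustration of the pattern being described, something along these lines could go in a cobaya input dict. The Gaussian 9.5 ± 3.0 constraint on `ksz_norm + 1.6*A_sz` is recalled from memory of the Planck defaults and should be checked against the shipped yaml files; `external` is cobaya's mechanism for user-supplied likelihood callables.

```python
# Hedged sketch: moving the Planck SZ prior into the likelihood block,
# so that PolyChord treats it as likelihood rather than prior density.
def sz_loglike(ksz_norm, A_sz):
    # Gaussian log-density (up to a constant) on the SZ combination;
    # the 9.5 and 3.0 values are quoted from memory, verify before use.
    return -0.5 * ((ksz_norm + 1.6 * A_sz - 9.5) / 3.0) ** 2

info = {
    "likelihood": {
        # ... the actual plik likelihood entries would go here ...
        "SZ": {"external": sz_loglike},
    },
}
```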
It would be great if cobaya-cosmo-generator could do this -- any hints on how I could implement this?