
MCMC and Evaluate sampler give different results #170

Open
ShazAlvi opened this issue Apr 29, 2021 · 3 comments

@ShazAlvi

I am new to Cobaya and to developing likelihoods in this framework. I have written my likelihood and the param file. The params block of my file looks like this:

params:
  H0: 67.0
  mnu: 0.06
  nnu: 3.046  
  w: -1.0
  wa: 0.0
  omk: 0.0
  omnuh2: 0.0
  ombh2:
    prior:
      dist: norm
      loc: 0.022445
      scale: 0.028
  omch2:
    prior:
      dist: norm
      loc: 0.12055
      scale: 0.027
  tau:
    prior:
      dist: norm
      loc: 0.07
      scale: 0.13
  logA:
    prior:      
      dist: norm
      loc: 3.044
      scale: 0.029
    latex: \log(10^{10} A_\mathrm{s})
    drop: true
  As:
    value: 'lambda logA: 1e-10*np.exp(logA)'
    latex: A_\mathrm{s}
  ns:
    prior:
      dist: norm
      loc: 0.96
      scale: 0.074
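
For reference, As above is a derived parameter: the value string is a lambda mapping the sampled logA onto As. The same mapping in plain Python (a minimal sketch, not Cobaya code) reproduces the derived As that evaluate prints below:

```python
import numpy as np

def As_from_logA(logA):
    # Same mapping as the yaml value field: 'lambda logA: 1e-10*np.exp(logA)'
    return 1e-10 * np.exp(logA)

print(As_from_logA(3.07043))  # ~2.15511e-09, the derived As reported by evaluate
```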

When I run the param file with evaluate as the sampler, I get the following, seemingly reasonable, output:

2021-04-29 10:16:28,153 [output] Output to be read-from/written-into folder '.', with prefix 'My_Test'
 2021-04-29 10:16:28,253 [camb] Importing *auto-installed* CAMB (but defaulting to *global*).
 2021-04-29 10:16:28,310 [camb] Initialized!
 2021-04-29 10:16:40,889 [prior] *WARNING* No sampled parameters requested! This will fail for non-mock samplers.
 2021-04-29 10:16:40,892 [camb] Importing *global* CAMB.
 2021-04-29 10:16:40,892 [camb] Initialized!
 2021-04-29 10:16:42,981 [evaluate] Initialized!
 2021-04-29 10:16:42,992 [evaluate] Looking for a reference point with non-zero prior.
 2021-04-29 10:16:42,992 [prior] Reference values or pdf's for some parameters were not provided. Sampling from the prior instead for those parameters.
 2021-04-29 10:16:42,993 [evaluate] Reference point:
   ombh2 = 0.0365783
   omch2 = 0.126176
   tau = 0.190604
   logA = 3.07043
   ns = 0.815588
 2021-04-29 10:16:42,993 [evaluate] Evaluating prior and likelihoods...
 2021-04-29 10:16:47,027 [evaluate] log-posterior  = 4.39314
 2021-04-29 10:16:47,027 [evaluate] log-prior      = 7.87826
 2021-04-29 10:16:47,027 [evaluate]    logprior_0 = 7.87826
 2021-04-29 10:16:47,028 [evaluate] log-likelihood = -3.48512
 2021-04-29 10:16:47,028 [evaluate]    chi2_Euclid = 6.97023
 2021-04-29 10:16:47,028 [evaluate] Derived params:
 2021-04-29 10:16:47,028 [evaluate]    As = 2.15511e-09
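
As a quick sanity check, I verified that the reported log-prior is just the sum of the five Gaussian prior log-pdfs from the params block, evaluated at this reference point (plain Python, not Cobaya code):

```python
import math

def norm_logpdf(x, loc, scale):
    # log of a Gaussian pdf, as specified by the 'dist: norm' priors above
    z = (x - loc) / scale
    return -0.5 * z * z - math.log(scale) - 0.5 * math.log(2 * math.pi)

# (reference value, loc, scale) for each sampled parameter
priors = {
    "ombh2": (0.0365783, 0.022445, 0.028),
    "omch2": (0.126176, 0.12055, 0.027),
    "tau":   (0.190604, 0.07, 0.13),
    "logA":  (3.07043, 3.044, 0.029),
    "ns":    (0.815588, 0.96, 0.074),
}
log_prior = sum(norm_logpdf(x, loc, scale) for x, loc, scale in priors.values())
print(round(log_prior, 4))  # ~7.8782, matching the reported log-prior
```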

However, when I run with the mcmc sampler, it keeps running without accepting any model, and the number of tried models keeps increasing. This is how the output looks:

2021-04-29 10:25:56,312 [output] Output to be read-from/written-into folder '.', with prefix 'My_Test'
 2021-04-29 10:25:56,312 [output] Found existing info files with the requested output prefix: 'My_Test'
 2021-04-29 10:25:56,313 [output] Will delete previous products ('force' was requested).
 2021-04-29 10:25:56,439 [output] From regexp 'My_Test\\.checkpoint$' in folder '.', deleting files ['./My_Test.checkpoint']
 2021-04-29 10:25:56,439 [output] From regexp 'My_Test\\.progress$' in folder '.', deleting files ['./My_Test.progress']
 2021-04-29 10:25:56,439 [output] From regexp 'My_Test\\.covmat$' in folder '.', deleting files ['./My_Test.covmat']
 2021-04-29 10:25:56,455 [camb] Importing *auto-installed* CAMB (but defaulting to *global*).
 2021-04-29 10:25:56,515 [camb] Initialized!
 2021-04-29 10:26:07,668 [prior] *WARNING* No sampled parameters requested! This will fail for non-mock samplers.
 2021-04-29 10:26:07,671 [camb] Importing *global* CAMB.
 2021-04-29 10:26:07,672 [camb] Initialized!
 2021-04-29 10:26:09,328 [mcmc] Getting initial point... (this may take a few seconds)
 2021-04-29 10:26:09,328 [prior] Reference values or pdf's for some parameters were not provided. Sampling from the prior instead for those parameters.
 2021-04-29 10:26:13,777 [prior] *WARNING* Reference pdf not defined or improper for some parameters. Using prior's sigma instead for them.
 2021-04-29 10:26:13,778 [mcmc] Covariance matrix not present. We will start learning the covariance of the proposal earlier: R-1 = 30 (would be 2 if all params loaded).
 2021-04-29 10:26:13,783 [mcmc] Initial point: ombh2:0.03607705, omch2:0.1092784, tau:0.339702, logA:3.005904, ns:1.081473
 2021-04-29 10:26:13,801 [mcmc] Sampling!
 2021-04-29 10:26:14,282 [mcmc] Progress @ 2021-04-29 10:26:14 : 1 steps taken, and 0 accepted.
 2021-04-29 10:26:24,614 [mcmc] Progress @ 2021-04-29 10:26:24 : 45 steps taken, and 0 accepted.
 2021-04-29 10:26:34,914 [mcmc] Progress @ 2021-04-29 10:26:34 : 78 steps taken, and 0 accepted.
 2021-04-29 10:26:45,062 [mcmc] Progress @ 2021-04-29 10:26:45 : 122 steps taken, and 0 accepted.
 2021-04-29 10:26:55,076 [mcmc] Progress @ 2021-04-29 10:26:55 : 157 steps taken, and 0 accepted.

I tried printing the likelihood evaluated in mcmc.py, in the function get_new_sample_metropolis(self), and it shows that the likelihood is being evaluated to -inf. I guess that is why it can't accept any model.

My question is: how can evaluate give me a decent value of the likelihood while the mcmc sampler fails to get one, when both of them, I think, call the same logposterior function with a set of cosmological parameters? A clue as to how the calls from mcmc and evaluate differ would help me understand the source of the problem.
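
For context, here is my understanding of why -inf blocks progress (a toy Metropolis sketch, not Cobaya's actual implementation): the acceptance probability exp(Δ log-posterior) is exactly zero whenever the proposed point has a log-posterior of -inf, so no proposal is ever accepted:

```python
import math
import random

def metropolis_accept(logp_current, logp_proposed, rng=random.random):
    # Standard Metropolis rule: accept with probability min(1, exp(delta)).
    # A proposed log-posterior of -inf gives exp(-inf) = 0, so such a
    # proposal is never accepted, matching the "0 accepted" progress lines.
    delta = logp_proposed - logp_current
    if delta >= 0:
        return True
    return rng() < math.exp(delta)

print(metropolis_accept(-3.5, float("-inf")))  # always False
```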

@JesusTorrado (Contributor)

Hi @ShazAlvi

It's certainly strange! Could you please run the mcmc case again with the --debug option (or debug: True in the yaml) and post a chunk of the output here? This has a similar effect to what you did in get_new_sample_metropolis, but is a bit more detailed and should help us diagnose the problem.

@ShazAlvi (Author)

Hi @JesusTorrado, thanks for your reply. The debug option is already set to True. Here is the part of the yaml file that sets these options.

sampler:
  mcmc:
    measure_speeds: false
    output_every: 10s
packages_path: /data2/cobaya_modules/
output: My_Test
force: true
debug: True

@JesusTorrado (Contributor)

Hi @ShazAlvi

It doesn't look like the output you posted above for mcmc was generated with debug: True: with debugging on, it prints much more information; in particular, per iteration, which quantities are passed to each part of the likelihood, and what the individual priors and likelihoods evaluate to.
