How can I set an initial search value for each parameter #262

Closed

lidafa0206 opened this issue Dec 19, 2024 · 2 comments

lidafa0206 commented Dec 19, 2024

I want to sincerely thank you for your outstanding work. The code you’ve shared has been incredibly helpful to me, and I truly appreciate the effort you've put into it.

I have a question that I hope you can help with. I am trying to set an initial search value for each parameter, as I already have a rough idea of the optimal parameter values for fitting.

If I want to set an initial value, should I directly assign cube[0] = init_search_value in the prior? If so, will this prevent the parameter from being sampled during the optimization process? My goal is to set an initial search value while still allowing the parameter to be sampled.

I would really appreciate any guidance or advice you can offer on this!
Thank you so much for your time and assistance.

My code:

import numpy as np
from pymultinest.solve import solve

def prior(cube):
    # identity transform: every parameter keeps a uniform prior on [0, 1]
    return cube

def loglike(cube):
    # unpack all 15 parameters and evaluate the model
    ymodel = model(x, y, *cube)

    # Gaussian log-likelihood
    a = -0.5 * np.sum((y - ymodel) ** 2 / noise ** 2)
    b = -np.sum(np.log(noise))
    c = -0.5 * len(y) * np.log(2 * np.pi)
    return a + b + c

# parameters
parameters = ['p%d' % i for i in range(1, 16)]  # p1 ... p15
n_params = len(parameters)

# input
nLivePoints = 500

x = xxx      # independent variable of the data
y = xxx      # measured values
noise = xxx  # per-point measurement uncertainties
# run MultiNest
results = solve(loglike,
                prior,
                n_params,
                resume=False,
                verbose=True,
                n_live_points=nLivePoints,  
                sampling_efficiency=0.8, 
                evidence_tolerance=0.5,
                importance_nested_sampling=True
                )
JohannesBuchner (Owner) commented
Thanks for reaching out.
Bayesian fitting is not optimization (see section 1 of https://arxiv.org/abs/1710.06068).

Nested sampling draws randomly from the prior probability distribution. What you can do to speed things up is:
a) change the prior, if your information about where the parameters should lie is a prior probability distribution you want to assume, for example from previous studies. Practically, you could place a Gaussian around the expected value (rv = scipy.stats.norm(mu, std); rv.ppf(cube[i]) does the transform); see the sketch below.
b) use an auxiliary transformation to reparameterize and make sampling more efficient, see the supernest paper: https://arxiv.org/abs/2212.01760. There is a PR implementing this as a feature for UltraNest, see JohannesBuchner/UltraNest#156.

Setting the cube values to your guessed values in the prior makes the randomly sampled value for that parameter unused, so effectively you are fixing the parameter to a single value, i.e., a delta-function prior.
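As a minimal sketch of option (a), assuming Gaussian priors on the first three parameters with hypothetical centres mu and widths std (replace these with your own expected values and uncertainties):

import numpy as np
import scipy.stats

# hypothetical prior centres and widths for the first three parameters
mu = [1.0, 2.0, 0.5]
std = [0.1, 0.5, 0.2]

def prior(cube):
    params = np.copy(cube)
    # informative Gaussian priors: ppf maps the uniform unit-cube
    # coordinate in [0, 1] to the corresponding Gaussian quantile
    for i in range(3):
        params[i] = scipy.stats.norm(mu[i], std[i]).ppf(cube[i])
    # the remaining parameters keep uniform priors on [0, 1]
    return params

This concentrates the live points around your expected values without ever fixing a parameter: every parameter is still sampled, just with more prior weight where you expect the solution to lie.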

lidafa0206 (Author) commented
Thank you very much for your reply!
