I want to sincerely thank you for your outstanding work. The code you’ve shared has been incredibly helpful to me, and I truly appreciate the effort you've put into it.
I have a question that I hope you can help with. I am trying to set an initial search value for each parameter, as I already have a rough idea of the optimal parameter values for fitting.
If I want to set an initial value, should I directly assign cube[0] = init_search_value in the prior? If so, will this prevent the parameter from being sampled during the run? My goal is to set an initial search value while still allowing the parameter to be sampled.
I would really appreciate any guidance or advice you can offer on this!
Thank you so much for your time and assistance.
My code:
import numpy as np

def prior(cube):
    # Identity transform: every parameter keeps its unit-cube value,
    # i.e. a uniform prior on [0, 1] for all 15 parameters.
    cube[0] = cube[0]
    cube[1] = cube[1]
    cube[2] = cube[2]
    for i in range(3, len(cube)):
        cube[i] = cube[i]
    return cube
def loglike(cube):
    # Gaussian log-likelihood for independent data points with known noise.
    ymodel = model(x, y, *cube)  # pass all 15 parameters to the model
    a = -0.5 * np.sum((y - ymodel) ** 2 / noise ** 2)
    b = -np.sum(np.log(noise))
    c = -0.5 * len(y) * np.log(2 * np.pi)
    return a + b + c
#parameters
parameters = ['p%i' % i for i in range(1, 16)]  # 'p1' ... 'p15'
n_params = len(parameters)
# input
nLivePoints = 500
x = xxx
y = xxx
noise = xxx
# run MultiNest
results = solve(loglike,
                prior,
                n_params,
                resume=False,
                verbose=True,
                n_live_points=nLivePoints,
                sampling_efficiency=0.8,
                evidence_tolerance=0.5,
                importance_nested_sampling=True)
Nested sampling draws samples randomly from the prior probability distribution, so there is no initial or starting value to set. What you can do to speed things up is:
a) Change the prior, if your information about where the parameters should lie is itself a prior probability distribution you want to assume, for example from previous studies. In practice, you could place a Gaussian around the expected value: with rv = scipy.stats.norm(mu, std), rv.ppf(cube[i]) performs the transform (a minimal sketch follows below this list).
b) Use an auxiliary transformation to reparameterize the problem and make sampling more efficient; see the supernest paper, https://arxiv.org/abs/2212.01760. There is a PR implementing this as a feature for UltraNest: JohannesBuchner/UltraNest#156.
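As a minimal sketch of option (a): suppose the first parameter has an expected value mu with uncertainty std from a previous study (the numbers below are placeholders, not values from the code above), while the remaining parameters keep their uniform [0, 1] prior:

import scipy.stats

def prior(cube):
    # Informative Gaussian prior on parameter 0, centred on the expected value.
    # mu and std are placeholders for your prior knowledge about this parameter.
    mu, std = 1.0, 0.1
    cube[0] = scipy.stats.norm(mu, std).ppf(cube[0])
    # Uniform prior on [0, 1] for the remaining parameters (unit cube unchanged).
    for i in range(1, len(cube)):
        cube[i] = cube[i]
    return cube

The parameter is still sampled over its whole range, but most of the prior mass, and therefore most of the live points, sits near mu.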
Setting a cube entry to your guessed value in the prior means the randomly sampled value for that parameter is never used, so you are effectively fixing the parameter to a single value, i.e., giving it a delta-function prior.
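To make that concrete, here is a small self-contained illustration (the fixed value 0.7 and the 3-parameter cube are arbitrary placeholders):

import numpy as np

def fixed_prior(cube):
    # The uniform draw in cube[0] is overwritten, so parameter 0 can only
    # ever take this single value: it is no longer sampled at all.
    cube[0] = 0.7
    return cube

# Every prior sample now has an identical first parameter:
samples = np.array([fixed_prior(np.random.rand(3)) for _ in range(5)])
print(samples[:, 0])   # -> [0.7 0.7 0.7 0.7 0.7]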