-
Dr. King, I'm attempting to calculate CIs for parameters that I've estimated using mif2.
I'm most troubled by the second point, as it would introduce an additional source of variability into the logLik, but the first point is also a concern, simply because I'm not sure what the implications are for fitting the loess or quadratic when one includes points far from the MLE. Or do all of these sources of variability show up in the Monte Carlo SE estimate? Thank you!
-
Hello @trh3! Yes, this is a misunderstanding. MCAP is used to provide profile likelihoods in the presence of Monte Carlo error in the evaluation of the likelihood. To compute a likelihood profile over a given parameter, you'll need to vary that parameter over some range, and then maximize the likelihood over the remaining parameters, keeping your profile parameter fixed. Iterated filtering (IF) is very useful for this purpose. There are some examples in the SBIED Lesson on Iterated Filtering and in the Getting Started vignette, as well as elsewhere. It's important to bear in mind that iterated filtering will ultimately return some (hopefully improved) estimated parameter vector. However, one must perform an additional log likelihood estimation at this parameter, for example using pfilter(). The proper inputs to mcap() are these log likelihood evaluations, together with the corresponding values of the profiled parameter.
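A minimal sketch of that workflow, assuming an existing pomp object `po` whose parameters include `sigma` (the profiled parameter), `beta`, and `gamma`; these names and the tuning values are placeholders, not anything from this thread:

```r
library(pomp)

## Profile over 'sigma': run mif2 at each fixed value, maximizing over the rest
prof_pts <- lapply(seq(0.5, 2, length.out = 20), function (s) {
  start <- coef(po)
  start["sigma"] <- s                      # fix the profiled parameter
  mf <- mif2(po, params = start, Np = 1000, Nmif = 50,
             cooling.fraction.50 = 0.5,
             rw.sd = rw_sd(beta = 0.02, gamma = 0.02))  # no rw.sd for sigma
  ## mif2 returns only a point estimate; re-estimate the log likelihood
  ## there with independent particle filters, averaged on the natural scale
  ll <- logmeanexp(replicate(10, logLik(pfilter(mf, Np = 2000))), se = TRUE)
  c(sigma = s, loglik = unname(ll[1]), loglik_se = unname(ll[2]))
})
prof_pts <- as.data.frame(do.call(rbind, prof_pts))

## mcap() takes the log likelihood evaluations and the matching
## values of the profiled parameter
mp <- mcap(logLik = prof_pts$loglik, parameter = prof_pts$sigma)
mp$ci  # Monte Carlo adjusted profile confidence interval
```

Because `sigma` is given no rw.sd entry, mif2 leaves it at its starting value while perturbing `beta` and `gamma`, which is what keeps the profile parameter fixed.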
-
This is extremely helpful! I didn't quite understand that the profile likelihood is formed from the MLEs of the other parameters, maximized at a set of fixed values of the target parameter. Thank you for clarifying! Now I'm just curious: from my understanding of the profile likelihood CI calculation in y'all's 2017 Interface paper, the quadratic approximation relies on the LAN assumption, and fitting it requires evaluations of the likelihood surface around the maximum. I understand (or am at least beginning to understand) how the log likelihood from the IF procedure differs from the true log likelihood, and I realize that points far from the neighborhood of the MLE give no information about the likelihood surface at the maximum. But couldn't one build up a picture of the likelihood surface by sampling from the n-dimensional parameter space around the MLE (a region that IF2 can identify) and calculating the correct log likelihood at that collection of points? Then one could apply the quadratic approximation to those points to obtain estimates of the SEs and CIs? There is likely some aspect of profile maximum likelihood that I don't understand that explains why one can't do that, but it has piqued my curiosity! Thanks!
-
Hi Teague,
We have the concept of a "poor man's profile," which is something like what you suggest: you take the estimates from a collection of Monte Carlo searches and produce a profile estimate for each parameter by binning the parameter of interest and taking the largest likelihood falling in each bin. This scales readily to a large number of parameters, but it is best viewed as a preliminary approximation: if all is well, one should probably compute separate profiles for all parameters of high interest.
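A minimal sketch of that binning step, assuming a data frame `fits` that collects the results of many independent mif2 searches, with one column per parameter plus a `loglik` column; `sigma` and the other names are placeholders:

```r
library(dplyr)

## Bin the parameter of interest and keep the best likelihood in each bin
poor_mans_profile <- fits |>
  mutate(bin = cut(sigma, breaks = 20)) |>  # partition the range of 'sigma'
  group_by(bin) |>
  slice_max(loglik, n = 1) |>               # largest likelihood per bin
  ungroup()

plot(loglik ~ sigma, data = poor_mans_profile)
```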
One can fit a quadratic to the log likelihoods in a more direct application
of LAN, similar to Le Cam's one step estimator. For large numbers of
parameters, this may be numerically fiddly, especially if there are weakly
identified combinations, possibly leading to nonlinear ridges in the
likelihood surface. For small numbers of parameters, this is not necessary.
So, either way, we usually don't do it.
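For a single profiled parameter, that direct quadratic fit amounts to least squares on the profile points; a sketch, reusing the placeholder `poor_mans_profile` data frame from the previous block:

```r
## Quadratic approximation to the profile log likelihood (placeholder names)
quad <- lm(loglik ~ sigma + I(sigma^2), data = poor_mans_profile)
b <- coef(quad)
mle_hat <- -b[["sigma"]] / (2 * b[["I(sigma^2)"]])  # vertex of the parabola
se_hat  <- sqrt(-1 / (2 * b[["I(sigma^2)"]]))       # SE from the curvature
```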
I look forward to hearing what works for you.
Best,
Ed
On Thu, Feb 9, 2023 at 8:57 AM Teague Henry wrote:
Oh, LAN = Local Asymptotic Normality. And thank you for the explanation; the issue of high dimensionality makes total sense. For context, I'm attempting to get SEs and CIs out of a fairly high-dimensional parameter model I'm fitting with MIF2, and I'd like to use the Monte Carlo error corrected profile CIs. Of course, I'm trying to balance the need for a large number of samples from around the likelihood surface against the compute time, so I'm trying to determine efficient ways of obtaining those samples. Thanks for taking the time to chat!
-
Hi Teague,
Thanks for sharing your experiences with the 2006 method. The main MCMC method in pomp is particle MCMC, implemented as pmcmc(); approximate Bayesian computation is also available via abc().
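For reference, a minimal pmcmc() sketch, assuming a pomp object `po` that includes a dprior specification; the proposal scales and parameter names are placeholders:

```r
library(pomp)

chain <- pmcmc(po, Nmcmc = 2000, Np = 1000,
               proposal = mvn_diag_rw(c(beta = 0.01, gamma = 0.01)))
plot(chain)  # diagnostic trace plots for the chain
```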
Best,
Ed