Adaptive proposals in Turing composite Gibbs sampler #44
I've got a couple of thoughts here:
If we were to support it, I don't think it would be easy, since you'd have to be a little more specific about how all the proposals are generated. The issue is that AdvancedMH supports many kinds of proposal styles, and it doesn't actually know what counts as a "parameter", nor does it have any notion of incrementing individual parameters. For example, if all you're doing is sampling from an
Hmm, either I don't understand, or you misunderstood my question. I am interested precisely in using Turing and the Gibbs sampler implemented there. My question is rather about how I would be able (or what I would need to implement) to use the MH sampler with an adaptive proposal as a Gibbs component with a Turing model.
OK, I had some trouble finding the right docs in Turing. It seems that when that PR is merged, it should be possible to do something like the following, right?

```julia
sampler = Gibbs(
    MH(:v1 => AdvancedMH.AdaptiveProposal()),
    MH(:v2 => AdvancedMH.AdaptiveProposal()))
chain = sample(model, sampler, 1000)
```

which is what I have in mind; hope that makes it clearer.
Ah, I understand, sorry. There'll need to be some changes on Turing's side before that will work for variables that are in a constrained space (not
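For context on why constrained variables need special handling: a plain Gaussian random walk can propose values outside a parameter's support (e.g. a negative variance), so samplers typically work in an unconstrained space and include a Jacobian correction in the acceptance ratio. The sketch below is a generic Python illustration of this idea for a positivity-constrained parameter, proposing on the log scale; it is not Turing's actual transformation machinery, and `mh_positive_param` and its target are hypothetical names for illustration only.

```python
import math
import random

def mh_positive_param(log_post, x0, n_iter=2000, step=0.5):
    """Random-walk MH on a positivity-constrained parameter x > 0,
    proposing on the unconstrained scale y = log(x). The transformed
    target is log_post(e^y) + y, where the extra y is the log-Jacobian
    log|dx/dy| = log(e^y); omitting it would bias the sampler."""
    y = math.log(x0)
    lp = log_post(math.exp(y)) + y
    out = []
    for _ in range(n_iter):
        y_new = y + random.gauss(0.0, step)   # unconstrained proposal
        lp_new = log_post(math.exp(y_new)) + y_new
        if math.log(random.random()) < lp_new - lp:
            y, lp = y_new, lp_new
        out.append(math.exp(y))               # map back to x > 0
    return out

# usage: sample an Exponential(1) target, log density -x for x > 0
random.seed(2)
samples = mh_positive_param(lambda x: -x, 1.0)
```

Every draw is positive by construction, since the random walk lives on the log scale and only the back-transformed value is recorded.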
What is the status of adaptive Metropolis proposals? Happy to help if needed.
@cpfiffer: thanks for the reply! I will take a look. I am working on an ABC implementation for Turing, and it would be handy to have this.
Awesome! Let me know if you need any pointers. |
Hi there, a question related to this PR.

I'm currently facing an application where I would really like to use adaptive proposals like those defined in this PR in a Metropolis-within-Gibbs setting (i.e. we have a parameter vector `x`, each parameter has an adaptive univariate proposal, and in each iteration of the MCMC sampler we update each component of the parameter vector conditional on the others using a Metropolis-Hastings step). The Turing way to go would seem to be to use the machinery implemented in AdvancedMH inside a Turing composite `Gibbs` sampler (something roughly like `Gibbs(AdaptiveMH(:p1), AdaptiveMH(:p2), ...)`, where `p1, p2, ...` are the parameter vector components)? I think in general this is worthwhile for low-dimensional applications where the gradient of the log-likelihood is really costly or unavailable. I wonder what would be the best way to proceed to allow this? Thanks for any hints!