From the Stan paper:

> One of the main bottlenecks is differentiating the estimated mode, $\theta^*$. In theory, it is straightforward to apply automatic differentiation by brute force, propagating derivatives through $\theta^*$, that is, sequentially differentiating the iterations of a numerical optimizer.
>
> But this approach, termed the direct method, is prohibitively expensive. A much faster alternative is to use the implicit function theorem. Given any accurate numerical solver, we can always use the implicit function theorem to obtain the derivatives. One side effect is that the numerical optimizer is treated as a black box. By contrast, Rasmussen and Williams [34] define a bespoke Newton method to compute $\theta^*$, which means we can store relevant variables from the final Newton step when computing derivatives. In our experience, this leads to important computational savings. But overall, this method is much less flexible: it works well only when the hyperparameters are low dimensional, and it requires the user to pass the tensor of derivatives.
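To make the contrast concrete, here is a minimal JAX sketch on a toy objective (a Gaussian likelihood with precision `eta` plus a standard-normal prior, not the actual Stan/INLA model): the direct method differentiates through every Newton iteration, while the implicit-function-theorem route only needs a Hessian and a cross-derivative at the mode.

```python
import jax
import jax.numpy as jnp

y = jnp.array([1.0, -2.0, 0.5])

def neg_log_post(theta, eta):
    # Toy negative log posterior: Gaussian likelihood with precision eta
    # and a standard-normal prior; the mode is eta * y / (eta + 1).
    return 0.5 * eta * jnp.sum((y - theta) ** 2) + 0.5 * jnp.sum(theta ** 2)

def newton_mode(eta, num_iters=10):
    # Direct method: the Newton iterations are traced, so jax.jacobian
    # propagates derivatives through every step of the optimizer.
    theta = jnp.zeros_like(y)
    for _ in range(num_iters):
        g = jax.grad(neg_log_post)(theta, eta)
        H = jax.hessian(neg_log_post)(theta, eta)
        theta = theta - jnp.linalg.solve(H, g)
    return theta

def implicit_jacobian(eta):
    # Implicit function theorem: d theta*/d eta = -H^{-1} (d g / d eta),
    # evaluated at the mode; the solver itself is treated as a black box.
    theta_star = newton_mode(eta)
    H = jax.hessian(neg_log_post)(theta_star, eta)
    dg_deta = jax.jacobian(jax.grad(neg_log_post), argnums=1)(theta_star, eta)
    return -jnp.linalg.solve(H, dg_deta)

eta0 = 2.0
print(jax.jacobian(newton_mode)(eta0))   # direct: differentiate the iterations
print(implicit_jacobian(eta0))           # IFT: one linear solve at the mode
print(jax.jacobian(lambda e: e * y / (e + 1.0))(eta0))  # closed-form check: y / (eta+1)^2
```

On this toy problem all three prints agree; the point is that the IFT version only touches the converged mode, whereas the direct version has to backpropagate through the whole optimization trace.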
I think the JAX implementation uses the tensor of derivatives, but I'm not 100% sure.
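For what it's worth, one common reading of "the tensor of derivatives" in the Rasmussen-and-Williams setup is the stack of partial derivatives of the prior covariance matrix with respect to each hyperparameter. The hypothetical sketch below (not the actual Stan or JAX code) shows the hand-coded tensor next to the autodiff equivalent for a squared-exponential kernel:

```python
import jax
import jax.numpy as jnp

x = jnp.linspace(0.0, 1.0, 5)
sq_dists = (x[:, None] - x[None, :]) ** 2

def covariance(eta):
    # Squared-exponential kernel; eta = (marginal std alpha, lengthscale rho).
    alpha, rho = eta
    return alpha ** 2 * jnp.exp(-0.5 * sq_dists / rho ** 2)

def covariance_derivatives(eta):
    # What a user would supply by hand in a Rasmussen-and-Williams-style
    # implementation: dK/d alpha and dK/d rho, stacked into one tensor.
    alpha, rho = eta
    K = covariance(eta)
    dK_dalpha = 2.0 * K / alpha
    dK_drho = K * sq_dists / rho ** 3
    return jnp.stack([dK_dalpha, dK_drho])

eta = jnp.array([1.3, 0.4])
manual = covariance_derivatives(eta)                       # hand-coded tensor
auto = jnp.moveaxis(jax.jacobian(covariance)(eta), -1, 0)  # autodiff equivalent
print(jnp.allclose(manual, auto))  # True: autodiff removes the manual step
```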
This is part of INLA roadmap #340.