Explicitly decomposing the covariance into its Cholesky factor in the graph is much faster, and even more so in PyMC models, which are usually parametrized with a direct prior on the Cholesky factor.
```python
import pytensor
import pytensor.tensor as pt

srng = pt.random.RandomStream()

x = srng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]])
fn = pytensor.function([], x)
%timeit fn()  # 510 µs ± 81.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

# Decompose cholesky in graph (numpy probably does this under the hood)
A = pt.linalg.cholesky([[1, 0.5], [0.5, 1]])
x = A @ srng.normal(size=(2,))
fn = pytensor.function([], x)
%timeit fn()  # 27.4 µs ± 3.27 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
```
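As a sanity check that the two graphs are equivalent: drawing z ~ N(0, I) and returning L @ z yields samples with covariance Σ = L Lᵀ, which is exactly the covariance passed to `multivariate_normal` above. A minimal NumPy sketch (the sample count and tolerance are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
L = np.linalg.cholesky(cov)  # lower-triangular factor, cov == L @ L.T

# Draw standard-normal vectors and map them through L — the same
# reparametrization as `A @ srng.normal(size=(2,))` in the graph above.
z = rng.standard_normal((100_000, 2))
samples = z @ L.T

# The empirical covariance recovers the target covariance.
print(np.cov(samples, rowvar=False).round(2))
```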
In general we should probably reduce the number of pure RV Ops we have. This allows more optimizations and makes it easier to implement different backends.
We should implement the MvNormal as an OpFromGraph that gets inlined after canonicalization (not as early as the Ops created with `inline=True`).