Feature request: Markov chains #1367

Open
Mv77 opened this issue Dec 27, 2023 · 0 comments

@Mv77 (Contributor) commented Dec 27, 2023

I am a big fan of what we have managed to do with the tools that represent and manipulate discrete distributions. The `distr_of_func` and `expect` functions are intuitive, expressive, and flexible, even more so when used with labeled distributions.

One limitation of the toolkit is that we do not have similar tools for variables/processes with memory, i.e., Markov chains. I think this is because, in most of our models, the income process has a unit root and, after normalizing appropriately, the distribution of shocks does not depend on current or past states.

Models in which some process $z_t$ (income, health) follows an AR(1) process discretized into a Markov chain are common. When solving these models, one often needs to calculate objects like $E[f(z_{t+1})|z_t]$ for every possible $z_t$, knowing that the distribution of $z_{t+1}$ depends on $z_t$. Our current `Distribution.expect(lambda z_tp1: f(z_tp1))` does not let us calculate that type of object conveniently, because it is the distribution (not the function) that depends on $z_t$. I think the current solution for this type of thing is just to carry a Python list of `Distribution` objects, one for each value of $z_t$, and iterate over them as needed.
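
For concreteness, here is a minimal sketch of that workaround. The grid, transition matrix, and function are made up for illustration, and the `DiscreteDistribution` import path and constructor signature may differ across HARK versions:

```python
import numpy as np

from HARK.distribution import DiscreteDistribution  # path/signature may vary by HARK version

# Hypothetical 3-point grid for z_t and transition matrix P,
# with P[i, j] = Pr(z_{t+1} = z_grid[j] | z_t = z_grid[i]).
z_grid = np.array([-0.1, 0.0, 0.1])
P = np.array(
    [
        [0.80, 0.15, 0.05],
        [0.10, 0.80, 0.10],
        [0.05, 0.15, 0.80],
    ]
)

f = np.exp  # stand-in for the function whose conditional expectation we need

# Current workaround: one conditional distribution of z_{t+1} per value of z_t ...
cond_dstns = [DiscreteDistribution(P[i, :], z_grid) for i in range(len(z_grid))]

# ... plus a manual loop to get E[f(z_{t+1}) | z_t = z_grid[i]] for every i.
cond_expectations = np.array([dstn.expect(f) for dstn in cond_dstns]).flatten()
```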

It would be very convenient if we could have some `MarkovChain` object that allowed vectorized operations like, say:

  • `MarkovChain(lambda z_tp1: f(z_tp1))` to return a vector with the expectation calculated conditional on each value of $z_t$, or `MarkovChain(lambda z_tp1: f(z_tp1), current=x)` to calculate the expectation conditional on $z_t = x$. A rough sketch of what such an object could look like is below.
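
As a rough illustration of the idea (not a concrete implementation proposal), such an object could store the state grid and transition matrix and expose an `expect`-style method. The class name, method name, and `current` keyword below are all hypothetical, and the method form stands in for the call syntax used in the bullet above:

```python
import numpy as np


class MarkovChain:
    """Hypothetical discrete Markov chain with state values `grid` and transition
    matrix `transition`, where transition[i, j] = Pr(z_{t+1} = grid[j] | z_t = grid[i])."""

    def __init__(self, grid, transition):
        self.grid = np.asarray(grid)
        self.transition = np.asarray(transition)

    def expect(self, func, current=None):
        # Evaluate func at every possible next-period value z_{t+1}.
        vals = np.array([func(z) for z in self.grid])
        # One matrix-vector product gives E[f(z_{t+1}) | z_t] for all states at once.
        cond_exp = self.transition @ vals
        if current is None:
            return cond_exp  # vector: one entry per possible value of z_t
        # Condition on a particular current value z_t = current (nearest grid point).
        i = int(np.argmin(np.abs(self.grid - current)))
        return cond_exp[i]


# Usage, with a made-up grid and transition matrix:
z_grid = np.array([-0.1, 0.0, 0.1])
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
chain = MarkovChain(z_grid, P)

chain.expect(lambda z_tp1: np.exp(z_tp1))               # vector of E[exp(z_{t+1}) | z_t]
chain.expect(lambda z_tp1: np.exp(z_tp1), current=0.0)  # scalar, conditional on z_t = 0.0
```

The appeal of the vectorized form is that the conditional expectation for every current state is just one matrix-vector product with the transition matrix, instead of a Python loop over per-state `Distribution` objects.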