- Reading: Chapter 1 (Pages 1-41)
- Key Concepts:
- For $a, b, c \in \mathbb{R}$: $|a \vee c - b \vee c| \le |a - b|$
- If $T$ is globally stable with fixed point $u^*$ and $T$ is invariant on a closed set $C$, then $u^* \in C$
- Banach's Contraction Mapping Theorem and proof
- Neumann Series Lemma, Gelfand's formula
- Finite- vs. infinite-horizon job search models
- Algorithm for Value Function Iteration
- Python Code:
- Create a namedtuple for the model; use a function to construct model instances
- Successive approximation
- Greedy function
- Value function iteration
- Direct iteration
- See details in job_search.py; a minimal sketch of the same workflow follows below
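A minimal sketch of the Chapter 1 workflow listed above, assuming a uniform wage offer distribution and illustrative parameter values (the actual model and parameters live in job_search.py):

```python
from collections import namedtuple
import numpy as np

Model = namedtuple("Model", ("w_vals", "phi", "beta", "c"))

def create_js_model(n=50, w_min=10.0, w_max=60.0, beta=0.96, c=10.0):
    """Create a job search model with equally likely wage offers (illustrative values)."""
    w_vals = np.linspace(w_min, w_max, n)
    phi = np.ones(n) / n                      # uniform offer distribution
    return Model(w_vals, phi, beta, c)

def T(v, model):
    """Bellman operator: pointwise max of accepting and rejecting the offer."""
    w, phi, beta, c = model
    accept = w / (1 - beta)                   # value of accepting wage w forever
    reject = c + beta * (v @ phi)             # value of rejecting and drawing again
    return np.maximum(accept, reject)

def get_greedy(v, model):
    """v-greedy policy: 1 = accept, 0 = reject."""
    w, phi, beta, c = model
    return np.where(w / (1 - beta) >= c + beta * (v @ phi), 1, 0)

def successive_approx(T, v0, model, tol=1e-6, max_iter=10_000):
    """Iterate v <- T v until the sup-norm change falls below tol."""
    v = v0
    for _ in range(max_iter):
        v_new = T(v, model)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

model = create_js_model()
v_star = successive_approx(T, np.zeros(len(model.w_vals)), model)   # value function iteration
sigma_star = get_greedy(v_star, model)
print("reservation wage ≈", model.w_vals[np.argmax(sigma_star)])
```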
- Reading: Chapter 2 (Pages 42-80)
- Key Concepts:
- If $T$ is globally stable and $T$ dominates $S$, then the unique fixed point of $T$ dominates any fixed point of $S$
- $|\max_{z \in D} f(z) - \max_{z \in D} g(z)| \le \max_{z \in D} |f(z) - g(z)|$
- Hartman-Grobman Theorem
- Convergence rate and related proof
- Newton's fixed point method (see the numerical sketch after this list)
- Partial orders, order-preserving maps, sublattice properties, stochastic dominance
- Linear operators, equivalence with matrices, computational advantages
- Perron-Frobenius Theorem and Lemma
- Flatten a matrix
- Markov operator and its matrix representation
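A minimal numerical sketch of Newton's fixed point method versus successive approximation, as referenced in the list above. The Solow-type map and its parameters are illustrative assumptions, chosen only to show the difference in convergence rates:

```python
import numpy as np

# Solow-type capital update: g(k) = s * A * k**alpha + (1 - delta) * k (illustrative).
s, A, alpha, delta = 0.3, 2.0, 0.3, 0.4

def g(k):
    return s * A * k**alpha + (1 - delta) * k

def g_prime(k):
    return s * A * alpha * k**(alpha - 1) + (1 - delta)

def newton_fixed_point(k0, tol=1e-10, max_iter=100):
    """Newton's fixed point iteration k <- (g(k) - g'(k) k) / (1 - g'(k))."""
    k = k0
    for i in range(1, max_iter + 1):
        k_new = (g(k) - g_prime(k) * k) / (1 - g_prime(k))
        if abs(k_new - k) < tol:
            return k_new, i
        k = k_new
    return k, max_iter

def successive_approx(k0, tol=1e-10, max_iter=100_000):
    """Plain successive approximation k <- g(k)."""
    k = k0
    for i in range(1, max_iter + 1):
        k_new = g(k)
        if abs(k_new - k) < tol:
            return k_new, i
        k = k_new
    return k, max_iter

print("Newton (fixed point, iterations):      ", newton_fixed_point(1.0))   # quadratic rate
print("Successive approx (fixed point, iters):", successive_approx(1.0))    # linear rate
```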
- Reading: Chapter 3 (Pages 81-104)
- Key Concepts:
- Markov chain, stationarity, irreducibility, ergodicity, monotonicity
- Approximation: Tauchen discretization
- Conditional expectation, Law of Iterated Expectations (LIE), Law of Total Probability (LTP)
- Code:
- QuantEcon's MarkovChain class: construction, stationary_distributions, simulate
- Ergodicity: compute the time average along a simulated path and compare it to the stationary distribution.
- See: inventory_simulation.py; day_laborer.py; a minimal MarkovChain sketch follows after this list
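A minimal sketch of the QuantEcon workflow mentioned above, assuming the quantecon package is installed; the transition matrix is an arbitrary illustrative example rather than the one in inventory_simulation.py or day_laborer.py:

```python
import numpy as np
import quantecon as qe

P = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.4, 0.2],
              [0.1, 0.1, 0.8]])

mc = qe.MarkovChain(P)
psi_star = mc.stationary_distributions[0]      # unique when the chain is irreducible
print("stationary distribution:", psi_star)

# Ergodicity check: the fraction of time a long simulated path spends in
# state 0 should be close to the stationary probability of state 0.
X = mc.simulate(ts_length=100_000, init=0, random_state=42)
print("time average in state 0:     ", np.mean(X == 0))
print("stationary probability of 0: ", psi_star[0])
```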
- Reading: Chapter 4 (Pages 105-127)
- If $e, c \in i\mathbb{R}^X$ and $P$ is monotone increasing, then $v^*, h^* \in i\mathbb{R}^X$ (Exercise 4.1.13)
- Learning Objectives:
- Comprehend the concept of optimal stopping and its use in decision-making.
- Explore examples of optimal stopping in firm valuation with exit.
- Understand the role of continuation values in optimal stopping problems.
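A minimal sketch of continuation-value iteration for a firm-exit style stopping problem, tying together the exit/continuation structure above; all primitives (scrap value, payoffs, transition matrix, discount factor) are illustrative assumptions:

```python
import numpy as np

beta = 0.95
s = 10.0                                   # exit (scrap) value
pi = np.array([1.0, 2.0, 3.0])             # flow payoff from continuing, by state
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

def C(h):
    """Continuation-value operator: (Ch)(x) = pi(x) + beta * E[max(s, h(X')) | x]."""
    return pi + beta * P @ np.maximum(s, h)

h = np.zeros(3)
while True:                                # C is a contraction of modulus beta
    h_new = C(h)
    if np.max(np.abs(h_new - h)) < 1e-8:
        break
    h = h_new

v_star = np.maximum(s, h)                  # value function: stop value vs continuation value
stop = s >= h                              # greedy stopping rule
print("continuation values:", h)
print("value function:     ", v_star)
print("stop in each state?:", stop)
```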
- Reading: Chapter 5 (Pages 128-178)
- Learning Objectives:
- Define Markov decision processes and identify their key components.
- Apply MDPs to problems like optimal inventories and savings with labor income.
- Learn about Q-factors and their use in dynamic programming.
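A minimal sketch of Q-factor iteration for a small finite MDP, illustrating the Q-factor bullet above; the reward array and transition kernel are randomly generated illustrative data, not a model from the text:

```python
import numpy as np

n_states, n_actions, beta = 3, 2, 0.95
rng = np.random.default_rng(0)

r = rng.uniform(0.0, 1.0, size=(n_states, n_actions))        # r[x, a]: reward
P = rng.uniform(0.0, 1.0, size=(n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)                            # P[x, a, :] is a distribution

def S(Q):
    """Q-factor Bellman operator: (SQ)(x, a) = r(x, a) + beta * E[max_a' Q(X', a')]."""
    return r + beta * (P @ Q.max(axis=1))

Q = np.zeros((n_states, n_actions))
while True:
    Q_new = S(Q)
    if np.max(np.abs(Q_new - Q)) < 1e-8:
        break
    Q = Q_new

v_star = Q.max(axis=1)           # optimal value function
sigma_star = Q.argmax(axis=1)    # optimal (greedy) policy
print("v* =", v_star)
print("optimal policy =", sigma_star)
```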
- Reading: Chapter 6 (Pages 181-211)
- Learning Objectives:
- Understand the concept of stochastic discounting and its implications for valuation.
- Learn about the spectral radius condition and its testing methods.
- Apply MDPs with state-dependent discounting in inventory management.
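A minimal sketch of testing the spectral radius condition under state-dependent discounting; the discount factors and transition matrix are illustrative assumptions (discounting exceeds one in one state, yet the spectral radius is still below one):

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
beta = np.array([1.05, 0.85])     # state-dependent discounting; > 1 in state 0

L = np.diag(beta) @ P             # L[x, x'] = beta(x) * P[x, x']
rho = np.max(np.abs(np.linalg.eigvals(L)))
print("spectral radius of L:", rho)
print("valuation well defined (rho < 1)?", rho < 1)

# When rho(L) < 1, the Neumann series (I - L)^{-1} = I + L + L^2 + ... converges,
# so the discounted valuation solves v = r + L v.
if rho < 1:
    r = np.array([1.0, 2.0])
    v = np.linalg.solve(np.eye(2) - L, r)
    print("v =", v)
```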
- Reading: Chapter 7 (Pages 212-244)
- Learning Objectives:
- Explore the significance of moving beyond contraction maps in dynamic programming.
- Understand the impact of recursive preferences on optimal savings, including risk-sensitive preferences.
- Learn about Epstein-Zin preferences and their role in dynamic programming.
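A minimal sketch of iterating a risk-sensitive (entropic) value recursion, one form of the non-expected-utility recursions this chapter discusses; the exact aggregator and all primitives here are illustrative assumptions rather than the text's specification:

```python
import numpy as np

beta, theta = 0.95, -2.0           # theta < 0 gives a risk-averse adjustment
r = np.array([1.0, 2.0, 3.0])
P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])

def K(v):
    """Koopmans-style update with an entropic certainty equivalent of next-period value."""
    certainty_equiv = np.log(P @ np.exp(theta * v)) / theta
    return r + beta * certainty_equiv

v = np.zeros(3)
while True:                        # the entropic certainty equivalent is nonexpansive,
    v_new = K(v)                   # so K is a beta-contraction in the sup norm
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new

print("risk-sensitive values:", v)
print("risk-neutral values:  ", np.linalg.solve(np.eye(3) - beta * P, r))
```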
- Reading: Chapter 8 (Pages 245-290)
- Learning Objectives:
- Define recursive decision processes and understand their properties.
- Differentiate between contracting and eventually contracting RDPs.
- Explore applications of RDPs in risk-sensitive decision-making and adversarial agents.
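A minimal sketch of the eventually-contracting idea: an affine operator whose matrix has operator norm above one but spectral radius below one, so some iterate of the operator is a contraction and successive approximation still converges. The matrix and intercept are illustrative assumptions:

```python
import numpy as np

A = np.array([[0.5, 2.0],
              [0.0, 0.5]])
b = np.array([1.0, 1.0])

print("sup operator norm of A:", np.max(np.abs(A).sum(axis=1)))          # 2.5 > 1: not a contraction
print("spectral radius of A:  ", np.max(np.abs(np.linalg.eigvals(A))))   # 0.5 < 1: eventually contracting

# Because the spectral radius is below one, some power of T v = A v + b is a
# contraction, and successive approximation still converges to the unique fixed point.
v = np.zeros(2)
for _ in range(500):
    v = A @ v + b
print("fixed point via iteration:", v)
print("fixed point via solve:    ", np.linalg.solve(np.eye(2) - A, b))
```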
- Reading: Chapter 9 (Pages 291-305)
- Learning Objectives:
- Understand abstract dynamic programs and their generalization of dynamic programming.
- Learn about max-optimality and min-optimality in abstract dynamic programs.
- Relate abstract dynamic programs to recursive decision processes.
- Reading: Chapter 10 (Pages 306-337)
- Learning Objectives:
- Understand the basics of continuous time Markov chains and their application in dynamic programming.
- Learn about continuous time Markov decision processes and their construction.
- Explore the application of continuous time models to job search problems.
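A minimal sketch of a continuous time Markov chain built from an intensity (generator) matrix, with the transition semigroup computed as a matrix exponential; the generator below is an illustrative assumption:

```python
import numpy as np
from scipy.linalg import expm

# Intensity matrix: off-diagonal entries are jump rates, rows sum to zero (illustrative).
Q = np.array([[-0.5,  0.5,  0.0],
              [ 0.2, -0.6,  0.4],
              [ 0.0,  0.3, -0.3]])

for t in (0.1, 1.0, 10.0):
    P_t = expm(t * Q)                       # transition semigroup P_t = exp(t Q)
    print(f"t = {t}: row sums of P_t =", P_t.sum(axis=1))

# As t grows, every row of P_t approaches the stationary distribution psi*,
# which satisfies psi* Q = 0.
print("P_t for large t:")
print(expm(100.0 * Q))
```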