
Commit 2c9c9b1

Merge pull request #196 from QuantEcon/remove_gviz
Remove unused graphviz from eigen_2
2 parents 264cb80 + dc9339b commit 2c9c9b1

1 file changed: +30 −38 lines changed

lectures/eigen_II.md (+30 −38)
@@ -11,7 +11,7 @@ kernelspec:
   name: python3
 ---

-+++ {"user_expressions": []}
+

 # Spectral Theory

@@ -27,19 +27,12 @@ In addition to what's in Anaconda, this lecture will need the following librarie
 ```{code-cell} ipython3
 :tags: [hide-output]

-!pip install graphviz quantecon
-```
-
-```{admonition} graphviz
-:class: warning
-If you are running this lecture locally it requires [graphviz](https://www.graphviz.org)
-to be installed on your computer. Installation instructions for graphviz can be found
-[here](https://www.graphviz.org/download/)
+!pip install quantecon
 ```

 In this lecture we will begin with the foundational concepts in spectral theory.

-Then we will explore the Perron-Frobenius Theorem and the Neumann Series Lemma, and connect them to applications in Markov chains and networks.
+Then we will explore the Perron-Frobenius Theorem and the Neumann Series Lemma, and connect them to applications in Markov chains and networks.

 We will use the following imports:

@@ -48,7 +41,6 @@ import matplotlib.pyplot as plt
 import numpy as np
 from numpy.linalg import eig
 import scipy as sp
-import graphviz as gv
 import quantecon as qe
 ```

@@ -119,7 +111,7 @@ In other words, if $w$ is a left eigenvector of matrix A, then $A^T w = \lambda
 This hints at how to compute left eigenvectors

 ```{code-cell} ipython3
-A = np.array([[3, 2],
+A = np.array([[3, 2],
               [1, 4]])

 # Compute right eigenvectors and eigenvalues
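
The hunk above stops inside the code cell that computes the right eigenvectors. The surrounding context notes that a left eigenvector $w$ of $A$ satisfies $A^T w = \lambda w$; a minimal sketch of that computation, reusing the matrix from the hunk (the check itself is not taken from the lecture file):

```python
import numpy as np
from numpy.linalg import eig

A = np.array([[3, 2],
              [1, 4]])

# Left eigenvectors of A are right eigenvectors of A.T
λ, w = eig(A.T)

# Verify w_i A = λ_i w_i for each eigenpair
for i in range(len(λ)):
    print(np.allclose(w[:, i] @ A, λ[i] * w[:, i]))  # True for both pairs
```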
@@ -174,7 +166,7 @@ $A$ is a nonnegative square matrix.

 If a matrix $A \geq 0$ then,

-1. the dominant eigenvalue of $A$, $r(A)$, is real-valued and nonnegative.
+1. the dominant eigenvalue of $A$, $r(A)$, is real-valued and nonnegative.
 2. for any other eigenvalue (possibly complex) $\lambda$ of $A$, $|\lambda| \leq r(A)$.
 3. we can find a nonnegative and nonzero eigenvector $v$ such that $Av = r(A)v$.

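
A quick numerical sketch of claims 1–3 for a nonnegative matrix; the matrix below is only an illustration and is not one used in the lecture:

```python
import numpy as np
from numpy.linalg import eig

A = np.array([[1, 2],
              [3, 4]])                 # any small nonnegative matrix will do

evals, evecs = eig(A)
i = np.argmax(np.abs(evals))           # index of the dominant eigenvalue
r, v = evals[i], evecs[:, i]

print(np.isreal(r) and r.real >= 0)            # claim 1: r(A) is real and nonnegative
print(np.all(np.abs(evals) <= np.abs(r)))      # claim 2: |λ| <= r(A) for every eigenvalue
v = v / v[np.argmax(np.abs(v))]                # rescale the eigenvector so it is nonnegative
print(np.all(v >= 0) and np.allclose(A @ v, r * v))   # claim 3
```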
@@ -204,8 +196,8 @@ Now let's consider examples for each case.
 Consider the following irreducible matrix A:

 ```{code-cell} ipython3
-A = np.array([[0, 1, 0],
-              [.5, 0, .5],
+A = np.array([[0, 1, 0],
+              [.5, 0, .5],
               [0, 1, 0]])
 ```

@@ -228,8 +220,8 @@ Now we can go through our checklist to verify the claims of the Perron-Frobenius
 Consider the following primitive matrix B:

 ```{code-cell} ipython3
-B = np.array([[0, 1, 1],
-              [1, 0, 1],
+B = np.array([[0, 1, 1],
+              [1, 0, 1],
               [1, 1, 0]])

 np.linalg.matrix_power(B, 2)
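
The power computed at the end of this hunk is what makes primitivity easy to check: $B$ is primitive because $B^2$ is already strictly positive. A one-line sketch of that check (not part of the diff itself):

```python
import numpy as np

B = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

# Primitive: some power of B (here the square) has no zero entries
print((np.linalg.matrix_power(B, 2) > 0).all())   # True
```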
@@ -253,7 +245,7 @@ np.round(dominant_eigenvalue, 2)
 eig(B)
 ```

-+++ {"user_expressions": []}
+

 Now let's verify the claims of the Perron-Frobenius Theorem for the primitive matrix B:

@@ -298,7 +290,7 @@ def check_convergence(M):
     n_list = [1, 10, 100, 1000, 10000]

     for n in n_list:
-
+
         # Compute (A/r)^n
         M_n = np.linalg.matrix_power(M/r, n)

@@ -313,8 +305,8 @@ def check_convergence(M):
 A1 = np.array([[1, 2],
                [1, 4]])

-A2 = np.array([[0, 1, 1],
-               [1, 0, 1],
+A2 = np.array([[0, 1, 1],
+               [1, 0, 1],
                [1, 1, 0]])

 A3 = np.array([[0.971, 0.029, 0.1, 1],
@@ -336,8 +328,8 @@ The convergence is not observed in cases of non-primitive matrices.
 Let's go through an example

 ```{code-cell} ipython3
-B = np.array([[0, 1, 1],
-              [1, 0, 0],
+B = np.array([[0, 1, 1],
+              [1, 0, 0],
               [1, 0, 0]])

 # This shows that the matrix is not primitive
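
For the matrix in this hunk the situation is reversed: zeros reappear in every power, so no power is strictly positive. A small sketch scanning the first few powers (the cutoff of 10 is arbitrary):

```python
import numpy as np

B = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])

# No power of this B is strictly positive, so B is not primitive
for k in range(1, 11):
    print(k, (np.linalg.matrix_power(B, k) > 0).all())   # always False
```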
@@ -358,7 +350,7 @@ In fact we have already seen the theorem in action before in {ref}`the markov ch
 (spec_markov)=
 #### Example 3: Connection to Markov chains

-We are now prepared to bridge the languages spoken in the two lectures.
+We are now prepared to bridge the languages spoken in the two lectures.

 A primitive matrix is both irreducible (or strongly connected in the language of graph) and aperiodic.

@@ -410,22 +402,22 @@

 This is proven in {cite}`sargent2023economic` and a nice discussion can be found [here](https://math.stackexchange.com/questions/2433997/can-all-matrices-be-decomposed-as-product-of-right-and-left-eigenvector).

-In the formula $\lambda_i$ is an eigenvalue of $P$ and $v_i$ and $w_i$ are the right and left eigenvectors corresponding to $\lambda_i$.
+In the formula $\lambda_i$ is an eigenvalue of $P$ and $v_i$ and $w_i$ are the right and left eigenvectors corresponding to $\lambda_i$.

 Premultiplying $P^t$ by arbitrary $\psi \in \mathscr{D}(S)$ and rearranging now gives

 $$
 \psi P^t-\psi^*=\sum_{i=1}^{n-1} \lambda_i^t \psi v_i w_i^{\top}
 $$

-Recall that eigenvalues are ordered from smallest to largest from $i = 1 ... n$.
+Recall that eigenvalues are ordered from smallest to largest from $i = 1 ... n$.

 As we have seen, the largest eigenvalue for a primitive stochastic matrix is one.

-This can be proven using [Gershgorin Circle Theorem](https://en.wikipedia.org/wiki/Gershgorin_circle_theorem),
+This can be proven using [Gershgorin Circle Theorem](https://en.wikipedia.org/wiki/Gershgorin_circle_theorem),
 but it is out of the scope of this lecture.

-So by the statement (6) of Perron-Frobenius Theorem, $\lambda_i<1$ for all $i<n$, and $\lambda_n=1$ when $P$ is primitive (strongly connected and aperiodic).
+So by the statement (6) of Perron-Frobenius Theorem, $\lambda_i<1$ for all $i<n$, and $\lambda_n=1$ when $P$ is primitive (strongly connected and aperiodic).


 Hence, after taking the Euclidean norm deviation, we obtain
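
The hunk ends just before the norm bound it leads to. A compact sketch of that rate statement, using a hypothetical 2-state primitive stochastic matrix (neither $P$ nor $\psi$ below comes from the lecture): the deviation $\|\psi P^t - \psi^*\|$ shrinks in proportion to $|\lambda_2|^t$.

```python
import numpy as np
from numpy.linalg import eig

# Hypothetical primitive stochastic matrix (rows sum to one)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

evals, _ = eig(P)
λ2 = sorted(np.abs(evals))[-2]        # second-largest eigenvalue modulus (0.7 here)

ψ = np.array([1.0, 0.0])              # an arbitrary initial distribution
ψ_star = np.array([2/3, 1/3])         # stationary distribution of this P

for t in [1, 5, 10, 20]:
    dev = np.linalg.norm(ψ @ np.linalg.matrix_power(P, t) - ψ_star)
    print(t, dev, λ2**t)              # the deviation decays like λ2**t
```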
@@ -438,7 +430,7 @@ Thus, the rate of convergence is governed by the modulus of the second largest e


 (la_neumann)=
-## The Neumann Series Lemma
+## The Neumann Series Lemma

 ```{index} single: Neumann's Lemma
 ```
@@ -450,12 +442,12 @@ many applications in economics.

 Here's a fundamental result about series that you surely know:

-If $a$ is a number and $|a| < 1$, then
+If $a$ is a number and $|a| < 1$, then

 ```{math}
 :label: gp_sum
-
-\sum_{k=0}^{\infty} a^k =\frac{1}{1-a} = (1 - a)^{-1}
+
+\sum_{k=0}^{\infty} a^k =\frac{1}{1-a} = (1 - a)^{-1}

 ```
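
The scalar identity labelled `gp_sum` above can be checked numerically in a couple of lines (the value of $a$ is arbitrary, subject to $|a| < 1$):

```python
import numpy as np

a = 0.5                            # any number with |a| < 1
k = np.arange(100)
print(np.sum(a**k), 1 / (1 - a))   # both are (approximately) 2.0
```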

@@ -476,7 +468,7 @@ Using matrix algebra we can conclude that the solution to this system of equatio

 ```{math}
 :label: neumann_eqn
-
+
 x^{*} = (I-A)^{-1}b

 ```
@@ -493,7 +485,7 @@ The following is a fundamental result in functional analysis that generalizes

 Let $A$ be a square matrix and let $A^k$ be the $k$-th power of $A$.

-Let $r(A)$ be the dominant eigenvector or as it is commonly called the *spectral radius*, defined as $\max_i |\lambda_i|$, where
+Let $r(A)$ be the dominant eigenvector or as it is commonly called the *spectral radius*, defined as $\max_i |\lambda_i|$, where

 * $\{\lambda_i\}_i$ is the set of eigenvalues of $A$ and
 * $|\lambda_i|$ is the modulus of the complex number $\lambda_i$
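
Connecting the two context blocks above, a sketch that computes the spectral radius of a small assumed matrix and then recovers $x^{*} = (I-A)^{-1}b$ both directly and from a truncated Neumann sum (the matrix, the vector, and the truncation point are all illustrative, not taken from the lecture):

```python
import numpy as np

A = np.array([[0.1, 0.2],
              [0.3, 0.4]])              # assumed matrix with spectral radius < 1
b = np.array([1.0, 2.0])                # assumed vector

r = max(abs(np.linalg.eigvals(A)))      # spectral radius, max_i |λ_i|
print(r)                                # < 1, so the Neumann series applies

x_direct = np.linalg.solve(np.eye(2) - A, b)
x_series = sum(np.linalg.matrix_power(A, k) @ b for k in range(50))
print(np.allclose(x_direct, x_series))  # True
```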
@@ -517,7 +509,7 @@ r = max(abs(λ) for λ in evals) # compute spectral radius
 print(r)
 ```

-The spectral radius $r(A)$ obtained is less than 1.
+The spectral radius $r(A)$ obtained is less than 1.

 Thus, we can apply the Neumann Series lemma to find $(I-A)^{-1}$.

@@ -541,7 +533,7 @@ for i in range(50):
 Let's check equality between the sum and the inverse methods.

 ```{code-cell} ipython3
-np.allclose(A_sum, B_inverse)
+np.allclose(A_sum, B_inverse)
 ```

 Although we truncate the infinite sum at $k = 50$, both methods give us the same
@@ -566,11 +558,11 @@ The following table describes how output is distributed within the economy:
 | Industry | $x_2$ | 0.2$x_1$ | 0.4$x_2$ |0.3$x_3$ | 5 |
 | Service | $x_3$ | 0.2$x_1$ | 0.5$x_2$ |0.1$x_3$ | 12 |

-The first row depicts how agriculture's total output $x_1$ is distributed
+The first row depicts how agriculture's total output $x_1$ is distributed

 * $0.3x_1$ is used as inputs within agriculture itself,
 * $0.2x_2$ is used as inputs by the industry sector to produce $x_2$ units,
-* $0.3x_3$ is used as inputs by the service sector to produce $x_3$ units and
+* $0.3x_3$ is used as inputs by the service sector to produce $x_3$ units and
 * 4 units is the external demand by consumers.

 We can transform this into a system of linear equations for the 3 sectors as
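
The hunk stops just before those equations. Reading the coefficients off the table and the bullet list (the agriculture row's shares are the ones given in the bullets), the three balance equations take the form $x = Ax + d$, which can be solved as in the Neumann Series Lemma section via $x^{*} = (I-A)^{-1}d$. A sketch of that setup:

```python
import numpy as np

# Input coefficients by sector (agriculture, industry, service)
A = np.array([[0.3, 0.2, 0.3],
              [0.2, 0.4, 0.3],
              [0.2, 0.5, 0.1]])
d = np.array([4, 5, 12])          # external demand by consumers

# x = A x + d  =>  x* = (I - A)^{-1} d
x_star = np.linalg.solve(np.eye(3) - A, d)
print(x_star)
```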
