diff --git a/lectures/greek_square.md b/lectures/greek_square.md
index d7c13887..3ae37d38 100644
--- a/lectures/greek_square.md
+++ b/lectures/greek_square.md
@@ -4,25 +4,31 @@ jupytext:
     extension: .md
     format_name: myst
     format_version: 0.13
-    jupytext_version: 1.14.4
+    jupytext_version: 1.16.2
 kernelspec:
   display_name: Python 3 (ipykernel)
   language: python
   name: python3
 ---
 
-+++ {"user_expressions": []}
-
 # Computing Square Roots
 
 ## Introduction
 
-This lectures provides an example of **invariant subspace** methods for analyzing linear difference equations.
-
-These methods are applied throughout applied economic dynamics, for example, in this QuantEcon lecture {doc}`money financed government deficits and inflation <money_inflation>`
+This lecture can be viewed as a sequel to {doc}`eigen_I`.
+
+It provides an example of how eigenvectors isolate *invariant subspaces* that help construct and analyze solutions of linear difference equations.
+
+When vector $x_t$ starts in an invariant subspace, iterating the difference equation keeps $x_{t+j}$
+in that subspace for all $j \geq 1$.
+
+Invariant subspace methods are used throughout applied economic dynamics, for example, in the lecture {doc}`money_inflation`.
 
-Our approach in this lecture is to illustrate the method with an ancient example, one that ancient Greek mathematicians used to compute square roots of positive integers.
+Our approach here is to illustrate the method with an ancient example, one that ancient Greek mathematicians used to compute square roots of positive integers.
+
+## Perfect squares and irrational numbers
 
 An integer is called a **perfect square** if its square root is also an integer.
 
@@ -38,19 +44,22 @@ The ancient Greeks invented an algorithm to compute square roots of integers, in
 
 Their method involved
 
- * computing a particular sequence of integers $\{y_t\}_{t=0}^\infty$
+ * computing a particular sequence of integers $\{y_t\}_{t=0}^\infty$;
 
- * computing $\lim_{t \rightarrow \infty} \left(\frac{y_{t+1}}{y_t}\right) = \bar r$
+ * computing $\lim_{t \rightarrow \infty} \left(\frac{y_{t+1}}{y_t}\right) = \bar r$;
 
- * deducing the desired square root from $\bar r$
+ * deducing the desired square root from $\bar r$.
 
 In this lecture, we'll describe this method.
 
 We'll also use invariant subspaces to describe variations on this method that are faster.
 
-## Primer on second order linear difference equation
+## Second-order linear difference equations
+
+Before describing how the ancient Greeks computed square roots, we'll provide a quick introduction
+to second-order linear difference equations.
 
-Consider the following second-order linear difference equation
+We'll study the following second-order linear difference equation
 
 $$
 y_t = a_1 y_{t-1} + a_2 y_{t-2}, \quad t \geq 0
@@ -58,13 +67,31 @@ $$ (eq:2diff1)
 
 where $(y_{-1}, y_{-2})$ is a pair of given initial conditions.
 
-We want to find expressions for $y_t, t \geq 0$ as functions of the initial conditions $(y_{-1}, y_{-2})$:
+Equation {eq}`eq:2diff1` is actually an infinite number of linear equations in the sequence
+$\{y_t\}_{t=0}^\infty$.
+
+There is one equation for each $t = 0, 1, 2, \ldots$.
+
+We could follow an approach taken in the lecture on {doc}`present values` and stack all of these equations into a single matrix equation that we would then solve by using matrix inversion.
+
+```{note}
+In the present instance, the matrix equation would multiply a countably infinite dimensional square matrix by a countably infinite dimensional vector. With some qualifications, matrix multiplication and inversion tools apply to such an equation.
+```
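+To make the note concrete, here is a small finite-horizon sketch of the stacking idea, truncated at $T$ equations (we won't use it in what follows); the values $a_1 = a_2 = 1/2$ and $y_{-1} = y_{-2} = 1$ are hypothetical choices for illustration only.
+
+```{code-cell} ipython3
+import numpy as np
+
+T, a_1, a_2 = 10, 0.5, 0.5
+y_m1, y_m2 = 1.0, 1.0            # initial conditions y_{-1}, y_{-2}
+
+# Stack the equations y_t - a_1 y_{t-1} - a_2 y_{t-2} = b_t for t = 0, ..., T-1
+A = np.eye(T)
+for t in range(1, T):
+    A[t, t-1] = -a_1
+for t in range(2, T):
+    A[t, t-2] = -a_2
+
+# Known terms contributed by the initial conditions
+b = np.zeros(T)
+b[0] = a_1 * y_m1 + a_2 * y_m2   # t = 0 equation
+b[1] = a_2 * y_m1                # t = 1 equation
+
+y = np.linalg.solve(A, b)        # y_0, y_1, ..., y_{T-1}
+print(y)
+```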
+
+But we won't pursue that approach here.
+
+Instead, we'll seek a time-invariant function that *solves* our difference equation, meaning
+that it provides a formula for a $\{y_t\}_{t=0}^\infty$ sequence that satisfies
+equation {eq}`eq:2diff1` for each $t \geq 0$.
+
+We seek expressions for $y_t, t \geq 0$, as functions of the initial conditions $(y_{-1}, y_{-2})$:
 
 $$
-y_t = g((y_{-1}, y_{-2});t), \quad t \geq 0
+y_t = g((y_{-1}, y_{-2});t), \quad t \geq 0.
 $$ (eq:2diff2)
 
-We call such a function $g$ a **solution** of the difference equation {eq}`eq:2diff1`.
+We call such a function $g$ a *solution* of the difference equation {eq}`eq:2diff1`.
 
 One way to discover a solution is to use a guess and verify method.
 
@@ -81,7 +108,7 @@ For initial conditions that satisfy {eq}`eq:2diff3`, equation {eq}`eq:2diff1` implies that
 
 $$
-y_0 = \left(a_1 + \frac{a_2}{\delta}\right) y_{-1}
+y_0 = \left(a_1 + \frac{a_2}{\delta}\right) y_{-1}.
 $$ (eq:2diff4)
 
 We want
 
@@ -90,16 +117,16 @@ $$
 \left(a_1 + \frac{a_2}{\delta}\right) = \delta
 $$ (eq:2diff5)
 
-which we can rewrite as the **characteristic equation**
+which we can rewrite as the *characteristic equation*
 
 $$
-\delta^2 - a_1 \delta - a_2 = 0
+\delta^2 - a_1 \delta - a_2 = 0.
 $$ (eq:2diff6)
 
 Applying the quadratic formula to solve for the roots of {eq}`eq:2diff6` we find that
 
 $$
-\delta = \frac{ a_1 \pm \sqrt{a_1^2 + 4 a_2}}{2}
+\delta = \frac{ a_1 \pm \sqrt{a_1^2 + 4 a_2}}{2}.
 $$ (eq:2diff7)
 
 For either of the two $\delta$'s that satisfy equation {eq}`eq:2diff7`,
 
@@ -109,9 +136,13 @@ $$
 y_t = \delta^t y_0 , \forall t \geq 0
 $$ (eq:2diff8)
 
-and $y_0 = a_1 y_{-1} + a_2 y_{-2}$
+provided that we set
 
-The **general** solution of difference equation {eq}`eq:2diff1` takes the form
+$$
+y_0 = \delta y_{-1} .
+$$
+
+The *general* solution of difference equation {eq}`eq:2diff1` takes the form
 
 $$
 y_t = \eta_1 \delta_1^t + \eta_2 \delta_2^t
 
@@ -137,33 +168,34 @@ If we choose $(y_{-1}, y_{-2})$ to set $(\eta_1, \eta_2) = (1, 0)$, then $y_t =
 
 If we choose $(y_{-1}, y_{-2})$ to set $(\eta_1, \eta_2) = (0, 1)$, then $y_t = \delta_2^t$ for all $t \geq 0$.
 
+Soon we'll relate the preceding calculations to components of an eigendecomposition of a transition matrix that represents difference equation {eq}`eq:2diff1` in a very convenient way.
+
+We'll turn to that after we describe how the ancient Greeks figured out how to compute square roots of positive integers that are not perfect squares.
 
-## Setup
 
-Let $\sigma$ be a positive integer greater than $1$
+## Algorithm of the Ancient Greeks
 
-So $\sigma \in {\mathcal I} \equiv \{2, 3, \ldots \}$
+Let $\sigma$ be a positive integer greater than $1$.
+
+So $\sigma \in {\mathcal I} \equiv \{2, 3, \ldots \}$.
 
 We want an algorithm to compute the square root of $\sigma \in {\mathcal I}$.
 
-If $\sqrt{\sigma} \in {\mathcal I}$, $\sigma $ is said to be a **perfect square**.
+If $\sqrt{\sigma} \in {\mathcal I}$, $\sigma $ is said to be a *perfect square*.
 
 If $\sqrt{\sigma} \not\in {\mathcal I}$, it turns out that it is irrational.
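+As a quick check of these definitions, here is a minimal perfect-square test; the helper `is_perfect_square` is our addition, not part of the ancient algorithm.
+
+```{code-cell} ipython3
+from math import isqrt
+
+def is_perfect_square(σ):
+    "Check whether the positive integer σ is a perfect square."
+    r = isqrt(σ)
+    return r * r == σ
+
+[(σ, is_perfect_square(σ)) for σ in (2, 3, 4, 9, 10, 16)]
+```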
 Ancient Greeks used a recursive algorithm to compute square roots of integers that are not perfect squares.
 
-The algorithm iterates on a second order linear difference equation in the sequence $\{y_t\}_{t=0}^\infty$:
+The algorithm iterates on a second-order linear difference equation in the sequence $\{y_t\}_{t=0}^\infty$:
 
 $$
 y_{t} = 2 y_{t-1} - (1 - \sigma) y_{t-2}, \quad t \geq 0
 $$ (eq:second_order)
 
-together with a pair of integers that are initial conditions for $y_{-1}, y_{-2}$.
-
-First, we'll deploy some techniques for solving difference equations that are also deployed in this QuantEcon lecture about the multiplier-accelerator model:
-
-
+together with a pair of integers that are initial conditions for $y_{-1}, y_{-2}$.
+
+First, we'll deploy some techniques for solving difference equations that are also deployed in {doc}`dynam:samuelson`.
 
 The characteristic equation associated with difference equation {eq}`eq:second_order` is
 
@@ -171,11 +203,9 @@ $$
 c(x) \equiv x^2 - 2 x + (1 - \sigma) = 0
 $$ (eq:cha_eq0)
 
-+++
+(Notice how this is an instance of equation {eq}`eq:2diff6` above.)
 
-(This is an instance of equation {eq}`eq:2diff6` above.)
-
-If we factor the right side of the equation {eq}`eq:cha_eq0`, we obtain
+Factoring the right side of equation {eq}`eq:cha_eq0`, we obtain
 
 $$
 c(x)= (x - \lambda_1) (x-\lambda_2) = 0
 $$
 
@@ -197,11 +227,11 @@ By applying the quadratic formula to solve for the roots of the characteristic equ
 {eq}`eq:cha_eq0`, we find that
 
 $$
-\lambda_1 = 1 + \sqrt{\sigma}, \quad \lambda_2 = 1 - \sqrt{\sigma}
+\lambda_1 = 1 + \sqrt{\sigma}, \quad \lambda_2 = 1 - \sqrt{\sigma}.
 $$ (eq:secretweapon)
 
-Formulas {eq}`eq:secretweapon` indicate that $\lambda_1$ and $\lambda_2$ are both simple functions
-of a single variable, namely, $\sqrt{\sigma}$, the object that some Ancient Greeks wanted to compute.
+Formulas {eq}`eq:secretweapon` indicate that $\lambda_1$ and $\lambda_2$ are each functions
+of a single variable, namely, $\sqrt{\sigma}$, the object that we, along with some ancient Greeks, want to compute.
 
 Ancient Greeks had an indirect way of exploiting this fact to compute square roots of a positive integer.
 
@@ -214,7 +244,7 @@ $$
 y_t = \lambda_1^t \eta_1 + \lambda_2^t \eta_2
 $$
 
-where $\eta_1$ and $\eta_2$ are chosen to satisfy the prescribed initial conditions $y_{-1}, y_{-2}$:
+where $\eta_1$ and $\eta_2$ are chosen to satisfy prescribed initial conditions $y_{-1}, y_{-2}$:
 
 $$
 \begin{aligned}
@@ -225,17 +255,17 @@ $$(eq:leq_sq)
 
 System {eq}`eq:leq_sq` of simultaneous linear equations will play a big role in the remainder of this lecture.
 
-Since $\lambda_1 = 1 + \sqrt{\sigma} > 1 > \lambda_2 = 1 - \sqrt{\sigma} $
-it follows that for **almost all** (but not all) initial conditions
+Since $\lambda_1 = 1 + \sqrt{\sigma} > 1 > \lambda_2 = 1 - \sqrt{\sigma} $,
+it follows that for *almost all* (but not all) initial conditions
 
 $$
-\lim_{t \rightarrow \infty} \left(\frac{y_{t+1}}{y_t}\right) = 1 + \sqrt{\sigma}
+\lim_{t \rightarrow \infty} \left(\frac{y_{t+1}}{y_t}\right) = 1 + \sqrt{\sigma}.
 $$
 
 Thus,
 
 $$
-\sqrt{\sigma} = \lim_{t \rightarrow \infty} \left(\frac{y_{t+1}}{y_t}\right) - 1
+\sqrt{\sigma} = \lim_{t \rightarrow \infty} \left(\frac{y_{t+1}}{y_t}\right) - 1.
 $$
 
 However, notice that if $\eta_1 = 0$, then
 
@@ -247,7 +277,7 @@ $$
 so that
 
 $$
-\sqrt{\sigma} = 1 - \lim_{t \rightarrow \infty} \left(\frac{y_{t+1}}{y_t}\right)
+\sqrt{\sigma} = 1 - \lim_{t \rightarrow \infty} \left(\frac{y_{t+1}}{y_t}\right).
 $$
 
 Actually, if $\eta_1 =0$, it follows that
 
@@ -270,40 +300,38 @@ so again, convergence is immediate, and we have no need to compute a limit.
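+Before formalizing these ideas, let's preview the Greek algorithm with a few lines of Python; the choices $\sigma = 2$ and $y_{-2} = y_{-1} = 1$ are illustrative only, and the fuller implementation appears below.
+
+```{code-cell} ipython3
+# Iterate y_t = 2 y_{t-1} + (σ - 1) y_{t-2} and inspect y_t / y_{t-1} - 1,
+# which should approach sqrt(2)
+s = 2                     # the σ whose square root we want
+y_prev, y_curr = 1, 1     # y_{-2}, y_{-1}
+for t in range(20):
+    y_prev, y_curr = y_curr, 2 * y_curr + (s - 1) * y_prev
+print(f"approximation: {y_curr / y_prev - 1:.10f}")
+print(f"true value:    {2 ** 0.5:.10f}")
+```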
 System {eq}`eq:leq_sq` of simultaneous linear equations can be used in various ways.
 
- * we can take $y_{-1}, y_{-2}$ as given initial conditions and solve for $\eta_1, \eta_2$
+ * we can take $y_{-1}, y_{-2}$ as given initial conditions and solve for $\eta_1, \eta_2$;
 
- * we can instead take $\eta_1, \eta_2$ as given and solve for initial conditions $y_{-1}, y_{-2}$
+ * we can instead take $\eta_1, \eta_2$ as given and solve for initial conditions $y_{-1}, y_{-2}$.
 
 Notice how we used the second approach above when we set $\eta_1, \eta_2$ either to $(0, 1)$ or to $(1, 0)$.
 
-In taking this second approach, we were in effect finding an **invariant subspace** of ${\bf R}^2$.
+In taking this second approach, we constructed an *invariant subspace* of ${\bf R}^2$.
 
 Here is what is going on.
 
-For $ t \geq 0$ and for most pairs of initial conditions $(y_{-1}, y_{-2}) \in {\bf R}^2$ for equation {eq}`eq:second_order', $y_t$ can be expressed as a linear combination of $y_{t-1}$ and $y_{t-2}$.
+For $ t \geq 0$ and for most pairs of initial conditions $(y_{-1}, y_{-2}) \in {\bf R}^2$ for equation {eq}`eq:second_order`, $y_t$ can be expressed as a linear combination of $y_{t-1}$ and $y_{t-2}$.
 
 But for some special initial conditions $(y_{-1}, y_{-2}) \in {\bf R}^2$, $y_t$ can be expressed as a linear function of $y_{t-1}$ only.
 
 These special initial conditions require that $y_{-1}$ be a linear function of $y_{-2}$.
 
-We'll study these special initial conditions soon. But first let's write some Python code to iterate on equation {eq}`eq:second_order` starting from an arbitrary $(y_{-1}, y_{-2}) \in {\bf R}^2$.
+We'll study these special initial conditions soon.
+
+But first let's write some Python code to iterate on equation {eq}`eq:second_order` starting from an arbitrary $(y_{-1}, y_{-2}) \in {\bf R}^2$.
 
 ## Implementation
 
 We now implement the above algorithm to compute the square root of $\sigma$.
 
-
 In this lecture, we use the following imports:
 
 ```{code-cell} ipython3
-:tags: []
-
 import numpy as np
+import matplotlib.pyplot as plt
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 def solve_λs(coefs):
     # Calculate the roots using numpy.roots
     λs = np.roots(coefs)
@@ -359,16 +387,12 @@ print(f"sqrt({σ}) is approximately {sqrt_σ:.5f} (error: {dev:.5f})")
 
 Now we consider cases where $(\eta_1, \eta_2) = (0, 1)$ and $(\eta_1, \eta_2) = (1, 0)$.
 
 ```{code-cell} ipython3
-:tags: []
-
 # Compute λ_1, λ_2
 λ_1, λ_2 = solve_λs(coefs)
 print(f'Roots for the characteristic equation are ({λ_1:.5f}, {λ_2:.5f})')
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 # Case 1: η_1, η_2 = (0, 1)
 ηs = (0, 1)
 
@@ -380,9 +404,7 @@ print(f"For η_1, η_2 = (0, 1), sqrt_σ = {sqrt_σ:.5f}")
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
-# Case 2: η_1, η_2 = (0, 1)
+# Case 2: η_1, η_2 = (1, 0)
 ηs = (1, 0)
 
 sqrt_σ = y(1, ηs) / y(0, ηs) - 1
@@ -391,9 +413,7 @@ print(f"For η_1, η_2 = (1, 0), sqrt_σ = {sqrt_σ:.5f}")
 ```
 
 We find that convergence is immediate.
 
-+++
-
-Let's represent the preceding analysis by vectorizing our second order difference equation {eq}`eq:second_order` and then using eigendecompositions of a state transition matrix.
+Next, we'll represent the preceding analysis by first vectorizing our second-order difference equation {eq}`eq:second_order` and then using eigendecompositions of an associated state transition matrix.
 
 ## Vectorizing the difference equation
 
@@ -433,13 +453,112 @@ $$
 x_{t+1} = V \Lambda V^{-1} x_t
 $$
 
-Define
+Now we implement the algorithm above.
+
+First we write a function that iterates $M$.
+
+```{code-cell} ipython3
+def iterate_M(x_0, M, num_steps, dtype=np.float64):
+
+    # Eigendecomposition of M
+    Λ, V = np.linalg.eig(M)
+    V_inv = np.linalg.inv(V)
+
+    # Initialize the array to store results
+    xs = np.zeros((x_0.shape[0],
+                   num_steps + 1), dtype=dtype)
+
+    # Perform the iterations
+    xs[:, 0] = x_0
+    for t in range(num_steps):
+        xs[:, t + 1] = M @ xs[:, t]
+
+    return xs, Λ, V, V_inv
+
+# Define the state transition matrix M
+M = np.array([
+    [2, -(1 - σ)],
+    [1, 0]])
+
+# Initial condition vector x_0
+x_0 = np.array([2, 2])
+
+# Perform the iteration
+xs, Λ, V, V_inv = iterate_M(x_0, M, num_steps=100)
+
+print(f"eigenvalues:\n{Λ}")
+print(f"eigenvectors:\n{V}")
+print(f"inverse eigenvectors:\n{V_inv}")
+```
+
+Let's compare the eigenvalues to the roots {eq}`eq:secretweapon` of equation
+{eq}`eq:cha_eq0` that we computed above.
+
+```{code-cell} ipython3
+roots = solve_λs((1, -2, (1 - σ)))
+print(f"roots: {np.round(roots, 8)}")
+```
+
+Hence we confirmed {eq}`eq:eigen_sqrt`.
+
+Information about the square root we are after is also contained
+in the two eigenvectors.
+
+Indeed, each eigenvector spans a one-dimensional *invariant subspace* of ${\mathbb R}^2$ pinned down by dynamics of the form
+
+$$
+y_{t} = \lambda_i y_{t-1}, \quad i = 1, 2
+$$ (eq:invariantsub101)
+
+that we encountered in equation {eq}`eq:2diff8` above.
+
+In equation {eq}`eq:invariantsub101`, $\lambda_i$ equals the ratio $V_{1, i}/V_{2, i}$ of the entries of the $i$th eigenvector, i.e., the $i$th column of $V$.
+
+The following graph verifies this for our example.
+
+```{code-cell} ipython3
+:tags: [hide-input]
+
+# Plotting the eigenvectors
+plt.figure(figsize=(8, 8))
+
+plt.quiver(0, 0, V[0, 0], V[1, 0], angles='xy', scale_units='xy',
+           scale=1, color='C0', label=fr'$\lambda_1={np.round(Λ[0], 4)}$')
+plt.quiver(0, 0, V[0, 1], V[1, 1], angles='xy', scale_units='xy',
+           scale=1, color='C1', label=fr'$\lambda_2={np.round(Λ[1], 4)}$')
+
+# Annotating the slopes
+plt.text(V[0, 0]-0.5, V[1, 0]*1.2,
+         r'slope=$\frac{V_{1,1}}{V_{2,1}}=$'+f'{np.round(V[0, 0] / V[1, 0], 4)}',
+         fontsize=12, color='C0')
+plt.text(V[0, 1]-0.5, V[1, 1]*1.2,
+         r'slope=$\frac{V_{1,2}}{V_{2,2}}=$'+f'{np.round(V[0, 1] / V[1, 1], 4)}',
+         fontsize=12, color='C1')
+
+# Adding labels
+plt.axhline(0, color='grey', linewidth=0.5, alpha=0.4)
+plt.axvline(0, color='grey', linewidth=0.5, alpha=0.4)
+plt.legend()
+
+plt.xlim(-1.5, 1.5)
+plt.ylim(-1.5, 1.5)
+plt.show()
+```
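+We can confirm the slope relation numerically; this quick check is our addition and reuses the `Λ` and `V` computed above.
+
+```{code-cell} ipython3
+# λ_i should equal the ratio V_{1,i} / V_{2,i} for i = 1, 2
+for i in range(2):
+    print(f"λ_{i+1} = {Λ[i]:.8f}, "
+          f"V[0, {i}]/V[1, {i}] = {V[0, i] / V[1, i]:.8f}")
+```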
+
+## Invariant subspace approach
+
+The preceding calculation indicates that we can use the eigenvectors $V$ to construct two one-dimensional *invariant subspaces* of ${\bf R}^2$.
+
+We'll pursue that possibility now.
+
+Define the transformed variables
+
 $$
 x_t^* = V^{-1} x_t
 $$
 
-We can recover $x_t$ from $x_t^*$:
+Evidently, we can recover $x_t$ from $x_t^*$:
 
 $$
 x_t = V x_t^*
 $$
 
@@ -453,23 +572,21 @@ Let
 
 $$
 V = \begin{bmatrix} V_{1,1} & V_{1,2} \cr
-        V_{2,2} & V_{2,2} \end{bmatrix}, \quad
+        V_{2,1} & V_{2,2} \end{bmatrix}, \quad
 V^{-1} = \begin{bmatrix} V^{1,1} & V^{1,2} \cr
-        V^{2,2} & V^{2,2} \end{bmatrix}
+        V^{2,1} & V^{2,2} \end{bmatrix}
 $$
 
 Notice that it follows from
 
 $$
-\begin{bmatrix} V^{1,1} & V^{1,2} \cr
-        V^{2,2} & V^{2,2} \end{bmatrix} \begin{bmatrix} V_{1,1} & V_{1,2} \cr
-        V_{2,2} & V_{2,2} \end{bmatrix} = \begin{bmatrix} 1 & 0 \cr 0 & 1 \end{bmatrix}
+\begin{bmatrix} V^{1,1} & V^{1,2} \cr
+        V^{2,1} & V^{2,2} \end{bmatrix} \begin{bmatrix} V_{1,1} & V_{1,2} \cr
+        V_{2,1} & V_{2,2} \end{bmatrix} = \begin{bmatrix} 1 & 0 \cr 0 & 1 \end{bmatrix}
 $$
 
 that
 
-
-
 $$
 V^{2,1} V_{1,1} + V^{2,2} V_{2,1} = 0
 $$
 
@@ -477,7 +594,7 @@ $$
 
 and
 
 $$
-V^{1,1}V_{1,2} + V^{1,2} V_{2,2} = 0
+V^{1,1}V_{1,2} + V^{1,2} V_{2,2} = 0.
 $$
 
 These equations will be very useful soon.
 
@@ -493,14 +610,14 @@ $$
 
 To deactivate $\lambda_1$ we want to set
 
 $$
-x_{1,0}^* = 0
+x_{1,0}^* = 0.
 $$
 
 This can be achieved by setting
 
 $$
-x_{2,0} = -( V^{1,2})^{-1} V^{1,1} = V_{2,1} V_{1,1}^{-1} x_{1,0}.
+x_{2,0} = -( V^{1,2})^{-1} V^{1,1} x_{1,0} = V_{2,2} V_{1,2}^{-1} x_{1,0}.
 $$ (eq:deactivate1)
 
 To deactivate $\lambda_2$, we want to set
 
 $$
 x_{2,0}^* = 0
 $$
 
@@ -512,58 +629,89 @@ This can be achieved by setting
 
 $$
-x_{2,0} = -(V^{2,2})^{-1} V^{2,1} = V_{2,1} V_{1,1}^{-1} x_{1,0}
+x_{2,0} = -(V^{2,2})^{-1} V^{2,1} x_{1,0} = V_{2,1} V_{1,1}^{-1} x_{1,0}.
 $$ (eq:deactivate2)
 
+Let's verify {eq}`eq:deactivate1` and {eq}`eq:deactivate2` below.
 
-We shall encounter equations very similar to {eq}`eq:deactivate1` and {eq}`eq:deactivate2`
-in this QuantEcon lecture {doc}`money financed government deficits and inflation <money_inflation>`
-and in many other places in dynamic economic theory.
+To deactivate $\lambda_1$ we use {eq}`eq:deactivate1`.
 
-### Implementation
+```{code-cell} ipython3
+xd_1 = np.array((x_0[0],
+                 V[1,1]/V[0,1] * x_0[0]),
+                dtype=np.float64)
 
-Now we implement the algorithm above.
+# Compute x_{1,0}^*
+np.round(V_inv @ xd_1, 8)
+```
 
-First we write a function that iterates $M$
+We find $x_{1,0}^* = 0$.
 
-```{code-cell} ipython3
-:tags: []
+Now we deactivate $\lambda_2$ using {eq}`eq:deactivate2`.
 
-def iterate_M(x_0, M, num_steps):
-    # Eigendecomposition of M
-    Λ, V = np.linalg.eig(M)
-    V_inv = np.linalg.inv(V)
-
-    print(f"eigenvalue:\n{Λ}")
-    print(f"eigenvector:\n{V}")
-
-    # Initialize the array to store results
-    x = np.zeros((x_0.shape[0], num_steps))
-
-    # Perform the iterations
-    for t in range(num_steps):
-        x[:, t] = V @ np.diag(Λ**t) @ V_inv @ x_0
-
-    return x
+```{code-cell} ipython3
+xd_2 = np.array((x_0[0],
+                 V[1,0]/V[0,0] * x_0[0]),
+                dtype=np.float64)
 
-# Define the state transition matrix M
-M = np.array([[2, -(1 - σ)],
-              [1, 0]])
+# Compute x_{2,0}^*
+np.round(V_inv @ xd_2, 8)
+```
 
-# Initial condition vector x_0
-x_0 = np.array([1, 0])
+We find $x_{2,0}^* = 0$.
 
-# Perform the iteration
-xs = iterate_M(x_0, M, num_steps=100)
+```{code-cell} ipython3
+# Simulate with λ_1 and λ_2 deactivated in turn
+num_steps = 10
+xs_λ1 = iterate_M(xd_1, M, num_steps)[0]
+xs_λ2 = iterate_M(xd_2, M, num_steps)[0]
+
+# Compute ratios y_t / y_{t-1}
+ratios_λ1 = xs_λ1[1, 1:] / xs_λ1[1, :-1]
+ratios_λ2 = xs_λ2[1, 1:] / xs_λ2[1, :-1]
 ```
 
-Compare the eigenvector to the roots we obtained above
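+Before plotting, let's inspect the computed ratios directly; this small check is our addition and uses the `ratios_λ1` and `ratios_λ2` arrays just computed.
+
+```{code-cell} ipython3
+# With λ_1 deactivated the ratio equals λ_2 from the very first step;
+# with λ_2 deactivated it equals λ_1
+print(f"ratios with λ_1 deactivated: {np.round(ratios_λ1[:3], 8)}")
+print(f"ratios with λ_2 deactivated: {np.round(ratios_λ2[:3], 8)}")
+```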
-```{code-cell} ipython3 -:tags: [] +We find that the ratios converge to $\lambda_2$ in the first case and $\lambda_1$ in the second case. -roots = solve_λs((1, -2, (1 - σ))) -print(f"roots: {np.round(roots, 8)}") +```{code-cell} ipython3 +:tags: [hide-input] + +# Plot the ratios for y_t / y_{t-1} +fig, axs = plt.subplots(1, 2, figsize=(14, 6)) + +# First subplot +axs[0].plot(np.round(ratios_λ1, 6), + label=r'$\frac{y_t}{y_{t-1}}$', linewidth=3) +axs[0].axhline(y=Λ[1], color='red', linestyle='--', + label='$\lambda_2$', alpha=0.5) +axs[0].set_xlabel('t', size=18) +axs[0].set_ylabel(r'$\frac{y_t}{y_{t-1}}$', size=18) +axs[0].set_title(r'$\frac{y_t}{y_{t-1}}$ after Muting $\lambda_1$', + size=13) +axs[0].legend() + +# Second subplot +axs[1].plot(ratios_λ2, label=r'$\frac{y_t}{y_{t-1}}$', + linewidth=3) +axs[1].axhline(y=Λ[0], color='green', linestyle='--', + label='$\lambda_1$', alpha=0.5) +axs[1].set_xlabel('t', size=18) +axs[1].set_ylabel(r'$\frac{y_t}{y_{t-1}}$', size=18) +axs[1].set_title(r'$\frac{y_t}{y_{t-1}}$ after Muting $\lambda_2$', + size=13) +axs[1].legend() + +plt.tight_layout() +plt.show() ``` -Hence we confirmed {eq}`eq:eigen_sqrt`. +## Concluding remarks + +This lecture sets the stage for many other applications of the *invariant subspace* methods. + +All of these exploit very similar equations based on eigen decompositions. + +We shall encounter equations very similar to {eq}`eq:deactivate1` and {eq}`eq:deactivate2` +in {doc}`money_inflation` and in many other places in dynamic economic theory.