From 4b7cf91c02b23a1aa7193e6731647cac1277f707 Mon Sep 17 00:00:00 2001
From: Trang Truong <91804044+htrangtr@users.noreply.github.com>
Date: Sat, 30 Dec 2023 10:46:30 -0800
Subject: [PATCH] edit unicode conversion (#280)

* edit latex missing \

* basic edits of unicode typos and missing space

* edit unicode and add commas

* add + for some functions
---
 lectures/tools_and_techniques/geom_series.md         |  6 +++---
 .../iterative_methods_sparsity.md                    |  6 +++---
 lectures/tools_and_techniques/linear_algebra.md      |  6 +++---
 .../tools_and_techniques/numerical_linear_algebra.md | 12 ++++++------
 4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/lectures/tools_and_techniques/geom_series.md b/lectures/tools_and_techniques/geom_series.md
index 0ad2dc00..362026c5 100644
--- a/lectures/tools_and_techniques/geom_series.md
+++ b/lectures/tools_and_techniques/geom_series.md
@@ -499,7 +499,7 @@ in a project with gross one period nominal rate of return accumulates
 project
 - thus, $1$ dollar invested at time $0$ pays interest
   $r$ dollars after one period, so we have $r+1 = R$
-  dollars at time$1$
+  dollars at time $1$
 - at time $1$ we reinvest $1+r =R$ dollars and receive interest
   of $r R$ dollars at time $2$ plus the *principal*
   $R$ dollars, so we receive $r R + R = (1+r)R = R^2$
@@ -551,7 +551,7 @@ The **present value** of the lease is
 $$
 \begin{aligned}
-p_0 & = x_0 + x_1/R + x_2/(R^2) + \ddots \\
+p_0 & = x_0 + x_1/R + x_2/(R^2) + \cdots \\
 & = x_0 (1 + G R^{-1} + G^2 R^{-2} + \cdots ) \\
 & = x_0 \frac{1}{1 - G R^{-1}}
 \end{aligned}
 $$
@@ -704,7 +704,7 @@ plot!(plt, T, y_3, label = L"$T$-period Lease First-order Approx. adj.")
 
 Evidently our approximations perform well for small values of $T$.
 
-However, holding $g$ and r fixed, our approximations deteriorate as $T$ increases.
+However, holding $g$ and $r$ fixed, our approximations deteriorate as $T$ increases.
 
 Next we compare the infinite and finite duration lease present values
 over different lease lengths $T$.
diff --git a/lectures/tools_and_techniques/iterative_methods_sparsity.md b/lectures/tools_and_techniques/iterative_methods_sparsity.md
index 42896d18..48fe8f1d 100644
--- a/lectures/tools_and_techniques/iterative_methods_sparsity.md
+++ b/lectures/tools_and_techniques/iterative_methods_sparsity.md
@@ -457,7 +457,7 @@ equation through methods such as value-function iteration.
 The condition we will examine here is called [**diagonal dominance**](https://en.wikipedia.org/wiki/Diagonally_dominant_matrix).
 
 $$
-|A_{ii}| \geq \sum_{j\neq i} |A_{ij}| \quad\text{for all } i = 1\ldots N
+|A_{ii}| \geq \sum_{j\neq i} |A_{ij}| \quad\text{for all } i = 1, \ldots, N
 $$
 
 That is, in every row, the diagonal element is weakly greater in absolute value than the sum of all of the other elements in the row. In cases
@@ -466,7 +466,7 @@ where it is strictly greater, we say that the matrix is strictly diagonally domi
 With our example, given that $Q$ is the infinitesimal generator of a Markov
 chain, we know that each row sums to 0, and hence it is weakly
 diagonally dominant.
 
-However, notice that when $\rho > 0$, and since the diagonal of $Q$ is negative, $A = rho I - Q$ makes the matrix strictly diagonally dominant.
+However, notice that since the diagonal of $Q$ is negative, when $\rho > 0$ the matrix $A = \rho I - Q$ is strictly diagonally dominant.
 
 ### Jacobi Iteration
@@ -1187,7 +1187,7 @@ $$
 
 If $Q$ is a matrix, we could just take its transpose to find the adjoint. However, with matrix-free methods, we need to implement the adjoint-vector product directly.
 
-The logic for the adjoint is that for a given $n = (n_1,\ldots, n_m, \ldots n_M)$, the $Q^T$ product for that row has terms enter when
+The logic for the adjoint is that for a given $n = (n_1,\ldots, n_m, \ldots, n_M)$, the $Q^T$ product for that row has terms that enter when
 
 1. $1 < n_m \leq N$, entering into the identical $n$ except with one less customer in the $m$ position
 1. $1 \leq n_m < N$, entering into the identical $n$ except with one more customer in the $m$ position
diff --git a/lectures/tools_and_techniques/linear_algebra.md b/lectures/tools_and_techniques/linear_algebra.md
index 397090ad..10f9689e 100644
--- a/lectures/tools_and_techniques/linear_algebra.md
+++ b/lectures/tools_and_techniques/linear_algebra.md
@@ -402,7 +402,7 @@ $m > n$ vectors in $\mathbb R ^n$ must be linearly dependent.
 The following statements are equivalent to linear independence of $A := \{a_1, \ldots, a_k\} \subset \mathbb R ^n$.
 
 1. No vector in $A$ can be formed as a linear combination of the other elements.
-1. If $\beta_1 a_1 + \cdots \beta_k a_k = 0$ for scalars $\beta_1, \ldots, \beta_k$, then $\beta_1 = \cdots = \beta_k = 0$.
+1. If $\beta_1 a_1 + \cdots + \beta_k a_k = 0$ for scalars $\beta_1, \ldots, \beta_k$, then $\beta_1 = \cdots = \beta_k = 0$.
 
 (The zero in the first expression is the origin of $\mathbb R ^n$)
 
@@ -415,13 +415,13 @@ In other words, if $A := \{a_1, \ldots, a_k\} \subset \mathbb R ^n$ is
 linearly independent and
 
 $$
-y = \beta_1 a_1 + \cdots \beta_k a_k
+y = \beta_1 a_1 + \cdots + \beta_k a_k
 $$
 
 then no other coefficient sequence $\gamma_1, \ldots, \gamma_k$ will produce
 the same vector $y$.
 
-Indeed, if we also have $y = \gamma_1 a_1 + \cdots \gamma_k a_k$,
+Indeed, if we also have $y = \gamma_1 a_1 + \cdots + \gamma_k a_k$,
 then
 
 $$
diff --git a/lectures/tools_and_techniques/numerical_linear_algebra.md b/lectures/tools_and_techniques/numerical_linear_algebra.md
index 749eec56..19d93766 100644
--- a/lectures/tools_and_techniques/numerical_linear_algebra.md
+++ b/lectures/tools_and_techniques/numerical_linear_algebra.md
@@ -626,7 +626,7 @@ Q = Tridiagonal(fill(alpha, N - 1), [-alpha; fill(-2alpha, N - 2); -alpha],
 
 Here we can use `Tridiagonal` to exploit the structure of the problem.
 
-Consider a simple payoff vector $r$ associated with each state, and a discount rate $rho$. Then we can solve for
+Consider a simple payoff vector $r$ associated with each state, and a discount rate $\rho$. Then we can solve for
 the expected present discounted value in a way similar to the discrete-time case.
 
 $$
@@ -655,23 +655,23 @@ linear problem.
 v = A \ r
 ```
-The $Q$ is also used to calculate the evolution of the Markov chain, in direct analogy to the $psi_{t+k} = psi_t P^k$ evolution with the transition matrix $P$ of the discrete case.
+The $Q$ matrix is also used to calculate the evolution of the Markov chain, in direct analogy to the $\psi_{t+k} = \psi_t P^k$ evolution with the transition matrix $P$ of the discrete case.
 
 In the continuous case, this becomes the system of linear differential equations
 
 $$
-\dot{psi}(t) = Q(t)^T psi(t)
+\dot{\psi}(t) = Q(t)^T \psi(t)
 $$
 
 given the initial condition $\psi(0)$ and where the $Q(t)$ intensity matrix is allowed to vary with time.
 
 In the simplest case of a constant $Q$ matrix, this is a simple constant-coefficient system of linear ODEs with coefficients $Q^T$.
 
-If a stationary equilibrium exists, note that $\dot{psi}(t) = 0$, and the stationary solution $psi^{*}$ needs to satisfy
+If a stationary equilibrium exists, note that $\dot{\psi}(t) = 0$, and the stationary solution $\psi^{*}$ needs to satisfy
 
 $$
-0 = Q^T psi^{*}
+0 = Q^T \psi^{*}
 $$
 
-Notice that this is of the form $0 psi^{*} = Q^T psi^{*}$ and hence is equivalent to finding the eigenvector associated with the $\lambda = 0$ eigenvalue of $Q^T$.
+Notice that this is of the form $0 \psi^{*} = Q^T \psi^{*}$ and hence is equivalent to finding the eigenvector associated with the $\lambda = 0$ eigenvalue of $Q^T$.
 
 With our example, we can calculate all of the eigenvalues and eigenvectors
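As a quick check of that last step, here is a minimal Julia sketch of the eigenvector computation the corrected passage describes, assuming the small `Tridiagonal` intensity matrix from the `numerical_linear_algebra.md` hunk above (the values of `alpha` and `N` below are placeholders, not taken from the lecture):

```julia
using LinearAlgebra

# Intensity matrix in the style of the hunk above; each row sums to zero,
# so Q is the infinitesimal generator of a continuous-time Markov chain.
alpha, N = 0.1, 5                     # placeholder values
Q = Tridiagonal(fill(alpha, N - 1),
                [-alpha; fill(-2alpha, N - 2); -alpha],
                fill(alpha, N - 1))

# Stationary distribution: the eigenvector of Q^T associated with lambda = 0.
vals, vecs = eigen(Matrix(Q'))        # densify for the general eigensolver
i = argmin(abs.(vals))                # index of the eigenvalue closest to 0
psi_star = real.(vecs[:, i])
psi_star ./= sum(psi_star)            # normalize so the entries sum to one

@assert maximum(abs, Q' * psi_star) < 1e-10   # verify Q^T psi* ≈ 0
```

For this small symmetric $Q$ the stationary distribution is uniform over the $N$ states; the dense `eigen` call is purely illustrative, since large or sparse $Q$ matrices are exactly the case where these lectures turn to iterative and matrix-free methods.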