Fix remaining codespell typos, rejecting invalid fixes
endolith committed Jul 9, 2024
1 parent da0cead commit a937ff9
Showing 14 changed files with 35 additions and 35 deletions.
6 changes: 3 additions & 3 deletions 02-Discrete-Bayes.ipynb
@@ -1508,7 +1508,7 @@
" # move the robot and\n",
" robot.move(distance=move_distance)\n",
"\n",
-"    # peform prediction\n",
+"    # perform prediction\n",
" prior = predict(posterior, move_distance, kernel) \n",
"\n",
" # and update the filter\n",
@@ -1720,7 +1720,7 @@
"source": [
"## References\n",
"\n",
-" * [1] D. Fox, W. Burgard, and S. Thrun. \"Monte carlo localization: Efficient position estimation for mobile robots.\" In *Journal of Artifical Intelligence Research*, 1999.\n",
+" * [1] D. Fox, W. Burgard, and S. Thrun. \"Monte carlo localization: Efficient position estimation for mobile robots.\" In *Journal of Artificial Intelligence Research*, 1999.\n",
" \n",
" http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/fox99a-html/jair-localize.html\n",
"\n",
@@ -1735,7 +1735,7 @@
" https://www.udacity.com/course/cs373\n",
" \n",
" \n",
-" * [4] Khan Acadamy. \"Introduction to the Convolution\"\n",
+" * [4] Khan Academy. \"Introduction to the Convolution\"\n",
" \n",
" https://www.khanacademy.org/math/differential-equations/laplace-transform/convolution-integral/v/introduction-to-the-convolution\n",
" \n",
4 changes: 2 additions & 2 deletions 03-Gaussians.ipynb
@@ -1288,7 +1288,7 @@
"\n",
"The discrete Bayes filter works by multiplying and adding arbitrary probability random variables. The Kalman filter uses Gaussians instead of arbitrary random variables, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussian random variables (Gaussian random variable is just another way to say normally distributed random variable). \n",
"\n",
-"A remarkable property of Gaussian random variables is that the sum of two independent Gaussian random variables is also normally distributed! The product is not Gaussian, but proportional to a Gaussian. There we can say that the result of multipying two Gaussian distributions is a Gaussian function (recall function in this context means that the property that the values sum to one is not guaranteed).\n",
+"A remarkable property of Gaussian random variables is that the sum of two independent Gaussian random variables is also normally distributed! The product is not Gaussian, but proportional to a Gaussian. There we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall function in this context means that the property that the values sum to one is not guaranteed).\n",
"\n",
"Wikipedia has a good article on this property, and I also prove it at the end of this chapter. \n",
"https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables\n",
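An editorial aside on the property this hunk discusses (not part of the commit): the claim that the sum of two independent Gaussians is Gaussian with mean mu1 + mu2 and variance sigma1² + sigma2² can be spot-checked by simulation. A minimal sketch; the parameter values and sample count are my own:

```python
import numpy as np

rng = np.random.default_rng(42)

mu1, sigma1 = 10.0, 0.2   # illustrative values
mu2, sigma2 = 15.0, 0.7

n = 1_000_000
# Sum of independent Gaussians ~ N(mu1 + mu2, sigma1**2 + sigma2**2)
s = rng.normal(mu1, sigma1, n) + rng.normal(mu2, sigma2, n)

print(s.mean())   # close to 25.0
print(s.var())    # close to 0.2**2 + 0.7**2 = 0.53
```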
@@ -1951,7 +1951,7 @@
"\n",
"$$p(x \\mid z) \\propto p(z|x)p(x)$$\n",
"\n",
-"Now we subtitute in the equations for the Gaussians, which are\n",
+"Now we substitute in the equations for the Gaussians, which are\n",
"\n",
"$$p(z \\mid x) = \\frac{1}{\\sqrt{2\\pi\\sigma_z^2}}\\exp \\Big[-\\frac{(z-x)^2}{2\\sigma_z^2}\\Big]$$\n",
"\n",
4 changes: 2 additions & 2 deletions 04-One-Dimensional-Kalman-Filters.ipynb
@@ -383,7 +383,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Let's test it. What is the prior if the intitial position is the Gaussian $\mathcal N(10, 0.2^2)$ and the movement is the Gaussian $\mathcal N (15, 0.7^2)$?"
+"Let's test it. What is the prior if the initial position is the Gaussian $\mathcal N(10, 0.2^2)$ and the movement is the Gaussian $\mathcal N (15, 0.7^2)$?"
]
},
{
@@ -440,7 +440,7 @@
"\n",
"Both the likelihood and prior are modeled with Gaussians. Can we multiply Gaussians? Is the product of two Gaussians another Gaussian?\n",
"\n",
-"Yes to the former, and almost to the latter! In the last chapter I proved that the product of two Gaussians is proportional to another Gausian. \n",
+"Yes to the former, and almost to the latter! In the last chapter I proved that the product of two Gaussians is proportional to another Gaussian. \n",
"\n",
"$$\\begin{aligned}\n",
"\\mu &= \\frac{\\sigma_1^2 \\mu_2 + \\sigma_2^2 \\mu_1} {\\sigma_1^2 + \\sigma_2^2}, \\\\\n",
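An editorial aside, not part of the commit: the fusion equation quoted in the hunk above is easy to spot-check numerically. The variance line of the aligned block is cut off by the diff context; the sketch below uses the standard companion result $\sigma^2 = \sigma_1^2\sigma_2^2/(\sigma_1^2+\sigma_2^2)$, and the grid and example means/variances are illustrative, not taken from the book.

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    """Evaluate a normal PDF with mean mu and variance var."""
    return np.exp(-(x - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def gaussian_multiply(mu1, var1, mu2, var2):
    # Fusion equations for the product of two Gaussian PDFs; the mean
    # matches the hunk above, the variance is the standard companion.
    mean = (var1 * mu2 + var2 * mu1) / (var1 + var2)
    var = var1 * var2 / (var1 + var2)
    return mean, var

mu, var = gaussian_multiply(10.0, 0.2**2, 11.0, 0.5**2)

# "Proportional to a Gaussian": the pointwise product of the two PDFs,
# once renormalized, matches N(mu, var) on a dense grid.
x = np.linspace(5.0, 15.0, 2001)
dx = x[1] - x[0]
prod = gaussian_pdf(x, 10.0, 0.2**2) * gaussian_pdf(x, 11.0, 0.5**2)
prod /= prod.sum() * dx
assert np.allclose(prod, gaussian_pdf(x, mu, var), atol=1e-4)
```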
4 changes: 2 additions & 2 deletions 05-Multivariate-Gaussians.ipynb
@@ -745,7 +745,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"These plots look like circles and ellipses. Indeed, it turns out that any slice through the multivariate Gaussian is an ellipse. Hence, in statistics we do not call these 'contour plots', but either *error ellipses* or *confidence ellipses*; the terms are interchangable.\n",
+"These plots look like circles and ellipses. Indeed, it turns out that any slice through the multivariate Gaussian is an ellipse. Hence, in statistics we do not call these 'contour plots', but either *error ellipses* or *confidence ellipses*; the terms are interchangeable.\n",
"\n",
"This code uses the function `plot_covariance_ellipse()` from `filterpy.stats`. By default the function displays one standard deviation, but you can use either the `variance` or `std` parameter to control what is displayed. For example, `variance=3**2` or `std=3` would display the 3rd standard deviation, and `variance=[1,4,9]` or `std=[1,2,3]` would display the 1st, 2nd, and 3rd standard deviations. "
]
@@ -1773,7 +1773,7 @@
"\n",
"It is important to understand that we are taking advantage of the fact that velocity and position are correlated. We get a rough estimate of velocity from the distance and time between two measurements, and use Bayes theorem to produce very accurate estimates after only a few observations. Please reread this section if you have any doubts. If you do not understand this you will quickly find it impossible to reason about what you will learn in the following chapters.\n",
"\n",
-"The effect of including velocity appears to me minor if only care about the position. But this is only after one update. In the next chapter we will see what a dramatic increase in certainty we have after multiple updates. The measurment variance will be large, but the estimated position variance will be small. Each time you intersect the velocity covariance with position it gets narrower on the x-axis, hence the variance is also smaller each time."
+"The effect of including velocity appears to me minor if only care about the position. But this is only after one update. In the next chapter we will see what a dramatic increase in certainty we have after multiple updates. The measurement variance will be large, but the estimated position variance will be small. Each time you intersect the velocity covariance with position it gets narrower on the x-axis, hence the variance is also smaller each time."
]
},
{
10 changes: 5 additions & 5 deletions 06-Multivariate-Kalman-Filters.ipynb
@@ -1611,7 +1611,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"There are many attributes there that we haven't discussed yet, but many should be familar.\n",
+"There are many attributes there that we haven't discussed yet, but many should be familiar.\n",
"\n",
"At this point you could write code to plot any of these variables. However, it is often more useful to use `np.array` instead of lists. Calling `Saver.to_array()` will convert the lists into `np.array`. There is one caveat: if the shape of any of the attributes changes during the run, the `to_array` will raise an exception since `np.array` requires all of the elements to be of the same type and size. \n",
"\n",
@@ -1724,7 +1724,7 @@
"\n",
"$$\\bar\\sigma^2 = \\sigma^2 + \\sigma^2_{move}$$\n",
"\n",
-"We add the variance of the movement to the variance of our estimate to reflect the loss of knowlege. We need to do the same thing here, except it isn't quite that easy with multivariate Gaussians. \n",
+"We add the variance of the movement to the variance of our estimate to reflect the loss of knowledge. We need to do the same thing here, except it isn't quite that easy with multivariate Gaussians. \n",
"\n",
"We can't simply write $\\mathbf{\\bar P} = \\mathbf P + \\mathbf Q$. In a multivariate Gaussians the state variables are *correlated*. What does this imply? Our knowledge of the velocity is imperfect, but we are adding it to the position with\n",
"\n",
@@ -1809,7 +1809,7 @@
"source": [
"You can see that with a velocity of 5 the position correctly moves 3 units in each 6/10ths of a second step. At each step the width of the ellipse is larger, indicating that we have lost information about the position due to adding $\\dot x\\Delta t$ to x at each step. The height has not changed - our system model says the velocity does not change, so the belief we have about the velocity cannot change. As time continues you can see that the ellipse becomes more and more tilted. Recall that a tilt indicates *correlation*. $\\mathbf F$ linearly correlates $x$ with $\\dot x$ with the expression $\\bar x = \\dot x \\Delta t + x$. The $\\mathbf{FPF}^\\mathsf T$ computation correctly incorporates this correlation into the covariance matrix.\n",
"\n",
-"Here is an animation of this equation that allows you to change the design of $\mathbf F$ to see how it affects shape of $\mathbf P$. The `F00` slider affects the value of F[0, 0]. `covar` sets the intial covariance between the position and velocity($\sigma_x\sigma_{\dot x}$). I recommend answering these questions at a minimum\n",
+"Here is an animation of this equation that allows you to change the design of $\mathbf F$ to see how it affects shape of $\mathbf P$. The `F00` slider affects the value of F[0, 0]. `covar` sets the initial covariance between the position and velocity($\sigma_x\sigma_{\dot x}$). I recommend answering these questions at a minimum\n",
"\n",
"* what if $x$ is not correlated to $\\dot x$? (set F01 to 0, the rest at defaults)\n",
"* what if $x = 2\\dot x\\Delta t + x_0$? (set F01 to 2, the rest at defaults)\n",
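An editorial aside on the behavior this hunk describes (not from the commit or the book; `dt` and the initial covariance are illustrative): the growing position variance, the unchanged velocity variance, and the emerging tilt can all be reproduced by iterating the covariance prediction $\mathbf{FPF}^\mathsf{T}$:

```python
import numpy as np

dt = 0.6                       # the 6/10ths-of-a-second step from the text
F = np.array([[1.0, dt],
              [0.0, 1.0]])     # constant-velocity state transition
P = np.diag([1.0, 1.0])        # start with x and x_dot uncorrelated

for _ in range(3):
    P = F @ P @ F.T            # prediction, with Q omitted for clarity

# Position variance grew, velocity variance did not, and the off-diagonal
# terms (the "tilt" of the error ellipse) are now nonzero.
print(P)
```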
@@ -2550,7 +2550,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Looking at the output we see a very large spike in the filter output at the beginning. We set $\text{P}=500\, \mathbf{I}_2$ (this is shorthand notation for a 2x2 diagonal matrix with 500 in the diagonal). We now have enough information to understand what this means, and how the Kalman filter treats it. The 500 in the upper left hand corner corresponds to $\sigma^2_x$; therefore we are saying the standard deviation of `x` is $\sqrt{500}$, or roughly 22.36 m. Roughly 99% of the samples occur withing $3\sigma$, therefore $\sigma^2_x=500$ is telling the Kalman filter that the prediction (the prior) could be up to 67 meters off. That is a large error, so when the measurement spikes the Kalman filter distrusts its own estimate and jumps wildly to try to incorporate the measurement. Then, as the filter evolves $\mathbf P$ quickly converges to a more realistic value.\n",
+"Looking at the output we see a very large spike in the filter output at the beginning. We set $\text{P}=500\, \mathbf{I}_2$ (this is shorthand notation for a 2x2 diagonal matrix with 500 in the diagonal). We now have enough information to understand what this means, and how the Kalman filter treats it. The 500 in the upper left hand corner corresponds to $\sigma^2_x$; therefore we are saying the standard deviation of `x` is $\sqrt{500}$, or roughly 22.36 m. Roughly 99% of the samples occur within $3\sigma$, therefore $\sigma^2_x=500$ is telling the Kalman filter that the prediction (the prior) could be up to 67 meters off. That is a large error, so when the measurement spikes the Kalman filter distrusts its own estimate and jumps wildly to try to incorporate the measurement. Then, as the filter evolves $\mathbf P$ quickly converges to a more realistic value.\n",
"\n",
"Let's look at the math behind this. The equation for the Kalman gain is\n",
"\n",
@@ -2847,7 +2847,7 @@
"source": [
"## Batch Processing\n",
"\n",
-"The Kalman filter is designed as a recursive algorithm - as new measurements come in we immediately create a new estimate. But it is very common to have a set of data that have been already collected which we want to filter. Kalman filters can be run in a batch mode, where all of the measurements are filtered at once. We have implemented this in `KalmanFilter.batch_filter()`. Internally, all the function does is loop over the measurements and collect the resulting state and covariance estimates in arrays. It simplifies your logic and conveniently gathers all of the outputs into arrays. I often use this function, but waited until the end of the chapter so you would become very familiar with the predict/update cyle that you must run.\n",
+"The Kalman filter is designed as a recursive algorithm - as new measurements come in we immediately create a new estimate. But it is very common to have a set of data that have been already collected which we want to filter. Kalman filters can be run in a batch mode, where all of the measurements are filtered at once. We have implemented this in `KalmanFilter.batch_filter()`. Internally, all the function does is loop over the measurements and collect the resulting state and covariance estimates in arrays. It simplifies your logic and conveniently gathers all of the outputs into arrays. I often use this function, but waited until the end of the chapter so you would become very familiar with the predict/update cycle that you must run.\n",
"\n",
"First collect your measurements into an array or list. Maybe it is in a CSV file:\n",
"\n",
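An editorial sketch of what the batch mode described in this hunk amounts to internally: loop the predict/update pair over pre-collected measurements and stack the results. This is plain NumPy rather than filterpy's `batch_filter()`, and every numeric value here is illustrative, not from the book:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
H = np.array([[1.0, 0.0]])              # we measure position only
Q = np.eye(2) * 0.01                    # process noise (illustrative)
R = np.array([[5.0]])                   # measurement noise (illustrative)

x = np.array([[0.0], [0.0]])            # state: position, velocity
P = np.eye(2) * 500.0                   # large initial uncertainty

zs = [1.0, 2.1, 2.9, 4.2, 5.1]          # stand-in for measurements from a CSV
means, covs = [], []
for z in zs:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                 # residual covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    means.append(x.copy())
    covs.append(P.copy())

means, covs = np.array(means), np.array(covs)
print(means[:, 0, 0])                   # filtered positions
```

Collecting the per-step states and covariances into arrays at the end is exactly the convenience the text attributes to batch processing.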
2 changes: 1 addition & 1 deletion 07-Kalman-Filter-Math.ipynb
@@ -1548,7 +1548,7 @@
"\n",
"$$\\mathbf z_k \\sim P(\\mathbf z_k \\mid \\mathbf x_k)$$\n",
"\n",
-"We have a recurrence now, so we need an initial condition to terminate it. Therefore we say that the initial distribution is the probablity of the state $\mathbf x_0$:\n",
+"We have a recurrence now, so we need an initial condition to terminate it. Therefore we say that the initial distribution is the probability of the state $\mathbf x_0$:\n",
"\n",
"$$\\mathbf x_0 \\sim P(\\mathbf x_0)$$\n",
"\n",
