The work presented here was initially completed in The Ohio State University undergraduate program with the intent of graduating with research distinction in the Department of Mechanical Engineering. The requirements for said distinction were: 1) participation in an undergraduate research forum, 2) completion of research-related coursework, and 3) the completion of a comprehensive thesis (published and linked below).
Thesis: https://kb.osu.edu/handle/1811/101702
The model predictive control (MPC) architecture is a powerful tool in the realm of control theory. It describes the process of looking into future states of a system (as governed by predetermined model equations) and selecting the most appropriate set of controls with respect to some cost/objective function. The set of commands and libraries presented here serves as my personal testing ground for solutions to the MPC formulation, and is in no way a competitor to more professional libraries.
For some animations of the system at work, you can skip the description and scroll to the bottom of the readme.md.
Final Note: The classes and associated functions shown here are in no way complete. It is my hope to use the model predictive control architecture as a platform for introducing myself to various optimization strategies. I intend to develop the library whenever I have free time and new ideas inspired by my studies. The library's biggest flaws are currently the lack of alternatives to gradient descent-based approaches and the absence of any LP-oriented optimization strategies; these will hopefully be explored more thoroughly in the future.
Overall, the MPC decision scheme follows the diagram below.
First, the generalized MPC problem will be defined so that each of the following sections can investigate a method of finding its solution.
Let us first define our model in terms of a discrete linear/nonlinear dynamical system,

$$x_{k+1} = f(x_k, u_k)$$

where $x_k$ is the state and $u_k$ is the input at the discrete time step $k$, and $f$ is the model function which propagates the state forward a single step.
We can use this model to characterize a prediction horizon as a series of simulated steps starting at some initial position,

$$\mathbf{x} = \{ x_0, x_1, \ldots, x_P \} \quad \text{such that} \quad x_{i+1} = f(x_i, u_i)$$

where $P$ is the number of steps in the prediction horizon and

$$\mathbf{u} = \{ u_0, u_1, \ldots, u_{P-1} \}$$

is the set of inputs applied over the horizon.
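As a minimal sketch of this idea (not the library's actual API, and with a hypothetical double-integrator model standing in for $f$), a prediction horizon can be generated by repeatedly applying the model function:

```python
import numpy as np

def f(x, u):
    # Hypothetical model: discrete double integrator with time step dt.
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.0], [dt]])
    return A @ x + B @ u

def simulate_horizon(f, x0, ulist):
    # Propagate the initial state through the model once per input,
    # returning the state sequence {x_0, ..., x_P}.
    xlist = [x0]
    for u in ulist:
        xlist.append(f(xlist[-1], u))
    return xlist

x0 = np.array([[1.0], [0.0]])
ulist = [np.array([[0.0]]) for _ in range(10)]  # P = 10
xlist = simulate_horizon(f, x0, ulist)
```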
Using the aforementioned sets, a cost function can be defined which assigns a 'score' to each of the period costs, or a state and its associated input, as well as the terminal cost, or the final state evaluated at the end of the prediction horizon,

$$C(\mathbf{x}, \mathbf{u}) = g_0(x_0) + \sum_{i=1}^{P-1} g_i(x_i, u_{i-1}) + g_P(x_P)$$

where the functions $g_1, \ldots, g_{P-1}$ assign the period costs, pairing each state with the input that produced it, and $g_P$ assigns the terminal cost.
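For example, with hypothetical quadratic period and terminal costs, the total cost of a simulated horizon might be evaluated as follows (a sketch rather than the library's implementation):

```python
def g(x, u):
    # Hypothetical quadratic period cost on a state and its input.
    return float(x.T @ x + 0.1 * u.T @ u)

def gP(x):
    # Hypothetical quadratic terminal cost on the final state.
    return float(10.0 * x.T @ x)

def total_cost(xlist, ulist):
    # Sum the period costs g_i(x_i, u_{i-1}) for i = 1..P-1,
    # then add the terminal cost g_P(x_P).
    C = sum(g(x, u) for x, u in zip(xlist[1:-1], ulist))
    return C + gP(xlist[-1])
```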
The optimization now becomes a minimization of the total cost over the set of inputs,

$$\mathbf{u}^* = \underset{\mathbf{u}}{\mathrm{argmin}} \; C(\mathbf{x}, \mathbf{u})$$

where $\mathbf{u}^*$ denotes the (locally) optimal set of inputs.
In most, if not all, cases the optimization problem can be further simplified by the assumption that the initial state, $x_0$, is known and fixed, making its cost contribution $g_0(x_0)$ a constant, and replacing the optimization statement with

$$\mathbf{u}^* = \underset{\mathbf{u}}{\mathrm{argmin}} \left( \sum_{i=1}^{P-1} g_i(x_i, u_{i-1}) + g_P(x_P) \right)$$

In this form we ignore the initial state, giving a slightly more concise cost function.
Now that the optimization problem is defined more thoroughly, the steps used to solve it can be expressed. First, let us define the stopping point of the optimization process. This occurs when the cost function is at a local minimum, or when the partial derivative of the cost function w.r.t. the inputs is equal to 0,

$$\frac{\partial C}{\partial \mathbf{u}} = 0$$
In practice, it will be more realistic to attempt to find an approximately optimal solution. That is,

$$\left\| \frac{\partial C}{\partial \mathbf{u}} \right\| < \epsilon$$

where $\epsilon$ is some small, positive convergence threshold. The candidate solution is then improved iteratively,

$$\mathbf{u}^{(j+1)} = \mathbf{u}^{(j)} - h(\mathbf{u}^{(j)})$$

where we will select the step-size function $h$ according to the optimization method being used.
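A minimal sketch of this iterative scheme, assuming a central-difference numerical gradient and a pluggable step-size function `step` (both hypothetical names, not the library's API), might look like:

```python
import numpy as np

def gradient(cost, u, d=1e-6):
    # Central-difference approximation of dC/du for a flat input vector.
    grad = np.zeros_like(u)
    for i in range(u.size):
        du = np.zeros_like(u)
        du[i] = d
        grad[i] = (cost(u + du) - cost(u - du)) / (2 * d)
    return grad

def optimize(cost, step, u0, eps=1e-6, max_iter=1000):
    # Iterate u <- u - h(u) until the gradient norm falls below eps.
    u = u0.copy()
    for _ in range(max_iter):
        if np.linalg.norm(gradient(cost, u)) < eps:
            break
        u = u - step(cost, u)
    return u
```

Here `cost` maps a flattened input sequence $\mathbf{u}$ to the scalar $C$, e.g. by simulating the horizon and evaluating the cost function defined above.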
In the nonlinear gradient descent (NGD, or steepest-descent) algorithm, the step-size function $h$ is proportional to the gradient of the cost,

$$h(\mathbf{u}) = \alpha \frac{\partial C}{\partial \mathbf{u}}$$

for some step size $\alpha > 0$, making the optimization

$$\mathbf{u}^{(j+1)} = \mathbf{u}^{(j)} - \alpha \left. \frac{\partial C}{\partial \mathbf{u}} \right|_{\mathbf{u}^{(j)}}$$

In this case, the optimization takes the simplest available step towards the minimum. This method is efficient in cases where the set of inputs being optimized is large, since each iteration requires only first-order gradient information.
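Under the same sketch as above, the NGD step-size function reduces to a single scaled gradient evaluation (reusing the hypothetical `gradient` and `optimize` helpers):

```python
def ngd_step(cost, u, alpha=0.01):
    # Steepest-descent step: move proportionally to the gradient.
    return alpha * gradient(cost, u)

# u_opt = optimize(cost, ngd_step, u0)
```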
The nonlinear Newton's optimization makes the assumption that the cost function approximates a quadratic function and utilizes the second derivative of $C$, the Hessian. We thus set the step-size function to

$$h(\mathbf{u}) = \left( \frac{\partial^2 C}{\partial \mathbf{u}^2} \right)^{-1} \frac{\partial C}{\partial \mathbf{u}}$$

making the optimization

$$\mathbf{u}^{(j+1)} = \mathbf{u}^{(j)} - \left. \left( \frac{\partial^2 C}{\partial \mathbf{u}^2} \right)^{-1} \frac{\partial C}{\partial \mathbf{u}} \right|_{\mathbf{u}^{(j)}}$$

This method can be proven to be exact with a single step when the cost function is truly quadratic.
While this method often converges in many fewer steps than the NGD method, it is computationally much more expensive, implying that in problems where the number of inputs is large, the cost of computing and inverting the Hessian at each iteration may outweigh the savings in iteration count.
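Continuing the same sketch, the Newton step scales the gradient by the inverse of a (here, finite-difference) Hessian; the many gradient evaluations and the linear solve are what make each iteration expensive:

```python
def hessian(cost, u, d=1e-4):
    # Finite-difference Hessian built column-by-column from the gradient.
    H = np.zeros((u.size, u.size))
    for i in range(u.size):
        du = np.zeros_like(u)
        du[i] = d
        H[:, i] = (gradient(cost, u + du) - gradient(cost, u - du)) / (2 * d)
    return H

def newton_step(cost, u):
    # Newton step: solve H h = dC/du rather than explicitly inverting H.
    return np.linalg.solve(hessian(cost, u), gradient(cost, u))

# u_opt = optimize(cost, newton_step, u0)
```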
description coming soon...
For some information on the rightmost image, see michaelnaps/roomba_draws.git.