Nonlinear optimal control via nonlinear programming (NLP). This library relies on JAX to compute gradients, Jacobian-vector products, and Hessian-vector products through automatic differentiation (AD). Because gradients are computed exactly by AD rather than approximated with finite differences, the solution converges in a small number of iterations, but runtime suffers. Ideally, we would use analytical expressions for the gradients, Jacobians, and Hessians (or approximate Hessians).
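As a minimal sketch of the AD primitives involved (JAX's public API applied to a stand-in scalar objective, not this library's internals):

```python
import jax
import jax.numpy as jnp

# Stand-in scalar objective -- for illustration only, not part of this library.
def f(z):
    return jnp.sum(jnp.sin(z) ** 2)

z = jnp.arange(3.0)
v = jnp.ones(3)

g = jax.grad(f)(z)                         # exact gradient via reverse-mode AD
_, dfv = jax.jvp(f, (z,), (v,))            # Jacobian-vector product via forward mode
hvp = jax.jvp(jax.grad(f), (z,), (v,))[1]  # Hessian-vector product: forward-over-reverse
```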
- First, create an optimal control problem by subclassing the template given in core.py. See cartpole.py for an example (a hypothetical sketch of such a subclass follows the snippet below).
```python
import jax.numpy as jnp
from nopt.problems import CartPole

cartpole = CartPole()
```
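What such a subclass looks like depends on the abstract interface in core.py; the base-class name and method names below (`OptimalControlProblem`, `dynamics`, `cost`) are assumptions for illustration, not the library's actual API:

```python
import jax.numpy as jnp
from nopt.core import OptimalControlProblem  # assumed name and location

class DoubleIntegrator(OptimalControlProblem):  # hypothetical problem class
    # Continuous-time dynamics x' = f(x, u): a 1D double integrator.
    def dynamics(self, x, u):  # assumed method name
        return jnp.array([x[1], u[0]])

    # Running cost integrated along the trajectory.
    def cost(self, x, u):  # assumed method name
        return jnp.sum(u ** 2)
```

Because the dynamics and cost are written in jax.numpy, JAX can differentiate them automatically.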
- Specify the boundary conditions (here, a cart-pole swing-up from hanging to upright) and the number of grid points:
```python
bcs = {'x0': jnp.array([0., 0., 0., 0.]),
       'xN': jnp.array([0., 0., jnp.pi, 0.])}
N = 100
```
- Create an NLP problem:
```python
from nopt import NlpProblem

problem = NlpProblem(cartpole, boundary_conditions=bcs, N=N)
```
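In a direct-transcription setup like this, the states and controls at the N grid points are typically stacked into a single decision vector $z$, and the dynamics and boundary conditions become constraints, so the problem takes the generic NLP form (the exact constraint layout here is library-specific and not shown):

$$
\min_{z} \; f(z) \quad \text{subject to} \quad g(z) = 0, \quad h(z) \le 0
$$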
- Find the optimized solution:
```python
from nopt import solve

zstar = solve(problem, max_iters=10)
```
You can visualize the optimized solution by calling `plot`:
```python
anim = problem.plot(zstar)
```
See the example notebook.