AA 203: Optimal and Learning-based Control

Spring 2020

Course Description

Optimal control solution techniques for systems with known and unknown dynamics. Dynamic programming, Hamilton-Jacobi reachability, and direct and indirect methods for trajectory optimization. Introduction to model predictive control. Model-based and model-free reinforcement learning, and connections between modern reinforcement learning and fundamental optimal control ideas.
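As a small taste of how dynamic programming and optimal control fit together in this course, here is a minimal sketch of finite-horizon discrete-time LQR solved by the backward Riccati recursion. The double-integrator dynamics, cost weights, and horizon below are illustrative choices for this sketch, not course materials:

```python
import numpy as np

def lqr_backward_pass(A, B, Q, R, Qf, N):
    """Finite-horizon discrete LQR via the dynamic programming (Riccati) recursion.

    Minimizes sum_{k=0}^{N-1} (x'Qx + u'Ru) + x_N'Qf x_N subject to
    x_{k+1} = A x_k + B u_k. Returns time-varying gains K_k with u_k = -K_k x_k.
    """
    P = Qf              # cost-to-go Hessian at the terminal time
    gains = []
    for _ in range(N):
        # Optimal gain at this stage, from minimizing the stage cost plus cost-to-go.
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: cost-to-go one step earlier.
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    gains.reverse()     # gains[k] now applies at time step k
    return gains

# Illustrative double integrator (dt = 0.1): state = [position, velocity].
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[1.0]])
gains = lqr_backward_pass(A, B, Q, R, Qf=10 * np.eye(2), N=50)

# Roll out the closed loop from x0 = [1, 0]; the state is driven toward zero.
x = np.array([1.0, 0.0])
for K in gains:
    u = -K @ x
    x = A @ x + B @ u
```

The backward pass is exactly the dynamic programming principle specialized to quadratic cost and linear dynamics, which is why the optimal policy comes out as time-varying linear state feedback.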

Meeting Times

Lectures will be held online; details on lecture recordings and office hours are available in the syllabus.

Syllabus

The class syllabus can be found here.

Schedule

Subject to change. Lecture notes are available here; we will try to post updated notes before each class.

Week 1
  Introduction, nonlinear optimization (Lecture 1)
  Constrained nonlinear optimization (Lecture 2)
  Recitation: Linear dynamical systems (Recitation 1)

Week 2
  Dynamic programming, discrete LQR (Lecture 3)
  Stochastic DP, value iteration, policy iteration (Lecture 4)
  Recitation: Nonlinear regression fundamentals (Recitation 2)

Week 3
  Iterative LQR, DDP, and LQG (Lecture 5)
  Introduction to reinforcement learning (Lecture 6)
  Recitation: Introduction to Python (Recitation 3)

Week 4
  HJB, HJI, and reachability analysis (Lecture 7)
  Direct methods for optimal control (Lecture 8)
  Recitation: Convex and mixed-integer programming (Recitation 4)

Week 5
  Direct collocation and SQP (Lecture 9)
  Introduction to MPC (Lecture 10)
  Recitation: Training neural networks and PyTorch (Recitation 5)

Week 6
  Feasibility and stability of MPC (Lecture 11)
  Adaptive optimal control (Lecture 12)

Week 7
  Intro to model-based RL (Lecture 13)
  Model-free RL (Lecture 14)

Week 8
  Model-based policy learning (Lecture 15)

Week 9
  Calculus of variations (Lecture 16)
  Indirect methods for optimal control (Lecture 17)

Week 10
  Pontryagin's maximum principle (Lecture 18)
  Numerical aspects of indirect optimal control (Lecture 19)
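As a companion to the week 2 material on stochastic DP and value iteration, here is a minimal tabular value-iteration sketch. The 1-D gridworld, reward, and discount factor below are hypothetical, chosen only to illustrate the Bellman backup:

```python
import numpy as np

# Hypothetical 1-D gridworld: states 0..4, actions {-1, +1} (move left/right),
# deterministic transitions clipped at the boundaries, reward 1 for being at state 4.
n_states, actions, gamma = 5, (-1, +1), 0.9

def step(s, a):
    s_next = min(max(s + a, 0), n_states - 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

# Value iteration: repeatedly apply the Bellman optimality backup
#   V(s) <- max_a [ r(s, a) + gamma * V(s') ]
# until the values stop changing.
V = np.zeros(n_states)
for _ in range(500):
    V_new = np.array([max(step(s, a)[1] + gamma * V[step(s, a)[0]] for a in actions)
                      for s in range(n_states)])
    delta = np.max(np.abs(V_new - V))
    V = V_new
    if delta < 1e-8:
        break

# Greedy policy extracted from the converged values: move right everywhere.
policy = [max(actions, key=lambda a, s=s: step(s, a)[1] + gamma * V[step(s, a)[0]])
          for s in range(n_states)]
```

Because the backup is a contraction in the sup norm with modulus gamma, the loop converges geometrically to the unique optimal value function, and the greedy policy extracted from it is optimal.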