Robust Trajectory Optimization

Motion planning algorithms for agile robotic systems operating in uncertain environments, with application to self-driving cars, drones, and autonomous spacecraft. Emphasis is placed on real-time implementability (e.g., via massive parallelization on GPUs), on robustness (via techniques from robust model predictive control, convex optimization, and contraction theory), and on formal performance guarantees (via advanced mathematical and statistical tools).

Keywords: Motion Planning, Robust Control, Optimization

Students: Sumeet Singh, Brian Ichter, Edward Schmerling

Related Work

Conference Articles

  1. S. Singh, V. Sindhwani, J.-J. E. Slotine, and M. Pavone, “Learning Stabilizable Dynamical Systems via Control Contraction Metrics,” in Workshop on Algorithmic Foundations of Robotics, 2018. (In Press)

    Abstract: We propose a novel framework for learning stabilizable nonlinear dynamical systems for continuous control tasks in robotics. The key idea is to develop a new control-theoretic regularizer for dynamics fitting rooted in the notion of stabilizability, which guarantees that the learnt system can be accompanied by a robust controller capable of stabilizing any trajectory that the system can generate. By leveraging tools from contraction theory, statistical learning, and convex optimization, we provide a general and tractable algorithm to learn stabilizable dynamics, which can be applied to complex underactuated systems. We validate the proposed algorithm on a simulated planar quadrotor system and observe that the control-theoretic regularized dynamics model is able to consistently generate and accurately track reference trajectories, whereas the model learnt using standard regression techniques, e.g., ridge regression (RR), does extremely poorly on both tasks. Furthermore, in aggressive flight regimes with high velocity and bank angle, the tracking controller fails to stabilize the trajectory generated by the ridge-regularized model, whereas no instabilities were observed using the control-theoretic learned model, even with a small number of demonstration examples. The results presented illustrate the need to infuse standard model-based reinforcement learning algorithms with concepts drawn from nonlinear control theory for improved reliability.

    @inproceedings{SinghSindhwaniEtAl2018,
      author = {Singh, S. and Sindhwani, V. and Slotine, J.-J. E. and Pavone, M.},
      title = {Learning Stabilizable Dynamical Systems via Control Contraction Metrics},
      booktitle = {{Workshop on Algorithmic Foundations of Robotics}},
      year = {2018},
      note = {In Press},
      month = oct,
      url = {https://arxiv.org/abs/1808.00113},
      keywords = {press},
      owner = {ssingh19},
      timestamp = {2018-09-21}
    }
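
    The paper's regularizer is built on control contraction metrics, but the underlying idea (biasing a dynamics fit toward models that admit stabilizing controllers, rather than fitting for prediction error alone) can be sketched on a toy linear system. The following is an illustrative stand-in, not the paper's algorithm: a ridge fit of a discrete-time dynamics matrix, followed by a projection that makes the matrix contractive in the 2-norm. All names and the margin value are assumptions for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data from a stable 2-D linear system x_{t+1} = A_true x_t + noise
    A_true = np.array([[0.9, 0.2],
                       [0.0, 0.8]])
    X = rng.standard_normal((200, 2))
    Y = X @ A_true.T + 0.05 * rng.standard_normal((200, 2))

    # Plain ridge regression fit of the dynamics matrix
    lam = 1e-2
    A_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ Y).T

    def project_contractive(A, margin=0.05):
        """Crude stability surrogate: clip the singular values of A so
        its spectral norm is at most 1 - margin, i.e., the map x -> A x
        is a contraction in the Euclidean norm. This is only a stand-in
        for the CCM-based regularizer used in the paper."""
        U, s, Vt = np.linalg.svd(A)
        s = np.minimum(s, 1.0 - margin)
        return U @ np.diag(s) @ Vt

    A_stable = project_contractive(A_ridge)
    print(np.linalg.norm(A_stable, 2))  # spectral norm clipped to <= 0.95
    ```

    The point of the sketch is the shape of the trade-off: the projection slightly perturbs the fitted model in exchange for a guarantee that every trajectory it generates is trackable, which is the same reliability-versus-fit tension the paper resolves with a principled, convex formulation.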
    
  2. S. Singh, M. Chen, S. L. Herbert, C. J. Tomlin, and M. Pavone, “Robust Tracking with Model Mismatch for Fast and Safe Planning: an SOS Optimization Approach,” in Workshop on Algorithmic Foundations of Robotics, 2018. (In Press)

    Abstract: In the pursuit of real-time motion planning, a commonly adopted practice is to compute a trajectory by running a planning algorithm on a simplified, low-dimensional dynamical model, and then employ a feedback tracking controller that tracks such a trajectory by accounting for the full, high-dimensional system dynamics. While this strategy of planning with model mismatch generally yields fast computation times, there are no guarantees of dynamic feasibility, which hampers application to safety-critical systems. Building upon recent work that addressed this problem through the lens of Hamilton-Jacobi (HJ) reachability, we devise an algorithmic framework whereby one computes, offline, for a pair of "planner" (i.e., low-dimensional) and "tracking" (i.e., high-dimensional) models, a feedback tracking controller and associated tracking bound. This bound is then used as a safety margin when generating motion plans via the low-dimensional model. Specifically, we harness the computational tool of sum-of-squares (SOS) programming to design a bilinear optimization algorithm for the computation of the feedback tracking controller and associated tracking bound. The algorithm is demonstrated via numerical experiments, with an emphasis on investigating the trade-off between the increased computational scalability afforded by SOS and its intrinsic conservativeness. Collectively, our results enable scaling the appealing strategy of planning with model mismatch to systems that are beyond the reach of HJ analysis, while maintaining safety guarantees.

    @inproceedings{SinghChenEtAl2018,
      author = {Singh, S. and Chen, M. and Herbert, S. L. and Tomlin, C. J. and Pavone, M.},
      title = {Robust Tracking with Model Mismatch for Fast and Safe Planning: an {SOS} Optimization Approach},
      booktitle = {{Workshop on Algorithmic Foundations of Robotics}},
      year = {2018},
      note = {In Press},
      month = oct,
      url = {https://arxiv.org/abs/1808.00649},
      keywords = {press},
      owner = {ssingh19},
      timestamp = {2018-09-21}
    }
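
    How an offline-computed tracking bound enters online planning can be illustrated with a minimal sketch: the low-dimensional planner treats the bound as a safety margin by inflating every obstacle before collision checking, so that any plan it returns remains safe even when the full system deviates from it by up to the bound. This is an assumed toy setup (disk obstacles, straight-line segments, a made-up bound value), not the paper's SOS machinery.

    ```python
    import numpy as np

    # Offline-computed tracking bound (assumed value for illustration):
    # the high-dimensional system is guaranteed to stay within this
    # distance of the low-dimensional planner trajectory.
    TRACKING_BOUND = 0.3

    def segment_clear(p0, p1, obstacles, bound=TRACKING_BOUND, n=50):
        """Check a straight planner segment against disk obstacles,
        each inflated by the tracking bound (the safety margin)."""
        pts = np.linspace(p0, p1, n)          # sampled points along segment
        for center, radius in obstacles:
            d = np.linalg.norm(pts - np.asarray(center), axis=1)
            if np.any(d <= radius + bound):   # collision with inflated disk
                return False
        return True

    obstacles = [((2.0, 0.0), 0.5)]
    p0, p1 = np.array([0.0, 0.0]), np.array([4.0, 1.5])
    print(segment_clear(p0, p1, obstacles))   # False: inflated disk blocks it
    ```

    In this example the segment passes roughly 0.7 units from the obstacle center, so it clears the nominal 0.5-radius disk but not the disk inflated by the 0.3 tracking bound; without the margin (`bound=0.0`) the planner would accept a plan the full system could not safely track.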