Anirudha Majumdar

Contacts:

Email: ani dot majumdar at princeton dot edu

Anirudha Majumdar was a postdoctoral scholar at the Autonomous Systems Lab in 2016-2017. He completed his Ph.D. in the Electrical Engineering and Computer Science department at MIT with Russ Tedrake. Ani received his undergraduate degree in Mechanical Engineering and Mathematics from the University of Pennsylvania, where he was a member of the GRASP lab. His research is primarily in robotics: he works on algorithms for controlling highly dynamic robots, such as unmanned aerial vehicles, with formal guarantees on the safety of the system. Ani’s research has been recognized by the Siebel Foundation Scholarship and the Best Conference Paper Award at the International Conference on Robotics and Automation (ICRA) 2013. Anirudha is now an Assistant Professor in the Mechanical and Aerospace Engineering department at Princeton University.

Awards:

  • Siebel Foundation Scholarship
  • Best Conference Paper Award, Int. Conf. on Robotics and Automation (ICRA), 2013.

Currently at Princeton University

ASL Publications

  1. S. Singh, B. Landry, A. Majumdar, J.-J. E. Slotine, and M. Pavone, “Robust Feedback Motion Planning via Contraction Theory,” Int. Journal of Robotics Research, vol. 42, no. 9, pp. 655–688, 2023.

    Abstract:

    @article{SinghLandryEtAl2019,
      author = {Singh, S. and Landry, B. and Majumdar, A. and Slotine, J.-J. E. and Pavone, M.},
      title = {Robust Feedback Motion Planning via Contraction Theory},
      journal = {{Int. Journal of Robotics Research}},
      volume = {42},
      number = {9},
      pages = {655--688},
      year = {2023},
      keywords = {pub},
      owner = {ssingh19},
      timestamp = {2019-09-11},
      url = {https://journals.sagepub.com/doi/pdf/10.1177/02783649231186165}
    }
    
  2. S. Singh, J. Lacotte, A. Majumdar, and M. Pavone, “Risk-sensitive Inverse Reinforcement Learning via Semi- and Non-Parametric Methods,” Int. Journal of Robotics Research, vol. 37, no. 13, pp. 1713–1740, 2018.

    Abstract: The literature on Inverse Reinforcement Learning (IRL) typically assumes that humans take actions in order to minimize the expected value of a cost function, i.e., that humans are risk neutral. Yet, in practice, humans are often far from being risk neutral. To fill this gap, the objective of this paper is to devise a framework for risk-sensitive IRL in order to explicitly account for a human’s risk sensitivity. To this end, we propose a flexible class of models based on coherent risk measures, which allow us to capture an entire spectrum of risk preferences from risk-neutral to worst-case. We propose efficient non-parametric algorithms based on linear programming and semi-parametric algorithms based on maximum likelihood for inferring a human’s underlying risk measure and cost function for a rich class of static and dynamic decision-making settings. The resulting approach is demonstrated on a simulated driving game with ten human participants. Our method is able to infer and mimic a wide range of qualitatively different driving styles from highly risk-averse to risk-neutral in a data-efficient manner. Moreover, comparisons of the Risk-Sensitive (RS) IRL approach with a risk-neutral model show that the RS-IRL framework more accurately captures observed participant behavior both qualitatively and quantitatively, especially in scenarios where catastrophic outcomes such as collisions can occur.
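
    As a concrete instance of the coherent-risk spectrum described above, it helps to keep the Conditional Value-at-Risk (CVaR) family in mind; the following variational form is standard and purely illustrative, not quoted from the paper:

      \mathrm{CVaR}_\alpha(Z) \;=\; \inf_{t \in \mathbb{R}} \Big\{ t + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(Z - t)_+\big] \Big\}, \qquad \alpha \in [0, 1)

    Here \alpha = 0 recovers the risk-neutral expectation \mathbb{E}[Z], while \alpha \to 1 approaches the worst-case cost \operatorname{ess\,sup} Z, so a single scalar parameter sweeps out the spectrum of risk preferences the inference procedure searches over.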

    @article{SinghLacotteEtAl2018,
      author = {Singh, S. and Lacotte, J. and Majumdar, A. and Pavone, M.},
      title = {Risk-sensitive Inverse Reinforcement Learning via Semi- and Non-Parametric Methods},
      journal = {{Int. Journal of Robotics Research}},
      volume = {37},
      number = {13},
      pages = {1713--1740},
      year = {2018},
      url = {https://arxiv.org/pdf/1711.10055.pdf},
      owner = {ssingh19},
      timestamp = {2019-08-21}
    }
    
  3. S. Singh, Y.-L. Chow, A. Majumdar, and M. Pavone, “A Framework for Time-Consistent, Risk-Sensitive Model Predictive Control: Theory and Algorithms,” ArXiv 1703.01029, 2018.

    Abstract: In this paper we present a framework for risk-sensitive model predictive control (MPC) of linear systems affected by stochastic multiplicative uncertainty. Our key innovation is to consider a time-consistent, dynamic risk evaluation of the cumulative cost as the objective function to be minimized. This framework is axiomatically justified in terms of time-consistency of risk assessments, is amenable to dynamic optimization, and is unifying in the sense that it captures a full range of risk preferences from risk-neutral (i.e., expectation) to worst case. Within this framework, we propose and analyze an online risk-sensitive MPC algorithm that is provably stabilizing. Furthermore, by exploiting the dual representation of time-consistent, dynamic risk measures, we cast the computation of the MPC control law as a convex optimization problem amenable to real-time implementation. Simulation results are presented and discussed.
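
    Concretely, the time-consistent objective referred to above is built by nesting one-step coherent risk mappings; in the standard notation (assumed here, not quoted from the paper), the evaluation of a cost sequence c_0, ..., c_N takes the form

      J(x_0) \;=\; c_0 + \rho_0\Big( c_1 + \rho_1\big( c_2 + \cdots + \rho_{N-1}(c_N) \big) \Big),

    where each \rho_k is a one-step coherent risk measure. Choosing every \rho_k to be the expectation recovers risk-neutral MPC, while choosing the essential supremum recovers worst-case MPC; it is this nested structure that makes the risk assessment time-consistent.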

    @unpublished{SinghChowEtAl2018,
      author = {Singh, S. and Chow, Y.-L. and Majumdar, A. and Pavone, M.},
      title = {A Framework for Time-Consistent, Risk-Sensitive Model Predictive Control: Theory and Algorithms},
      note = {{Available at }\url{http://arxiv.org/abs/1703.01029}},
      year = {2018},
      url = {http://arxiv.org/pdf/1703.01029.pdf},
      owner = {ssingh19},
      timestamp = {2018-06-30}
    }
    
  4. S. Singh, Y.-L. Chow, A. Majumdar, and M. Pavone, “A Framework for Time-Consistent, Risk-Sensitive Model Predictive Control: Theory and Algorithms,” IEEE Transactions on Automatic Control, vol. 64, no. 7, pp. 2905–2912, 2019.

    Abstract: In this paper we present a framework for risk-sensitive model predictive control (MPC) of linear systems affected by stochastic multiplicative uncertainty. Our key innovation is to consider a time-consistent, dynamic risk evaluation of the cumulative cost as the objective function to be minimized. This framework is axiomatically justified in terms of time-consistency of risk assessments, is amenable to dynamic optimization, and is unifying in the sense that it captures a full range of risk preferences from risk-neutral (i.e., expectation) to worst case. Within this framework, we propose and analyze an online risk-sensitive MPC algorithm that is provably stabilizing. Furthermore, by exploiting the dual representation of time-consistent, dynamic risk measures, we cast the computation of the MPC control law as a convex optimization problem amenable to real-time implementation. Simulation results are presented and discussed.
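
    The dual representation exploited above is the standard one for coherent risk measures; schematically (notation assumed here),

      \rho(Z) \;=\; \max_{q \in \mathcal{Q}} \mathbb{E}_q[Z],

    where \mathcal{Q} is a convex, compact set of probability distributions (the "risk envelope"). When \mathcal{Q} is a polytope, the inner maximization dualizes into finitely many linear constraints, which is what renders the receding-horizon problem a convex program amenable to real-time implementation.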

    @article{SinghChowEtAl2018b,
      author = {Singh, S. and Chow, Y.-L. and Majumdar, A. and Pavone, M.},
      title = {A Framework for Time-Consistent, Risk-Sensitive Model Predictive Control: Theory and Algorithms},
      journal = {{IEEE Transactions on Automatic Control}},
      volume = {64},
      number = {7},
      pages = {2905--2912},
      year = {2019},
      note = {{Extended version available at:} \url{http://arxiv.org/abs/1703.01029}},
      url = {http://arxiv.org/pdf/1703.01029.pdf},
      owner = {ssingh19},
      timestamp = {2019-07-29}
    }
    
  5. A. Majumdar and M. Pavone, “How Should a Robot Assess Risk? Towards an Axiomatic Theory of Risk in Robotics,” in Int. Symp. on Robotics Research, Puerto Varas, Chile, 2017.

    Abstract: Endowing robots with the capability of assessing risk and making risk-aware decisions is widely considered a key step toward ensuring safety for robots operating under uncertainty. But, how should a robot quantify risk? A natural and common approach is to consider the framework whereby costs are assigned to stochastic outcomes - an assignment captured by a cost random variable. Quantifying risk then corresponds to evaluating a risk metric, i.e., a mapping from the cost random variable to a real number. Yet, the question of what constitutes a "good" risk metric has received little attention within the robotics community. The goal of this paper is to explore and partially address this question by advocating axioms that risk metrics in robotics applications should satisfy in order to be employed as rational assessments of risk. We discuss general representation theorems that precisely characterize the class of metrics that satisfy these axioms (referred to as distortion risk metrics), and provide instantiations that can be used in applications. We further discuss pitfalls of commonly used risk metrics in robotics, and discuss additional properties that one must consider in sequential decision making tasks. Our hope is that the ideas presented here will lead to a foundational framework for quantifying risk (and hence safety) in robotics applications.
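
    For readers unfamiliar with the term, a distortion risk metric admits the following Choquet-integral representation (a standard form, stated here for nonnegative costs and not quoted from the paper):

      \rho_g(Z) \;=\; \int_0^\infty g\big( \mathbb{P}(Z > t) \big)\, \mathrm{d}t,

    where g : [0, 1] \to [0, 1] is a nondecreasing concave distortion function with g(0) = 0 and g(1) = 1. The identity g(u) = u recovers the expectation, g \equiv 1 on (0, 1] recovers the worst case, and CVaR corresponds to a piecewise-linear g, so the axioms single out exactly this family of risk assessments.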

    @inproceedings{MajumdarPavone2017,
      author = {Majumdar, A. and Pavone, M.},
      title = {How Should a Robot Assess Risk? {Towards} an Axiomatic Theory of Risk in Robotics},
      booktitle = {{Int. Symp. on Robotics Research}},
      year = {2017},
      address = {Puerto Varas, Chile},
      month = dec,
      url = {/wp-content/papercite-data/pdf/Majumdar.Pavone.ISRR17.pdf},
      owner = {anirudha},
      timestamp = {2018-01-16}
    }
    
  6. A. Majumdar, S. Singh, A. Mandlekar, and M. Pavone, “Risk-sensitive Inverse Reinforcement Learning via Coherent Risk Models,” in Robotics: Science and Systems, Cambridge, Massachusetts, 2017.

    Abstract: The literature on Inverse Reinforcement Learning (IRL) typically assumes that humans take actions in order to minimize the expected value of a cost function, i.e., that humans are risk neutral. Yet, in practice, humans are often far from being risk neutral. To fill this gap, the objective of this paper is to devise a framework for risk-sensitive IRL in order to explicitly account for an expert’s risk sensitivity. To this end, we propose a flexible class of models based on coherent risk metrics, which allow us to capture an entire spectrum of risk preferences from risk-neutral to worst-case. We propose efficient algorithms based on Linear Programming for inferring an expert’s underlying risk metric and cost function for a rich class of static and dynamic decision-making settings. The resulting approach is demonstrated on a simulated driving game with ten human participants. Our method is able to infer and mimic a wide range of qualitatively different driving styles from highly risk-averse to risk-neutral in a data-efficient manner. Moreover, comparisons of the Risk-Sensitive (RS) IRL approach with a risk-neutral model show that the RS-IRL framework more accurately captures observed participant behavior both qualitatively and quantitatively.
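
    A toy numerical illustration of why risk sensitivity is observable in behavior (and hence invertible from demonstrations): two actions with the same expected cost separate sharply once costs are scored by CVaR. The sketch below uses an empirical CVaR estimator and made-up costs; it is not code from the paper.

      # Toy example (Python): a risk-neutral agent is indifferent between the
      # two actions below, but a CVaR-minimizing (risk-averse) agent prefers
      # the safe one -- the behavioral signature that RS-IRL inverts.
      import numpy as np

      rng = np.random.default_rng(0)

      def cvar(costs, alpha):
          """Empirical CVaR_alpha: mean of the worst (1 - alpha) fraction of costs."""
          tail = np.sort(costs)[int(np.ceil(alpha * len(costs))):]
          return tail.mean()

      n = 100_000
      cost_safe = rng.normal(loc=1.0, scale=0.1, size=n)        # E ~ 1.0
      cost_risky = np.where(rng.random(n) < 0.05, 10.0, 0.526)  # E ~ 1.0, rare catastrophe

      for alpha in (0.0, 0.5, 0.95):
          print(f"alpha={alpha}: safe={cvar(cost_safe, alpha):.2f}, "
                f"risky={cvar(cost_risky, alpha):.2f}")

    At alpha = 0 the two actions are indistinguishable (both score near 1.0), while at alpha = 0.95 the risky action scores near 10, so observed choices reveal the agent's risk preference.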

    @inproceedings{MajumdarSinghEtAl2017,
      author = {Majumdar, A. and Singh, S. and Mandlekar, A. and Pavone, M.},
      title = {Risk-sensitive Inverse Reinforcement Learning via Coherent Risk Models},
      booktitle = {{Robotics: Science and Systems}},
      year = {2017},
      address = {Cambridge, Massachusetts},
      month = jul,
      url = {/wp-content/papercite-data/pdf/Majumdar.Singh.Mandlekar.Pavone.RSS17.pdf},
      owner = {ssingh19},
      timestamp = {2017-04-28}
    }
    
  7. S. Singh, A. Majumdar, J.-J. E. Slotine, and M. Pavone, “Robust Online Motion Planning via Contraction Theory and Convex Optimization,” in Proc. IEEE Conf. on Robotics and Automation, Singapore, 2017.

    Abstract: We present a framework for online generation of robust motion plans for robotic systems with nonlinear dynamics subject to bounded disturbances, control constraints, and online state constraints such as obstacles. In an offline phase, one computes the structure of a feedback controller that can be efficiently implemented online to track any feasible nominal trajectory. The offline phase leverages contraction theory and convex optimization to characterize a fixed-size “tube” that the state is guaranteed to remain within while tracking a nominal trajectory (representing the center of the tube). In the online phase, when the robot is faced with obstacles, a motion planner uses such a tube as a robustness margin for collision checking, yielding nominal trajectories that can be safely executed (i.e., tracked without collisions under disturbances). In contrast to recent work on robust online planning using funnel libraries, our approach is not restricted to a fixed library of maneuvers computed offline and is thus particularly well-suited to applications such as UAV flight in densely cluttered environments where complex maneuvers may be required to reach a goal. We demonstrate our approach through simulations of a 6-state planar quadrotor navigating cluttered environments in the presence of a cross-wind. We also discuss applications of our approach to Tube Model Predictive Control (TMPC) and compare the merits of our method with state-of-the-art nonlinear TMPC techniques.
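
    Schematically (notation assumed here, not quoted from the paper), the offline certificate is a contraction metric M(x) \succ 0 and rate \lambda > 0 such that the differential Lyapunov function

      V(x, \delta x) \;=\; \delta x^\top M(x)\, \delta x \quad \text{satisfies} \quad \dot V \le -2 \lambda V

    along the disturbance-free closed loop. Under a disturbance bound \|w(t)\| \le \bar{w}, the tracking error then converges to and remains within a ball whose radius scales like \bar{w} / \lambda (up to the conditioning of M); this is the fixed-size tube used as a robustness margin for collision checking online.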

    @inproceedings{SinghMajumdarEtAl2017,
      author = {Singh, S. and Majumdar, A. and Slotine, J.-J. E. and Pavone, M.},
      title = {Robust Online Motion Planning via Contraction Theory and Convex Optimization},
      booktitle = {{Proc. IEEE Conf. on Robotics and Automation}},
      year = {2017},
      note = {{Extended version available at }\url{http://asl.stanford.edu/wp-content/papercite-data/pdf/Singh.Majumdar.Slotine.Pavone.ICRA17.pdf}},
      address = {Singapore},
      month = may,
      url = {/wp-content/papercite-data/pdf/Singh.Majumdar.Slotine.Pavone.ICRA17.pdf},
      owner = {bylard},
      timestamp = {2018-06-30}
    }
    
  8. A. Majumdar and M. Pavone, “How Should a Robot Assess Risk? Towards an Axiomatic Theory of Risk in Robotics,” ArXiv 1710.11040, 2017.

    Abstract:

    @unpublished{MajumdarPavone2017b,
      author = {Majumdar, Anirudha and Pavone, Marco},
      title = {How Should a Robot Assess Risk? {T}owards an Axiomatic Theory of Risk in Robotics},
      note = {{Extended version of ISRR 2017 paper. Available at }\url{http://arxiv.org/pdf/1710.11040.pdf}},
      year = {2017},
      journal = {ArXiv 1710.11040},
      timestamp = {2018-06-30}
    }