Rohan Sinha

Contact:

Email: rhnsinha at stanford dot edu

Rohan is a PhD candidate in the Department of Aeronautics and Astronautics at Stanford University. His research focuses on developing methods that improve the reliability of ML-enabled robotic systems, particularly when these systems encounter out-of-distribution conditions with respect to their training data. Broadly, his research interests lie at the intersection of control theory, machine learning, and applied robotics.

Previously, he received bachelor’s degrees in Mechanical Engineering and Computer Science from the University of California, Berkeley. As an undergraduate, Rohan worked on data-driven predictive control under Prof. Francesco Borrelli in the Model Predictive Control Lab and on learning-based control algorithms that rely on vision systems under Prof. Benjamin Recht in the Berkeley Artificial Intelligence Research (BAIR) Lab. He has also interned as an autonomous driving engineer at Delphi (now Motional) and as a software engineer at Amazon.

In his free time, Rohan enjoys playing a variety of sports including sailing, tennis, soccer, and snowboarding.


ASL Publications

  1. R. Sinha, A. Elhafsi, C. Agia, M. Foutter, E. Schmerling, and M. Pavone, “Real-Time Anomaly Detection and Planning with Large Language Models,” in Robotics: Science and Systems, 2024. (Submitted)

    Abstract: Foundation models, e.g., large language models, trained on internet-scale data possess zero-shot generalization capabilities that make them a promising technology for anomaly detection for robotic systems. Fully realizing this promise, however, poses two challenges: (i) mitigating the considerable computational expense of these models such that they may be applied online, and (ii) incorporating their judgement regarding potential anomalies into a safe control framework. In this work we present a two-stage reasoning framework: a fast binary anomaly classifier based on analyzing observations in an LLM embedding space, which may trigger a slower fallback selection stage that utilizes the reasoning capabilities of generative LLMs. These stages correspond to branch points in a model predictive control strategy that maintains the joint feasibility of continuing along various fallback plans as soon as an anomaly is detected (while the selector decides), thus ensuring safety. We demonstrate that, even when instantiated with relatively small language models, our fast anomaly classifier outperforms autoregressive reasoning with state-of-the-art GPT models. This enables our runtime monitor to improve the trustworthiness of dynamic robotic systems under resource and time constraints.

    @inproceedings{SinhaElhafsiEtAl2024,
      author = {Sinha, R. and Elhafsi, A. and Agia, C. and Foutter, M. and Schmerling, E. and Pavone, M.},
      title = {Real-Time Anomaly Detection and Planning with Large Language Models},
      booktitle = {{Robotics: Science and Systems}},
      keywords = {sub},
      note = {Submitted},
      year = {2024},
      owner = {rhnsinha},
      timestamp = {2024-03-01}
    }
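
    The fast stage of this framework classifies observations directly in an LLM embedding space. Below is a minimal sketch of that idea, not the paper's implementation: embed() is a hypothetical placeholder for an embedding-model call, and the bank of nominal embeddings and the alert threshold are assumed given.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Hypothetical stand-in for a language-model embedding call."""
        raise NotImplementedError

    def anomaly_score(obs_text: str, nominal_embeddings: np.ndarray) -> float:
        # Cosine similarity to the closest nominal (in-distribution) embedding;
        # a low maximum similarity suggests the observation is unfamiliar.
        z = embed(obs_text)
        z = z / np.linalg.norm(z)
        sims = nominal_embeddings @ z  # rows assumed unit-normalized
        return float(1.0 - sims.max())

    def is_anomalous(obs_text: str, nominal_embeddings: np.ndarray, threshold: float) -> bool:
        # Fast binary check; in the paper's framework a positive result triggers
        # the slower generative-LLM fallback-selection stage.
        return anomaly_score(obs_text, nominal_embeddings) > threshold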
    
  2. R. Luo, R. Sinha, Y. Sun, A. Hindy, S. Zhao, S. Savarese, E. Schmerling, and M. Pavone, “Online Distribution Shift Detection via Recency Prediction,” in Proc. IEEE Conf. on Robotics and Automation, 2024. (In Press)

    Abstract: When deploying modern machine learning-enabled robotic systems in high-stakes applications, detecting distributional shift is critical. However, most existing methods for detecting distribution shift are not well-suited to robotics settings, where data often arrives in a streaming fashion and may be very high-dimensional. In this work, we present an online method for detecting distributional shift with guarantees on the false positive rate — i.e., when there is no distribution shift, our system is very unlikely (with probability < ε) to falsely issue an alert; any alerts that are issued should therefore be heeded. Our method is specifically designed for efficient detection even with high dimensional data, and it empirically achieves up to 6x faster detection on realistic robotics settings compared to prior work while maintaining a low false negative rate in practice (whenever there is a distribution shift in our experiments, our method indeed emits an alert).

    @inproceedings{LuoSinhaEtAl2023,
      author = {Luo, R. and Sinha, R. and Sun, Y. and Hindy, A. and Zhao, S. and Savarese, S. and Schmerling, E. and Pavone, M.},
      booktitle = {{Proc. IEEE Conf. on Robotics and Automation}},
      title = {Online Distribution Shift Detection via Recency Prediction},
      year = {2024},
      keywords = {press},
      note = {In press},
      url = {https://arxiv.org/abs/2211.09916},
      owner = {rdyro},
      timestamp = {2022-09-21}
    }
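
    The guarantee has the form: if there is no distribution shift, the monitor raises an alert with probability at most ε over the deployment horizon. The paper achieves this via recency prediction; the sketch below only illustrates the same style of guarantee with a simpler and more conservative Bonferroni-corrected conformal test over a fixed horizon, with nonconformity scores assumed given.

    import numpy as np

    def conformal_p_value(score: float, calibration_scores: np.ndarray) -> float:
        # Fraction of calibration scores at least as extreme as the new score
        # (with +1 smoothing); super-uniform when there is no shift.
        n = len(calibration_scores)
        return (1 + np.sum(calibration_scores >= score)) / (n + 1)

    def monitor_stream(stream_scores, calibration_scores, eps, horizon):
        # Bonferroni correction over the horizon keeps the probability of a
        # false alert below eps when the stream matches the calibration data.
        for t, s in enumerate(stream_scores, start=1):
            if t > horizon:
                break
            if conformal_p_value(s, calibration_scores) < eps / horizon:
                return t  # time step at which the alert is raised
        return None       # no alert issued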
    
  3. R. Sinha, E. Schmerling, and M. Pavone, “Closing the Loop on Runtime Monitors with Fallback-Safe MPC,” in Proc. IEEE Conf. on Decision and Control, 2023.

    Abstract: When we rely on deep-learned models for robotic perception, we must recognize that these models may behave unreliably on inputs dissimilar from the training data, compromising the closed-loop system’s safety. This raises fundamental questions on how we can assess confidence in perception systems and to what extent we can take safety-preserving actions when external environmental changes degrade our perception model’s performance. Therefore, we present a framework to certify the safety of a perception-enabled system deployed in novel contexts. To do so, we leverage robust model predictive control (MPC) to control the system using the perception estimates while maintaining the feasibility of a safety-preserving fallback plan that does not rely on the perception system. In addition, we calibrate a runtime monitor using recently proposed conformal prediction techniques to certifiably detect when the perception system degrades beyond the tolerance of the MPC controller, resulting in an end-to-end safety assurance. We show that this control framework and calibration technique allow us to certify the system’s safety with orders of magnitude fewer samples than required to retrain the perception network when we deploy in a novel context on a photo-realistic aircraft taxiing simulator. Furthermore, we illustrate the safety-preserving behavior of the MPC on simulated examples of a quadrotor.

    @inproceedings{SinhaSchmerlingEtAl2023,
      author = {Sinha, R. and Schmerling, E. and Pavone, M.},
      title = {Closing the Loop on Runtime Monitors with Fallback-Safe MPC},
      year = {2023},
      keywords = {pub},
      booktitle = {{Proc. IEEE Conf. on Decision and Control}},
      url = {/wp-content/papercite-data/pdf/Sinha.Pavone.CDC23.pdf},
      owner = {rhnsinha},
      timestamp = {2023-04-12}
    }
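
    The runtime monitor is calibrated with conformal prediction so that, with a user-specified probability, it flags perception errors larger than the fallback-safe MPC can tolerate. Below is a minimal sketch of a split-conformal threshold calibration under that reading; the MPC itself and the paper's exact calibration procedure are omitted, and the calibration errors are assumed to be held-out perception errors from the deployment context.

    import numpy as np

    def calibrate_threshold(calibration_errors: np.ndarray, alpha: float) -> float:
        # Split-conformal quantile: with probability at least 1 - alpha, a new
        # exchangeable perception error does not exceed the returned threshold.
        n = len(calibration_errors)
        k = int(np.ceil((n + 1) * (1 - alpha)))
        if k > n:
            raise ValueError("Not enough calibration samples for this alpha.")
        return float(np.sort(calibration_errors)[k - 1])

    # Usage sketch: raise the alarm (and commit to the fallback plan) whenever
    # the monitored error signal exceeds the calibrated tolerance.
    tau = calibrate_threshold(np.random.rand(500), alpha=0.05)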
    
  4. M. Foutter, R. Sinha, S. Banerjee, and M. Pavone, “Self-Supervised Model Generalization using Out-of-Distribution Detection,” in Conf. on Robot Learning - Workshop on Out-of-Distribution Generalization in Robotics, 2023.

    Abstract: Autonomous agents increasingly rely on learned components to streamline safe and reliable decision making. However, data dissimilar to that seen in training, deemed to be Out-of-Distribution (OOD), creates undefined behavior in the output of our learned components, which can have detrimental consequences in a safety critical setting such as autonomous satellite rendezvous. In the wild, we typically are exposed to a mix of in-and-out of distribution data where OOD inputs correspond to uncommon and unfamiliar data when a nominally competent system encounters a new situation. In this paper, we propose an architecture that detects the presence of OOD inputs in an online stream of data. The architecture then uses these OOD inputs to recognize domain invariant features between the original training and OOD domain to improve model inference. We demonstrate that our algorithm more than doubles model accuracy on the OOD domain with sparse, unlabeled OOD examples compared to a naive model without such data on shifted MNIST domains. Importantly, we also demonstrate our algorithm maintains strong accuracy on the original training domain, generalizing the model to a mix of in-and-out of distribution examples seen at deployment. Code for our experiment is available at: https://github.com/StanfordASL/CoRL_OODWorkshop_DANN-DL.

    @inproceedings{FoutterSinhaEtAl2023,
      author = {Foutter, M. and Sinha, R. and Banerjee, S. and Pavone, M.},
      title = {Self-Supervised Model Generalization using Out-of-Distribution Detection},
      booktitle = {{Conf. on Robot Learning - Workshop on Out-of-Distribution Generalization in Robotics}},
      year = {2023},
      asl_abstract = {Autonomous agents increasingly rely on learned components to streamline safe and reliable decision making. However, data dissimilar to that seen in training, deemed to be Out-of-Distribution (OOD), creates undefined behavior in the output of our learned-components, which can have detrimental consequences in a safety critical setting such as autonomous satellite rendezvous. In the wild, we typically are exposed to a mix of in-and-out of distribution data where OOD inputs correspond to uncommon and unfamiliar data when a nominally competent system encounters a new situation. In this paper, we propose an architecture that detects the presence of OOD inputs in an online stream of data. The architecture then uses these OOD inputs to recognize domain invariant features between the original training and OOD domain to improve model inference. We demonstrate that our algorithm more than doubles model accuracy on the OOD domain with sparse, unlabeled OOD examples compared to a naive model without such data on shifted MNIST domains. Importantly, we also demonstrate our algorithm maintains strong accuracy on the original training domain, generalizing the model to a mix of in-and-out of distribution examples seen at deployment. Code for our experiment is available at: https://github.com/StanfordASL/CoRL_OODWorkshop_DANN-DL.},
      asl_address = {Atlanta, GA},
      asl_url = {https://openreview.net/forum?id=z5XS3BY13J},
      url = {https://openreview.net/forum?id=z5XS3BY13J},
      owner = {somrita},
      timestamp = {2024-03-01}
    }
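
    The architecture uses detected OOD inputs to learn features shared between the training and OOD domains; the linked repository name suggests a domain-adversarial (DANN-style) component. Below is a hedged PyTorch sketch of the gradient-reversal layer typically used for that purpose; the OOD detector and training loop from the paper are omitted.

    import torch

    class GradReverse(torch.autograd.Function):
        # Identity in the forward pass; negates (and scales) gradients in the
        # backward pass, pushing the feature extractor toward domain-invariant
        # features when placed before a domain classifier.
        @staticmethod
        def forward(ctx, x, lambd: float = 1.0):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
        return GradReverse.apply(x, lambd)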
    
  5. A. Elhafsi, R. Sinha, C. Agia, E. Schmerling, I. A. D. Nesnas, and M. Pavone, “Semantic Anomaly Detection with Large Language Models,” Autonomous Robots, vol. 47, pp. 1035–1055, 2023.

    Abstract: As robots acquire increasingly sophisticated skills and see increasingly complex and varied environments, the threat of an edge case or anomalous failure is ever present. For example, Tesla cars have seen interesting failure modes ranging from autopilot disengagements due to inactive traffic lights carried by trucks to phantom braking caused by images of stop signs on roadside billboards. These system-level failures are not due to failures of any individual component of the autonomy stack but rather system-level deficiencies in semantic reasoning. Such edge cases, which we call semantic anomalies, are simple for a human to disentangle yet require insightful reasoning. To this end, we study the application of large language models (LLMs), endowed with broad contextual understanding and reasoning capabilities, to recognize such edge cases and introduce a monitoring framework for semantic anomaly detection in vision-based policies. Our experiments apply this framework to a finite state machine policy for autonomous driving and a learned policy for object manipulation. These experiments demonstrate that the LLM-based monitor can effectively identify semantic anomalies in a manner that shows agreement with human reasoning. Finally, we provide an extended discussion on the strengths and weaknesses of this approach and motivate a research outlook on how we can further use foundation models for semantic anomaly detection. Our project webpage can be found at https://sites.google.com/view/llm-anomaly-detection.

    @article{ElhafsiSinhaEtAl2023,
      author = {Elhafsi, A. and Sinha, R. and Agia, C. and Schmerling, E. and Nesnas, I. A. D. and Pavone, M.},
      title = {Semantic Anomaly Detection with Large Language Models},
      journal = {{Autonomous Robots}},
      volume = {47},
      number = {},
      pages = {1035--1055},
      year = {2023},
      asl_month = oct,
      asl_abstract = {As robots acquire increasingly sophisticated skills and see increasingly complex and varied environments, the threat of an edge case or anomalous failure is ever present. For example, Tesla cars have seen interesting failure modes ranging from autopilot disengagements due to inactive traffic lights carried by trucks to phantom braking caused by images of stop signs on roadside billboards. These system-level failures are not due to failures of any individual component of the autonomy stack but rather system-level deficiencies in semantic reasoning. Such edge cases, which we call semantic anomalies, are simple for a human to disentangle yet require insightful reasoning. To this end, we study the application of large language models (LLMs), endowed with broad contextual understanding and reasoning capabilities, to recognize such edge cases and introduce a monitoring framework for semantic anomaly detection in vision-based policies. Our experiments apply this framework to a finite state machine policy for autonomous driving and a learned policy for object manipulation. These experiments demonstrate that the LLM-based monitor can effectively identify semantic anomalies in a manner that shows agreement with human reasoning. Finally, we provide an extended discussion on the strengths and weaknesses of this approach and motivate a research outlook on how we can further use foundation models for semantic anomaly detection. Our project webpage can be found at https://sites.google.com/view/llm-anomaly-detection.},
      asl_doi = {10.1007/s10514-023-10132-6},
      asl_url = {https://doi.org/10.1007/s10514-023-10132-6},
      owner = {amine},
      timestamp = {2024-02-29}
    }
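
    The monitor hands the LLM a textual description of the scene and asks it to reason about whether anything could make the policy fail. Below is a minimal sketch of that interaction, where query_llm() is a hypothetical placeholder for whichever LLM API is used and the prompt wording is illustrative rather than the paper's.

    PROMPT = (
        "You are monitoring an autonomous driving policy. Given the scene "
        "description below, list anything that could cause the policy to "
        "behave incorrectly, or reply 'NOMINAL' if the scene looks routine.\n\n"
        "Scene: {scene}"
    )

    def query_llm(prompt: str) -> str:
        """Hypothetical stand-in for a call to a large language model."""
        raise NotImplementedError

    def semantic_monitor(scene_description: str) -> bool:
        # Returns True when the LLM flags a potential semantic anomaly.
        reply = query_llm(PROMPT.format(scene=scene_description))
        return "NOMINAL" not in reply.upper()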
    
  6. R. Sinha, S. Sharma, S. Banerjee, T. Lew, R. Luo, S. M. Richards, Y. Sun, E. Schmerling, and M. Pavone, “A System-Level View on Out-of-Distribution Data in Robotics,” 2022.

    Abstract: When testing conditions differ from those represented in training data, so-called out-of-distribution (OOD) inputs can mar the reliability of black-box learned components in the modern robot autonomy stack. Therefore, coping with OOD data is an important challenge on the path towards trustworthy learning-enabled open-world autonomy. In this paper, we aim to demystify the topic of OOD data and its associated challenges in the context of data-driven robotic systems, drawing connections to emerging paradigms in the ML community that study the effect of OOD data on learned models in isolation. We argue that as roboticists, we should reason about the overall system-level competence of a robot as it performs tasks in OOD conditions. We highlight key research questions around this system-level view of OOD problems to guide future research toward safe and reliable learning-enabled autonomy.

    @inproceedings{SinhaSharmaEtAl2022,
      author = {Sinha, R. and Sharma, S. and Banerjee, S. and Lew, T. and Luo, R. and Richards, S. M. and Sun, Y. and Schmerling, E. and Pavone, M.},
      title = {A System-Level View on Out-of-Distribution Data in Robotics},
      year = {2022},
      keywords = {},
      url = {https://arxiv.org/abs/2212.14020},
      owner = {rhnsinha},
      timestamp = {2022-12-30}
    }
    
  7. R. Sinha, J. Harrison, S. M. Richards, and M. Pavone, “Adaptive Robust Model Predictive Control with Matched and Unmatched Uncertainty,” in American Control Conference, 2022.

    Abstract: We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component. Such systems commonly model the nonlinear effects of an unknown environment on a nominal system. We optimize over a class of nonlinear feedback policies inspired by certainty equivalent “estimate-and-cancel” control laws pioneered in classical adaptive control to achieve significant performance improvements in the presence of uncertainties of large magnitude, a setting in which existing learning-based predictive control algorithms often struggle to guarantee safety. In contrast to previous work in robust adaptive MPC, our approach allows us to take advantage of structure (i.e., the numerical predictions) in the a priori unknown dynamics learned online through function approximation. Our approach also extends typical nonlinear adaptive control methods to systems with state and input constraints even when we cannot directly cancel the additive uncertain function from the dynamics. Moreover, we apply contemporary statistical estimation techniques to certify the system’s safety through persistent constraint satisfaction with high probability. Finally, we show in simulation that our method can accommodate more significant unknown dynamics terms than existing methods.

    @inproceedings{SinhaHarrisonEtAl2022,
      author = {Sinha, R. and Harrison, J. and Richards, S. M. and Pavone, M.},
      title = {Adaptive Robust Model Predictive Control with Matched and Unmatched Uncertainty},
      year = {2022},
      keywords = {pub},
      booktitle = {{American Control Conference}},
      url = {https://arxiv.org/abs/2104.08261},
      owner = {rhnsinha},
      timestamp = {2022-01-31}
    }
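
    The core idea is a certainty-equivalent "estimate-and-cancel" feedback: for dynamics that are nominally linear with an additive learned nonlinearity entering through the input channel, the policy subtracts the current estimate of that term. Below is a toy sketch for the matched-uncertainty case; the robust MPC, the constraints, and the paper's estimation machinery are omitted, d_hat is a placeholder for the online-learned model, and the gain K is just one stabilizing choice for this toy system.

    import numpy as np

    # Toy double integrator with matched uncertainty:
    #   x_{k+1} = A x_k + B (u_k + d(x_k))
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.0],
                  [0.1]])
    K = np.array([[-3.0, -2.5]])  # stabilizing state-feedback gain for (A, B)

    def d_hat(x: np.ndarray) -> np.ndarray:
        """Placeholder for the online-learned estimate of the unknown term."""
        return np.zeros(1)

    def control(x: np.ndarray) -> np.ndarray:
        # Certainty-equivalent "estimate-and-cancel": nominal feedback plus a
        # term that cancels the estimated uncertainty through the input.
        return K @ x - d_hat(x)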
    
  8. R. Sinha, J. Harrison, S. M. Richards, and M. Pavone, “Adaptive Robust Model Predictive Control via Uncertainty Cancellation,” IEEE Transactions on Automatic Control, 2022. (In Press)

    Abstract: We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component. Such systems are commonly used to model the nonlinear effects of an unknown environment on a nominal linear system. Inspired by certainty equivalent “estimate-and-cancel” control laws pioneered in classical adaptive control, we optimize over a class of nonlinear feedback policies to significantly improve performance in the presence of uncertainties of large magnitude, a setting in which existing learning-based predictive control algorithms often struggle to guarantee safety. In contrast to previous work in robust adaptive model predictive control, our approach allows us to take advantage of structure (i.e., the numerical predictions) in the a priori unknown dynamics learned online through function approximation. Our approach also extends typical nonlinear adaptive control methods to systems with state and input constraints even when we cannot directly cancel the additive uncertain function from the dynamics. Moreover, we apply contemporary statistical estimation techniques to certify the system’s safety in the form of persistent constraint satisfaction with high probability. Finally, we show in simulation that our method can accommodate more significant unknown dynamics terms than existing methods.

    @article{SinhaHarrisonEtAl2022b,
      author = {Sinha, R. and Harrison, J. and Richards, S. M. and Pavone, M.},
      title = {Adaptive Robust Model Predictive Control via Uncertainty Cancellation},
      journal = {{IEEE Transactions on Automatic Control}},
      year = {2022},
      keywords = {press},
      note = {In press},
      url = {https://arxiv.org/abs/2212.01371},
      owner = {rhnsinha},
      timestamp = {2023-01-30}
    }