Somrita Banerjee

Contacts:

Email: somrita at stanford dot edu

Somrita Banerjee is a Ph.D. candidate in Aeronautics and Astronautics. She received her B.S. in Mechanical Engineering with minors in Aerospace Engineering and Computer Science from Cornell University in 2017. At Cornell, she worked in the Space Systems Design Studio with Professor Mason Peck.

Somrita’s current research interests lie at the intersection of trajectory optimization, machine learning, and optimal control for the next generation of space robots, with the goal of advancing greater autonomy and risk-sensitive learning.

In her free time, Somrita enjoys dancing, playing board games with friends, and going hiking in sunny California.

Awards:

  • Stanford Graduate Fellowship

ASL Publications

  1. A. Hindy, R. Luo, S. Banerjee, J. Kuck, E. Schmerling, and M. Pavone, “Diagnostic Runtime Monitoring with Martingales,” in Robotics: Science and Systems, 2024. (Submitted)

    Abstract: Machine learning systems deployed in safety-critical robotics settings must be robust to distribution shifts. However, system designers must understand the cause of a distribution shift in order to implement the appropriate intervention or mitigation strategy and prevent system failure. In this paper, we present a novel framework for diagnosing distribution shifts in a streaming fashion by deploying multiple stochastic martingales simultaneously. We show that knowledge of the underlying cause of a distribution shift can lead to proper interventions over the lifecycle of a deployed system. Our experimental framework can easily be adapted to different types of distribution shifts, models, and datasets. We find that our method outperforms existing work on diagnosing distribution shifts in terms of speed, accuracy, and flexibility, and validate the efficiency of our model in both simulated and live hardware settings.

    @inproceedings{HindyLuoEtAl2024,
      author = {Hindy, A. and Luo, R. and Banerjee, S. and Kuck, J. and Schmerling, E. and Pavone, M.},
      title = {Diagnostic Runtime Monitoring with Martingales},
      note = {Submitted},
      booktitle = {{Robotics: Science and Systems}},
      year = {2024},
      keywords = {sub},
      owner = {somrita},
      timestamp = {2024-02-09}
    }
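
    To make the idea concrete, the following is a minimal, assumed sketch of a conformal-martingale monitor for distribution shift, of the kind the abstract above refers to. The nonconformity scores, power betting function, and alarm threshold below are illustrative placeholders, not the paper's actual design or code.

    import numpy as np

    def conformal_pvalue(score, calibration_scores, rng):
        """Conformal p-value of a new nonconformity score against a calibration set."""
        greater = np.sum(calibration_scores > score)
        equal = np.sum(calibration_scores == score)
        # randomized tie-breaking keeps p-values uniform under exchangeability
        return (greater + rng.uniform() * (equal + 1)) / (len(calibration_scores) + 1)

    def run_monitor(stream_scores, calibration_scores, threshold=100.0, eps=0.1, seed=0):
        """Update a power martingale online; alarm when it crosses `threshold`.
        By Ville's inequality, the false-alarm probability is at most 1/threshold
        as long as the stream is exchangeable with the calibration data."""
        rng = np.random.default_rng(seed)
        log_m = 0.0
        for t, s in enumerate(stream_scores):
            p = conformal_pvalue(s, calibration_scores, rng)
            log_m += np.log(eps * p ** (eps - 1.0))  # power betting function
            if log_m > np.log(threshold):
                return t   # alarm index: evidence of a distribution shift
        return None        # no alarm raised

    # Illustrative usage with synthetic scores: the streamed data is shifted.
    calib = np.abs(np.random.default_rng(1).normal(size=500))
    stream = np.abs(np.random.default_rng(2).normal(loc=2.0, size=200))
    print(run_monitor(stream, calib))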
    
  2. S. Banerjee, B. Balaban, M. Shirley, K. Bradner, and M. Pavone, “Contingency Planning Using Bi-level Markov Decision Processes for Space Missions,” in IEEE Aerospace Conference, 2024.

    Abstract: This work focuses on autonomous contingency planning for scientific missions by enabling rapid policy computation from any off-nominal point in the state space in the event of a delay or deviation from the nominal mission plan. Successful contingency planning involves managing risks and rewards, often probabilistically associated with actions, in stochastic scenarios. Markov Decision Processes (MDPs) are used to mathematically model decision-making in such scenarios. However, in the specific case of planetary rover traverse planning, the vast action space and long planning time horizon pose computational challenges. A bi-level MDP framework is proposed to improve computational tractability, while also aligning with existing mission planning practices and enhancing explainability and trustworthiness of AI-driven solutions. We discuss the conversion of a mission planning MDP into a bi-level MDP, and test the framework on RoverGridWorld, a modified GridWorld environment for rover mission planning. We demonstrate the computational tractability and near-optimal policies achievable with the bi-level MDP approach, highlighting the trade-offs between compute time and policy optimality as the problem's complexity grows. This work facilitates more efficient and flexible contingency planning in the context of scientific missions.

    @inproceedings{BanerjeeBalabanEtAl2024,
      author = {Banerjee, S. and Balaban, B. and Shirley, M. and Bradner, K. and Pavone, M.},
      title = {Contingency Planning Using Bi-level Markov Decision Processes for Space Missions},
      booktitle = {{IEEE Aerospace Conference}},
      year = {2024},
      asl_abstract = {This work focuses on autonomous contingency planning for scientific missions by enabling rapid policy computation from any off-nominal point in the state space in the event of a delay or deviation from the nominal mission plan. Successful contingency planning involves managing risks and rewards, often probabilistically associated with actions, in stochastic scenarios. Markov Decision Processes (MDPs) are used to mathematically model decision-making in such scenarios. However, in the specific case of planetary rover traverse planning, the vast action space and long planning time horizon pose computational challenges. A bi-level MDP framework is proposed to improve computational tractability, while also aligning with existing mission planning practices and enhancing explainability and trustworthiness of AI-driven solutions. We discuss the conversion of a mission planning MDP into a bi-level MDP, and test the framework on RoverGridWorld, a modified GridWorld environment for rover mission planning. We demonstrate the computational tractability and near-optimal policies achievable with the bi-level MDP approach, highlighting the trade-offs between compute time and policy optimality as the problem's complexity grows. This work facilitates more efficient and flexible contingency planning in the context of scientific missions.},
      asl_address = {Big Sky, Montana},
      asl_month = mar,
      owner = {somrita},
      timestamp = {2024-02-09}
    }
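
    As an illustration of the bi-level decomposition described above, the sketch below solves a coarse high-level MDP over regions and a small low-level MDP within the selected region, rather than one flat MDP over all cells. The toy transition models and rewards are placeholder assumptions, not the paper's RoverGridWorld formulation.

    import numpy as np

    def value_iteration(n_states, n_actions, P, R, gamma=0.95, tol=1e-6):
        """Generic tabular value iteration. P[a] is an (S, S) transition matrix,
        R is an (S, A) reward array. Returns the greedy policy and values."""
        V = np.zeros(n_states)
        while True:
            Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(n_actions)], axis=1)
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return Q.argmax(axis=1), V_new
            V = V_new

    def random_mdp(n_states, n_actions, rng):
        """Placeholder MDP with random dynamics and rewards, for illustration only."""
        P = [rng.dirichlet(np.ones(n_states), size=n_states) for _ in range(n_actions)]
        R = rng.uniform(size=(n_states, n_actions))
        return P, R

    rng = np.random.default_rng(0)
    # High-level MDP: a handful of regions with abstract "move to region" actions.
    P_hi, R_hi = random_mdp(5, 3, rng)
    pi_hi, _ = value_iteration(5, 3, P_hi, R_hi)
    # Low-level MDP: cells inside the region chosen by the high-level policy, with
    # primitive actions; its reward would encode reaching that region's exit.
    P_lo, R_lo = random_mdp(20, 4, rng)
    pi_lo, _ = value_iteration(20, 4, P_lo, R_lo)
    # Solving two small problems (5 and 20 states) is far cheaper than one flat MDP
    # over all 5 x 20 cells, which is the tractability argument in the abstract.
    print(pi_hi, pi_lo[:5])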
    
  3. M. Foutter, R. Sinha, S. Banerjee, and M. Pavone, “Self-Supervised Model Generalization using Out-of-Distribution Detection,” in Conf. on Robot Learning - Workshop on Out-of-Distribution Generalization in Robotics, 2023.

    Abstract: Autonomous agents increasingly rely on learned components to streamline safe and reliable decision making. However, data dissimilar to that seen in training, deemed to be Out-of-Distribution (OOD), creates undefined behavior in the output of our learned-components, which can have detrimental consequences in a safety critical setting such as autonomous satellite rendezvous. In the wild, we typically are exposed to a mix of in-and-out of distribution data where OOD inputs correspond to uncommon and unfamiliar data when a nominally competent system encounters a new situation. In this paper, we propose an architecture that detects the presence of OOD inputs in an online stream of data. The architecture then uses these OOD inputs to recognize domain invariant features between the original training and OOD domain to improve model inference. We demonstrate that our algorithm more than doubles model accuracy on the OOD domain with sparse, unlabeled OOD examples compared to a naive model without such data on shifted MNIST domains. Importantly, we also demonstrate our algorithm maintains strong accuracy on the original training domain, generalizing the model to a mix of in-and-out of distribution examples seen at deployment. Code for our experiment is available at: https://github.com/StanfordASL/CoRL_OODWorkshop_DANN-DL.

    @inproceedings{FoutterSinhaEtAl2023,
      author = {Foutter, M. and Sinha, R. and Banerjee, S. and Pavone, M.},
      title = {Self-Supervised Model Generalization using Out-of-Distribution Detection},
      booktitle = {{Conf. on Robot Learning - Workshop on Out-of-Distribution Generalization in Robotics}},
      year = {2023},
      asl_abstract = {Autonomous agents increasingly rely on learned components to streamline safe and reliable decision making. However, data dissimilar to that seen in training, deemed to be Out-of-Distribution (OOD), creates undefined behavior in the output of our learned-components, which can have detrimental consequences in a safety critical setting such as autonomous satellite rendezvous. In the wild, we typically are exposed to a mix of in-and-out of distribution data where OOD inputs correspond to uncommon and unfamiliar data when a nominally competent system encounters a new situation. In this paper, we propose an architecture that detects the presence of OOD inputs in an online stream of data. The architecture then uses these OOD inputs to recognize domain invariant features between the original training and OOD domain to improve model inference. We demonstrate that our algorithm more than doubles model accuracy on the OOD domain with sparse, unlabeled OOD examples compared to a naive model without such data on shifted MNIST domains. Importantly, we also demonstrate our algorithm maintains strong accuracy on the original training domain, generalizing the model to a mix of in-and-out of distribution examples seen at deployment. Code for our experiment is available at: https://github.com/StanfordASL/CoRL_OODWorkshop_DANN-DL.},
      asl_address = {Atlanta, GA},
      asl_url = {https://openreview.net/forum?id=z5XS3BY13J},
      url = {https://openreview.net/forum?id=z5XS3BY13J},
      owner = {somrita},
      timestamp = {2024-03-01}
    }
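
    The sketch below illustrates the domain-adversarial flavor of the approach described above (using detected OOD inputs to learn domain-invariant features through a gradient-reversal layer). It is an assumed PyTorch illustration, not the code in the linked repository, and the online OOD detector itself is not shown.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity on the forward pass; reverses and scales gradients on backward."""
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)
        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    class DANN(nn.Module):
        def __init__(self, in_dim=784, feat_dim=64, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
            self.label_head = nn.Linear(feat_dim, n_classes)   # task prediction
            self.domain_head = nn.Linear(feat_dim, 2)          # in-distribution vs. OOD
        def forward(self, x, lam=1.0):
            z = self.features(x)
            # the reversed gradient pushes `features` toward domain-invariant representations
            return self.label_head(z), self.domain_head(GradReverse.apply(z, lam))

    # Hypothetical training step: a labeled in-distribution batch plus an unlabeled
    # batch that a separate online detector has flagged as OOD.
    model, ce = DANN(), nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_in, y_in, x_ood = torch.randn(32, 784), torch.randint(0, 10, (32,)), torch.randn(32, 784)
    y_logits, d_in = model(x_in)
    _, d_ood = model(x_ood)
    loss = (ce(y_logits, y_in)
            + ce(d_in, torch.zeros(32, dtype=torch.long))
            + ce(d_ood, torch.ones(32, dtype=torch.long)))
    opt.zero_grad(); loss.backward(); opt.step()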
    
  4. S. Banerjee, A. Sharma, E. Schmerling, M. Spolaor, M. Nemerouf, and M. Pavone, “Data Lifecycle Management in Evolving Input Distributions for Learning-based Aerospace Applications,” in IEEE Aerospace Conference, 2023.

    Abstract: As input distributions evolve over a mission lifetime, maintaining performance of learning-based models becomes challenging. This paper presents a framework to incrementally retrain a model by selecting a subset of test inputs to label, which allows the model to adapt to changing input distributions. Algorithms within this framework are evaluated based on (1) model performance throughout mission lifetime and (2) cumulative costs associated with labeling and model retraining. We provide an open-source benchmark of a satellite pose estimation model trained on images of a satellite in space and deployed in novel scenarios (e.g., different backgrounds or misbehaving pixels), where algorithms are evaluated on their ability to maintain high performance by retraining on a subset of inputs. We also propose a novel algorithm to select a diverse subset of inputs for labeling, by characterizing the information gain from an input using Bayesian uncertainty quantification and choosing a subset that maximizes collective information gain using concepts from batch active learning. We show that our algorithm outperforms others on the benchmark, e.g., achieves comparable performance to an algorithm that labels 100% of inputs, while only labeling 50% of inputs, resulting in low costs and high performance over the mission lifetime.

    @inproceedings{BanerjeeSharmaEtAl2022,
      author = {Banerjee, S. and Sharma, A. and Schmerling, E. and Spolaor, M. and Nemerouf, M. and Pavone, M.},
      title = {Data Lifecycle Management in Evolving Input Distributions for Learning-based Aerospace Applications},
      booktitle = {{IEEE Aerospace Conference}},
      year = {2023},
      url = {https://arxiv.org/abs/2209.06855},
      owner = {somrita},
      timestamp = {2022-09-14}
    }
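
    A toy sketch of the subset-selection step described above follows. The greedy rule (uncertainty weighted by distance to the already-selected set), the embeddings, and the uncertainty scores are placeholder assumptions standing in for the paper's Bayesian information-gain criterion.

    import numpy as np

    def select_subset(features, uncertainty, budget):
        """Greedily pick points that are both uncertain and far from those already
        chosen, trading off informativeness against redundancy."""
        selected = [int(np.argmax(uncertainty))]          # start from most uncertain
        min_dist = np.linalg.norm(features - features[selected[0]], axis=1)
        while len(selected) < budget:
            scores = uncertainty * min_dist
            scores[selected] = -np.inf                    # never re-pick a point
            nxt = int(np.argmax(scores))
            selected.append(nxt)
            min_dist = np.minimum(min_dist, np.linalg.norm(features - features[nxt], axis=1))
        return selected

    # Hypothetical usage: embeddings and per-input predictive uncertainty would come
    # from the deployed pose-estimation model; here they are random placeholders.
    rng = np.random.default_rng(0)
    feats, unc = rng.normal(size=(1000, 16)), rng.uniform(size=1000)
    to_label = select_subset(feats, unc, budget=50)       # indices sent for labeling
    print(to_label[:10])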
    
  5. R. Sinha, S. Sharma, S. Banerjee, T. Lew, R. Luo, S. M. Richards, Y. Sun, E. Schmerling, and M. Pavone, “A System-Level View on Out-of-Distribution Data in Robotics,” 2022.

    Abstract: When testing conditions differ from those represented in training data, so-called out-of-distribution (OOD) inputs can mar the reliability of black-box learned components in the modern robot autonomy stack. Therefore, coping with OOD data is an important challenge on the path towards trustworthy learning-enabled open-world autonomy. In this paper, we aim to demystify the topic of OOD data and its associated challenges in the context of data-driven robotic systems, drawing connections to emerging paradigms in the ML community that study the effect of OOD data on learned models in isolation. We argue that as roboticists, we should reason about the overall system-level competence of a robot as it performs tasks in OOD conditions. We highlight key research questions around this system-level view of OOD problems to guide future research toward safe and reliable learning-enabled autonomy.

    @inproceedings{SinhaSharmaEtAl2022,
      author = {Sinha, R. and Sharma, S. and Banerjee, S. and Lew, T. and Luo, R. and Richards, S. M. and Sun, Y. and Schmerling, E. and Pavone, M.},
      title = {A System-Level View on Out-of-Distribution Data in Robotics},
      year = {2022},
      url = {https://arxiv.org/abs/2212.14020},
      owner = {rhnsinha},
      timestamp = {2022-12-30}
    }
    
  6. S. Banerjee, J. Harrison, P. M. Furlong, and M. Pavone, “Adaptive Meta-Learning for Identification of Rover-Terrain Dynamics,” in Int. Symp. on Artificial Intelligence, Robotics and Automation in Space, Pasadena, California, 2020.

    Abstract: Rovers require knowledge of terrain to plan trajectories that maximize safety and efficiency. Terrain type classification relies on input from human operators or machine learning-based image classification algorithms. However, high level terrain classification is typically not sufficient to prevent incidents such as rovers becoming unexpectedly stuck in a sand trap; in these situations, online rover-terrain interaction data can be leveraged to accurately predict future dynamics and prevent further damage to the rover. This paper presents a meta-learning-based approach to adapt probabilistic predictions of rover dynamics by augmenting a nominal model affine in parameters with a Bayesian regression algorithm (P-ALPaCA). A regularization scheme is introduced to encourage orthogonality of nominal and learned features, leading to interpretable probabilistic estimates of terrain parameters in varying terrain conditions.

    @inproceedings{BanerjeeHarrisonEtAl2020,
      author = {Banerjee, S. and Harrison, J. and Furlong, P. M. and Pavone, M.},
      title = {Adaptive Meta-Learning for Identification of Rover-Terrain Dynamics},
      booktitle = {{Int. Symp. on Artificial Intelligence, Robotics and Automation in Space}},
      year = {2020},
      address = {Pasadena, California},
      month = oct,
      url = {https://arxiv.org/abs/2009.10191},
      owner = {somrita},
      timestamp = {2020-09-18}
    }
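
    The sketch below shows the kind of online Bayesian parameter adaptation the entry above builds on: a recursive Gaussian update over the weights of a feature model. The feature map is a hand-coded placeholder, not P-ALPaCA's learned basis or the paper's rover-terrain model.

    import numpy as np

    class OnlineBayesLinReg:
        """Gaussian posterior over weights w for y = phi(x)^T w + noise."""
        def __init__(self, dim, prior_var=10.0, noise_var=0.1):
            self.P = np.eye(dim) * prior_var   # posterior covariance
            self.w = np.zeros(dim)             # posterior mean
            self.noise_var = noise_var

        def update(self, phi, y):
            """Rank-one (Kalman-style) update with one observation."""
            Pphi = self.P @ phi
            gain = Pphi / (self.noise_var + phi @ Pphi)
            self.w = self.w + gain * (y - phi @ self.w)
            self.P = self.P - np.outer(gain, Pphi)

        def predict(self, phi):
            return phi @ self.w, self.noise_var + phi @ self.P @ phi

    def phi(x):
        # placeholder basis: a nominal affine term plus a few extra features
        return np.array([1.0, x, x ** 2, np.sin(x)])

    model = OnlineBayesLinReg(dim=4)
    rng = np.random.default_rng(0)
    for _ in range(200):                           # stream of rover-terrain observations
        x = rng.uniform(-1, 1)
        y = 0.5 + 2.0 * x + rng.normal(scale=0.1)  # unknown "terrain" response
        model.update(phi(x), y)
    print(model.predict(phi(0.3)))                 # predictive mean and variance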
    
  7. S. Banerjee, T. Lew, R. Bonalli, A. Alfaadhel, I. A. Alomar, H. M. Shageer, and M. Pavone, “Learning-based Warm-Starting for Fast Sequential Convex Programming and Trajectory Optimization,” in IEEE Aerospace Conference, Big Sky, Montana, 2020.

    Abstract: Sequential convex programming (SCP) has recently emerged as an effective tool to quickly compute locally optimal trajectories for robotic and aerospace systems alike, even when initialized with an unfeasible trajectory. In this paper, by focusing on the Guaranteed Sequential Trajectory Optimization (GuSTO) algorithm, we propose a methodology to accelerate SCP-based algorithms through warm-starting. Specifically, leveraging a dataset of expert trajectories from GuSTO, we devise a neural-network-based approach to predict a locally optimal state and control trajectory, which is used to warm-start the SCP algorithm. This approach allows one to retain all the theoretical guarantees of GuSTO while simultaneously taking advantage of the fast execution of the neural network and reducing the time and number of iterations required for GuSTO to converge. The result is a faster and theoretically guaranteed trajectory optimization algorithm.

    @inproceedings{BanerjeeEtAl2020,
      author = {Banerjee, S. and Lew, T. and Bonalli, R. and Alfaadhel, A. and Alomar, I. A. and Shageer, H. M. and Pavone, M.},
      title = {Learning-based Warm-Starting for Fast Sequential Convex Programming and Trajectory Optimization},
      booktitle = {{IEEE Aerospace Conference}},
      year = {2020},
      address = {Big Sky, Montana},
      month = mar,
      url = {https://ieeexplore.ieee.org/abstract/document/9172293/},
      owner = {lew},
      timestamp = {2020-01-09}
    }
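
    To illustrate the warm-starting idea above, the sketch below regresses from problem parameters to a trajectory and uses the prediction to initialize the optimizer. The network, data, dimensions, and the `scp_solve` call are hypothetical placeholders; GuSTO itself is not implemented here.

    import torch
    import torch.nn as nn

    T, n_x = 20, 4                             # horizon and state dimension (assumed)
    net = nn.Sequential(                       # maps (x0, x_goal) -> flattened trajectory
        nn.Linear(2 * n_x, 128), nn.ReLU(), nn.Linear(128, T * n_x)
    )

    # Supervised training on a dataset of expert (SCP-solved) trajectories; both the
    # problem parameters and the expert solutions are random placeholders here.
    params = torch.randn(1000, 2 * n_x)
    expert_trajs = torch.randn(1000, T * n_x)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(200):
        loss = nn.functional.mse_loss(net(params), expert_trajs)
        opt.zero_grad(); loss.backward(); opt.step()

    # At solve time, the network's output seeds the SCP iterations instead of a naive
    # straight-line guess, which is what reduces iteration count while leaving the
    # solver's theoretical guarantees intact.
    new_problem = torch.randn(1, 2 * n_x)
    warm_start = net(new_problem).detach().reshape(T, n_x)
    # trajectory = scp_solve(new_problem, initial_guess=warm_start)  # hypothetical solver call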