Arrow Research

Author name cluster

Aditya Narendra

Papers that may be associated with this exact author name in Arrow. This page groups case-insensitive exact name matches; it is not a full identity-disambiguation profile.

2 papers
2 author rows

Possible papers (2)

AAAI Conference 2025 (Short Paper)

Ensuring Class-Conditional Coverage for Pathological Workflows (Student Abstract)

  • Siddharth Narendra
  • Shubham Ojha
  • Aditya Narendra
  • Abhay Kshirsagar
  • Abhisek Mallick

Conformal Prediction (CP) is an uncertainty quantification framework that produces prediction sets guaranteed to contain the true class with a user-specified probability. This guarantee is known as marginal coverage: the probability that the true label is included in the prediction set, averaged over all test samples. Averaging over the whole test distribution, however, can hide inconsistent coverage across individual classes, constraining CP's suitability for high-stakes applications such as pathological workflows. This study applies a Classwise CP method to two cancer datasets to achieve class-conditional coverage, which ensures that each class has a user-specified probability of being included in the prediction set when it is the true label. Our results demonstrate the effectiveness of this approach through a significant reduction in the average class coverage gap compared to the Baseline CP method.
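
For context, classwise CP calibrates a separate nonconformity threshold for each class on a held-out calibration split, so the coverage guarantee holds per class rather than only on average. The following is a minimal illustrative sketch, not the authors' code: the NumPy implementation, the softmax-based nonconformity score, and alpha=0.1 are all assumptions here.

    import numpy as np

    def classwise_thresholds(cal_probs, cal_labels, alpha=0.1):
        # cal_probs:  (n, K) softmax probabilities on a held-out calibration split
        # cal_labels: (n,) integer true labels
        # alpha:      target miscoverage, e.g. 0.1 for 90% class-conditional coverage
        n, K = cal_probs.shape
        thresholds = np.ones(K)
        for k in range(K):
            # Nonconformity score: 1 - probability of the true class, computed
            # only on calibration samples whose true label is k.
            scores = np.sort(1.0 - cal_probs[cal_labels == k, k])
            n_k = len(scores)
            if n_k == 0:
                continue  # no calibration data for class k: always include it
            # Finite-sample-corrected empirical quantile, computed per class.
            rank = int(np.ceil((n_k + 1) * (1.0 - alpha)))
            thresholds[k] = scores[min(rank, n_k) - 1]
        return thresholds

    def prediction_set(probs, thresholds):
        # Include class k whenever its score 1 - p_k is at or below class k's threshold.
        return np.where(1.0 - probs <= thresholds)[0]

The per-class correction ceil((n_k + 1) * (1 - alpha)) mirrors standard split conformal calibration, applied separately within each class so that each class receives its own coverage guarantee.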

IROS Conference 2025 (Conference Paper)

M3PO: Massively Multi-Task Model-Based Policy Optimization

  • Aditya Narendra
  • Dmitry Makarov
  • Aleksandr Panov 0001

We introduce Massively Multi-Task Model-Based Policy Optimization (M3PO), a scalable model-based reinforcement learning (MBRL) framework designed to address the challenges of sample efficiency in single-task settings and generalization in multi-task domains. Existing model-based approaches like DreamerV3 rely on generative world models that prioritize pixel-level reconstruction, often at the cost of control-centric representations, while model-free methods such as PPO suffer from high sample complexity and limited exploration. M3PO integrates an implicit world model, trained to predict task outcomes without reconstructing observations, with a hybrid exploration strategy that combines model-based planning and model-free uncertainty-driven bonuses. This approach eliminates the bias-variance trade-off inherent in prior methods (e.g., POME's exploration bonuses) by using the discrepancy between model-based and model-free value estimates to guide exploration while maintaining stable policy updates via a trust-region optimizer. M3PO is presented as a scalable alternative to existing model-based policy optimization methods.
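
The exploration signal described in the abstract treats disagreement between the model-based and model-free value estimates as a proxy for epistemic uncertainty. Below is a minimal illustrative sketch, not the paper's implementation: the bonus form, the beta coefficient, and every name are assumptions.

    import numpy as np

    def discrepancy_bonus(v_model, v_free, beta=0.1):
        # v_model: value estimates from imagined rollouts of the implicit world model
        # v_free:  value estimates from a model-free critic on the same states
        # Large disagreement suggests the world model is unreliable in those
        # states, so they earn an extra exploration reward (hypothetical shaping).
        return beta * np.abs(np.asarray(v_model) - np.asarray(v_free))

    # Illustrative reward shaping for the policy update (form assumed):
    #   r_shaped = r_env + discrepancy_bonus(v_model_s, v_free_s)

One plausible reading of the abstract's claim is that, because the bonus vanishes wherever the two estimators agree, the exploration pressure decays naturally as the world model improves instead of being fixed by a hand-tuned bias-variance compromise.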