Arrow Research search

Author name cluster

Yash Patel

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact name matches; it is not a full identity-disambiguation profile.

3 papers
1 author row

Possible papers (3)

NeurIPS 2025 · Conference Paper

Conformal Prediction for Ensembles: Improving Efficiency via Score-Based Aggregation

  • Yash Patel
  • Eduardo Ochoa Rivera
  • Ambuj Tewari

Distribution-free uncertainty estimation for ensemble methods is increasingly desirable due to the widening deployment of multi-modal black-box predictive models. Conformal prediction is one approach that avoids such distributional assumptions. Methods for conformal aggregation have in turn been proposed for ensembled prediction, where the prediction regions of individual models are merged so as to retain coverage guarantees while minimizing conservatism. Merging the prediction regions directly, however, sacrifices structure present in the conformal scores that could further reduce conservatism. We therefore propose a novel framework that extends the standard scalar formulation of a score function to a multivariate score that produces more efficient prediction regions. We then demonstrate that such a framework can be efficiently leveraged in both classification and predict-then-optimize regression settings downstream, and empirically show its advantage over alternative conformal aggregation methods.
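For readers unfamiliar with the baseline this abstract builds on, the standard scalar split-conformal recipe (not the paper's multivariate extension) can be sketched as follows; the function name and absolute-residual score choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_labels, test_pred, alpha=0.1):
    """Minimal split conformal prediction with absolute-residual scores.

    Hypothetical helper for illustration: returns an interval that
    contains the true test label with marginal probability >= 1 - alpha,
    assuming calibration and test points are exchangeable.
    """
    # Nonconformity scores on a held-out calibration set.
    scores = np.sort(np.abs(np.asarray(cal_preds) - np.asarray(cal_labels)))
    n = len(scores)
    # Finite-sample-corrected quantile index: ceil((n+1)(1-alpha)), clamped.
    k = min(n - 1, int(np.ceil((n + 1) * (1 - alpha))) - 1)
    q = scores[k]
    return test_pred - q, test_pred + q

# Toy usage: a well-calibrated model with small residual noise.
rng = np.random.default_rng(0)
cal_preds = rng.normal(size=500)
cal_labels = cal_preds + rng.normal(scale=0.1, size=500)
lo, hi = split_conformal_interval(cal_preds, cal_labels, test_pred=1.3)
```

The paper's contribution concerns replacing the scalar score above with a multivariate score across ensemble members; this sketch only fixes the single-model baseline it generalizes.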

AAAI 2023 · Conference Paper

Contrastive Classification and Representation Learning with Probabilistic Interpretation

  • Rahaf Aljundi
  • Yash Patel
  • Milan Sulc
  • Nikolay Chumerin
  • Daniel Olmeda Reino

Cross entropy loss has served as the main objective function for classification-based tasks. Widely deployed for learning neural network classifiers, it is both effective and admits a probabilistic interpretation. Recently, following the success of self-supervised contrastive representation learning methods, supervised contrastive methods have been proposed to learn representations and have shown superior and more robust performance compared to training with cross entropy loss alone. However, cross entropy loss is still needed to train the final classification layer. In this work, we investigate the possibility of learning both the representation and the classifier with one objective function that combines the robustness of contrastive learning with the probabilistic interpretation of cross entropy loss. First, we revisit a previously proposed contrastive-based objective function that approximates cross entropy loss and present a simple extension to learn the classifier jointly. Second, we propose a new version of supervised contrastive training that jointly learns the parameters of the classifier and the backbone of the network. We empirically show that these proposed objective functions achieve state-of-the-art performance and a significant improvement over the standard cross entropy loss, with greater training stability and robustness in various challenging settings.
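As background for this abstract, a generic supervised contrastive loss in the style the paper extends (Khosla et al.'s SupCon, not the paper's joint classifier objective) can be sketched in NumPy; the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Generic SupCon-style loss on a batch of embeddings.

    For each anchor, pulls together samples sharing its label and pushes
    apart all others, via a softmax over cosine similarities. Assumes every
    anchor has at least one same-label positive in the batch.
    """
    # L2-normalize so dot products are cosine similarities.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)  # exclude self-similarity
    # Row-wise log-softmax over all other samples.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    # Mean log-probability of positives per anchor, averaged over anchors.
    per_anchor = np.where(pos_mask, log_prob, 0.0).sum(1) / pos_mask.sum(1)
    return -per_anchor.mean()

# Toy usage: 4 embeddings, two classes with one positive pair each.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
labels = np.array([0, 0, 1, 1])
loss = supervised_contrastive_loss(feats, labels)
```

The paper's proposal differs in that the classifier parameters enter the objective directly, so representation and classifier are trained with a single loss; the sketch above trains representations only.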