Arrow Research search

Author name cluster

Arpan Dasgupta

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers (3)

AAMAS 2025 · Conference Paper

Bayesian Collaborative Bandits with Thompson Sampling for Improved Outreach in Maternal Health

  • Arpan Dasgupta
  • Gagan Jain
  • Arun Suggala
  • Karthikeyan Shanmugam
  • Milind Tambe
  • Aparna Taneja

Mobile health (mHealth) programs face a critical challenge in optimizing the timing of automated health information calls to beneficiaries. This challenge has been formulated as a collaborative multi-armed bandit problem, requiring online learning of a low-rank reward matrix. Existing solutions often rely on heuristic combinations of offline matrix completion and exploration strategies. In this work, we propose a principled Bayesian approach using Thompson Sampling for this collaborative bandit problem. Our method leverages prior information through efficient Gibbs sampling for posterior inference over the low-rank matrix factors, enabling faster convergence. We demonstrate significant improvements over state-of-the-art baselines on a real-world dataset from the world's largest maternal mHealth program. Our approach achieves a 16% reduction in the number of calls compared to existing methods and a 47% reduction compared to the deployed random policy. This efficiency gain translates to a potential increase in program capacity by 0.5–1.4 million beneficiaries, granting them access to vital antenatal and postnatal care information. Furthermore, we observe a 7% and 29% improvement in beneficiary retention (an extremely hard metric to impact) compared to state-of-the-art and deployed baselines, respectively. Synthetic simulations further demonstrate the superiority of our approach, particularly in low-data regimes and in effectively utilizing prior information. We also provide a theoretical analysis of our algorithm in a special setting using the Eluder dimension.
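The sketch below illustrates the general idea of Thompson Sampling over a low-rank reward matrix, with a Gibbs-style conditional update for the user factors. It is a minimal illustration only: the Gaussian reward model, dimensions, priors, and the toy environment are assumptions, not the paper's actual model or data. In the maternal-health setting, rows would correspond to beneficiaries and arms to candidate call time slots.

```python
import numpy as np

# Minimal sketch: Thompson Sampling on a low-rank (users x arms) reward
# matrix, with one Gibbs-style conditional draw of a user factor per round.
# All dimensions, priors, and the stand-in reward function are illustrative
# assumptions, not the paper's configuration.

rng = np.random.default_rng(0)
n_users, n_arms, rank = 50, 7, 3          # e.g. 7 candidate call time slots
sigma, sigma_prior = 0.5, 1.0

U = rng.normal(0, sigma_prior, (n_users, rank))   # user factors
V = rng.normal(0, sigma_prior, (n_arms, rank))    # arm factors
history = [[] for _ in range(n_users)]            # (arm, reward) per user

def sample_user_factor(obs):
    """Draw a user factor from its Gaussian conditional given V and observations."""
    prec = np.eye(rank) / sigma_prior**2
    mean_term = np.zeros(rank)
    for arm, r in obs:
        prec += np.outer(V[arm], V[arm]) / sigma**2
        mean_term += r * V[arm] / sigma**2
    cov = np.linalg.inv(prec)
    return rng.multivariate_normal(cov @ mean_term, cov)

def true_reward(u, a):
    """Stand-in environment: noisy pick-up propensity for user u at slot a."""
    return rng.normal(0.5 + 0.1 * np.sin(u + a), sigma)

for t in range(200):
    u = t % n_users
    U[u] = sample_user_factor(history[u])          # posterior draw (Thompson step)
    arm = int(np.argmax(U[u] @ V.T))               # act greedily on the sample
    history[u].append((arm, true_reward(u, arm)))
    # A full Gibbs sweep would also resample V given U; omitted for brevity.
```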

ECAI 2025 · Conference Paper

Beyond Listenership: AI-Predicted Interventions Drive Improvements in Maternal Health Behaviours

  • Arpan Dasgupta
  • Sarvesh Gharat
  • Neha Madhiwalla
  • Aparna Hegde
  • Milind Tambe
  • Aparna Taneja

Automated voice calls are a proven method for disseminating maternal and child health information among beneficiaries and are deployed in several programs around the world. However, these programs often suffer from beneficiary drop-offs and poor engagement. In previous work, through real-world trials, we showed that an AI model, specifically a restless bandit model, could identify beneficiaries who would benefit most from live service call interventions, preventing drop-offs and boosting engagement. However, one key question has remained open so far: does such improved listenership via AI-targeted interventions translate into beneficiaries' improved knowledge and health behaviors? We present a first study that shows not only listenership improvements due to AI interventions, but also simultaneously links these improvements to health behavior changes. Specifically, we demonstrate that AI-scheduled interventions, which enhance listenership, lead to statistically significant improvements in beneficiaries' health behaviors, such as taking iron or calcium supplements in the postnatal period, as well as understanding of critical health topics during pregnancy and infancy. This underscores the potential of AI to drive meaningful improvements in maternal and child health.
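As a rough illustration of how a restless-bandit-style model can rank beneficiaries for a limited number of live service calls, the sketch below scores each beneficiary by a myopic engagement gain and selects the top of the budget. The two-state dynamics, the random transition matrices, and the myopic score are assumptions for illustration; the system described in the paper learns its model from program data and plans interventions with a restless bandit policy.

```python
import numpy as np

# Hedged sketch: choosing beneficiaries for live-call interventions under a
# two-state (not engaging / engaging) restless-bandit-style model.
# Transition probabilities here are random placeholders.

rng = np.random.default_rng(1)
n_beneficiaries, budget = 1000, 50   # weekly live-call capacity (assumed)

# P[i, action, state, next_state]; action 0 = no call, 1 = live call.
P = rng.uniform(0.05, 0.95, (n_beneficiaries, 2, 2, 2))
P /= P.sum(axis=-1, keepdims=True)
state = rng.integers(0, 2, n_beneficiaries)        # 1 = currently engaging

def benefit(i, s):
    """Myopic gain in probability of engaging next week if we intervene now."""
    return P[i, 1, s, 1] - P[i, 0, s, 1]

scores = np.array([benefit(i, state[i]) for i in range(n_beneficiaries)])
chosen = np.argsort(scores)[-budget:]              # top-k by predicted benefit
print("beneficiaries scheduled for a live call:", sorted(chosen.tolist()))
```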

ICLR 2023 · Conference Paper

Explaining RL Decisions with Trajectories

  • Shripad Vilasrao Deshmukh
  • Arpan Dasgupta
  • Balaji Krishnamurthy
  • Nan Jiang
  • Chirag Agarwal
  • Georgios Theocharous
  • Jayakumar Subramanian

Explanation is a key component for the adoption of reinforcement learning (RL) in many real-world decision-making problems. In the literature, the explanation is often provided by saliency attribution to the features of the RL agent's state. In this work, we propose a complementary approach to these explanations, particularly for offline RL, where we attribute the policy decisions of a trained RL agent to the trajectories encountered by it during training. To do so, we encode trajectories in offline training data individually as well as collectively (encoding a set of trajectories). We then attribute policy decisions to a set of trajectories in this encoded space by estimating the sensitivity of the decision with respect to that set. Further, we demonstrate the effectiveness of the proposed approach in terms of quality of attributions as well as practical scalability in diverse environments that involve both discrete and continuous state and action spaces, such as grid-worlds, video games (Atari) and continuous control (MuJoCo). We also conduct a human study on a simple navigation task to observe how the participants' understanding of the task compares with the data attributed for a trained RL policy.
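A minimal sketch of trajectory-level attribution along the lines described above: embed each trajectory, cluster the embeddings, and check which cluster's removal changes the action taken at a query state. The mean-observation embedding, the nearest-neighbour stand-in policy, and the toy data are assumptions; the paper learns trajectory encoders and retrains actual offline RL agents to estimate sensitivity.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hedged sketch of trajectory attribution for an offline RL decision:
# which group of training trajectories, if removed, flips the decision?

rng = np.random.default_rng(2)

# Toy offline dataset: 30 trajectories of (state, action) pairs in R^2 x {0, 1}.
trajectories = [
    [(rng.normal(c, 0.3, 2), int(c > 0)) for _ in range(10)]
    for c in rng.choice([-1.0, 1.0], size=30)
]

def embed(traj):
    """Individual trajectory encoding: here, simply the mean observation."""
    return np.mean([s for s, _ in traj], axis=0)

embeddings = np.array([embed(t) for t in trajectories])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

def policy_action(data, query):
    """Stand-in policy: return the action of the nearest stored state."""
    states = np.array([s for traj in data for s, _ in traj])
    actions = np.array([a for traj in data for _, a in traj])
    return actions[np.argmin(np.linalg.norm(states - query, axis=1))]

query_state = np.array([0.9, 1.1])
base_action = policy_action(trajectories, query_state)

# Attribution: the cluster whose removal changes the decision at query_state.
for k in range(2):
    kept = [t for t, c in zip(trajectories, clusters) if c != k]
    if policy_action(kept, query_state) != base_action:
        print(f"decision at {query_state} is attributed to trajectory cluster {k}")
```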