Arrow Research search

Author name cluster

Susan Murphy

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

12 papers
1 author row

Possible papers

12

RLJ Journal 2025 Journal Article

Non-Stationary Latent Auto-Regressive Bandits

  • Anna L. Trella
  • Walter H. Dempsey
  • Asim Gazi
  • Ziping Xu
  • Finale Doshi-Velez
  • Susan Murphy

For the non-stationary multi-armed bandit (MAB) problem, many existing methods allow a general mechanism for the non-stationarity, but rely on a budget for the non-stationarity that is sub-linear in the total number of time steps $T$. In many real-world settings, however, the mechanism for the non-stationarity can be modeled, but there is no such budget. We instead consider the non-stationary bandit problem where the reward means change due to a latent, auto-regressive (AR) state. We develop Latent AR LinUCB (LARL), an online linear contextual bandit algorithm that does not rely on a non-stationarity budget, but instead forms predictions of reward means by implicitly predicting the latent state. The key idea is to reduce the problem to a linear dynamical system, which can be solved as a linear contextual bandit. In fact, LARL approximates a steady-state Kalman filter and efficiently learns system parameters online. We provide an interpretable regret bound for LARL with respect to the level of non-stationarity in the environment. LARL achieves sub-linear regret in this setting if the noise variance of the latent state process is sufficiently small with respect to $T$. Empirically, LARL outperforms various baseline methods in this non-stationary bandit problem.
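The reduction described in this abstract can be illustrated with a toy sketch (not the authors' implementation: the environment parameters, features, and LinUCB settings below are invented for the example). A standard LinUCB learner is given the last observed reward as a context feature, which acts as a noisy proxy for the latent AR(1) state:

```python
import numpy as np

rng = np.random.default_rng(0)

class LinUCB:
    """Standard LinUCB: one ridge-regression model per arm, UCB exploration."""
    def __init__(self, n_arms, dim, alpha=1.0, lam=1.0):
        self.A = [lam * np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]
        self.alpha = alpha

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            scores.append(A_inv @ b @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, r):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += r * x

# Environment: both arms share a latent AR(1) drift z_t on top of fixed means.
phi, q, arm_means = 0.9, 0.1, np.array([0.0, 0.5])
bandit = LinUCB(n_arms=2, dim=2)
z, prev_r, total, picks = 0.0, 0.0, 0.0, [0, 0]
for t in range(2000):
    x = np.array([1.0, prev_r])      # last reward as a noisy proxy for z_t
    a = bandit.select(x)
    z = phi * z + q * rng.normal()   # latent state evolves every step
    r = arm_means[a] + z + 0.1 * rng.normal()
    bandit.update(a, x, r)
    prev_r = r
    total += r
    picks[a] += 1
print(picks, total / 2000)
```

Because the latent drift is common to both arms, a linear model over past observations can track it, which is the intuition behind reducing the problem to a linear dynamical system.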

RLC Conference 2025 Conference Paper

Non-Stationary Latent Auto-Regressive Bandits

  • Anna L. Trella
  • Walter H. Dempsey
  • Asim Gazi
  • Ziping Xu
  • Finale Doshi-Velez
  • Susan Murphy

For the non-stationary multi-armed bandit (MAB) problem, many existing methods allow a general mechanism for the non-stationarity, but rely on a budget for the non-stationarity that is sub-linear in the total number of time steps $T$. In many real-world settings, however, the mechanism for the non-stationarity can be modeled, but there is no such budget. We instead consider the non-stationary bandit problem where the reward means change due to a latent, auto-regressive (AR) state. We develop Latent AR LinUCB (LARL), an online linear contextual bandit algorithm that does not rely on a non-stationarity budget, but instead forms predictions of reward means by implicitly predicting the latent state. The key idea is to reduce the problem to a linear dynamical system, which can be solved as a linear contextual bandit. In fact, LARL approximates a steady-state Kalman filter and efficiently learns system parameters online. We provide an interpretable regret bound for LARL with respect to the level of non-stationarity in the environment. LARL achieves sub-linear regret in this setting if the noise variance of the latent state process is sufficiently small with respect to $T$. Empirically, LARL outperforms various baseline methods in this non-stationary bandit problem.

RLC Conference 2025 Conference Paper

When and Why Hyperbolic Discounting Matters for Reinforcement Learning Interventions

  • Ian M. Moore
  • Eura Nofshin
  • Siddharth Swaroop
  • Susan Murphy
  • Finale Doshi-Velez
  • Weiwei Pan

In settings where an AI agent nudges a human agent toward a goal, the AI can quickly learn a high-quality policy by modeling the human well. Despite behavioral evidence that humans hyperbolically discount future rewards, we model humans as Markov Decision Processes (MDPs) with exponential discounting, because planning is difficult with non-exponential discounts. In this work, we investigate whether the performance benefits of modeling humans as hyperbolic discounters outweigh the computational costs. We focus on AI interventions that change the human's discounting (i.e., decreasing the human's "nearsightedness" to help them toward distant goals). We derive a fixed exponential discount factor that can approximate hyperbolic discounting, and prove that this approximation guarantees the AI will never miss a necessary intervention. We also prove that our approximation causes fewer false positives (unnecessary interventions) than the mean hazard rate, another well-known method for approximating hyperbolic MDPs as exponential ones. Surprisingly, our experiments demonstrate that exponential approximations outperform hyperbolic ones in online learning, even when the ground-truth human MDP is hyperbolically discounted.
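The gap between the two discounting schemes is easy to see numerically. The sketch below is illustrative only (a least-squares fit, not the paper's derivation of the approximating factor): it fits a single exponential factor to the standard hyperbolic curve $1/(1+kt)$ and shows the hyperbolic curve's heavier tail:

```python
import numpy as np

k, horizon = 1.0, 50
t = np.arange(horizon + 1)
hyper = 1.0 / (1.0 + k * t)          # hyperbolic discount weights 1/(1+kt)

# Grid-search the exponential factor gamma minimizing squared error on [0, horizon].
gammas = np.linspace(0.01, 0.99, 981)
errs = [np.sum((g ** t - hyper) ** 2) for g in gammas]
gamma_fit = gammas[int(np.argmin(errs))]

print(gamma_fit)
print(hyper[horizon], gamma_fit ** horizon)   # hyperbolic tail dominates
```

No single exponential matches a hyperbolic curve everywhere: the fitted exponential decays far faster at long delays, which is why the choice of approximating factor matters for whether the AI intervenes.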

RLJ Journal 2025 Journal Article

When and Why Hyperbolic Discounting Matters for Reinforcement Learning Interventions

  • Ian M. Moore
  • Eura Nofshin
  • Siddharth Swaroop
  • Susan Murphy
  • Finale Doshi-Velez
  • Weiwei Pan

In settings where an AI agent nudges a human agent toward a goal, the AI can quickly learn a high-quality policy by modeling the human well. Despite behavioral evidence that humans hyperbolically discount future rewards, we model humans as Markov Decision Processes (MDPs) with exponential discounting, because planning is difficult with non-exponential discounts. In this work, we investigate whether the performance benefits of modeling humans as hyperbolic discounters outweigh the computational costs. We focus on AI interventions that change the human's discounting (i.e., decreasing the human's "nearsightedness" to help them toward distant goals). We derive a fixed exponential discount factor that can approximate hyperbolic discounting, and prove that this approximation guarantees the AI will never miss a necessary intervention. We also prove that our approximation causes fewer false positives (unnecessary interventions) than the mean hazard rate, another well-known method for approximating hyperbolic MDPs as exponential ones. Surprisingly, our experiments demonstrate that exponential approximations outperform hyperbolic ones in online learning, even when the ground-truth human MDP is hyperbolically discounted.

JMLR Journal 2024 Journal Article

Effect-Invariant Mechanisms for Policy Generalization

  • Sorawit Saengkyongam
  • Niklas Pfister
  • Predrag Klasnja
  • Susan Murphy
  • Jonas Peters

Policy learning is an important component of many real-world learning systems. A major challenge in policy learning is how to adapt efficiently to unseen environments or tasks. Recently, it has been suggested to exploit invariant conditional distributions to learn models that generalize better to unseen environments. However, assuming invariance of entire conditional distributions (which we call full invariance) may be too strong of an assumption in practice. In this paper, we introduce a relaxation of full invariance called effect-invariance (e-invariance for short) and prove that it is sufficient, under suitable assumptions, for zero-shot policy generalization. We also discuss an extension that exploits e-invariance when we have a small sample from the test environment, enabling few-shot policy generalization. Our work does not assume an underlying causal graph or that the data are generated by a structural causal model; instead, we develop testing procedures to test e-invariance directly from data. We present empirical results using simulated data and a mobile health intervention dataset to demonstrate the effectiveness of our approach.

IJCAI Conference 2024 Conference Paper

ReBandit: Random Effects Based Online RL Algorithm for Reducing Cannabis Use

  • Susobhan Ghosh
  • Yongyi Guo
  • Pei-Yao Hung
  • Lara Coughlin
  • Erin Bonar
  • Inbal Nahum-Shani
  • Maureen Walton
  • Susan Murphy

The escalating prevalence of cannabis use, and associated cannabis-use disorder (CUD), poses a significant public health challenge globally. With a notably wide treatment gap, especially among emerging adults (EAs; ages 18-25), addressing cannabis use and CUD remains a pivotal objective within the 2030 United Nations Agenda for Sustainable Development Goals (SDG). In this work, we develop an online reinforcement learning (RL) algorithm called reBandit which will be utilized in a mobile health study to deliver personalized mobile health interventions aimed at reducing cannabis use among EAs. reBandit utilizes random effects and informative Bayesian priors to learn quickly and efficiently in noisy mobile health environments. Moreover, reBandit employs Empirical Bayes and optimization techniques to autonomously update its hyper-parameters online. To evaluate the performance of our algorithm, we construct a simulation testbed using data from a prior study, and compare against commonly used algorithms in mobile health studies. We show that reBandit performs equally well or better than all the baseline algorithms, and the performance gap widens as population heterogeneity increases in the simulation environment, demonstrating its ability to adapt to a diverse population of study participants.
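The random-effects idea can be shown in isolation (a generic empirical-Bayes sketch, not the reBandit algorithm; the population parameters below are invented): per-user estimates are shrunk toward a population mean, with the shrinkage weight estimated from the data itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_obs, tau, sigma = 200, 5, 0.5, 1.0

# True per-user effects drawn from a population (the "random effects").
theta = rng.normal(0.0, tau, n_users)
y = theta[:, None] + rng.normal(0.0, sigma, (n_users, n_obs))
ybar = y.mean(axis=1)                      # raw per-user estimates
se2 = sigma ** 2 / n_obs                   # sampling variance of each ybar

# Empirical Bayes: estimate population mean and variance, then shrink.
mu_hat = ybar.mean()
tau2_hat = max(ybar.var() - se2, 0.0)
w = tau2_hat / (tau2_hat + se2)            # shrinkage weight in [0, 1]
shrunk = mu_hat + w * (ybar - mu_hat)

mse_raw = np.mean((ybar - theta) ** 2)
mse_shrunk = np.mean((shrunk - theta) ** 2)
print(mse_raw, mse_shrunk)
```

Pooling across users this way is what lets a per-user learner start from an informative prior and adapt quickly with few observations, which is the setting the abstract describes.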

AAMAS Conference 2024 Conference Paper

Reinforcement Learning Interventions on Boundedly Rational Human Agents in Frictionful Tasks

  • Eura Nofshin
  • Siddharth Swaroop
  • Weiwei Pan
  • Susan Murphy
  • Finale Doshi-Velez

Many important behavior changes are frictionful; they require individuals to expend effort over a long period with little immediate gratification. Here, an artificial intelligence (AI) agent can provide personalized interventions to help individuals stick to their goals. In these settings, the AI agent must personalize rapidly (before the individual disengages) and interpretably, to help us understand the behavioral interventions. In this paper, we introduce Behavior Model Reinforcement Learning (BMRL), a framework in which an AI agent intervenes on the parameters of a Markov Decision Process (MDP) belonging to a boundedly rational human agent. Our formulation of the human decision-maker as a planning agent allows us to attribute undesirable human policies (ones that do not lead to the goal) to their maladapted MDP parameters, such as an extremely low discount factor. Furthermore, we propose a class of tractable human models that captures fundamental behaviors in frictionful tasks. Introducing a notion of MDP equivalence specific to BMRL, we theoretically and empirically show that AI planning with our human models can lead to helpful policies on a wide range of more complex, ground-truth humans.
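A toy version of the "maladapted discount factor" story (illustrative only; the chain MDP, rewards, and intervention below are invented for the sketch, not taken from the paper): a human planning over a five-step effortful chain gives up under a low discount factor but persists once an intervention raises it:

```python
def plan(gamma, n=5, effort=1.0, goal=10.0):
    """Value iteration on a chain MDP: 'work' moves one step toward the
    goal at a cost; 'give_up' is absorbing with zero reward."""
    V = [0.0] * (n + 1)                      # V[n]: terminal goal state
    for _ in range(200):
        for s in range(n - 1, -1, -1):
            reward = -effort + (goal if s + 1 == n else 0.0)
            V[s] = max(reward + gamma * V[s + 1], 0.0)
    policy = ["work" if (-effort + (goal if s + 1 == n else 0.0)
                         + gamma * V[s + 1]) > 0 else "give_up"
              for s in range(n)]
    return V, policy

_, myopic = plan(gamma=0.5)       # maladapted: gives up far from the goal
_, farsighted = plan(gamma=0.95)  # after an intervention on the discount
print(myopic, farsighted)
```

Note the myopic policy still works when close to the goal; it is only far from the goal that the low discount factor produces the undesirable "give up" behavior that the AI's intervention targets.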

TMLR Journal 2023 Journal Article

Online model selection by learning how compositional kernels evolve

  • Eura Shin
  • Predrag Klasnja
  • Susan Murphy
  • Finale Doshi-Velez

Motivated by the need for efficient, personalized learning in health, we investigate the problem of online compositional kernel selection for multi-task Gaussian Process regression. Existing composition selection methods do not satisfy our strict criteria in health; selection must occur quickly, and the selected kernels must maintain the appropriate level of complexity, sparsity, and stability as data arrives online. We introduce the Kernel Evolution Model (KEM), a generative process on how to evolve kernel compositions in a way that manages the bias--variance trade-off as we observe more data about a user. Using pilot data, we learn a set of kernel evolutions that can be used to quickly select kernels for new test users. KEM reliably selects high-performing kernels for a range of synthetic and real data sets, including two health data sets.
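As background for readers unfamiliar with the term (this is generic Gaussian-process material, not the KEM procedure): a compositional kernel is built from base kernels via sums and products, and any such composition still yields a valid symmetric positive semi-definite Gram matrix:

```python
import numpy as np

def rbf(x, y, ls=1.0):
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ls ** 2)

def periodic(x, y, period=1.0, ls=1.0):
    d = np.abs(x[:, None] - y[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / ls ** 2)

def linear(x, y):
    return x[:, None] * y[None, :]

x = np.linspace(-2, 2, 40)
# Example composition: RBF * Periodic + Linear (locally periodic + trend).
K = rbf(x, x) * periodic(x, x) + linear(x, x)

eigs = np.linalg.eigvalsh((K + K.T) / 2)
print(eigs.min())    # non-negative up to numerical error
```

Closure under sum and product is what makes the space of compositions a sensible search space for model selection.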

NeurIPS Conference 2021 Conference Paper

Statistical Inference with M-Estimators on Adaptively Collected Data

  • Kelly Zhang
  • Lucas Janson
  • Susan Murphy

Bandit algorithms are increasingly used in real-world sequential decision-making problems. Associated with this is an increased desire to be able to use the resulting datasets to answer scientific questions like: Did one type of ad lead to more purchases? In which contexts is a mobile health intervention effective? However, classical statistical approaches fail to provide valid confidence intervals when used with data collected with bandit algorithms. Alternative methods have recently been developed for simple models (e.g., comparison of means). Yet there is a lack of general methods for conducting statistical inference using more complex models on data collected with (contextual) bandit algorithms; for example, current methods cannot be used for valid inference on parameters in a logistic regression model for a binary reward. In this work, we develop theory justifying the use of M-estimators---which include estimators based on empirical risk minimization as well as maximum likelihood---on data collected with adaptive algorithms, including (contextual) bandit algorithms. Specifically, we show that M-estimators, modified with particular adaptive weights, can be used to construct asymptotically valid confidence regions for a variety of inferential targets.
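A rough illustration of the setting (a sketch only; the square-root inverse-propensity weights below are a simplified stand-in, not the paper's exact adaptive-weight construction): data are collected by an ε-greedy bandit, and arm means are then estimated by propensity-weighted least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
true_means, eps, T = np.array([0.0, 1.0]), 0.1, 5000
counts, sums = np.zeros(2), np.zeros(2)
arms, rewards, props = [], [], []

for t in range(T):
    means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    pi = np.full(2, eps / 2)
    pi[int(np.argmax(means))] += 1 - eps      # epsilon-greedy probabilities
    a = int(rng.choice(2, p=pi))
    r = true_means[a] + rng.normal()
    counts[a] += 1
    sums[a] += r
    arms.append(a); rewards.append(r); props.append(pi[a])

arms, rewards, props = map(np.array, (arms, rewards, props))
w = np.sqrt(1.0 / props)          # illustrative square-root propensity weights
est = np.array([np.sum(w * rewards * (arms == k)) / np.sum(w * (arms == k))
                for k in range(2)])
print(est)
```

The point of the paper is that naive (unweighted) estimators on such adaptively collected data can fail to be asymptotically normal; the weighting step is what restores valid confidence regions.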

NeurIPS Conference 2020 Conference Paper

Inference for Batched Bandits

  • Kelly Zhang
  • Lucas Janson
  • Susan Murphy

As bandit algorithms are increasingly utilized in scientific studies and industrial applications, there is an associated increasing need for reliable inference methods based on the resulting adaptively-collected data. In this work, we develop methods for inference on data collected in batches using a bandit algorithm. We prove that the bandit arm selection probabilities cannot generally be assumed to concentrate. Non-concentration of the arm selection probabilities makes inference on adaptively-collected data challenging because classical statistical inference approaches, such as using asymptotic normality or the bootstrap, can have inflated Type-1 error and confidence intervals with below-nominal coverage probabilities even asymptotically. In response we develop the Batched Ordinary Least Squares estimator (BOLS) that we prove is (1) asymptotically normal on data collected from both multi-arm and contextual bandits and (2) robust to non-stationarity in the baseline reward and thus leads to reliable Type-1 error control and accurate confidence intervals.
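The batching idea can be sketched as follows (a simplified two-arm illustration of batch-wise estimation, not the paper's full BOLS construction; it assumes every batch contains both arms): compute the arm-mean difference within each batch, studentize it per batch, and average the studentized statistics across batches:

```python
import numpy as np

rng = np.random.default_rng(2)
K, batch_size, effect = 10, 100, 0.5     # 10 batches; arm 1 pays +0.5
p = 0.5                                  # arm-1 probability, adapted between batches
z_stats = []

for k in range(K):
    a = rng.random(batch_size) < p            # actions for this batch
    r = effect * a + rng.normal(size=batch_size)
    diff = r[a].mean() - r[~a].mean()         # within-batch estimate
    se = np.sqrt(r[a].var(ddof=1) / a.sum() + r[~a].var(ddof=1) / (~a).sum())
    z_stats.append(diff / se)
    p = 0.9 if diff > 0 else 0.1              # bandit-style adaptation between batches

bols = np.sum(z_stats) / np.sqrt(K)           # approx N(0,1) under the null
print(bols)
```

Because the arm probabilities only change between batches, each studentized batch statistic is approximately standard normal given the past, which is why the combined statistic remains well behaved even though the data are adaptively collected.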

NeurIPS Conference 2017 Conference Paper

Action Centered Contextual Bandits

  • Kristjan Greenewald
  • Ambuj Tewari
  • Susan Murphy
  • Predrag Klasnja

Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
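The two-part model can be mimicked in a toy simulation (a sketch of the modeling idea, not the paper's algorithm; the baseline function and parameters are invented): with a known randomization probability, centering the action removes an arbitrarily complex baseline from the treatment-effect regression:

```python
import numpy as np

rng = np.random.default_rng(3)
n, pi = 20000, 0.5
theta = np.array([1.0, -0.5])                 # true treatment-effect parameters

x = rng.normal(size=(n, 2))                   # context
a = (rng.random(n) < pi).astype(float)        # randomized binary action
baseline = np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2   # complex, unmodeled baseline
r = baseline + a * (x @ theta) + 0.1 * rng.normal(size=n)

# Action-centered regression: features (a - pi) * x. The centered action is
# mean-zero and independent of the context, so the baseline is uncorrelated
# with the features and drops out in expectation.
z = (a - pi)[:, None] * x
theta_hat = np.linalg.lstsq(z, r, rcond=None)[0]
print(theta_hat)
```

The treatment-effect estimate is consistent even though the baseline is nonlinear and never modeled, which is the point of keeping the treatment-effect part simple while leaving the baseline complex.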

AAAI Conference 2011 Conference Paper

Exploiting Path Refinement Abstraction in Domain Transition Graphs

  • Peter Gregory
  • Derek Long
  • Craig McNulty
  • Susan Murphy

Partial-Refinement A* (PRA*) is an abstraction technique, based on clustering nearby nodes in graphs, useful in large path-planning problems. Abstracting the underlying graph yields a simpler problem whose solution can be used, by refinement, as a guide to a solution to the original problem. A fruitful way to view domain-independent planning problems is as a collection of multi-valued variables that must perform synchronised transitions through graphs of possible values, where the edges are defined by the domain actions. Planning involves finding efficient paths through Domain Transition Graphs (DTGs). In problems where these graphs are large, planning can be prohibitively expensive. In this paper we explore two ways to exploit PRA* in DTGs.
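The cluster-abstract-refine loop can be sketched on a plain grid graph (a generic PRA*-style illustration using BFS, unrelated to the paper's DTG encoding): nodes are clustered into 2x2 blocks, a path is found in the abstract block graph, and the concrete search is then restricted to the corridor of blocks on that abstract path:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Shortest path by breadth-first search; returns the path or None."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        u = frontier.popleft()
        if u == goal:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in neighbors(u):
            if v not in prev:
                prev[v] = u
                frontier.append(v)
    return None

N, B = 8, 2                      # 8x8 grid, clustered into 2x2 blocks

def grid_nbrs(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < N and 0 <= y + dy < N]

def block(p):
    return (p[0] // B, p[1] // B)

def abstract_nbrs(b):
    x, y = b
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < N // B and 0 <= y + dy < N // B]

start, goal = (0, 0), (7, 7)
corridor = set(bfs(block(start), block(goal), abstract_nbrs))   # abstract path
refined = bfs(start, goal,
              lambda p: [q for q in grid_nbrs(p) if block(q) in corridor])
full = bfs(start, goal, grid_nbrs)
print(len(refined) - 1, len(full) - 1)
```

The refined search explores only nodes whose block lies on the abstract path, yet on this open grid it still recovers an optimal-length route; in large DTGs, shrinking the search space this way is the source of the speed-up.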