Arrow Research

Author name cluster

Kate Donahue

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
2 author rows

Possible papers (6)

TMLR 2026 · Journal Article

Adaptive Online Mirror Descent for Tchebycheff Scalarization in Multi-Objective Learning

  • Meitong Liu
  • Xiaoyuan Zhang
  • Chulin Xie
  • Kate Donahue
  • Han Zhao

Multi-objective learning (MOL) aims to learn under multiple potentially conflicting objectives and strike a proper balance. While recent preference-guided MOL methods often rely on additional optimization objectives or constraints, we consider the classic Tchebycheff scalarization (TCH) that naturally allows for locating solutions with user-specified trade-offs. Due to its minimax formulation, directly optimizing TCH often leads to training oscillation and stagnation. In light of this limitation, we propose an adaptive online mirror descent algorithm for TCH, called (Ada)OMD-TCH. One of our main ingredients is an adaptive online-to-batch conversion that significantly improves solution optimality over traditional conversion in practice while maintaining the same theoretical convergence guarantees. We show that (Ada)OMD-TCH achieves a convergence rate of $\mathcal O(\sqrt{\log m/T})$, where $m$ is the number of objectives and $T$ is the number of rounds, providing a tighter dependency on $m$ in the offline setting compared to existing work. Empirically, we demonstrate on both synthetic problems and federated learning tasks that (Ada)OMD-TCH effectively smooths the training process and yields preference-guided, specific, diverse, and fair solutions.
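
As a rough sketch of the minimax structure described above, the toy below runs plain (non-adaptive) online mirror descent on a Tchebycheff-style objective $\min_x \max_i \mathrm{pref}_i f_i(x)$: entropic (multiplicative-weights) updates on simplex weights over the objectives, gradient steps on the model, and uniform online-to-batch averaging. The objectives, step sizes, and preference vector are all invented for illustration; this is not the authors' (Ada)OMD-TCH, whose adaptive online-to-batch conversion is precisely what the plain averaging here lacks.

```python
import numpy as np

# Toy problem: two conflicting quadratic objectives over x in R^d.
def objectives(x):
    return np.array([np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)])

def grads(x):
    return np.stack([2.0 * (x - 1.0), 2.0 * (x + 1.0)])  # shape (m, d)

def omd_tch(pref, T=2000, eta_x=0.05, eta_lam=0.5, d=2):
    """Plain OMD for min_x max_i pref_i * f_i(x): gradient descent on x,
    entropic mirror (multiplicative-weights) ascent on simplex weights lam."""
    x = np.zeros(d)
    lam = np.full(len(pref), 1.0 / len(pref))   # uniform simplex weights
    x_avg = np.zeros(d)
    for t in range(1, T + 1):
        f, g = objectives(x), grads(x)
        x -= eta_x * (lam * pref) @ g           # descend the lam-weighted losses
        lam *= np.exp(eta_lam * pref * f)       # push weight onto the worst objective
        lam /= lam.sum()
        x_avg += (x - x_avg) / t                # uniform online-to-batch averaging
    return x_avg

print(omd_tch(np.array([0.7, 0.3])))            # trade-off tilted toward objective 1
```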

AAAI 2025 · Conference Paper

Private Blotto: Viewpoint Competition with Polarized Agents

  • Kate Donahue
  • Jon Kleinberg

Social media platforms are responsible for collecting and disseminating vast quantities of content. Recently, however, they have also begun enlisting users in helping annotate this content - for example, to provide context or label disinformation. However, users may act strategically, sometimes reflecting biases (e.g. political) about the "right" label. How can social media platforms design their systems to use human time most efficiently? Historically, competition over multiple items has been explored in the Colonel Blotto game setting. However, Colonel Blotto games were originally designed to model two centrally-controlled armies competing over zero-sum "items", a specific scenario with limited modern-day application. In this work, we propose and study the Private Blotto game, a variant with the key difference that individual agents act independently, without being coordinated by a central "Colonel". We completely characterize the Nash stability of this game and how it affects the amount of "misallocated effort" users spend on unimportant items. We show that the outcome function (aggregating multiple labels on a single item) has a critical impact, specifically contrasting a majority-rule outcome (the median) with a smoother outcome function (the mean). In general, for median outcomes we show that instances without stable arrangements occur only for relatively small numbers of agents, but stable arrangements may have very high levels of misallocated effort. For mean outcome functions, we show that unstable arrangements can occur even for arbitrarily large numbers of agents, but when stable arrangements exist, they always have low misallocated effort. We conclude by discussing the implications of our results for our motivating examples in social media platforms and political competition.
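
To make the stability question concrete, here is a hypothetical brute-force checker for pure Nash arrangements on a tiny instance: each agent has a scalar viewpoint, picks one item, and (as a guessed proxy for the paper's utility) pays the distance between its viewpoint and its item's aggregated outcome. Only the median-versus-mean aggregation switch comes from the abstract; the cost model and the numbers are assumptions.

```python
import numpy as np

def item_outcomes(assign, views, n_items, agg):
    """Outcome of each item = agg (median or mean) of the viewpoints on it."""
    out = np.full(n_items, np.nan)
    for j in range(n_items):
        on_j = views[assign == j]
        if on_j.size:
            out[j] = agg(on_j)
    return out

def is_pure_nash(assign, views, n_items, agg):
    """True if no agent can strictly cut its cost by unilaterally switching items.
    Cost here is a guessed proxy: |own viewpoint - outcome of own item|."""
    for a in range(views.size):
        cur = abs(views[a] - item_outcomes(assign, views, n_items, agg)[assign[a]])
        for j in range(n_items):
            if j == assign[a]:
                continue
            trial = assign.copy()
            trial[a] = j
            if abs(views[a] - item_outcomes(trial, views, n_items, agg)[j]) < cur - 1e-12:
                return False
    return True

views = np.array([-1.0, -0.9, 0.8, 1.0])   # two polarized camps of agents
assign = np.array([0, 0, 1, 1])            # each camp stationed on its own item
for name, agg in (("median", np.median), ("mean", np.mean)):
    print(name, is_pure_nash(assign, views, 2, agg))
```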

ICML 2024 · Conference Paper

Impact of Decentralized Learning on Player Utilities in Stackelberg Games

  • Kate Donahue
  • Nicole Immorlica
  • Meena Jagadeesan
  • Brendan Lucier
  • Aleksandrs Slivkins

When deployed in the world, a learning agent such as a recommender system or a chatbot often repeatedly interacts with another learning agent (such as a user) over time. In many such two-agent systems, each agent learns separately and the rewards of the two agents are not perfectly aligned. To better understand such cases, we examine the learning dynamics of the two-agent system and the implications for each agent’s objective. We model these systems as Stackelberg games with decentralized learning and show that standard regret benchmarks (such as Stackelberg equilibrium payoffs) result in worst-case linear regret for at least one player. To better capture these systems, we construct a relaxed regret benchmark that is tolerant to small learning errors by agents. We show that standard learning algorithms fail to provide sublinear regret, and we develop algorithms to achieve near-optimal $\mathcal{O}(T^{2/3})$ regret for both players with respect to these benchmarks. We further design relaxed environments under which faster learning ($\mathcal{O}(\sqrt{T})$) is possible. Altogether, our results take a step towards assessing how two-agent interactions in sequential and decentralized learning environments affect the utility of both agents.
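
The $\mathcal{O}(T^{2/3})$ rate is suggestive of explore-then-commit-style schedules, where the leader holds each action fixed long enough for the follower to relearn its response. The sketch below simulates that dynamic in an invented 2x2 game with noisy bandit feedback on both sides; it is a generic illustration of decentralized two-agent learning, not the paper's algorithm or regret benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented 2x2 game: payoff[a, b] for leader action a, follower action b.
leader_payoff = np.array([[0.9, 0.1],
                          [0.6, 0.8]])
follower_payoff = np.array([[0.2, 0.7],
                            [0.9, 0.3]])

def run(T=10_000, noise=0.1):
    """Leader explores each action for ~T^(2/3) rounds, then commits; the
    follower greedily tracks noisy estimates of its rewards given the
    leader's current action. A generic sketch, not the paper's algorithm."""
    n_explore = int(T ** (2 / 3))
    f_sum, f_cnt = np.zeros((2, 2)), np.zeros((2, 2))
    l_est = np.zeros(2)
    total = 0.0
    for t in range(T):
        a = t // n_explore if t < 2 * n_explore else int(l_est.argmax())
        if f_cnt[a].min() < 5:                 # forced follower exploration
            b = int(f_cnt[a].argmin())
        else:
            b = int((f_sum[a] / f_cnt[a]).argmax())
        f_sum[a, b] += follower_payoff[a, b] + noise * rng.standard_normal()
        f_cnt[a, b] += 1
        if t < 2 * n_explore:                  # leader's noisy reward estimates
            l_est[a] += leader_payoff[a, b] + noise * rng.standard_normal()
        total += leader_payoff[a, b]
    return total / T

print(run())   # leader's average reward under decentralized learning
```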

AAAI 2024 · Conference Paper

When Are Two Lists Better than One?: Benefits and Harms in Joint Decision-Making

  • Kate Donahue
  • Sreenivas Gollapudi
  • Kostas Kollias

Historically, much of machine learning research has focused on the performance of the algorithm alone, but recently more attention has been paid to optimizing joint human-algorithm performance. Here, we analyze a specific type of human-algorithm collaboration where the algorithm has access to a set of n items and presents a subset of size k to the human, who selects a final item from among those k. This scenario could model content recommendation, route planning, or any type of labeling task. Because both the human and the algorithm have imperfect, noisy information about the true ordering of items, the key question is: which value of k maximizes the probability that the best item will ultimately be selected? For k=1, performance is optimized by the algorithm acting alone, and for k=n it is optimized by the human acting alone. Surprisingly, we show that for multiple noise models, it is optimal to set k in [2, n-1] - that is, there are strict benefits to collaborating, even when the human and algorithm have equal accuracy separately. We demonstrate this theoretically for the Mallows model and experimentally for the Random Utilities models of noisy permutations. However, we show this pattern is *reversed* when the human is anchored on the algorithm's presented ordering - the joint system always has strictly worse performance. We extend these results to the case where the human and algorithm differ in their accuracy levels, showing that there always exist regimes where a more accurate agent would strictly benefit from collaborating with a less accurate one, but that these regimes are asymmetric in the human's and the algorithm's accuracy.
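
Under a random-utility-style noise model, the optimal-k claim is easy to probe by Monte Carlo: the algorithm shortlists its noisy top k, the human picks its noisy favourite among them, and we estimate the probability that the true best item survives. The Gaussian noise, the scores, and the parameters below are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_best_selected(n=10, k=3, sigma=1.0, trials=20_000):
    """Random-utility-style sketch: the algorithm and the human each observe
    the true item scores plus independent Gaussian noise; the algorithm
    shortlists its top k, the human picks its favourite of those k.
    Returns an estimate of P(the true best item is chosen)."""
    true = np.arange(n, 0, -1, dtype=float)    # item 0 has the highest true score
    wins = 0
    for _ in range(trials):
        alg = true + sigma * rng.standard_normal(n)
        shortlist = np.argsort(alg)[-k:]       # algorithm's noisy top k
        human = true[shortlist] + sigma * rng.standard_normal(k)
        wins += shortlist[human.argmax()] == 0
    return wins / trials

for k in (1, 3, 10):                           # k=1: algorithm alone; k=n: human alone
    print(k, p_best_selected(k=k))
```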

AAAI 2021 · Conference Paper

Model-sharing Games: Analyzing Federated Learning Under Voluntary Participation

  • Kate Donahue
  • Jon Kleinberg

Federated learning is a setting where agents, each with access to their own data source, combine models learned from local data to create a global model. If agents are drawing their data from different distributions, though, federated learning might produce a biased global model that is not optimal for each agent. This means that agents face a fundamental question: should they join the global model or stay with their local model? In this work, we show how this situation can be naturally analyzed through the framework of coalitional game theory. Motivated by these considerations, we propose the following game: there are heterogeneous players with different model parameters governing their data distribution and different amounts of data they have noisily drawn from their own distribution. Each player’s goal is to obtain a model with minimal expected mean squared error (MSE) on their own distribution. They have a choice of fitting a model based solely on their own data, or combining their learned parameters with those of some subset of the other players. Combining models reduces the variance component of their error through access to more data, but increases the bias because of the heterogeneity of distributions. We derive exact expected MSE values for problems in linear regression and mean estimation. We use these values to analyze the resulting game in the framework of hedonic game theory; we study how players might divide into coalitions, where each set of players within a coalition jointly constructs model(s). We analyze three methods of federation, modeling differing degrees of customization. In uniform federation, the agents collectively produce a single model. In coarse-grained federation, each agent can weight the global model together with their local model. In fine-grained federation, each agent can flexibly combine models from each other agent in the federation. For each method, we constructively analyze the stable partitions of players into coalitions.
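
The bias-variance trade-off driving the game can be reproduced numerically: pooling samples shrinks variance but imports the other player's distribution shift. Below is a minimal Monte Carlo sketch for two-player mean estimation under uniform federation; all distributions and sample sizes are invented, and the paper instead derives exact expected MSE expressions rather than simulating them.

```python
import numpy as np

rng = np.random.default_rng(2)

def mse(n=5, sig_theta=0.3, sig_noise=2.0, trials=50_000):
    """Two players with true means theta_i ~ N(0, sig_theta^2) and n noisy
    samples each (noise sig_noise). Compares player 1's MSE from its local
    sample mean vs the uniform federated mean over the pooled samples."""
    local = fed = 0.0
    for _ in range(trials):
        theta = sig_theta * rng.standard_normal(2)   # heterogeneous true means
        x1 = theta[0] + sig_noise * rng.standard_normal(n)
        x2 = theta[1] + sig_noise * rng.standard_normal(n)
        local += (x1.mean() - theta[0]) ** 2
        fed += (np.concatenate([x1, x2]).mean() - theta[0]) ** 2
    return local / trials, fed / trials

print(mse(n=5))      # scarce data: pooling's variance cut beats its bias
print(mse(n=2000))   # ample data: heterogeneity bias makes local better
```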

NeurIPS 2021 · Conference Paper

Optimality and Stability in Federated Learning: A Game-theoretic Approach

  • Kate Donahue
  • Jon Kleinberg

Federated learning is a distributed learning paradigm where multiple agents, each with access only to local data, jointly learn a global model. There has recently been an explosion of research aiming not only to improve the accuracy rates of federated learning, but also to provide certain guarantees around social-good properties such as total error. One branch of this research has taken a game-theoretic approach, and in particular, prior work has viewed federated learning as a hedonic game, where error-minimizing players arrange themselves into federating coalitions. This past work proves the existence of stable coalition partitions, but leaves open a wide range of questions, including how far from optimal these stable solutions are. In this work, we motivate and define a notion of optimality given by the average error rates among federating agents (players). First, we provide and prove the correctness of an efficient algorithm to calculate an optimal (error-minimizing) arrangement of players. Next, we analyze the relationship between the stability and optimality of an arrangement. We show that for some regions of parameter space, all stable arrangements are optimal (Price of Anarchy equal to 1), but that this is not true for all settings: there exist examples of stable arrangements with higher cost than optimal (Price of Anarchy greater than 1). Finally, we give the first constant-factor bound on the performance gap between stability and optimality, proving that the total error of the worst stable solution can be no higher than 9 times the total error of an optimal solution (a Price of Anarchy bound of 9).
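
For small player sets, the optimality/stability gap can be explored by brute force: enumerate all partitions, score each by total error, and compare the worst stable partition against the optimum. The sketch below does this with an invented size-based error function and a simple single-deviation notion of stability; neither matches the paper's exact MSE model or stability concept, so the printed ratio only illustrates how a Price of Anarchy is computed. Under these particular numbers the grand coalition is stable but suboptimal, so the ratio exceeds 1, mirroring the abstract's "Price of Anarchy greater than 1" regime.

```python
def partitions(players):
    """Enumerate all partitions of a small player list (Bell-number growth)."""
    if not players:
        yield []
        return
    first, rest = players[0], players[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

# Invented symmetric cost: a coalition of size s gives each member error
# 1/s (variance shrinks) + beta*(s-1) (heterogeneity bias grows). This is a
# stand-in, not the paper's exact MSE expressions.
def err(size, beta=0.2):
    return 1.0 / size + beta * (size - 1)

def total_error(part):
    return sum(len(c) * err(len(c)) for c in part)

def is_stable(part):
    """Simple single-deviation stability: no player strictly gains by going
    alone or by hopping into another existing coalition."""
    for c in part:
        cur = err(len(c))
        if err(1) < cur - 1e-12:
            return False
        if any(d is not c and err(len(d) + 1) < cur - 1e-12 for d in part):
            return False
    return True

parts = list(partitions(list(range(5))))
opt = min(total_error(p) for p in parts)
worst_stable = max(total_error(p) for p in parts if is_stable(p))
print("empirical Price of Anarchy:", worst_stable / opt)
```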