Author name cluster

Adarsh Barik

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers
2 author rows

Possible papers (7)

AAAI 2025 · Conference Paper

p-Mean Regret for Stochastic Bandits

  • Anand Krishna
  • Philips George John
  • Adarsh Barik
  • Vincent Y. F. Tan

In this work, we extend the concept of the p-mean welfare objective from social choice theory to study p-mean regret in stochastic multi-armed bandit problems. The p-mean regret, defined as the difference between the optimal mean among the arms and the p-mean of the expected rewards, offers a flexible framework for evaluating bandit algorithms, enabling algorithm designers to balance fairness and efficiency by adjusting the parameter $p$. Our framework encompasses both average cumulative regret and Nash regret as special cases. We introduce a simple, unified UCB-based algorithm (Explore-Then-UCB) that achieves novel p-mean regret bounds. Our algorithm consists of two phases: a carefully calibrated uniform exploration phase to initialize sample means, followed by the UCB1 algorithm of Auer et al. (2002). Under mild assumptions, we prove that our algorithm achieves a p-mean regret bound of $\tilde{O}\big(\sqrt{k/T^{1/(2|p|)}}\big)$ for all $p \le -1$, where $k$ represents the number of arms and $T$ the time horizon. When $-1 < p < 0$, we achieve a regret bound of $\tilde{O}\big(\sqrt{k^{1.5}/T^{1/2}}\big)$. For the range $0 < p \le 1$, we achieve a p-mean regret scaling as $\tilde{O}\big(\sqrt{k/T}\big)$, which matches the previously established lower bound up to logarithmic factors. This result stems from the fact that the p-mean regret of any algorithm is at least its average cumulative regret for $p \le 1$. In the case of Nash regret (the limit as $p$ approaches zero), our unified approach differs from the prior work of Barman et al. (2023), which requires a new Nash Confidence Bound algorithm. Notably, we achieve the same regret bound up to constant factors using our more general method.
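
The two-phase scheme is concrete enough to sketch. Below is a minimal Python sketch under illustrative assumptions (rewards in [0, 1], a hypothetical exploration length t0, and a simple p-mean helper); it is not the paper's carefully calibrated algorithm.

```python
import numpy as np

def p_mean(values, p):
    """Power mean M_p(v) = (mean(v ** p)) ** (1 / p), defined here for
    p != 0 and positive values; p -> 0 recovers the geometric mean."""
    v = np.asarray(values, dtype=float)
    return float(np.mean(v ** p) ** (1.0 / p))

def explore_then_ucb(arms, T, t0):
    """Two-phase sketch: round-robin exploration for the first t0 rounds
    to initialize sample means, then the UCB1 index of Auer et al. (2002).
    `arms` is a list of callables returning stochastic rewards in [0, 1]."""
    k = len(arms)
    assert t0 >= k, "explore long enough to pull every arm at least once"
    counts = np.zeros(k)
    means = np.zeros(k)
    rewards = []
    for t in range(T):
        if t < t0:
            a = t % k                           # phase 1: uniform exploration
        else:
            bonus = np.sqrt(2.0 * np.log(t) / counts)
            a = int(np.argmax(means + bonus))   # phase 2: UCB1 index
        r = arms[a]()
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # incremental sample mean
        rewards.append(r)
    return np.array(rewards)
```

Setting p = 1 in the helper gives the arithmetic mean behind average cumulative regret, while p -> 0 approaches the geometric mean behind Nash regret.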

NeurIPS 2025 · Conference Paper

Parameter-free Algorithms for the Stochastically Extended Adversarial Model

  • Shuche Wang
  • Adarsh Barik
  • Peng Zhao
  • Vincent Tan

We develop the first parameter-free algorithms for the Stochastically Extended Adversarial (SEA) model, a framework that bridges adversarial and stochastic online convex optimization. Existing approaches for the SEA model require prior knowledge of problem-specific parameters, such as the diameter of the domain $D$ and the Lipschitz constant of the loss functions $G$, which limits their practical applicability. Addressing this, we develop parameter-free methods by leveraging the Optimistic Online Newton Step (OONS) algorithm to eliminate the need for these parameters. We first establish a comparator-adaptive algorithm for the scenario with unknown domain diameter but known Lipschitz constant, achieving an expected regret bound of $\tilde{O}\big(\Vert u\Vert_2^2 + \Vert u\Vert_2(\sqrt{\sigma^2_{1:T}} + \sqrt{\Sigma^2_{1:T}})\big)$, where $u$ is the comparator vector and $\sigma^2_{1:T}$ and $\Sigma^2_{1:T}$ denote the cumulative stochastic variance and cumulative adversarial variation, respectively. We then extend this to the more general setting where both $D$ and $G$ are unknown, attaining a comparator- and Lipschitz-adaptive algorithm. Notably, the regret bound exhibits the same dependence on $\sigma^2_{1:T}$ and $\Sigma^2_{1:T}$, demonstrating the efficacy of our proposed methods even when both parameters are unknown in the SEA model.
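
For intuition, the optimistic "predict with a hint, then correct" pattern behind Newton-style online updates can be sketched generically. Everything below (the hint oracle hint_fn, the step size gamma, the eps-regularized metric) is an illustrative assumption; the paper's parameter-free, comparator-adaptive construction is more involved.

```python
import numpy as np

def optimistic_newton_sketch(grad_fn, hint_fn, dim, T, gamma=1.0, eps=1.0):
    """Generic optimistic two-step update under an adaptive quadratic
    (Newton-style) metric: play an iterate shifted by the hint, then
    correct with the observed gradient. Illustration only; not the
    paper's OONS-based parameter-free method."""
    A = eps * np.eye(dim)          # adaptive curvature/metric matrix
    y = np.zeros(dim)              # corrected iterate carried across rounds
    plays = []
    for t in range(T):
        h = hint_fn(t)                             # optimistic gradient guess
        x = y - gamma * np.linalg.solve(A, h)      # optimistic play
        g = grad_fn(t, x)                          # observed stochastic gradient
        A = A + np.outer(g, g)                     # grow the metric with g g^T
        y = y - gamma * np.linalg.solve(A, g)      # correction step
        plays.append(x)
    return plays
```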

TMLR 2024 · Journal Article

Recovering Exact Support in Federated Lasso without Optimization

  • Adarsh Barik
  • Jean Honorio

Federated learning provides a framework to address the challenges of distributed computing, data ownership, and privacy over a large number of distributed clients with low computational and communication capabilities. In this paper, we study the problem of learning the exact support of sparse linear regression in the federated learning setup. We provide a simple communication-efficient algorithm that needs only one-shot communication with the centralized server to compute the exact support by majority voting. Our method does not require the clients to solve any optimization problem and can thus be run on devices with low computational capabilities. Our method is naturally robust to client failure, model poisoning, and straggling clients. We formally prove that our method requires a number of samples per client that is polynomial in the support size but independent of the dimension of the problem. We require the number of distributed clients to be only logarithmic in the dimension of the problem. For certain classes of predictor variables (e.g., mutually independent or correlated Gaussian), the overall sample complexity matches the optimal sample complexity of the non-federated centralized setting. Furthermore, our method is easy to implement and has overall polynomial time complexity.
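
The shape of the one-shot protocol is easy to sketch. The client-side statistic used below (keeping the coordinates with the largest marginal correlations $|X^\top y|$) is a hypothetical stand-in for the paper's client-side rule; the point is the protocol itself: one message per client, a majority vote at the server, and no optimization anywhere.

```python
import numpy as np

def client_support(X, y, s):
    """One client's local support guess: the s coordinates with the largest
    marginal correlations |X^T y| (a hypothetical stand-in for the paper's
    client-side rule -- no optimization problem is solved)."""
    scores = np.abs(X.T @ y)
    return set(np.argsort(scores)[-s:].tolist())

def federated_support(clients, s, d):
    """One-shot aggregation: every client reports its local support once;
    the server keeps each coordinate reported by a strict majority."""
    votes = np.zeros(d)
    for X, y in clients:
        for j in client_support(X, y, s):
            votes[j] += 1
    return {j for j in range(d) if votes[j] > len(clients) / 2}
```

A failed or straggling client simply contributes no vote, which is one way to see the claimed robustness.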

ICML 2022 · Conference Paper

A Simple Unified Framework for High Dimensional Bandit Problems

  • Wenjie Li
  • Adarsh Barik
  • Jean Honorio

Stochastic high-dimensional bandit problems with low-dimensional structure are useful in different applications such as online advertising and drug discovery. In this work, we propose a simple unified algorithm for such problems and present a general analysis framework for the regret upper bound of our algorithm. We show that under some mild unified assumptions, our algorithm can be applied to different high-dimensional bandit problems. Our framework utilizes the low-dimensional structure to guide the parameter estimation in the problem; as a result, our algorithm achieves comparable regret bounds in the LASSO bandit as a sanity check, as well as novel bounds that depend logarithmically on the dimension in the low-rank matrix bandit, the group-sparse matrix bandit, and a new problem: the multi-agent LASSO bandit.
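
To make "low-dimensional structure guides the parameter estimation" concrete, here is a minimal sparse-linear (LASSO-style) bandit loop. The greedy arm choice and the shrinking penalty schedule are illustrative assumptions, not the unified algorithm analyzed in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_bandit_sketch(contexts_fn, reward_fn, d, T, alpha0=1.0):
    """Sparse linear bandit loop: repeatedly refit a LASSO estimate of the
    unknown sparse parameter on past observations, then pull the arm whose
    context has the highest estimated reward."""
    X_hist, r_hist = [], []
    theta = np.zeros(d)
    for t in range(1, T + 1):
        arms = contexts_fn(t)                     # (k, d) array of arm contexts
        a = int(np.argmax(arms @ theta))          # greedy w.r.t. sparse estimate
        r = reward_fn(t, a)
        X_hist.append(arms[a])
        r_hist.append(r)
        alpha = alpha0 * np.sqrt(np.log(d * t) / t)   # shrinking L1 penalty
        model = Lasso(alpha=alpha, fit_intercept=False)
        model.fit(np.vstack(X_hist), np.array(r_hist))
        theta = model.coef_
    return theta
```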

ICML 2022 · Conference Paper

Sparse Mixed Linear Regression with Guarantees: Taming an Intractable Problem with Invex Relaxation

  • Adarsh Barik
  • Jean Honorio

In this paper, we study the problem of sparse mixed linear regression on an unlabeled dataset generated from linear measurements against two different regression parameter vectors. Since the data is unlabeled, our task is not only to find a good approximation of the regression parameter vectors but also to label the dataset correctly. In its original form, this problem is NP-hard. The most popular algorithms for this problem (such as Expectation-Maximization) have a tendency to get stuck at local minima. We provide a novel invex relaxation for this intractable problem which leads to a solution with provable theoretical guarantees. This relaxation enables exact recovery of the data labels. Furthermore, we recover close approximations of the regression parameter vectors which match the true parameter vectors in support and sign. Our formulation uses a carefully constructed primal-dual witness framework for the invex problem. Moreover, we show that the sample complexity of our method is only logarithmic in the dimension of the regression parameter vectors.
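
The generative model is easy to state and simulate. The sketch below (Gaussian design, illustrative dimensions and noise level, all assumptions) shows why the task couples parameter estimation with labeling.

```python
import numpy as np

def mixed_linear_data(n, d, s, sigma=0.1, seed=0):
    """Simulate sparse mixed linear regression: every response comes from
    one of two hidden s-sparse parameter vectors. The labels z are never
    observed -- recovering them jointly with the parameters is what makes
    the problem combinatorial (and, in general, NP-hard)."""
    rng = np.random.default_rng(seed)
    betas = np.zeros((2, d))
    for b in betas:                                    # two sparse parameter vectors
        supp = rng.choice(d, size=s, replace=False)
        b[supp] = rng.normal(size=s)
    X = rng.normal(size=(n, d))
    z = rng.integers(0, 2, size=n)                     # hidden binary labels
    y = np.einsum('ij,ij->i', X, betas[z]) + sigma * rng.normal(size=n)
    return X, y, z, betas
```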

NeurIPS 2021 · Conference Paper

Fair Sparse Regression with Clustering: An Invex Relaxation for a Combinatorial Problem

  • Adarsh Barik
  • Jean Honorio

In this paper, we study the problem of fair sparse regression on a biased dataset where the bias depends upon a hidden binary attribute. The presence of a hidden attribute adds an extra layer of complexity by combining sparse regression and clustering with unknown binary labels. The corresponding optimization problem is combinatorial, but we propose a novel relaxation of it as an invex optimization problem. To the best of our knowledge, this is the first invex relaxation for a combinatorial problem. We show that the inclusion of the debiasing/fairness constraint in our model has no adverse effect on performance; rather, it enables the recovery of the hidden attribute. The support of our recovered regression parameter vector matches exactly that of the true parameter vector. Moreover, we simultaneously solve the clustering problem by recovering the exact value of the hidden attribute for each sample. Our method uses carefully constructed primal-dual witnesses to provide theoretical guarantees for the combinatorial problem. To that end, we show that the sample complexity of our method is logarithmic in the dimension of the regression parameter vector.
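
Since invexity carries the guarantees in this paper and the one above, it may help to recall the standard definition, which is not specific to this work:

```latex
% A differentiable f : R^n -> R is invex if there exists a vector-valued
% map eta such that
\[
  f(x) - f(u) \;\ge\; \nabla f(u)^{\top}\, \eta(x, u)
  \qquad \text{for all } x, u \in \mathbb{R}^{n}.
\]
% Equivalently (Ben-Israel and Mond, 1986), f is invex if and only if
% every stationary point of f is a global minimizer.
```

That equivalence is why stationary points of the relaxed problem certify global optimality even though the relaxation need not be convex; convexity is the special case $\eta(x, u) = x - u$.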

NeurIPS 2019 · Conference Paper

Learning Bayesian Networks with Low Rank Conditional Probability Tables

  • Adarsh Barik
  • Jean Honorio

In this paper, we provide a method to learn the directed structure of a Bayesian network from data. The data is accessed by making conditional probability queries to a black-box model. We introduce a notion of simplicity of representation of the conditional probability tables of the nodes in the Bayesian network, which we call "low rankness". We connect this notion to the Fourier transform of real-valued set functions and propose a method which learns the exact directed structure of a "low rank" Bayesian network using very few queries. We formally prove that our method correctly recovers the true directed structure, runs in polynomial time, and needs only polynomially many samples with respect to the number of nodes. We also provide further improvements in efficiency if we have access to some observational data.
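
The "low rankness" notion ties to the Fourier expansion of set functions, which is easy to compute by brute force for intuition. The enumeration below is an illustrative aid (exponential in n), not the paper's query-efficient procedure.

```python
import itertools

def fourier_coefficients(f, n):
    """Brute-force Fourier (Walsh-Hadamard) coefficients of a set function
    f : {0,1}^n -> R, i.e. hat(f)(S) = 2^{-n} * sum_x f(x) * (-1)^{|S & x|}.
    Few large coefficients corresponds to a 'simple' representation."""
    points = list(itertools.product([0, 1], repeat=n))
    coeffs = {}
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            chi = [(-1) ** sum(x[i] for i in S) for x in points]  # character
            coeffs[S] = sum(f(x) * c for x, c in zip(points, chi)) / 2 ** n
    return coeffs
```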