Arrow Research · Search

Author name cluster

Vinayak M. Kumar

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers

Possible papers

STOC 2025 · Conference Paper

Linear Hashing Is Optimal

  • Michael Jaber
  • Vinayak M. Kumar
  • David Zuckerman

We prove that hashing n balls into n bins via random 𝔽₂-linear maps yields expected maximum load O(log n / log log n), resolving an open question of Alon, Dietzfelbinger, Miltersen, Petrank, and Tardos (STOC ’97, JACM ’99). More generally, we show that the maximum load exceeds r · log n / log log n with probability at most O(1/r²). Our proof uses potential functions to detect heavy bins.
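The balls-into-bins experiment the abstract describes is easy to simulate. The sketch below is illustrative only (the function names and parameters are not from the paper): it hashes n random keys into n bins with a random GF(2)-linear map, built by picking one random column per input bit, and reports the fullest bin alongside log n / log log n.

```python
import math
import random

def random_linear_hash(universe_bits, bin_bits, rng):
    # A random GF(2)-linear map from universe_bits-bit keys to
    # bin_bits-bit bins: one random column per input bit, and h(x)
    # is the XOR of the columns at x's set bits.
    cols = [rng.getrandbits(bin_bits) for _ in range(universe_bits)]
    def h(x):
        out, i = 0, 0
        while x:
            if x & 1:
                out ^= cols[i]
            x >>= 1
            i += 1
        return out
    return h

def max_load(n_balls, universe_bits, bin_bits, rng):
    # Hash n_balls distinct random keys into 2**bin_bits bins and
    # return the load of the fullest bin.
    h = random_linear_hash(universe_bits, bin_bits, rng)
    balls = rng.sample(range(1 << universe_bits), n_balls)
    loads = [0] * (1 << bin_bits)
    for x in balls:
        loads[h(x)] += 1
    return max(loads)

rng = random.Random(0)
bin_bits = 12
n = 1 << bin_bits                       # 4096 balls into 4096 bins
observed = max_load(n, 32, bin_bits, rng)
bound = math.log(n) / math.log(math.log(n))
print(f"max load: {observed}, log n / log log n ~ {bound:.2f}")
```

For n = 4096 the benchmark quantity log n / log log n is about 3.9; the simulated maximum load for a single random linear map should land within a small constant factor of it, consistent with the theorem's O(log n / log log n) expectation.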

STOC 2024 · Conference Paper

Relaxed Local Correctability from Local Testing

  • Vinayak M. Kumar
  • Geoffrey Mon

We construct the first asymptotically good relaxed locally correctable codes with polylogarithmic query complexity, bringing the upper bound polynomially close to the lower bound of Gur and Lachish (SICOMP 2021). Our result follows from showing that a high-rate locally testable code can boost the block length of a smaller relaxed locally correctable code, while preserving the correcting radius and incurring only a modest additive cost in rate and query complexity. We use the locally testable code’s tester to check if the amount of corruption in the input is low; if so, we can “zoom in” to a suitable substring of the input and recurse on the smaller code’s local corrector. Hence, iterating this operation with a suitable family of locally testable codes due to Dinur, Evra, Livne, Lubotzky, and Mozes (STOC 2022) yields asymptotically good codes with relaxed local correctability, arbitrarily large block length, and polylogarithmic query complexity. Our codes asymptotically inherit the rate and distance of any locally testable code used in the final invocation of the operation. Therefore, our framework also yields nonexplicit relaxed locally correctable codes with polylogarithmic query complexity that have rate and distance approaching the Gilbert–Varshamov bound.
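The test-then-zoom-in control flow can be illustrated with a toy stand-in. In the sketch below a simple repetition code plays the role of both the locally testable code and the smaller relaxed locally correctable code; `R`, `SAMPLES`, and all function names are illustrative inventions, and the paper's actual codes are far more involved.

```python
import random

R = 5          # toy repetition factor
SAMPLES = 20   # blocks sampled by the local tester

def encode(msg_bits):
    # Toy code: each message bit is repeated R times, block by block.
    return [b for b in msg_bits for _ in range(R)]

def local_test(word, k, rng):
    # Few-query test: sample random blocks and reject if any sampled
    # block is internally inconsistent (corruption is visible).
    for _ in range(SAMPLES):
        j = rng.randrange(k)
        block = word[j * R:(j + 1) * R]
        if len(set(block)) > 1:
            return False
    return True

def relaxed_correct(word, k, i, rng):
    # Relaxed local corrector for message bit i: run the tester first;
    # if the word looks close to a codeword, "zoom in" to bit i's block
    # and decode it with the smaller code's corrector (majority vote).
    if not local_test(word, k, rng):
        return None                     # too corrupted: output "reject"
    block = word[i * R:(i + 1) * R]
    return 1 if sum(block) > R // 2 else 0

rng = random.Random(0)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
word = encode(msg)
word[0] ^= 1                            # flip one copy of message bit 0
print(relaxed_correct(word, len(msg), 0, rng))
```

On an uncorrupted codeword the tester always accepts and the majority vote returns the right bit; on a lightly corrupted word the corrector either rejects (`None`) or decodes correctly, mirroring the "correct value or ⊥" guarantee of relaxed local correction.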

UAI 2021 · Conference Paper

Condition number bounds for causal inference

  • Spencer Gordon
  • Vinayak M. Kumar
  • Leonard J. Schulman
  • Piyush Srivastava

An important achievement in the field of causal inference was a complete characterization of when a causal effect, in a system modeled by a causal graph, can be determined uniquely from purely observational data. The identification algorithms resulting from this work produce exact symbolic expressions for causal effects, in terms of the observational probabilities. More recent work has looked at the numerical properties of these expressions, in particular using the classical notion of the condition number. In its classical interpretation, the condition number quantifies the sensitivity of the output values of the expressions to small numerical perturbations in the input observational probabilities. In the context of causal identification, the condition number has also been shown to be related to the effect of certain kinds of uncertainties in the structure of the causal graphical model. In this paper, we first give an upper bound on the condition number for the interesting case of causal graphical models with small “confounded components”. We then develop a tight characterization of the condition number of any given causal identification problem. Finally, we use our tight characterization to give a specific example where the condition number can be much lower than that obtained via generic bounds on the condition number, and to show that even “equivalent” expressions for causal identification can behave very differently with respect to their numerical stability properties.
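The classical interpretation above (sensitivity of outputs to small input perturbations) can be probed numerically. The sketch below is not from the paper: the back-door expression, the example distributions, and all names are illustrative. It empirically estimates a relative condition number for one identification formula, once on a well-balanced observational distribution and once on one where the conditioning event X=1, Z=0 is nearly impossible.

```python
import random

def backdoor_effect(p):
    # Back-door adjustment P(Y=1 | do(X=1)) = sum_z P(z) * P(Y=1 | X=1, z),
    # written in terms of the observational joint p[(x, y, z)].
    total = 0.0
    for z in (0, 1):
        pz = sum(p[(x, y, z)] for x in (0, 1) for y in (0, 1))
        pxz = p[(1, 0, z)] + p[(1, 1, z)]
        total += pz * p[(1, 1, z)] / pxz
    return total

def estimate_condition(f, p, eps=1e-8, trials=2000, seed=0):
    # Empirical relative condition number: worst observed ratio of
    # relative output change to relative input perturbation size,
    # over random additive perturbations of the probabilities.
    rng = random.Random(seed)
    base = f(p)
    scale = max(p.values())
    worst = 0.0
    for _ in range(trials):
        delta = {k: eps * rng.uniform(-1.0, 1.0) for k in p}
        q = {k: p[k] + delta[k] for k in p}
        rel_in = max(abs(d) for d in delta.values()) / scale
        rel_out = abs(f(q) - base) / abs(base)
        worst = max(worst, rel_out / rel_in)
    return worst

# Uniform joint over binary X, Y, Z: the expression is well-conditioned.
p_good = {(x, y, z): 0.125 for x in (0, 1) for y in (0, 1) for z in (0, 1)}

# Make (X=1, Z=0) nearly impossible: dividing by a tiny conditional
# probability amplifies perturbations, so the condition number blows up.
p_bad = dict(p_good)
for y in (0, 1):
    p_bad[(1, y, 0)] = 1e-4
s = sum(p_bad.values())
p_bad = {k: v / s for k, v in p_bad.items()}

cond_good = estimate_condition(backdoor_effect, p_good)
cond_bad = estimate_condition(backdoor_effect, p_bad)
print(f"balanced: {cond_good:.1f}, near-deterministic: {cond_bad:.1f}")
```

The two estimates differ by orders of magnitude, illustrating the abstract's point that the numerical stability of a causal identification expression depends heavily on the observational distribution it is evaluated on, not just on its symbolic form.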