Arrow Research search

Author name cluster

Tselil Schramm

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

16 papers
2 author rows

Possible papers

16

STOC Conference 2025 Conference Paper

Discrepancy Algorithms for the Binary Perceptron

  • Shuangping Li
  • Tselil Schramm
  • Kangjie Zhou

The binary perceptron problem asks us to find a sign vector in the intersection of independently chosen random halfspaces with intercept −κ. We analyze the performance of the canonical discrepancy minimization algorithms of Lovett-Meka and Rothvoss/Eldan-Singh for the asymmetric binary perceptron problem. We obtain new algorithmic results in the κ = 0 case and in the large-|κ| case. In the κ → −∞ case, we additionally characterize the storage capacity and complement our algorithmic results with an almost-matching overlap-gap lower bound.

STOC Conference 2024 Conference Paper

Semidefinite Programs Simulate Approximate Message Passing Robustly

  • Misha Ivkov
  • Tselil Schramm

Approximate message passing (AMP) is a family of iterative algorithms that generalize matrix power iteration. AMP algorithms are known to optimally solve many average-case optimization problems. In this paper, we show that a large class of AMP algorithms can be simulated in polynomial time by local statistics hierarchy semidefinite programs (SDPs), even when an unknown principal minor of measure 1/polylog(dimension) is adversarially corrupted. Ours are the first robust guarantees for many of these problems. Further, our results offer an interesting counterpoint to strong lower bounds against less constrained SDP relaxations for average-case max-cut-gain (a.k.a. “optimizing the Sherrington-Kirkpatrick Hamiltonian”) and other problems.
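The abstract above describes AMP as a family of iterative algorithms generalizing matrix power iteration. As a loose illustration of that base case only (plain power iteration, not the paper's AMP algorithms or their SDP simulation; the function name and toy matrix are our own), a minimal sketch:

```python
import numpy as np

def power_iteration(A, iters=200, seed=0):
    """Approximate the top eigenvector of a symmetric matrix A.

    AMP-style iterations have the rough shape x_{t+1} = A f(x_t) minus a
    memory-correction term; plain power iteration is the special case
    where f is the identity and the correction term is dropped.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        x = A @ x               # multiply by the matrix
        x /= np.linalg.norm(x)  # renormalize to unit length
    return x

# Toy example: the top eigenvector of [[2, 1], [1, 2]] is (1, 1)/sqrt(2),
# with eigenvalue 3.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
v = power_iteration(A)
```

The returned vector is determined only up to sign, which is why any check should compare |v·a| rather than v·a against the true eigenvector a.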

NeurIPS Conference 2022 Conference Paper

The Franz-Parisi Criterion and Computational Trade-offs in High Dimensional Statistics

  • Afonso S Bandeira
  • Ahmed El Alaoui
  • Samuel Hopkins
  • Tselil Schramm
  • Alexander S Wein
  • Ilias Zadik

Many high-dimensional statistical inference problems are believed to possess inherent computational hardness. Various frameworks have been proposed to give rigorous evidence for such hardness, including lower bounds against restricted models of computation (such as low-degree functions), as well as methods rooted in statistical physics that are based on free energy landscapes. This paper aims to make a rigorous connection between the seemingly different low-degree and free-energy based approaches. We define a free-energy based criterion for hardness and formally connect it to the well-established notion of low-degree hardness for a broad class of statistical problems, namely all Gaussian additive models and certain models with a sparse planted signal. By leveraging these rigorous connections we are able to: establish that for Gaussian additive models the "algebraic" notion of low-degree hardness implies failure of "geometric" local MCMC algorithms, and provide new low-degree lower bounds for sparse linear regression which seem difficult to prove directly. These results provide both conceptual insights into the connections between different notions of hardness, as well as concrete technical tools such as new methods for proving low-degree lower bounds.

NeurIPS Conference 2021 Conference Paper

Robust Regression Revisited: Acceleration and Improved Estimation Rates

  • Arun Jambulapati
  • Jerry Li
  • Tselil Schramm
  • Kevin Tian

We study fast algorithms for statistical regression problems under the strong contamination model, where the goal is to approximately optimize a generalized linear model (GLM) given adversarially corrupted samples. Prior works in this line of research were based on the robust gradient descent framework of [PrasadSBR20], a first-order method using biased gradient queries, or the Sever framework of [DiakonikolasKK019], an iterative outlier-removal method calling a stationary point finder. We present nearly-linear time algorithms for robust regression problems with improved runtime or estimation guarantees compared to the state-of-the-art. For the general case of smooth GLMs (e.g. logistic regression), we show that the robust gradient descent framework of [PrasadSBR20] can be accelerated, and show our algorithm extends to optimizing the Moreau envelopes of Lipschitz GLMs (e.g. support vector machines), answering several open questions in the literature. For the well-studied case of robust linear regression, we present an alternative approach obtaining improved estimation rates over prior nearly-linear time algorithms. Interestingly, our algorithm starts with an identifiability proof introduced in the context of the sum-of-squares algorithm of [BakshiP21], which achieved optimal error rates while requiring large polynomial runtime and sample complexity. We reinterpret their proof within the Sever framework and obtain a dramatically faster and more sample-efficient algorithm under fewer distributional assumptions.

FOCS Conference 2020 Conference Paper

Subexponential LPs Approximate Max-Cut

  • Samuel B. Hopkins 0001
  • Tselil Schramm
  • Luca Trevisan 0001

We show that for every ε > 0, the degree-n^ε Sherali-Adams linear program (with exp(Õ(n^ε)) variables and constraints) approximates the maximum cut problem within a factor of (1/2 + ε′), for some ε′(ε) > 0. Our result provides a surprising converse to known lower bounds against all linear programming relaxations of Max-Cut [1], [2], and hence resolves the extension complexity of approximate Max-Cut for approximation factors close to 1/2 (up to the function ε′(ε)). Previously, only semidefinite programs and spectral methods were known to yield approximation factors better than 1/2 for Max-Cut in time 2^{o(n)}. We also show that constant-degree Sherali-Adams linear programs (with poly(n) variables and constraints) can solve Max-Cut with approximation factor close to 1 on graphs of small threshold rank: this is the first connection of which we are aware between threshold rank and linear programming-based algorithms. Our results separate the power of Sherali-Adams versus Lovász-Schrijver hierarchies for approximating Max-Cut, since it is known [3] that (1/2 + ε)-approximation of Max-Cut requires Ω_ε(n) rounds in the Lovász-Schrijver hierarchy. We also provide a subexponential time approximation for Khot's Unique Games problem [4]: we show that for every ε > 0 the degree-(n^ε log q) Sherali-Adams linear program distinguishes instances of Unique Games of value ≥ 1 − ε′ from instances of value ≤ ε′, for some ε′(ε) > 0, where q is the alphabet size. Such guarantees are qualitatively similar to those of previous subexponential-time algorithms for Unique Games, but our algorithm does not rely on semidefinite programming or subspace enumeration techniques [5]–[7].

NeurIPS Conference 2019 Conference Paper

(Nearly) Efficient Algorithms for the Graph Matching Problem on Correlated Random Graphs

  • Boaz Barak
  • Chi-Ning Chou
  • Zhixian Lei
  • Tselil Schramm
  • Yueqi Sheng

We consider the graph matching/similarity problem of determining how similar two given graphs G_0, G_1 are and recovering the permutation π on the vertices of G_1 that minimizes the symmetric difference between the edges of G_0 and π(G_1). Graph matching/similarity has applications for pattern matching, vision, social network anonymization, malware analysis, and more. We give the first efficient algorithms proven to succeed in the correlated Erdős-Rényi model (Pedarsani and Grossglauser, 2011). Specifically, we give a polynomial time algorithm for the graph similarity/hypothesis testing task which works for every constant level of correlation between the two graphs that can be arbitrarily close to zero. We also give a quasi-polynomial (n^{O(log n)} time) algorithm for the graph matching task of recovering the permutation minimizing the symmetric difference in this model. This is the first algorithm to do so without requiring as additional input a “seed” of the values of the ground truth permutation on at least n^{Ω(1)} vertices. Our algorithms follow a general framework of counting the occurrences of subgraphs from a particular family of graphs, allowing for tradeoffs between efficiency and accuracy.
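The abstract's closing sentence mentions a framework of counting occurrences of subgraphs. As a toy stand-in for that general idea only (triangle counts as a crude graph statistic; this is not the paper's family of test subgraphs or its algorithm), a short sketch:

```python
import numpy as np

def triangle_count(A):
    """Count triangles in a simple undirected graph given its 0/1 adjacency matrix.

    trace(A^3) counts each triangle six times: three choices of starting
    vertex times two traversal directions.
    """
    return int(round(np.trace(A @ A @ A) / 6))

# Toy example: K4, the complete graph on 4 vertices, contains C(4,3) = 4 triangles.
A = np.ones((4, 4)) - np.eye(4)
print(triangle_count(A))  # → 4
```

Comparing such subgraph counts between two graphs gives a (very weak) similarity statistic; richer families of subgraphs trade computation for discriminative power.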

FOCS Conference 2017 Conference Paper

The Power of Sum-of-Squares for Detecting Hidden Structures

  • Samuel B. Hopkins 0001
  • Pravesh K. Kothari
  • Aaron Potechin
  • Prasad Raghavendra
  • Tselil Schramm
  • David Steurer

We study planted problems (finding hidden structures in random noisy inputs) through the lens of the sum-of-squares semidefinite programming hierarchy (SoS). This family of powerful semidefinite programs has recently yielded many new algorithms for planted problems, often achieving the best known polynomial-time guarantees in terms of accuracy of recovered solutions and robustness to noise. One theme in recent work is the design of spectral algorithms which match the guarantees of SoS algorithms for planted problems. Classical spectral algorithms are often unable to accomplish this: the twist in these new spectral algorithms is the use of spectral structure of matrices whose entries are low-degree polynomials of the input variables. We prove that for a wide class of planted problems, including refuting random constraint satisfaction problems, tensor and sparse PCA, densest-k-subgraph, community detection in stochastic block models, planted clique, and others, eigenvalues of degree-d matrix polynomials are as powerful as SoS semidefinite programs of degree d. For such problems it is therefore always possible to match the guarantees of SoS without solving a large semidefinite program. Using related ideas on SoS algorithms and low-degree matrix polynomials (and inspired by recent work on SoS and the planted clique problem [BHK+16]), we prove a new SoS lower bound for the tensor PCA problem.

STOC Conference 2016 Conference Paper

Fast spectral algorithms from sum-of-squares proofs: tensor decomposition and planted sparse vectors

  • Samuel B. Hopkins 0001
  • Tselil Schramm
  • Jonathan Shi
  • David Steurer

We consider two problems that arise in machine learning applications: the problem of recovering a planted sparse vector in a random linear subspace and the problem of decomposing a random low-rank overcomplete 3-tensor. For both problems, the best known guarantees are based on the sum-of-squares method. We develop new algorithms inspired by analyses of the sum-of-squares method. Our algorithms achieve the same or similar guarantees as sum-of-squares for these problems but the running time is significantly faster. For the planted sparse vector problem, we give an algorithm with running time nearly linear in the input size that approximately recovers a planted sparse vector with up to constant relative sparsity in a random subspace of ℝ^n of dimension up to Ω(√n). These recovery guarantees match the best known ones of Barak, Kelner, and Steurer (STOC 2014) up to logarithmic factors. For tensor decomposition, we give an algorithm with running time close to linear in the input size (with exponent ≈ 1.125) that approximately recovers a component of a random 3-tensor over ℝ^n of rank up to Ω(n^{4/3}). The best previous algorithm for this problem due to Ge and Ma (RANDOM 2015) works up to rank Ω(n^{3/2}) but requires quasipolynomial time.
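For intuition about what "recovering a component of a 3-tensor" means, here is a textbook tensor power-iteration sketch on a noiseless rank-1 example (a standard method, far simpler and weaker than the nearly-linear-time algorithm in the paper; the function names and toy tensor are our own):

```python
import numpy as np

def tensor_power_step(T, x):
    """One tensor power-iteration step: x <- T(I, x, x), then normalize."""
    y = np.einsum('ijk,j,k->i', T, x, x)  # contract the last two modes with x
    return y / np.linalg.norm(y)

def recover_component(T, iters=100, seed=0):
    """Iterate the tensor power step from a random unit starting point."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = tensor_power_step(T, x)
    return x

# Toy example: a rank-1 symmetric 3-tensor a⊗a⊗a built from a unit vector a.
n = 8
rng = np.random.default_rng(1)
a = rng.standard_normal(n)
a /= np.linalg.norm(a)
T = np.einsum('i,j,k->ijk', a, a, a)
x = recover_component(T)  # should align with the planted component a
```

On this rank-1 input, one step already maps any x with a·x ≠ 0 to a, since T(I, x, x) = a·(a·x)² points along a with a positive coefficient; the overcomplete random-tensor setting of the paper is much harder.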

SODA Conference 2016 Conference Paper

On the Integrality Gap of Degree-4 Sum of Squares for Planted Clique

  • Samuel B. Hopkins 0001
  • Pravesh K. Kothari
  • Aaron Potechin
  • Prasad Raghavendra
  • Tselil Schramm

The problem of finding large cliques in random graphs and its “planted” variant, where one wants to recover a clique of size ω ≫ log(n) added to an Erdős-Rényi graph, have been intensely studied. Nevertheless, existing polynomial time algorithms can only recover planted cliques of size roughly √n. By contrast, information theoretically, one can recover planted cliques so long as ω ≫ log(n). In this work, we continue the investigation of algorithms from the sum of squares hierarchy for solving the planted clique problem begun by Meka, Potechin, and Wigderson [MPW15] and Deshpande and Montanari [DM15b]. Our main results improve upon both these previous works by showing: 1. Degree four SoS does not recover the planted clique unless ω ≫ √n (up to polylogarithmic factors), improving upon the bound ω ≫ n^{1/3} due to [DM15b]. 2. For general d, degree-2d SoS does not recover the planted clique unless ω ≫ n^{1/(d+1)}/(2^d polylog n), improving upon the bound due to [MPW15]. Our proof for the second result is based on a fine spectral analysis of the certificate used in the prior works [MPW15, DM15b, FK03], obtained by decomposing it along an appropriately chosen basis. Along the way, we develop combinatorial tools to analyze the spectrum of random matrices with dependent entries and to understand the symmetries in the eigenspaces of the set symmetric matrices, inspired by work of Grigoriev [Gri01a]. An argument of Kelner shows that the first result cannot be proved using the same certificate. Rather, our proof involves constructing and analyzing a new certificate that yields the nearly tight lower bound by “correcting” the certificate of [MPW15, DM15b, FK03].