Arrow Research

Author name cluster

Jonathan Shi

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
1 author row

Possible papers (5)

SODA 2022 · Conference Paper

Cut Sparsification of the Clique Beyond the Ramanujan Bound: A Separation of Cut Versus Spectral Sparsification

  • Antares Chen
  • Jonathan Shi
  • Luca Trevisan 0001

We prove that a random d-regular graph, with high probability, is a cut sparsifier of the clique with approximation error at most (2√(2/π) + o_{n,d}(1))/√d, where 2√(2/π) = 1.595… and o_{n,d}(1) denotes an error term that depends on n and d and goes to zero if we first take the limit n → ∞ and then the limit d → ∞. This is established by analyzing linear-size cuts using techniques of Jagannath and Sen [13] derived from ideas in statistical physics, and analyzing small cuts via martingale inequalities.

We also prove new lower bounds on spectral sparsification of the clique. If G is a spectral sparsifier of the clique and G has average degree d, we prove that the approximation error is at least the “Ramanujan bound” (2 − o_{n,d}(1))/√d, which is met by d-regular Ramanujan graphs, provided that either the weighted adjacency matrix of G is (a multiple of) a doubly stochastic matrix, or that G satisfies a certain high “odd pseudo-girth” property. The first case can be seen as an “Alon-Boppana theorem for symmetric doubly stochastic matrices,” showing that a symmetric doubly stochastic matrix with dn non-zero entries has a non-trivial eigenvalue of magnitude at least (2 − o_{n,d}(1))/√d; the second case generalizes a lower bound of Srivastava and Trevisan [23], which requires a large-girth assumption.

Together, these results imply a separation between spectral sparsification and cut sparsification. If G is a random log n-regular graph on n vertices (this ensures that G, and consequently any d-regular subgraph, has high pseudo-girth), we show that, with high probability, G admits a (weighted subgraph) cut sparsifier of average degree d and approximation error at most (1.595… + o_{n,d}(1))/√d, while every (weighted subgraph) spectral sparsifier of G having average degree d has approximation error at least (2 − o_{n,d}(1))/√d.
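The cut-sparsification guarantee can be probed empirically on small instances. The following is a minimal sketch, not taken from the paper: it rescales a random d-regular graph so that its weighted degrees match those of the clique K_n, samples random cuts, and compares the observed relative cut error against the 1.595…/√d bound. The parameters n, d, and the sampling scheme are arbitrary choices, and random sampling only lower-bounds the true worst-case error.

```python
# Minimal empirical sketch (not from the paper): compare cut values of a
# rescaled random d-regular graph against the clique K_n on random cuts.
# Assumes networkx and numpy; n, d, and trials are arbitrary choices.
import numpy as np
import networkx as nx

n, d, trials = 1000, 25, 200
rng = np.random.default_rng(0)

G = nx.random_regular_graph(d, n, seed=0)
edges = np.array(G.edges())              # vertices of G are 0..n-1
scale = (n - 1) / d                      # weighted degree matches K_n's n - 1

worst = 0.0
for _ in range(trials):
    side = rng.random(n) < rng.uniform(0.05, 0.5)   # random cut S
    k = int(side.sum())
    if k in (0, n):
        continue
    clique_cut = k * (n - k)             # cut value of S in K_n
    g_cut = scale * np.count_nonzero(side[edges[:, 0]] != side[edges[:, 1]])
    worst = max(worst, abs(g_cut - clique_cut) / clique_cut)

print(f"max relative cut error over {trials} sampled cuts: {worst:.4f}")
print(f"paper's bound ~ 1.595/sqrt(d):                     {1.595 / d**0.5:.4f}")
```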

STOC 2016 · Conference Paper

Fast spectral algorithms from sum-of-squares proofs: tensor decomposition and planted sparse vectors

  • Samuel B. Hopkins 0001
  • Tselil Schramm
  • Jonathan Shi
  • David Steurer

We consider two problems that arise in machine learning applications: recovering a planted sparse vector in a random linear subspace, and decomposing a random low-rank overcomplete 3-tensor. For both problems, the best known guarantees are based on the sum-of-squares method. We develop new algorithms inspired by analyses of the sum-of-squares method; our algorithms achieve the same or similar guarantees for these problems, but with significantly faster running times.

For the planted sparse vector problem, we give an algorithm with running time nearly linear in the input size that approximately recovers a planted sparse vector with up to constant relative sparsity in a random subspace of ℝ^n of dimension up to Ω(√n). These recovery guarantees match the best known ones of Barak, Kelner, and Steurer (STOC 2014) up to logarithmic factors.

For tensor decomposition, we give an algorithm with running time close to linear in the input size (with exponent ≈ 1.125) that approximately recovers a component of a random 3-tensor over ℝ^n of rank up to Ω(n^{4/3}). The best previous algorithm for this problem, due to Ge and Ma (RANDOM 2015), works up to rank Ω(n^{3/2}) but requires quasipolynomial time.
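To make the tensor decomposition problem concrete, here is a hedged sketch of the classical Jennrich-style simultaneous-diagonalization baseline, explicitly not the paper's sum-of-squares-inspired algorithm: it handles only the undercomplete case (rank r ≤ n), whereas the paper's contribution is the overcomplete regime in near-linear time. All dimensions below are arbitrary choices.

```python
# Hedged illustration (not the paper's algorithm): classical Jennrich-style
# simultaneous diagonalization decomposing an undercomplete random 3-tensor
# T = sum_i a_i (x) a_i (x) a_i with rank r <= n. Only numpy is assumed.
import numpy as np

rng = np.random.default_rng(1)
n, r = 50, 20
A = rng.normal(size=(n, r))              # components a_1..a_r as columns

# T[k, m, l] = sum_i A[k, i] * A[m, i] * A[l, i]
T = np.einsum('ki,mi,li->kml', A, A, A)

# Contract T along two random directions: M_x = sum_i <a_i, x> a_i a_i^T,
# so M_x pinv(M_y) has the a_i as eigenvectors, with eigenvalues
# <a_i, x> / <a_i, y>.
x, y = rng.normal(size=n), rng.normal(size=n)
Mx = np.einsum('kml,k->ml', T, x)
My = np.einsum('kml,k->ml', T, y)

vals, vecs = np.linalg.eig(Mx @ np.linalg.pinv(My))
est = np.real(vecs[:, np.argsort(-np.abs(vals))[:r]])   # top-r eigenvectors

# Each estimate should align (up to sign and scale) with a true component.
corr = np.abs(est.T @ A) / np.outer(np.linalg.norm(est, axis=0),
                                    np.linalg.norm(A, axis=0))
print(f"worst component correlation: {corr.max(axis=1).min():.3f}")
```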

FOCS 2016 · Conference Paper

Polynomial-Time Tensor Decompositions with Sum-of-Squares

  • Tengyu Ma 0001
  • Jonathan Shi
  • David Steurer

We give new algorithms based on the sum-of-squares method for tensor decomposition. Our results improve the best known running times from quasi-polynomial to polynomial for several problems, including decomposing random overcomplete 3-tensors and learning overcomplete dictionaries with constant relative sparsity. We also give the first robust analysis for decomposing overcomplete 4-tensors in the smoothed analysis model. A key ingredient of our analysis is to establish small spectral gaps in moment matrices derived from solutions to sum-of-squares relaxations. To enable this analysis, we augment sum-of-squares relaxations with spectral analogs of maximum-entropy constraints.
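The “moment matrices” mentioned above can be made concrete with a toy computation. The sketch below is an assumption-laden stand-in, not the paper's relaxation: it builds the simplest degree-4 moment matrix Σ_i (a_i ⊗ a_i)(a_i ⊗ a_i)^T from random overcomplete unit components and prints its spectrum; the dimensions are arbitrary.

```python
# Toy stand-in (not the paper's relaxation): spectrum of the degree-4
# moment matrix M = sum_i (a_i (x) a_i)(a_i (x) a_i)^T for random
# overcomplete unit vectors a_i in R^n, with r > n chosen arbitrarily.
import numpy as np

rng = np.random.default_rng(2)
n, r = 30, 120
A = rng.normal(size=(n, r))
A /= np.linalg.norm(A, axis=0)           # unit-norm components

# Flatten each rank-one matrix a_i a_i^T into a length-n^2 vector.
V = np.stack([np.outer(A[:, i], A[:, i]).ravel() for i in range(r)], axis=1)
M = V @ V.T                              # n^2 x n^2 moment matrix, rank <= r

eigs = np.sort(np.linalg.eigvalsh(M))[::-1]
print("top five eigenvalues:   ", np.round(eigs[:5], 3))
print("eigenvalues r and r + 1:", np.round(eigs[r - 1:r + 1], 3))  # rank drop
```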