Arrow Research search

Author name cluster

Antares Chen

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact name matches; it is not a full identity-disambiguation profile.

2 papers
1 author row

Possible papers


SODA 2022 (Conference Paper)

Cut Sparsification of the Clique Beyond the Ramanujan Bound: A Separation of Cut Versus Spectral Sparsification

  • Antares Chen
  • Jonathan Shi
  • Luca Trevisan 0001

We prove that a random d-regular graph, with high probability, is a cut sparsifier of the clique with approximation error at most (2√(2/π) + o_{n,d}(1))/√d, where 2√(2/π) = 1.595… and o_{n,d}(1) denotes an error term that depends on n and d and goes to zero if we first take the limit n → ∞ and then the limit d → ∞. This is established by analyzing linear-size cuts using techniques of Jagannath and Sen [13] derived from ideas in statistical physics, and by analyzing small cuts via martingale inequalities. We also prove new lower bounds on spectral sparsification of the clique. If G is a spectral sparsifier of the clique and G has average degree d, we prove that the approximation error is at least the "Ramanujan bound" of roughly 2/√d, which is met by d-regular Ramanujan graphs, provided that either the weighted adjacency matrix of G is (a multiple of) a doubly stochastic matrix, or G satisfies a certain high "odd pseudo-girth" property. The first case can be seen as an "Alon-Boppana theorem for symmetric doubly stochastic matrices," showing that a symmetric doubly stochastic matrix with dn non-zero entries has a non-trivial eigenvalue of magnitude at least roughly 2/√d; the second case generalizes a lower bound of Srivastava and Trevisan [23], which requires a large-girth assumption. Together, these results imply a separation between spectral sparsification and cut sparsification: if G is a random log n-regular graph on n vertices (this is to ensure that G, and consequently any d-regular subgraph, has high pseudo-girth), we show that, with high probability, G admits a (weighted subgraph) cut sparsifier of average degree d with approximation error at most (2√(2/π) + o(1))/√d, while every (weighted subgraph) spectral sparsifier of G of average degree d has approximation error at least (2 − o(1))/√d.
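A quick numeric sanity check (not from the paper itself): the constant 2√(2/π) = 1.595… quoted in the abstract is strictly below 2, the leading constant of the Ramanujan bound ≈ 2/√d, which is what leaves room for the claimed cut-versus-spectral separation.

```python
import math

# Leading constant of the cut-sparsification error for a random
# d-regular graph, per the abstract: 2 * sqrt(2/pi) = 1.595...
cut_constant = 2 * math.sqrt(2 / math.pi)

# Leading constant of the "Ramanujan bound" ~ 2/sqrt(d) met by
# d-regular Ramanujan graphs (the spectral lower bound).
ramanujan_constant = 2.0

print(f"2*sqrt(2/pi) = {cut_constant:.4f}")

# The gap between the two constants is what makes the separation possible.
assert cut_constant < ramanujan_constant
```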

SODA 2016 (Conference Paper)

Partial Resampling to Approximate Covering Integer Programs

  • Antares Chen
  • David G. Harris 0001
  • Aravind Srinivasan

We consider positive covering integer programs, which generalize set cover and which have attracted a long line of research developing (randomized) approximation algorithms. Srinivasan (2006) gave a rounding algorithm based on the FKG inequality for systems that are "column-sparse." This algorithm may return an integer solution in which the variables are assigned large (integral) values; Kolliopoulos & Young (2005) modified this algorithm to limit the solution size, at the cost of a worse approximation ratio. We develop a new rounding scheme based on the Partial Resampling variant of the Lovász Local Lemma developed by Harris & Srinivasan (2013). It achieves an approximation ratio expressed in terms of a_min, the minimum covering constraint, and Δ₁, the maximum ℓ₁-norm of any column of the covering matrix (whose entries are scaled to lie in [0, 1]); we also show nearly-matching inapproximability and integrality-gap lower bounds. Our approach improves asymptotically, in several different ways, over known results. First, it replaces Δ₀, the maximum number of nonzeroes in any column (from the result of Srinivasan), by Δ₁, which is always at most Δ₀ and can be much smaller; this is the first such result in this context. Second, our algorithm automatically handles multi-criteria programs; we achieve improved approximation ratios compared to the algorithm of Srinivasan, and give, for the first time when the number of objective functions is large, polynomial-time algorithms with good multi-criteria approximations. We also significantly improve upon the upper bounds of Kolliopoulos & Young when the integer variables are required to be within a (1 + ∊) factor of some given upper bounds, and show nearly-matching inapproximability.
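To make the Δ₀-versus-Δ₁ distinction concrete (an illustrative, hypothetical matrix, not the paper's algorithm): with constraint-matrix entries scaled into [0, 1], a column's ℓ₁-norm is at most its number of nonzeroes, and can be far smaller, which is why bounds in terms of Δ₁ rather than Δ₀ are an asymptotic improvement.

```python
# Hypothetical covering matrix A with entries scaled into [0, 1];
# rows are covering constraints, columns are variables.
A = [
    [1.0, 0.25],
    [0.25, 0.25],
    [0.5, 0.0],
    [0.25, 0.25],
]

ncols = len(A[0])

# Delta_0: maximum number of nonzero entries in any column.
delta0 = max(sum(1 for row in A if row[j] != 0) for j in range(ncols))

# Delta_1: maximum l1-norm of any column.
delta1 = max(sum(abs(row[j]) for row in A) for j in range(ncols))

print(delta0, delta1)  # 4 2.0

# Since entries lie in [0, 1], Delta_1 <= Delta_0 always holds,
# and here it is strictly smaller.
assert delta1 <= delta0
```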