SODA Conference 2025 Conference Paper
Forall-exist statements in pseudopolynomial time
- Eleonore Bach
- Friedrich Eisenbrand
- Thomas Rothvoss
- Robert Weismantel
Author name cluster
Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.
SODA Conference 2025 Conference Paper
STOC Conference 2025 Conference Paper
Matrix concentration inequalities, commonly used in the forms of non-commutative Khintchine inequalities or matrix Chernoff bounds, are central to a wide range of applications in computer science and mathematics. However, they fall short in many applications where tensor versions of these inequalities are needed. In this work, we study the ℓ_p injective norms of sums of independent tensors. We obtain the first non-trivial concentration inequalities in this setting, and our inequalities are nearly tight in certain regimes of p and the order of the tensors. Previously, tensor concentration inequalities were known only in the special cases of rank-1 tensors or p = 2. Our results are obtained via a geometric argument based on estimating the covering numbers for the natural stochastic processes corresponding to tensor injective norms. Our approach is quite general and might be applicable to other settings of matrix and tensor concentration. We discuss applications and connections of our inequalities to various other problems, including tensor principal component analysis, various models of random tensors and matrices, type-2 constants of certain Banach spaces, and locally decodable codes.
STOC Conference 2024 Conference Paper
FOCS Conference 2023 Conference Paper
In a seminal paper, Kannan and Lovász (1988) considered a quantity $\mu_{KL}(\Lambda, K)$ which denotes the best volume-based lower bound on the covering radius $\mu(\Lambda, K)$ of a convex body $K$ with respect to a lattice $\Lambda$. Kannan and Lovász proved that $\mu(\Lambda, K) \leq n \cdot \mu_{KL}(\Lambda, K)$, and the Subspace Flatness Conjecture by Dadush (2012) claims that an $O(\log(2n))$ factor suffices, which would match the lower bound from the work of Kannan and Lovász. We settle this conjecture up to a constant in the exponent by proving that $\mu(\Lambda, K) \leq O(\log^{3}(2n)) \cdot \mu_{KL}(\Lambda, K)$. Our proof is based on the Reverse Minkowski Theorem due to Regev and Stephens-Davidowitz (2017). Following the work of Dadush (2012, 2019), we obtain a $(\log(2n))^{O(n)}$-time randomized algorithm to solve integer programs in $n$ variables. Another implication of our main result is a near-optimal flatness constant of $O(n \log^{3}(2n))$.
FOCS Conference 2023 Conference Paper
The vector balancing constant $\operatorname{vb}(K, Q)$ of two symmetric convex bodies $K, Q$ is the minimum $r \geq 0$ so that any number of vectors from K can be balanced into an r scaling of Q. A question raised by Schechtman is whether for any zonotope $K \subseteq \mathbb{R}^{d}$ one has $\operatorname{vb}(K, K) \lesssim \sqrt{d}$. Intuitively, this asks whether a natural geometric generalization of Spencer’s Theorem (for which $K=B_{\infty}^{d}$) holds. We prove that for any zonotope $K \subseteq \mathbb{R}^{d}$ one has $\operatorname{vb}(K, K) \lesssim \sqrt{d} \log \log \log d$. Our main technical contribution is a tight lower bound on the Gaussian measure of any section of a normalized zonotope, generalizing Vaaler’s Theorem for cubes. We also prove that for two different normalized zonotopes K and Q one has $\operatorname{vb}(K, Q) \lesssim \sqrt{d \log d}$. All the bounds are constructive and the corresponding colorings can be computed in polynomial time.
SODA Conference 2022 Conference Paper
In the problem of scheduling with non-uniform communication delays, the input is a set of jobs with precedence constraints. Associated with every precedence constraint between a pair of jobs is a communication delay, the time duration the scheduler has to wait between the two jobs if they are scheduled on different machines. The objective is to assign the jobs to machines to minimize the makespan of the schedule. Despite being a fundamental problem in theory and a consequential problem in practice, the approximability of scheduling problems with communication delays is not very well understood. One of the top ten open problems in scheduling theory, in the influential list by Schuurman and Woeginger and its latest update by Bansal, asks if the problem admits a constant-factor approximation algorithm. In this paper, we answer this question in the negative by proving a logarithmic hardness for the problem under the standard complexity theory assumption that NP-complete problems do not admit quasi-polynomial-time algorithms. Our hardness result is obtained using a surprisingly simple reduction from a problem that we call Unique Machine Precedence constraints Scheduling (UMPS). We believe that this problem is of central importance in understanding the hardness of many scheduling problems and we conjecture that it is very hard to approximate. Among other things, our conjecture implies a logarithmic hardness of related machine scheduling with precedences, a long-standing open problem in scheduling theory and approximation algorithms.
SODA Conference 2021 Conference Paper
We consider the problem of scheduling jobs with precedence constraints on related machines to minimize the weighted sum of completion times, in the presence of communication delays. In this setting, denoted by Q | prec, c | Σ_j w_j C_j, if two dependent jobs are scheduled on different machines, then at least c units of communication delay time must pass between their executions. Our main result is an O(log^4 n)-approximation algorithm for the problem. As a byproduct of our result, we also obtain an O(log^3 n)-approximation algorithm for the problem of minimizing makespan, Q | prec, c | C_max, which improves upon the O(log^5 n / log log n)-approximation algorithm due to a recent work of Maiti et al. [MRS+20].
SODA Conference 2020 Conference Paper
A well-known problem in scheduling and approximation algorithms is the Santa Claus problem. Suppose that Santa Claus has a set of gifts, and he wants to distribute them among a set of children so that the least happy child is made as happy as possible. Here, the value that a child i has for a present j is of the form p_ij ∈ {0, p_j}. A polynomial time algorithm by Annamalai et al. gives a 12.33-approximation and is based on a modification of Haxell's hypergraph matching argument. In this paper, we introduce a matroid version of the Santa Claus problem. Our algorithm is also based on Haxell's augmenting tree, but with the introduction of the matroid structure we solve a more general problem with cleaner methods. Our result can then be used as a blackbox to obtain a (4 + ε)-approximation for Santa Claus. This factor also compares against a natural, compact LP for Santa Claus.
SODA Conference 2020 Conference Paper
The Matrix Spencer Conjecture asks whether, given n symmetric matrices in ℝ^{n×n} with eigenvalues in [–1, 1], one can always find signs so that their signed sum has singular values bounded by O(√n). The standard approach in discrepancy requires proving that the convex body of all good fractional signings is large enough. However, this question has remained wide open due to the lack of tools to certify measure lower bounds for rather small non-polyhedral convex sets. A seminal result by Batson, Spielman and Srivastava from 2008 shows that any undirected graph admits a linear size spectral sparsifier. Again, one can define a convex body of all good fractional signings. We can indeed prove that this body contains most of the Gaussian measure. This implies that a discrepancy algorithm by the second author can be used to sample a linear size sparsifier. In contrast to previous methods, we require only a logarithmic number of sampling phases.
FOCS Conference 2020 Conference Paper
We consider the classic problem of scheduling jobs with precedence constraints on identical machines to minimize makespan, in the presence of communication delays. In this setting, denoted by P | prec, c | C_max, if two dependent jobs are scheduled on different machines, then at least c units of time must pass between their executions. Despite its relevance to many applications, this model remains one of the most poorly understood in scheduling theory. Even for a special case where an unlimited number of machines is available, the best known approximation ratio is 2/3·(c+1), whereas Graham's greedy list scheduling algorithm already gives a (c+1)-approximation in that setting. An outstanding open problem in the top-10 list by Schuurman and Woeginger and its recent update by Bansal asks whether there exists a constant-factor approximation algorithm. In this work we give a polynomial-time O(log c · log m)-approximation algorithm for this problem, where m is the number of machines and c is the communication delay. Our approach is based on a Sherali-Adams lift of a linear programming relaxation and a randomized clustering of the semimetric space induced by this lift. The full version of this paper is available on arXiv.
SODA Conference 2019 Conference Paper
One of the prominent open problems in combinatorics is the discrepancy of set systems where each element lies in at most t sets. The Beck-Fiala conjecture suggests that the right bound is O(√t), but for three decades the only known bound not depending on the size of the set system has been O(t). Arguably, we currently lack techniques for breaking that barrier. In this paper we introduce discrepancy bounds based on Fourier analysis. We demonstrate our method on random set systems. Suppose one has n elements and m sets, each containing each element independently with probability p. We prove that in the regime of n ≥ Θ(m^2 log m), the discrepancy is at most 1 with high probability. Previously, a result of Ezra and Lovett gave a bound of O(1) under the stricter assumption that n ≫ m^t.
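The "discrepancy at most 1" statement above can be checked empirically on a toy instance. The sketch below is an illustration by brute force, not the paper's Fourier-analytic method; the parameters n, m, p and the random seed are made up.

```python
import random
from itertools import product

# Sample a random set system: m sets over n elements, each element placed
# in each set independently with probability p, then search all 2^n
# colorings chi in {-1, 1}^n for the minimum discrepancy
#   min_chi max_S |sum_{i in S} chi_i|.
rng = random.Random(1)
n, m, p = 12, 3, 0.5
sets = [[i for i in range(n) if rng.random() < p] for _ in range(m)]

best = min(
    max(abs(sum(chi[i] for i in S)) for S in sets)
    for chi in product((-1, 1), repeat=n)
)
print(best)
```

For this instance n = 12 comfortably exceeds m^2 log m, the regime in which the theorem predicts discrepancy at most 1.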
SODA Conference 2017 Conference Paper
STOC Conference 2016 Conference Paper
FOCS Conference 2014 Conference Paper
A classical theorem of Spencer shows that any set system with n sets and n elements admits a coloring of discrepancy O(√n). Recent exciting work of Bansal, Lovett and Meka shows that such colorings can be found in polynomial time. In fact, the Lovett-Meka algorithm finds a half integral point in any "large enough" polytope. However, their algorithm crucially relies on the facet structure and does not apply to general convex sets. We show that for any symmetric convex set K with measure at least e^{-n/500}, the following algorithm finds a point y ∈ K ∩ [-1, 1]^n with Ω(n) coordinates in ±1: (1) take a random Gaussian vector x; (2) compute the point y in K ∩ [-1, 1]^n that is closest to x; (3) return y. This provides another truly constructive proof of Spencer's theorem and the first constructive proof of a theorem of Giannopoulos.
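The three-step algorithm above can be sketched concretely. Purely as an illustrative assumption, K is taken to be a Euclidean ball of radius r; the nearest point of K ∩ [-1, 1]^n to the Gaussian vector is computed with Dykstra's alternating-projection method, a generic way to project onto an intersection of two convex sets (the paper does not prescribe this subroutine).

```python
import math
import random

def project_ball(x, r):
    """Euclidean projection onto the ball of radius r."""
    nrm = math.sqrt(sum(v * v for v in x))
    return [v * r / nrm for v in x] if nrm > r else list(x)

def project_cube(x):
    """Euclidean projection onto the cube [-1, 1]^n."""
    return [max(-1.0, min(1.0, v)) for v in x]

def nearest_point(x, r, iters=500):
    """Dykstra's algorithm for projecting x onto (ball of radius r) ∩ cube."""
    y, p, q = list(x), [0.0] * len(x), [0.0] * len(x)
    for _ in range(iters):
        z = project_ball([yi + pi for yi, pi in zip(y, p)], r)
        p = [yi + pi - zi for yi, pi, zi in zip(y, p, z)]
        y = project_cube([zi + qi for zi, qi in zip(z, q)])
        q = [zi + qi - yi for zi, qi, yi in zip(z, q, y)]
    return y

rng = random.Random(0)
n = 200
r = 0.9 * math.sqrt(n)                          # radius of the illustrative K
x = [rng.gauss(0.0, 1.0) for _ in range(n)]     # step (1): Gaussian vector
y = nearest_point(x, r)                         # step (2): closest point
```

The returned y lies (up to the iterative tolerance) in both K and the cube; the theorem asserts that for sets of large enough Gaussian measure, Ω(n) of its coordinates end up at ±1.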
SODA Conference 2014 Conference Paper
We consider the bin packing problem with d different item sizes s_i and item multiplicities a_i, where all numbers are given in binary encoding. This problem formulation is also known as the 1-dimensional cutting stock problem. In this work, we provide an algorithm which, for constant d, solves bin packing in polynomial time. This was an open problem for all d ≥ 3. In fact, for constant d our algorithm solves the following problem in polynomial time: given two d-dimensional polytopes P and Q, find the smallest number of integer points in P whose sum lies in Q. Our approach also applies to high multiplicity scheduling problems in which the number of copies of each job type is given in binary encoding and each type comes with certain parameters such as release dates, processing times and deadlines. We show that a variety of high multiplicity scheduling problems can be solved in polynomial time if the number of job types is constant.
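To make the problem statement concrete, here is a brute-force solver for a tiny cutting-stock instance: enumerate the bin "patterns" (integer points of the knapsack polytope, i.e., how many items of each size fit in one bin) and search for the fewest patterns covering the demanded multiplicities. This is only an exponential-time illustration of the problem; the paper's algorithm runs in polynomial time for constant d, and the sizes and demands below are made up.

```python
from itertools import product
from functools import lru_cache

sizes = (0.5, 0.3)    # item sizes s_1, s_2 (hypothetical), bin capacity 1
demand = (3, 5)       # multiplicities a_1, a_2 (hypothetical)

# All nonzero patterns (k_1, k_2) with k_1*s_1 + k_2*s_2 <= 1.
max_k = [int(1 // s) for s in sizes]
patterns = [p for p in product(*(range(k + 1) for k in max_k))
            if sum(k * s for k, s in zip(p, sizes)) <= 1 and any(p)]

@lru_cache(maxsize=None)
def min_bins(rem):
    """Fewest bins needed to pack the remaining multiplicities `rem`."""
    if all(r == 0 for r in rem):
        return 0
    best = float("inf")
    for p in patterns:
        used = tuple(min(r, k) for r, k in zip(rem, p))  # a pattern may be underfilled
        if any(used):
            best = min(best, 1 + min_bins(tuple(r - u for r, u in zip(rem, used))))
    return best

print(min_bins(demand))  # → 4
```

Note that the total item volume is 3.0, yet 3 bins do not suffice for this instance: no feasible pattern containing a size-0.3 item fills a bin exactly, so the optimum is 4.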
STOC Conference 2014 Conference Paper
FOCS Conference 2013 Conference Paper
For bin packing, the input consists of n items with sizes between 0 and 1, which have to be assigned to a minimum number of bins of size 1. The seminal Karmarkar-Karp algorithm from '82 produces a solution with at most OPT + O(log^2 OPT) bins. We provide the first improvement in now 3 decades and show that one can find a solution of cost OPT + O(log OPT · log log OPT) in polynomial time. This is achieved by rounding a fractional solution to the Gilmore-Gomory LP relaxation using the Entropy Method from discrepancy theory. The result is constructive via algorithms of Bansal and Lovett-Meka.
STOC Conference 2012 Conference Paper
SODA Conference 2012 Conference Paper
SODA Conference 2011 Conference Paper
SODA Conference 2011 Conference Paper
STOC Conference 2010 Conference Paper
The Steiner tree problem is one of the most fundamental NP-hard problems: given a weighted undirected graph and a subset of terminal nodes, find a minimum-cost tree spanning the terminals. In a sequence of papers, the approximation ratio for this problem was improved from 2 to the current best 1.55 [Robins, Zelikovsky-SIDMA'05]. All these algorithms are purely combinatorial. A long-standing open problem is whether there is an LP-relaxation for Steiner tree with integrality gap smaller than 2 [Vazirani, Rajagopalan-SODA'99]. In this paper we improve the approximation factor for Steiner tree, developing an LP-based approximation algorithm. Our algorithm is based on a seemingly novel iterative randomized rounding technique. We consider a directed-component cut relaxation for the k-restricted Steiner tree problem. We sample one of these components with probability proportional to the value of the associated variable in the optimal fractional solution and contract it. We iterate this process for a proper number of times and finally output the sampled components together with a minimum-cost terminal spanning tree in the remaining graph. Our algorithm delivers a solution of cost at most ln(4) times the cost of an optimal k-restricted Steiner tree. This directly implies a ln(4) + ε < 1.39 approximation for Steiner tree. As a byproduct of our analysis, we show that the integrality gap of our LP is at most 1.55, hence answering the open question mentioned above. This might have consequences for a number of related problems.
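The iterative randomized rounding loop described above can be sketched schematically. This is not the paper's implementation: the components, their LP values, and the function names are made up, and the graph contraction step is only indicated by a comment.

```python
import random

def sample_component(components, lp_value, rng):
    """Pick a component C with probability lp_value[C] / (total LP mass)."""
    total = sum(lp_value[c] for c in components)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for c in components:
        acc += lp_value[c]
        if r <= acc:
            return c
    return components[-1]   # guard against floating-point round-off

def iterative_rounding(components, lp_value, rounds, rng=None):
    """Repeat: sample a component proportionally to its LP value, record it.
    (In the actual algorithm the sampled component is then contracted and
    the LP re-solved on the contracted instance.)"""
    rng = rng or random.Random(0)
    sampled = []
    for _ in range(rounds):
        c = sample_component(components, lp_value, rng)
        sampled.append(c)   # contraction of c would happen here
    return sampled

components = ["C1", "C2", "C3"]             # hypothetical directed components
y = {"C1": 0.5, "C2": 1.5, "C3": 1.0}       # their fractional LP values
picked = iterative_rounding(components, y, rounds=5)
```

The final solution in the paper consists of the sampled components plus a minimum-cost terminal spanning tree on the remaining graph.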
SODA Conference 2010 Conference Paper
In the synchronous periodic task model, a set τ_1, …, τ_n of tasks is given, each releasing jobs of running time c_i with relative deadline d_i at each integer multiple of the period p_i. It is a classical result that Earliest Deadline First (EDF) is an optimal preemptive uniprocessor scheduling policy. For constrained deadlines, i.e., d_i ≤ p_i, the EDF-schedule is feasible if and only if $\sum_{i=1}^{n} \max\left\{0, \left\lfloor \frac{t - d_i}{p_i} \right\rfloor + 1\right\} \cdot c_i \leq t$ for all $t \geq 0$. Though an enormous amount of literature deals with this topic, the complexity status of this test has remained unknown. We prove that testing EDF-schedulability of such a task system is (weakly) coNP-hard. This solves Problem 2 from the survey “Open Problems in Real-time Scheduling” by Baruah & Pruhs. The hardness result is achieved by applying recent results on inapproximability of Diophantine approximation.
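The feasibility condition above is the classical demand-bound-function test, which is straightforward to evaluate pointwise. A minimal sketch, assuming integer task parameters and a caller-supplied check horizon (the task sets below are made up; the coNP-hardness result concerns the complexity of deciding this test, not evaluating the function):

```python
def dbf(tasks, t):
    """Demand bound function: total execution time of jobs that are both
    released and due within [0, t]. Tasks are (c_i, d_i, p_i) triples."""
    return sum(max(0, (t - d) // p + 1) * c for (c, d, p) in tasks)

def edf_feasible(tasks, horizon):
    """Check dbf(t) <= t for all integer t in [1, horizon]. A complete test
    would bound the horizon, e.g. by the hyperperiod of the task set."""
    return all(dbf(tasks, t) <= t for t in range(1, horizon + 1))

feasible_set   = [(1, 2, 4), (2, 5, 6)]   # (c_i, d_i, p_i), hypothetical
infeasible_set = [(3, 3, 4), (2, 4, 4)]   # dbf(4) = 5 > 4
```

For the second task set the test fails at t = 4: both tasks have a deadline by then, demanding 3 + 2 = 5 units of work in only 4 time units.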
SODA Conference 2008 Conference Paper