Arrow Research search

Author name cluster

Karl Bringmann

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

43 papers
2 author rows

Possible papers (43)

FOCS Conference 2023 Conference Paper

Negative-Weight Single-Source Shortest Paths in Near-Linear Time: Now Faster!

  • Karl Bringmann
  • Alejandro Cassis
  • Nick Fischer

In this work we revisit the fundamental Single-Source Shortest Paths (SSSP) problem with possibly negative edge weights. A recent breakthrough result by Bernstein, Nanongkai and Wulff-Nilsen established a near-linear $O\left(m \log ^{8}(n) \log (W)\right)$-time algorithm for negative-weight SSSP, where W is an upper bound on the magnitude of the smallest negative-weight edge. In this work we improve the running time to $O\left(m \log ^{2}(n) \log (n W) \log \log n\right)$, which is an improvement by nearly six log-factors. Some of these log-factors are easy to shave (e.g., replacing the priority queue used in Dijkstra’s algorithm), while others are significantly more involved (e.g., to find negative cycles we design an algorithm reminiscent of noisy binary search and analyze it with drift analysis). As side results, we obtain an algorithm to compute the minimum cycle mean in the same running time as well as a new construction for computing Low-Diameter Decompositions in directed graphs.
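For orientation, here is a minimal Python sketch of the classic O(mn) Bellman-Ford baseline that near-linear negative-weight SSSP algorithms improve on; it is not the algorithm of the paper, and the edge-list input format is an assumption made for illustration.

```python
# Classic O(mn) Bellman-Ford baseline for negative-weight SSSP (not the
# near-linear algorithm of the paper). Input: n vertices 0..n-1 and an
# edge list of (u, v, w) triples, where w may be negative.
def bellman_ford(n, edges, source):
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    # Relax all edges n-1 times; afterwards distances are correct unless a
    # negative cycle is reachable from the source.
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    # One more pass: any further improvement witnesses a reachable negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None, (u, v, w)
    return dist, None

# Example: one negative edge, no negative cycle.
dist, cycle_edge = bellman_ford(4, [(0, 1, 5), (1, 2, -3), (2, 3, 4), (0, 3, 10)], 0)
assert dist == [0, 5, 2, 6] and cycle_edge is None
```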

SODA Conference 2021 Conference Paper

A Fine-Grained Perspective on Approximating Subset Sum and Partition

  • Karl Bringmann
  • Vasileios Nakos

Approximating Subset Sum is a classic and fundamental problem in computer science and mathematical optimization. The state-of-the-art approximation scheme for Subset Sum computes a (1 − ε)-approximation [Gens, Levner '78; Kellerer et al. '97]. In particular, a (1 − 1/n)-approximation can be computed in roughly quadratic time. We establish a connection to Min-Plus-Convolution, a problem that is of particular interest in fine-grained complexity theory and can be solved naively in quadratic time. Our main result is that computing a (1 − 1/n)-approximation for Subset Sum is subquadratically equivalent to Min-Plus-Convolution. Thus, assuming the Min-Plus-Convolution conjecture from fine-grained complexity theory, there is no approximation scheme for Subset Sum with strongly subquadratic dependence on n and 1/ε. In the other direction, our reduction allows us to transfer known lower-order improvements from Min-Plus-Convolution to Subset Sum, which yields a mildly subquadratic randomized approximation scheme. This adds the first approximation problem to the list of problems that are equivalent to Min-Plus-Convolution. For the related Partition problem, an important special case of Subset Sum, the state of the art is a randomized approximation scheme [Mucha et al. '19]. We adapt our reduction from Subset Sum to Min-Plus-Convolution to obtain a related reduction from Partition to Min-Plus-Convolution. This yields an improved approximation scheme for Partition. Our algorithm is the first deterministic approximation scheme for Partition that breaks the quadratic barrier.
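Min-Plus-Convolution, the problem at the heart of the equivalence above, is easy to state; a minimal sketch of the naive quadratic algorithm the abstract alludes to (illustrative Python, not from the paper):

```python
# Naive quadratic Min-Plus-Convolution. The conjecture referenced above
# asserts that this cannot be done in truly subquadratic time.
def min_plus_convolution(a, b):
    """c[k] = min over i + j = k of a[i] + b[j]."""
    n, m = len(a), len(b)
    c = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            if a[i] + b[j] < c[i + j]:
                c[i + j] = a[i] + b[j]
    return c

# Small example: c[2] = min(a[0]+b[2], a[1]+b[1], a[2]+b[0]) = min(5, 4, 6) = 4.
assert min_plus_convolution([0, 3, 6], [0, 1, 5]) == [0, 1, 4, 7, 11]
```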

SODA Conference 2021 Conference Paper

On Near-Linear-Time Algorithms for Dense Subset Sum

  • Karl Bringmann
  • Philip Wellnitz

In the Subset Sum problem we are given a set of n positive integers X and a target t and are asked whether some subset of X sums to t. Natural parameters for this problem that have been studied in the literature are n and t as well as the maximum input number mx_X and the sum of all input numbers Σ_X. In this paper we study the dense case of Subset Sum, where all these parameters are polynomial in n. In this regime, standard pseudo-polynomial algorithms solve Subset Sum in polynomial time n^{O(1)}. Our main question is: When can dense Subset Sum be solved in near-linear time Õ(n)? We provide an essentially complete dichotomy by designing improved algorithms and proving conditional lower bounds, thereby determining essentially all settings of the parameters n, t, mx_X, Σ_X for which dense Subset Sum is in time Õ(n). For notational convenience we assume without loss of generality that t ≥ mx_X (as larger numbers can be ignored) and t ≤ Σ_X/2 (using symmetry). Then our dichotomy reads as follows: • By reviving and improving an additive-combinatorics-based approach by Galil and Margalit [SICOMP'91], we show that Subset Sum is in near-linear time Õ(n) if t ≫ mx_X Σ_X / n². • We prove a matching conditional lower bound: If Subset Sum is in near-linear time for any setting with t ≪ mx_X Σ_X / n², then the Strong Exponential Time Hypothesis and the Strong k-Sum Hypothesis fail. We also generalize our algorithm from sets to multi-sets, albeit with non-matching upper and lower bounds.
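As a purely illustrative aid, the sketch below computes the parameters n, t, mx_X, Σ_X of an instance and compares t against the dichotomy threshold mx_X·Σ_X/n²; the actual dichotomy holds up to polylogarithmic factors, so this plain comparison is only indicative.

```python
# Illustrative only: extract the parameters from the dichotomy above and
# check on which side of the (polylog-blurred) threshold an instance lies.
def dense_subset_sum_regime(X, t):
    n = len(X)
    mx = max(X)
    sigma = sum(X)
    threshold = mx * sigma / n**2
    return {"n": n, "t": t, "mx_X": mx, "Sigma_X": sigma,
            "threshold": threshold,
            "indicative_near_linear_regime": t >= threshold}

print(dense_subset_sum_regime(list(range(1, 101)), t=2000))
```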

NeurIPS Conference 2020 Conference Paper

Impossibility Results for Grammar-Compressed Linear Algebra

  • Amir Abboud
  • Arturs Backurs
  • Karl Bringmann
  • Marvin Künnemann

To handle vast amounts of data, it is natural and popular to compress vectors and matrices. When we compress a vector from size N down to size n ≪ N, it certainly makes it easier to store and transmit efficiently, but does it also make it easier to process? In this paper we consider lossless compression schemes, and ask if we can run our computations on the compressed data as efficiently as if the original data was that small. That is, if an operation has time complexity T(input-size), can we perform it on the compressed representation in time T(n) rather than T(N)? We consider the most basic linear algebra operations: inner product, matrix-vector multiplication, and matrix multiplication. In particular, given two compressed vectors, can we compute their inner product in time O(n)? Or perhaps we must decompress first and then multiply, spending Ω(N) time? The answer depends on the compression scheme. While for simple ones such as Run-Length-Encoding (RLE) the inner product can be done in O(n) time, we prove that this is impossible for compressions from a richer class: essentially n^2 or even larger runtimes are needed in the worst case (under complexity assumptions). This is the class of grammar-compressions containing most popular methods such as the Lempel-Ziv family. These schemes are more compressing than the simple RLE, but alas, we prove that performing computations on them is much harder.
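The abstract notes that for Run-Length-Encoding the inner product can be computed in time linear in the compressed size; a sketch of that merge-style computation, assuming runs are given as (value, length) pairs:

```python
# Inner product of two run-length-encoded vectors in time O(n1 + n2),
# where n1, n2 are the numbers of runs: the easy case the abstract
# contrasts with grammar-compressions. Both encodings must describe
# vectors of the same uncompressed length N.
def rle_inner_product(runs_a, runs_b):
    total = 0
    i = j = 0
    rem_a = runs_a[0][1]  # remaining length of the current run of a
    rem_b = runs_b[0][1]
    while i < len(runs_a) and j < len(runs_b):
        overlap = min(rem_a, rem_b)
        total += runs_a[i][0] * runs_b[j][0] * overlap
        rem_a -= overlap
        rem_b -= overlap
        if rem_a == 0:
            i += 1
            rem_a = runs_a[i][1] if i < len(runs_a) else 0
        if rem_b == 0:
            j += 1
            rem_b = runs_b[j][1] if j < len(runs_b) else 0
    return total

# (2,2,2,1,1) . (3,3,1,1,1) = 6 + 6 + 2 + 1 + 1 = 16
assert rle_inner_product([(2, 3), (1, 2)], [(3, 2), (1, 3)]) == 16
```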

STOC Conference 2020 Conference Paper

Top-k-convolution and the quest for near-linear output-sensitive subset sum

  • Karl Bringmann
  • Vasileios Nakos

In the classical SubsetSum problem we are given a set X and a target t, and the task is to decide whether there exists a subset of X which sums to t. A recent line of research has resulted in O(t · poly(log t))-time algorithms, which are (near-)optimal under popular complexity-theoretic assumptions. On the other hand, the standard dynamic programming algorithm runs in time O(n · |S(X, t)|), where S(X, t) is the set of all subset sums of X that are smaller than t. All previous pseudopolynomial algorithms actually solve a stronger task, since they actually compute the whole set S(X, t). As the aforementioned two running times are incomparable, in this paper we ask whether one can achieve the best of both worlds: running time |S(X, t)| · poly(log t). In particular, we ask whether S(X, t) can be computed in near-linear time in the output-size. Using a diverse toolkit containing techniques such as color coding, sparse recovery, and sumset estimates, we make considerable progress towards this question and design an algorithm running in time |S(X, t)|^{4/3} · poly(log t). Central to our approach is the study of top-k-convolution, a natural problem of independent interest: given degree-d sparse polynomials with non-negative coefficients, compute the lowest k non-zero monomials of their product. We design an algorithm running in time k^{4/3} · poly(log d), by a combination of sparse convolution and sumset estimates considered in Additive Combinatorics. Moreover, we provide evidence that going beyond some of the barriers we have faced requires either an algorithmic breakthrough or possibly new techniques from Additive Combinatorics on how to pass from information on restricted sumsets to information on unrestricted sumsets.
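A minimal sketch of the standard dynamic program mentioned above, which computes the whole set of subset sums up to t and therefore runs in time roughly O(n·|S(X, t)|); this is illustrative and not the paper's output-sensitive algorithm:

```python
# Standard dynamic program: compute the set of all subset sums of X that
# are at most t, in time about O(n * |S(X, t)|).
def all_subset_sums(X, t):
    sums = {0}
    for x in X:
        # Extend every known subset sum by x, discarding sums above t.
        sums |= {s + x for s in sums if s + x <= t}
    return sums

# 5 + 9 and 8 + 9 exceed t = 13 and are dropped.
assert all_subset_sums([3, 5, 9], t=13) == {0, 3, 5, 8, 9, 12}
```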

SODA Conference 2019 Conference Paper

A PTAS for ℓp-Low Rank Approximation

  • Frank Ban
  • Vijay Bhattiprolu
  • Karl Bringmann
  • Pavel Kolev
  • Euiwoong Lee
  • David P. Woodruff

A number of recent works have studied algorithms for entrywise ℓ_p-low rank approximation, namely algorithms which, given an n × d matrix A (with n ≥ d), output a rank-k matrix B minimizing ‖A − B‖_p^p = Σ_{i,j} |A_{i,j} − B_{i,j}|^p when p > 0, and ‖A − B‖_0 = Σ_{i,j} [A_{i,j} ≠ B_{i,j}] for p = 0, where [·] is the Iverson bracket, that is, ‖A − B‖_0 denotes the number of entries (i, j) for which A_{i,j} ≠ B_{i,j}. For p = 1, this is often considered more robust than the SVD, while for p = 0 this corresponds to minimizing the number of disagreements, or robust PCA. This problem is known to be NP-hard for p ∈ {0, 1}, already for k = 1, and while there are polynomial time approximation algorithms, their approximation factor is at best poly(k). It was left open if there was a polynomial-time approximation scheme (PTAS) for ℓ_p-approximation for any p ≥ 0. We show the following: 1. On the algorithmic side, for p ∈ (0, 2), we give the first n^{poly(k/ε)}-time (1 + ε)-approximation algorithm. For p = 0, there are various problem formulations, a common one being the binary setting in which A ∈ {0, 1}^{n × d} and B = U · V, where U ∈ {0, 1}^{n × k} and V ∈ {0, 1}^{k × d}. There are also various notions of multiplication U · V, such as a matrix product over the reals, over a finite field, or over a Boolean semiring. We give the first almost-linear time approximation scheme for what we call the Generalized Binary ℓ_0-Rank-k problem, for which these variants are special cases. Our algorithm computes a (1 + ε)-approximation in time (1/ε)^{2^{O(k)}/ε^2} · nd^{1+o(1)}, where the o(1) hides a factor (log log d)^{1.1}/log d. In addition, for the case of finite fields of constant size, we obtain an alternate PTAS running in time n · d^{poly(k/ε)}. 2. On the hardness front, for p ∈ (1, 2), we show under the Small Set Expansion Hypothesis and the Exponential Time Hypothesis (ETH) that there is no constant factor approximation algorithm running in time 2^{k^δ} for a constant δ > 0, showing that an exponential dependence on k is necessary. For p = 0, we observe that there is no approximation algorithm for the Generalized Binary ℓ_0-Rank-k problem running in time 2^{2^{δk}} for a constant δ > 0. We also show for finite fields of constant size, under the ETH, that any fixed constant factor approximation algorithm requires 2^{k^δ} time for a constant δ > 0.
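To make the two objectives concrete, here is a tiny helper evaluating the entrywise ℓ_p error (p > 0) and the ℓ_0 error for given matrices A and B; it is only a worked instance of the definitions above, not part of any algorithm from the paper.

```python
# Evaluate the objectives defined above for matrices A, B given as lists
# of equal-length rows: the entrywise l_p error for p > 0 and the l_0
# error (number of disagreeing entries) for p = 0.
def entrywise_error(A, B, p):
    if p == 0:
        return sum(a != b for ra, rb in zip(A, B) for a, b in zip(ra, rb))
    return sum(abs(a - b) ** p for ra, rb in zip(A, B) for a, b in zip(ra, rb))

A = [[1, 0], [0, 2]]
B = [[1, 1], [0, 0]]
assert entrywise_error(A, B, 0) == 2   # two entries disagree
assert entrywise_error(A, B, 1) == 3   # |0 - 1| + |2 - 0|
```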

STOC Conference 2019 Conference Paper

Approximating APSP without scaling: equivalence of approximate min-plus and exact min-max

  • Karl Bringmann
  • Marvin Künnemann
  • Karol Wegrzycki

Zwick’s (1+ε)-approximation algorithm for the All Pairs Shortest Path (APSP) problem runs in time Õ(n^ω/ε · log W), where ω ≤ 2.373 is the exponent of matrix multiplication and W denotes the largest weight. This can be used to approximate several graph characteristics including the diameter, radius, median, minimum-weight triangle, and minimum-weight cycle in the same time bound. Since Zwick’s algorithm uses the scaling technique, it has a factor log W in the running time. In this paper, we study whether APSP and related problems admit approximation schemes avoiding the scaling technique. That is, the number of arithmetic operations should be independent of W; this is called strongly polynomial. Our main results are as follows. (1) We design approximation schemes in strongly polynomial time O(n^ω/ε · polylog(n/ε)) for APSP on undirected graphs as well as for the graph characteristics diameter, radius, median, minimum-weight triangle, and minimum-weight cycle on directed or undirected graphs. (2) For APSP on directed graphs we design an approximation scheme in strongly polynomial time O(n^{(ω+3)/2} · ε^{−1} · polylog(n/ε)). This is significantly faster than the best exact algorithm. (3) We explain why our approximation scheme for APSP on directed graphs has a worse exponent than ω: Any improvement over our exponent (ω+3)/2 would improve the best known algorithm for Min-Max Product. In fact, we prove that approximating directed APSP and exactly computing the Min-Max Product are equivalent. Our techniques yield a framework for approximation problems over the (min,+)-semiring that can be applied more generally. In particular, we obtain the first strongly polynomial approximation scheme for Min-Plus Convolution in strongly subquadratic time, and we prove an equivalence of approximate Min-Plus Convolution and exact Min-Max Convolution.
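The Min-Max Product that directed APSP approximation is shown to be equivalent to has an obvious cubic-time algorithm; a minimal sketch for illustration (the equivalence is about beating this naive bound):

```python
# Naive cubic Min-Max product: entry (i, j) of the result is
# min over k of max(A[i][k], B[k][j]).
def min_max_product(A, B):
    n, m, l = len(A), len(B), len(B[0])
    return [[min(max(A[i][k], B[k][j]) for k in range(m)) for j in range(l)]
            for i in range(n)]

A = [[1, 4], [2, 7]]
B = [[5, 3], [1, 6]]
# Entry (0, 0): min(max(1, 5), max(4, 1)) = min(5, 4) = 4.
assert min_max_product(A, B) == [[4, 3], [5, 3]]
```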

SODA Conference 2019 Conference Paper

Few Matches or Almost Periodicity: Faster Pattern Matching with Mismatches in Compressed Texts

  • Karl Bringmann
  • Marvin Künnemann
  • Philip Wellnitz

A fundamental problem on strings in the realm of approximate string matching is pattern matching with mismatches: Given a text t, a pattern p, and a number k, determine whether some substring of t has Hamming distance at most k to p; such a substring is called a k-match. As real-world texts often come in compressed form, we study the case of searching for a small pattern p in a text t that is compressed by a straight-line program. This grammar compression is popular in the string community, since it is mathematically elegant and unifies many practically relevant compression schemes such as the Lempel-Ziv family, dictionary methods, and others. We denote by m the length of p and by n the compressed size of t. While exact pattern matching, that is, the case k = 0, is known to be solvable in near-linear time Õ(n + m) [Jeż TALG'15], despite considerable interest in the string community, the fastest known algorithm for pattern matching with mismatches runs in time that is far from linear even for very small k [Gawrychowski, Straszak ISAAC'13]. In this paper, we obtain an algorithm for pattern matching with mismatches running in time Õ((n + m) · poly(k)). This is near-linear in the input size for any constant (or slightly superconstant) k. We obtain analogous running times for counting and enumerating all k-matches. Our algorithm is based on a new structural insight for approximate pattern matching, essentially showing that either the number of k-matches is very small or both text and pattern must be almost periodic. While intuitive and simple for exact matches, such a characterization is surprising when allowing k mismatches.
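For contrast with the compressed setting, here is a naive O(|t|·|p|) scan that reports all k-matches in an uncompressed text; a baseline sketch, not the algorithm of the paper:

```python
# Naive k-mismatch search: for every alignment, count mismatches and stop
# early once k is exceeded.
def k_matches(text, pattern, k):
    m = len(pattern)
    matches = []
    for i in range(len(text) - m + 1):
        mismatches = 0
        for j in range(m):
            if text[i + j] != pattern[j]:
                mismatches += 1
                if mismatches > k:
                    break
        if mismatches <= k:
            matches.append(i)
    return matches

# "abcabd" vs pattern "abd" with k = 1: position 0 (one mismatch) and 3 (exact).
assert k_matches("abcabd", "abd", 1) == [0, 3]
```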

TCS Journal 2019 Journal Article

Geometric inhomogeneous random graphs

  • Karl Bringmann
  • Ralph Keusch
  • Johannes Lengler

Real-world networks, like social networks or the internet infrastructure, have structural properties such as large clustering coefficients that can best be described in terms of an underlying geometry. This is why the focus of the literature on theoretical models for real-world networks shifted from classic models without geometry, such as Chung–Lu random graphs, to modern geometry-based models, such as hyperbolic random graphs. With this paper we contribute to the theoretical analysis of these modern, more realistic random graph models. Instead of studying directly hyperbolic random graphs, we use a generalization that we call geometric inhomogeneous random graphs (GIRGs). Since we ignore constant factors in the edge probabilities, GIRGs are technically simpler (specifically, we avoid hyperbolic cosines), while preserving the qualitative behavior of hyperbolic random graphs, and we suggest to replace hyperbolic random graphs by this new model in future theoretical studies. We prove the following fundamental structural and algorithmic results on GIRGs. (1) We provide a sampling algorithm that generates a random graph from our model in expected linear time, improving the best-known sampling algorithm for hyperbolic random graphs by a substantial factor O(n). (2) We establish that GIRGs have clustering coefficients in Ω(1), (3) we prove that GIRGs have small separators, i.e., it suffices to delete a sublinear number of edges to break the giant component into two large pieces, and (4) we show how to compress GIRGs using an expected linear number of bits.
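For comparison, a quadratic-time sampler for the geometry-free Chung-Lu model that the abstract contrasts with; this is not the expected-linear-time GIRG sampler of the paper, and the standard Chung-Lu connection rule min(1, w_u·w_v/W) is assumed.

```python
import random

# Quadratic-time sampler for the classic (geometry-free) Chung-Lu model:
# vertices u, v are connected independently with probability
# min(1, w_u * w_v / W), where W is the total weight.
def chung_lu(weights, rng=random):
    n = len(weights)
    W = sum(weights)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < min(1.0, weights[u] * weights[v] / W):
                edges.append((u, v))
    return edges

# Heavy-tailed weights yield a few high-degree hubs and many low-degree nodes.
random.seed(0)
print(len(chung_lu([10, 10, 1, 1, 1, 1, 1, 1])))
```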

STOC Conference 2018 Conference Paper

Fast fencing

  • Mikkel Abrahamsen
  • Anna Adamaszek
  • Karl Bringmann
  • Vincent Cohen-Addad
  • Mehran Mehr
  • Eva Rotenberg
  • Alan Roytman
  • Mikkel Thorup

STOC Conference 2018 Conference Paper

More consequences of falsifying SETH and the orthogonal vectors conjecture

  • Amir Abboud
  • Karl Bringmann
  • Holger Dell
  • Jesper Nederlof

The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states that there is no ε > 0 for which an O(N^{2−ε}) · poly(D)-time algorithm can decide whether there is a pair of orthogonal vectors in a given set of size N that contains D-dimensional binary vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV-conjecture is false, then there is a fixed ε > 0 such that: - For all d and all large enough k, there is a randomized algorithm that takes O(n^{(1−ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. - For all c, the satisfiability of sparse TC^1 circuits on n inputs (that is, circuits with cn wires, depth c·log n, and negation, AND, OR, and threshold gates) can be computed in time O((2−ε)^n).
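The Orthogonal Vectors problem referenced by the conjecture has an obvious O(N²·D) exhaustive search; a minimal sketch, purely to fix the problem statement:

```python
# Exhaustive search for a pair of orthogonal binary vectors in time
# O(N^2 * D), exactly the running time the OV-conjecture asserts cannot
# be improved to O(N^(2 - eps) * poly(D)).
def has_orthogonal_pair(vectors):
    n = len(vectors)
    for i in range(n):
        for j in range(i + 1, n):
            if all(a * b == 0 for a, b in zip(vectors[i], vectors[j])):
                return True
    return False

assert has_orthogonal_pair([[1, 0, 1], [0, 1, 0], [1, 1, 0]]) is True
assert has_orthogonal_pair([[1, 0, 1], [1, 1, 0], [0, 1, 1]]) is False
```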

FOCS Conference 2017 Conference Paper

A Dichotomy for Regular Expression Membership Testing

  • Karl Bringmann
  • Allan Grønlund Jørgensen
  • Kasper Green Larsen

We study regular expression membership testing: Given a regular expression of size m and a string of size n, decide whether the string is in the language described by the regular expression. Its classic O(nm) algorithm is one of the big success stories of the 70s, which allowed pattern matching to develop into the standard tool that it is today. Many special cases of pattern matching have been studied that can be solved faster than in quadratic time. However, a systematic study of tractable cases was made possible only recently, with the first conditional lower bounds reported by Backurs and Indyk [FOCS'16]. Restricted to any “type” of homogeneous regular expressions of depth 2 or 3, they either presented a near-linear time algorithm or a quadratic conditional lower bound, with one exception known as the Word Break problem. In this paper we complete their work as follows: (1) We present two almost-linear time algorithms that generalize all known almost-linear time algorithms for special cases of regular expression membership testing. (2) We classify all types, except for the Word Break problem, into almost-linear time or quadratic time assuming the Strong Exponential Time Hypothesis. This extends the classification from depth 2 and 3 to any constant depth. (3) For the Word Break problem we give an improved Õ(n·m^{1/3} + m) algorithm. Surprisingly, we also prove a matching conditional lower bound for combinatorial algorithms. This establishes Word Break as the only intermediate problem. In total, we prove matching upper and lower bounds for any type of bounded-depth homogeneous regular expressions, which yields a full dichotomy for regular expression membership testing.
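The Word Break problem mentioned above also has a textbook quadratic-time dynamic program; a sketch of that baseline (not the improved Õ(n·m^{1/3} + m) algorithm of the paper):

```python
# Textbook dynamic program for Word Break: reachable[i] is True iff the
# prefix s[:i] can be split into dictionary words.
def word_break(s, dictionary):
    words = set(dictionary)
    longest = max((len(w) for w in words), default=0)
    reachable = [False] * (len(s) + 1)
    reachable[0] = True
    for i in range(1, len(s) + 1):
        # Only the last `longest` characters can form the final word.
        for j in range(max(0, i - longest), i):
            if reachable[j] and s[j:i] in words:
                reachable[i] = True
                break
    return reachable[len(s)]

assert word_break("abcab", ["ab", "c"]) is True
assert word_break("abcb", ["ab", "c"]) is False
```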

SODA Conference 2017 Conference Paper

A Near-Linear Pseudopolynomial Time Algorithm for Subset Sum

  • Karl Bringmann

Given a set Z of n positive integers and a target value t, the Subset Sum problem asks whether any subset of Z sums to t. A textbook pseudopolynomial time algorithm by Bellman from 1957 solves Subset Sum in time O(nt). This has been improved to O(n · max Z) by Pisinger [J. Algorithms'99] and improved further by Koiliaris and Xu [SODA'17]. Here we present a simple and elegant randomized algorithm running in time Õ(n + t). This improves upon a classic algorithm and is likely to be near-optimal, since it matches conditional lower bounds from Set Cover and k-Clique. We then use our new algorithm and additional tricks to improve the best known polynomial space solution from time Õ(n³t) and space Õ(n²) to time Õ(n·t) and space Õ(n log t), assuming the Extended Riemann Hypothesis. Unconditionally, we obtain time Õ(n·t^{1+ε}) and space Õ(n·t^ε) for any constant ε > 0.
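Bellman's O(nt) dynamic program referenced in the first sentences is short enough to state in full; a sketch for illustration (the paper's Õ(n + t) randomized algorithm is substantially different):

```python
# Bellman's textbook O(n * t) dynamic program for Subset Sum:
# reachable[s] says whether some subset of the numbers seen so far sums to s.
def subset_sum(Z, t):
    reachable = [False] * (t + 1)
    reachable[0] = True
    for z in Z:
        # Iterate downwards so that each number is used at most once.
        for s in range(t, z - 1, -1):
            if reachable[s - z]:
                reachable[s] = True
    return reachable[t]

assert subset_sum([3, 34, 4, 12, 5, 2], 9) is True    # 4 + 5
assert subset_sum([3, 34, 4, 12, 5, 2], 30) is False
```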

NeurIPS Conference 2017 Conference Paper

Approximation Algorithms for $\ell_0$-Low Rank Approximation

  • Karl Bringmann
  • Pavel Kolev
  • David Woodruff

We study the $\ell_0$-Low Rank Approximation Problem, where the goal is, given an $m \times n$ matrix $A$, to output a rank-$k$ matrix $A'$ for which $\|A'-A\|_0$ is minimized. Here, for a matrix $B$, $\|B\|_0$ denotes the number of its non-zero entries. This NP-hard variant of low rank approximation is natural for problems with no underlying metric, and its goal is to minimize the number of disagreeing data positions. We provide approximation algorithms which significantly improve the running time and approximation factor of previous work. For $k > 1$, we show how to find, in poly$(mn)$ time for every $k$, a rank $O(k \log(n/k))$ matrix $A'$ for which $\|A'-A\|_0 \leq O(k^2 \log(n/k)) \cdot \mathrm{OPT}$. To the best of our knowledge, this is the first algorithm with provable guarantees for the $\ell_0$-Low Rank Approximation Problem for $k > 1$, even for bicriteria algorithms. For the well-studied case when $k = 1$, we give a $(2+\epsilon)$-approximation in sublinear time, which is impossible for other variants of low rank approximation such as for the Frobenius norm. We strengthen this for the well-studied case of binary matrices to obtain a $(1+O(\psi))$-approximation in sublinear time, where $\psi = \mathrm{OPT}/\mathrm{nnz}(A)$. For small $\psi$, our approximation factor is $1+o(1)$.

FOCS Conference 2017 Conference Paper

Fine-Grained Complexity of Analyzing Compressed Data: Quantifying Improvements over Decompress-and-Solve

  • Amir Abboud
  • Arturs Backurs
  • Karl Bringmann
  • Marvin Künnemann

Can we analyze data without decompressing it? As our data keeps growing, understanding the time complexity of problems on compressed inputs, rather than in convenient uncompressed forms, becomes more and more relevant. Suppose we are given a compression of size n of data that originally has size N, and we want to solve a problem with time complexity T(·). The naive strategy of “decompress-and-solve” gives time T(N), whereas “the gold standard” is time T(n): to analyze the compression as efficiently as if the original data was small. We restrict our attention to data in the form of a string (text, files, genomes, etc.) and study the most ubiquitous tasks. While the challenge might seem to depend heavily on the specific compression scheme, most methods of practical relevance (Lempel-Ziv-family, dictionary methods, and others) can be unified under the elegant notion of Grammar-Compressions. A vast literature, across many disciplines, established this as an influential notion for Algorithm design. We introduce a direly needed framework for proving (conditional) lower bounds in this field, allowing us to assess whether decompress-and-solve can be improved, and by how much. Our main results are: (1) The O(nN√(log N/n)) bound for LCS and the O(min{N log N, nM}) bound for Pattern Matching with Wildcards are optimal up to N^{o(1)} factors, under the Strong Exponential Time Hypothesis. (Here, M denotes the uncompressed length of the compressed pattern.) (2) Decompress-and-solve is essentially optimal for Context-Free Grammar Parsing and RNA Folding, under the k-Clique conjecture. (3) We give an algorithm showing that decompress-and-solve is not optimal for Disjointness.
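As a toy example of working directly on a grammar-compression, the sketch below computes the uncompressed length N of a straight-line program of size n in linear time; the rule format is an assumption made for illustration, not a library API.

```python
# Compute the uncompressed length N of a grammar-compressed string
# (straight-line program) in time linear in the grammar size n.
# Rules map a nonterminal either to a terminal character or to a pair
# of symbols (left, right).
def slp_length(rules, start):
    length = {}
    def visit(symbol):
        if symbol in length:
            return length[symbol]
        rhs = rules[symbol]
        if isinstance(rhs, str):           # terminal rule, e.g. "a"
            length[symbol] = 1
        else:                              # binary rule (left, right)
            length[symbol] = visit(rhs[0]) + visit(rhs[1])
        return length[symbol]
    return visit(start)

# S -> AB, A -> XX, B -> XY, X -> "a", Y -> "b" encodes "aaab" (N = 4, n = 5 rules).
rules = {"S": ("A", "B"), "A": ("X", "X"), "B": ("X", "Y"), "X": "a", "Y": "b"}
assert slp_length(rules, "S") == 4
```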

FOCS Conference 2016 Conference Paper

Truly Sub-cubic Algorithms for Language Edit Distance and RNA-Folding via Fast Bounded-Difference Min-Plus Product

  • Karl Bringmann
  • Fabrizio Grandoni 0001
  • Barna Saha
  • Virginia Vassilevska Williams

It is a major open problem whether the (min, +)-product of two n by n matrices has a truly sub-cubic time algorithm, as it is equivalent to the famous All-Pairs-Shortest-Paths problem (APSP) in n-vertex graphs. There are some restrictions of the (min, +)-product to special types of matrices that admit truly sub-cubic algorithms, each giving rise to a special case of APSP that can be solved faster. In this paper we consider a new, different and powerful restriction in which one matrix can be arbitrary, as long as the other matrix has "bounded differences" in either its columns or rows, i.e., any two consecutive entries differ by only a small amount. We obtain the first truly sub-cubic algorithm for this Bounded Differences (min, +)-product (answering an open problem of Chan and Lewenstein). Our new algorithm, combined with a strengthening of an approach of L. Valiant for solving context-free grammar parsing with matrix multiplication, yields the first truly sub-cubic algorithms for the following problems: Language Edit Distance (a major problem in the parsing community), RNA-folding (a major problem in bioinformatics) and Optimum Stack Generation (answering an open problem of Tarjan).

FOCS Conference 2015 Conference Paper

Quadratic Conditional Lower Bounds for String Problems and Dynamic Time Warping

  • Karl Bringmann
  • Marvin Künnemann

Classic similarity measures of strings are longest common subsequence and Levenshtein distance (i.e., the classic edit distance). A classic similarity measure of curves is dynamic time warping. These measures can be computed by simple O(n²) dynamic programming algorithms, and despite much effort no algorithms with significantly better running time are known. We prove that, even restricted to binary strings or one-dimensional curves, respectively, these measures do not have strongly subquadratic time algorithms, i.e., no algorithms with running time O(n^{2−ε}) for any ε > 0, unless the Strong Exponential Time Hypothesis fails. We generalize the result to edit distance for arbitrary fixed costs of the four operations (deletion in one of the two strings, matching, substitution), by identifying trivial cases that can be solved in constant time, and proving quadratic-time hardness on binary strings for all other cost choices. This improves and generalizes the known hardness result for Levenshtein distance [Backurs, Indyk STOC'15] by the restriction to binary strings and the generalization to arbitrary costs, and adds important problems to a recent line of research showing conditional lower bounds for a growing number of quadratic time problems. As our main technical contribution, we introduce a framework for proving quadratic-time hardness of similarity measures. To apply the framework it suffices to construct a single gadget, which encapsulates all the expressive power necessary to emulate a reduction from satisfiability. Finally, we prove quadratic-time hardness for longest palindromic subsequence and longest tandem subsequence via reductions from longest common subsequence, showing that conditional lower bounds based on the Strong Exponential Time Hypothesis also apply to string problems that are not necessarily similarity measures.
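A sketch of the simple O(n²) dynamic program for (unit-cost) edit distance, one of the measures covered by the lower bound above; this is the textbook algorithm, included only to make the quadratic baseline concrete:

```python
# Textbook O(n^2) dynamic program for edit distance (Levenshtein) with
# unit costs for insertion, deletion and substitution.
def edit_distance(a, b):
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete from a
                           dp[i][j - 1] + 1,         # insert into a
                           dp[i - 1][j - 1] + cost)  # match / substitute
    return dp[n][m]

assert edit_distance("01101", "00110") == 2
```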

FOCS Conference 2014 Conference Paper

Why Walking the Dog Takes Time: Frechet Distance Has No Strongly Subquadratic Algorithms Unless SETH Fails

  • Karl Bringmann

The Fréchet distance is a well-studied and very popular measure of similarity of two curves. Many variants and extensions have been studied since Alt and Godau introduced this measure to computational geometry in 1991. Their original algorithm to compute the Fréchet distance of two polygonal curves with n vertices has a runtime of O(n² log n). More than 20 years later, the state-of-the-art algorithms for most variants still take time more than O(n²/log n), but no matching lower bounds are known, not even under reasonable complexity theoretic assumptions. To obtain a conditional lower bound, in this paper we assume the Strong Exponential Time Hypothesis or, more precisely, that there is no O*((2−δ)^N) algorithm for CNF-SAT for any δ > 0. Under this assumption we show that the Fréchet distance cannot be computed in strongly subquadratic time, i.e., in time O(n^{2−δ}) for any δ > 0. This means that finding faster algorithms for the Fréchet distance is as hard as finding faster CNF-SAT algorithms, and the existence of a strongly subquadratic algorithm can be considered unlikely. Our result holds for both the continuous and the discrete Fréchet distance. We extend the main result in various directions. Based on the same assumption we (1) show non-existence of a strongly subquadratic 1.001-approximation, (2) present tight lower bounds in case the numbers of vertices of the two curves are imbalanced, and (3) examine realistic input assumptions (c-packed curves).
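A sketch of the classic O(n²) dynamic program for the discrete Fréchet distance of two one-dimensional curves (point sequences), i.e., the quadratic baseline that the lower bound above shows is hard to beat:

```python
# O(n^2) dynamic program for the discrete Frechet distance of two
# one-dimensional curves given as point sequences P and Q.
def discrete_frechet(P, Q):
    n, m = len(P), len(Q)
    INF = float("inf")
    dp = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = abs(P[i] - Q[j])            # distance between current points
            if i == 0 and j == 0:
                best = 0
            else:
                best = min(dp[i - 1][j] if i > 0 else INF,
                           dp[i][j - 1] if j > 0 else INF,
                           dp[i - 1][j - 1] if i > 0 and j > 0 else INF)
            dp[i][j] = max(d, best)         # the larger of "here" and "so far"
    return dp[n - 1][m - 1]

assert discrete_frechet([0, 1, 2], [0, 2, 4]) == 2
```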

AIJ Journal 2013 Journal Article

Approximation quality of the hypervolume indicator

  • Karl Bringmann
  • Tobias Friedrich

In order to allow a comparison of (otherwise incomparable) sets, many evolutionary multi-objective optimizers use indicator functions to guide the search and to evaluate the performance of search algorithms. The most widely used indicator is the hypervolume indicator. It measures the volume of the dominated portion of the objective space bounded from below by a reference point. Though the hypervolume indicator is very popular, it has not been shown that maximizing the hypervolume indicator of sets of bounded size is indeed equivalent to the overall objective of finding a good approximation of the Pareto front. To address this question, we compare the optimal approximation ratio with the approximation ratio achieved by two-dimensional sets maximizing the hypervolume indicator. We bound the optimal multiplicative approximation ratio of n points by 1 + Θ(1/n) for arbitrary Pareto fronts. Furthermore, we prove that the same asymptotic approximation ratio is achieved by sets of n points that maximize the hypervolume indicator. However, there is a provable gap between the two approximation ratios which is even exponential in the ratio between the largest and the smallest value of the front. We also examine the additive approximation ratio of the hypervolume indicator in two dimensions and prove that it achieves the optimal additive approximation ratio apart from a small ratio ≤ n/(n − 2), where n is the size of the population. Hence the hypervolume indicator can be used to achieve a good additive but not a good multiplicative approximation of a Pareto front. This motivates the introduction of a “logarithmic hypervolume indicator” which provably achieves a good multiplicative approximation ratio.
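For two objectives the hypervolume indicator itself is easy to compute exactly; a minimal sketch under the assumption of maximization with a reference point, included only to make the indicator concrete:

```python
# Hypervolume indicator of a 2-objective point set under maximization:
# the area dominated by at least one point and bounded below by the
# reference point. (Higher-dimensional hypervolume is much harder; see
# the Klee's-measure entry below.)
def hypervolume_2d(points, reference):
    rx, ry = reference
    # Keep only points above the reference point, sorted by x descending.
    pts = sorted((p for p in points if p[0] > rx and p[1] > ry), reverse=True)
    volume = 0.0
    prev_y = ry
    for x, y in pts:
        if y > prev_y:                      # contributes a new horizontal slab
            volume += (x - rx) * (y - prev_y)
            prev_y = y
    return volume

# Points (3,1), (2,2), (1,3) with reference (0,0): area 3 + 2 + 1 = 6.
assert hypervolume_2d([(3, 1), (2, 2), (1, 3)], (0, 0)) == 6.0
```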

MFCS Conference 2013 Conference Paper

Bringing Order to Special Cases of Klee's Measure Problem

  • Karl Bringmann

Klee’s Measure Problem (KMP) asks for the volume of the union of n axis-aligned boxes in ℝ^d. Omitting logarithmic factors, the best algorithm has runtime $\mathcal{O}^*(n^{d/2})$ [Overmars, Yap'91]. There are faster algorithms known for several special cases: Cube-KMP (where all boxes are cubes), Unitcube-KMP (where all boxes are cubes of equal side length), Hypervolume (where all boxes share a vertex), and k-Grounded (where the projection onto the first k dimensions is a Hypervolume instance). In this paper we bring some order to these special cases by providing reductions among them. In addition to the trivial inclusions, we establish Hypervolume as the easiest of these special cases, and show that the runtimes of Unitcube-KMP and Cube-KMP are polynomially related. More importantly, we show that any algorithm for one of the special cases with runtime T(n, d) implies an algorithm for the general case with runtime T(n, 2d), yielding the first non-trivial relation between KMP and its special cases. This allows us to transfer W[1]-hardness of KMP to all special cases, proving that no n^{o(d)} algorithm exists for any of the special cases assuming the Exponential Time Hypothesis. Furthermore, assuming that there is no improved algorithm for the general case of KMP (no algorithm with runtime $\mathcal{O}(n^{d/2 - \epsilon})$), this reduction shows that there is no algorithm with runtime $\mathcal{O}(n^{\lfloor d/2 \rfloor /2 - \epsilon})$ for any of the special cases. Under the same assumption we show a tight lower bound for a recent algorithm for 2-Grounded [Yıldız, Suri'12].
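For d = 2 the measure of a union of boxes can be computed by a simple sweep; an illustrative sketch (far from the $\mathcal{O}^*(n^{d/2})$ general-dimension algorithm discussed above):

```python
# Klee's measure problem for d = 2: area of the union of n axis-aligned
# boxes via a sweep over x-slabs, roughly O(n^2 log n) time.
def union_area_2d(boxes):
    """boxes = [(x1, y1, x2, y2), ...] with x1 <= x2 and y1 <= y2."""
    xs = sorted({x for b in boxes for x in (b[0], b[2])})
    area = 0.0
    for xl, xr in zip(xs, xs[1:]):
        # Collect the y-intervals of boxes spanning the current x-slab ...
        intervals = sorted((b[1], b[3]) for b in boxes if b[0] <= xl and b[2] >= xr)
        # ... and add the length of their union times the slab width.
        covered, cur_lo, cur_hi = 0.0, None, None
        for lo, hi in intervals:
            if cur_hi is None or lo > cur_hi:
                if cur_hi is not None:
                    covered += cur_hi - cur_lo
                cur_lo, cur_hi = lo, hi
            else:
                cur_hi = max(cur_hi, hi)
        if cur_hi is not None:
            covered += cur_hi - cur_lo
        area += covered * (xr - xl)
    return area

# Two unit squares overlapping in a 1 x 0.5 strip: union area 1 + 1 - 0.5 = 1.5.
assert union_area_2d([(0, 0, 1, 1), (0, 0.5, 1, 1.5)]) == 1.5
```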

MFCS Conference 2013 Conference Paper

Random Shortest Paths: Non-euclidean Instances for Metric Optimization Problems

  • Karl Bringmann
  • Christian Engels
  • Bodo Manthey
  • B. V. Raghavendra Rao

Probabilistic analysis for metric optimization problems has mostly been conducted on random Euclidean instances, but little is known about metric instances drawn from distributions other than the Euclidean. This motivates our study of random metric instances for optimization problems obtained as follows: Every edge of a complete graph gets a weight drawn independently at random. The length of an edge is then the length of a shortest path (with respect to the weights drawn) that connects its two endpoints. We prove structural properties of the random shortest path metrics generated in this way. Our main structural contribution is the construction of a good clustering. Then we apply these findings to analyze the approximation ratios of heuristics for matching, the traveling salesman problem (TSP), and the k-center problem, as well as the running time of the 2-opt heuristic for the TSP. The bounds that we obtain are considerably better than the respective worst-case bounds. This suggests that random shortest path metrics are easy instances, similar to random Euclidean instances, albeit for completely different structural reasons.
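A minimal sketch of how such a random shortest path metric can be generated, with uniform edge weights as a placeholder distribution (the paper's analysis concerns general weight distributions) and Floyd-Warshall used to close the weights under shortest paths:

```python
import random

# Generate a random shortest path metric: draw an independent random
# weight for every edge of the complete graph on n vertices, then take
# the metric to be the resulting shortest-path distances.
def random_shortest_path_metric(n, rng=random):
    dist = [[0.0 if i == j else rng.random() for j in range(n)] for i in range(n)]
    # Make the weight matrix symmetric before taking shortest paths.
    for i in range(n):
        for j in range(i + 1, n):
            dist[j][i] = dist[i][j]
    # Floyd-Warshall closure.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

random.seed(1)
d = random_shortest_path_metric(5)
# The result satisfies the triangle inequality by construction.
assert all(d[i][j] <= d[i][k] + d[k][j] + 1e-12
           for i in range(5) for j in range(5) for k in range(5))
```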

AIJ Journal 2013 Journal Article

Speeding up many-objective optimization by Monte Carlo approximations

  • Karl Bringmann
  • Tobias Friedrich
  • Christian Igel
  • Thomas Voß

Many state-of-the-art evolutionary vector optimization algorithms compute the contributing hypervolume for ranking candidate solutions. However, with an increasing number of objectives, calculating the volumes becomes intractable. Therefore, although hypervolume-based algorithms are often the method of choice for bi-criteria optimization, they are regarded as not suitable for many-objective optimization. Recently, Monte Carlo methods have been derived and analyzed for approximating the contributing hypervolume. Turning theory into practice, we employ these results in the ranking procedure of the multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) as an example of a state-of-the-art method for vector optimization. It is empirically shown that the approximation does not impair the quality of the obtained solutions given a budget of objective function evaluations, while considerably reducing the computation time in the case of multiple objectives. These results are obtained on common benchmark functions as well as on two design optimization tasks. Thus, employing Monte Carlo approximations makes hypervolume-based algorithms applicable to many-objective optimization.

TCS Journal 2012 Journal Article

Approximating the least hypervolume contributor: NP-hard in general, but fast in practice

  • Karl Bringmann
  • Tobias Friedrich

The hypervolume indicator is an increasingly popular set measure to compare the quality of two Pareto sets. The basic ingredient of most hypervolume indicator based optimization algorithms is the calculation of the hypervolume contribution of single solutions regarding a Pareto set. We show that exact calculation of the hypervolume contribution is #P-hard while its approximation is NP-hard. The same holds for the calculation of the minimal contribution. We also prove that it is NP-hard to decide whether a solution has the least hypervolume contribution. Even deciding whether the contribution of a solution is at most (1 + ε) times the minimal contribution is NP-hard. This implies that it is neither possible to efficiently find the least contributing solution (unless P = NP) nor to approximate it (unless NP = BPP). Nevertheless, in the second part of the paper we present a fast approximation algorithm for this problem. We prove that for arbitrarily given ε, δ > 0 it calculates a solution with contribution at most (1 + ε) times the minimal contribution with probability at least (1 − δ). Though it cannot run in polynomial time for all instances, it performs extremely fast on various benchmark datasets. The algorithm solves very large problem instances which are intractable for exact algorithms (e.g., 10,000 solutions in 100 dimensions) within a few seconds.
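The sampling idea behind such approximations can be illustrated in two dimensions: estimate the hypervolume contribution of a point by sampling inside its dominated box and counting samples not dominated by any other point. A hedged sketch of that idea, not the paper's algorithm with its (ε, δ) guarantees:

```python
import random

# Monte Carlo estimate of the hypervolume contribution of one point of a
# 2-objective set (maximization, reference point r): sample uniformly in
# the point's dominated box and count samples dominated by no other point.
def mc_contribution(point, others, reference, samples=100_000, rng=random):
    rx, ry = reference
    px, py = point
    box_area = (px - rx) * (py - ry)
    hits = 0
    for _ in range(samples):
        x = rx + rng.random() * (px - rx)
        y = ry + rng.random() * (py - ry)
        # The sample counts iff no other point dominates it.
        if not any(ox >= x and oy >= y for ox, oy in others):
            hits += 1
    return box_area * hits / samples

random.seed(0)
est = mc_contribution((2, 2), [(3, 1), (1, 3)], (0, 0))
print(est)  # the exact contribution here is (2 - 1) * (2 - 1) = 1
```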

IJCAI Conference 2011 Conference Paper

Approximation-Guided Evolutionary Multi-Objective Optimization

  • Karl Bringmann
  • Tobias Friedrich
  • Frank Neumann
  • Markus Wagner

Multi-objective optimization problems arise frequently in applications but can often only be solved approximately by heuristic approaches. Evolutionary algorithms have been widely used to tackle multi-objective problems. These algorithms use different measures to ensure diversity in the objective space but are not guided by a formal notion of approximation. We present a new framework of an evolutionary algorithm for multi-objective optimization that allows working with a formal notion of approximation. Our experimental results show that our approach outperforms state-of-the-art evolutionary algorithms in terms of the quality of the approximation that is obtained, in particular for problems with many objectives.