Arrow Research search

Author name cluster

Bernhard Haeupler

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

50 papers
2 author rows

Possible papers

50

FOCS Conference 2025 Conference Paper

Parallel (1+ε)-Approximate Multi-Commodity Min-Cost Flow in Almost Optimal Depth and Work

  • Bernhard Haeupler
  • Yonggang Jiang
  • Yaowei Long
  • Thatchaphol Saranurak
  • Shengzhe Wang

We present a parallel algorithm for computing $(1+\epsilon)$-approximate min-cost flow on an undirected graph with $m$ edges, where capacities and costs are assigned to both edges and vertices. Our algorithm achieves $\hat{O}(m)$ work and $\hat{O}(1)$ depth when $\epsilon > 1/\operatorname{polylog}(m)$, making both the work and depth almost optimal, up to a subpolynomial factor. Previous algorithms with $\hat{O}(m)$ work required $\Omega(m)$ depth, even for special cases of min-cost flow with only edge capacities or max flow with vertex capacities. Our result generalizes prior almost-optimal parallel $(1+\epsilon)$-approximation algorithms for these special cases, including shortest paths [1]–[3] and max flow with only edge capacities [4], [5]. Our key technical contribution is the first construction of length-constrained flow shortcuts with $(1+\epsilon)$ length slack, $\hat{O}(1)$ congestion slack, and $\hat{O}(1)$ step bound. This provides a strict generalization of the influential concept of $(\hat{O}(1), \epsilon)$-hopsets [6], allowing for additional control over congestion. Previous length-constrained flow shortcuts [7] incur a large constant in the length slack, which would lead to a large approximation factor. To enable our flow algorithms to work under vertex capacities, we also develop a close-to-linear time algorithm for computing length-constrained vertex expander decomposition. Building on Cohen’s idea of path-count flows [8], we further extend our algorithm to solve $(1+\epsilon)$-approximate $k$-commodity min-cost flow problems with almost-optimal $\hat{O}(mk)$ work and $\hat{O}(1)$ depth, independent of the number of commodities $k$.

FOCS Conference 2024 Conference Paper

Dynamic Deterministic Constant-Approximate Distance Oracles with n^ε Worst-Case Update Time

  • Bernhard Haeupler
  • Yaowei Long
  • Thatchaphol Saranurak

We present a new distance oracle in the fully dynamic setting: given a weighted undirected graph $G = (V, E)$ with $n$ vertices undergoing both edge insertions and deletions, and an arbitrary parameter $\epsilon \in [1/\log^{c} n, 1]$ where $c > 0$ is a small constant, we can deterministically maintain a data structure with $O(n^{\epsilon})$ worst-case update time that, given any pair of vertices $(u, v)$, returns a $2^{\operatorname{poly}(1/\epsilon)}$-approximate distance between $u$ and $v$ in $\operatorname{poly}(1/\epsilon) \log \log n$ query time. Our algorithm significantly advances the state-of-the-art in two aspects, both for fully dynamic algorithms and even decremental algorithms. First, no existing algorithm with worst-case update time guarantees an $o(n)$-approximation while also achieving $n^{2-\Omega(1)}$ update and $n^{o(1)}$ query time, while our algorithm offers a constant $O_{\epsilon}(1)$-approximation with $O(n^{\epsilon})$ update time and $O_{\epsilon}(\log \log n)$ query time. Second, even if amortized update time is allowed, it is the first deterministic constant-approximation algorithm with $n^{1-\Omega(1)}$ update and query time. The best result in this direction is the recent deterministic distance oracle by Chuzhoy and Zhang [STOC 2023], which achieves an approximation of $(\log \log n)^{2^{O(1/\epsilon^{3})}}$ with amortized update time of $O(n^{\epsilon})$ and query time of $2^{\operatorname{poly}(1/\epsilon)} \log n \log \log n$. We obtain the result by dynamizing tools related to length-constrained expanders [Haeupler-Racke-Ghaffari, STOC 2022; Haeupler-Hershkowitz-Tan, FOCS 2024]. Our technique completely bypasses the 40-year-old Even-Shiloach tree, which has remained the most pervasive tool in the area but is inherently amortized.

FOCS Conference 2024 Conference Paper

New Structures and Algorithms for Length-Constrained Expander Decompositions

  • Bernhard Haeupler
  • D. Ellis Hershkowitz
  • Zihan Tan

Expander decompositions form the basis of one of the most flexible paradigms for close-to-linear-time graph algorithms. Length-constrained expander decompositions generalize this paradigm to better work for problems with lengths, distances, and costs. Roughly, an $(h, s)$-length $\phi$-expander decomposition is a small collection of length increases to a graph so that nodes within distance $h$ can route flow over paths of length $hs$ with congestion at most $1/\phi$. In this work, we give a close-to-linear time algorithm for computing length-constrained expander decompositions in graphs with general lengths and capacities. Notably, and unlike previous works, our algorithm allows one to trade off between the size of the decomposition and the length of routing paths: for any $\epsilon > 0$ not too small, our algorithm computes in close-to-linear time an $(h, s)$-length $\phi$-expander decomposition of size $m \cdot \phi \cdot n^{\epsilon}$ where $s = \exp(\operatorname{poly}(1/\epsilon))$. The key foundations of our algorithm are: (1) a simple yet powerful structural theorem which states that the union of a sequence of sparse length-constrained cuts is itself sparse, and (2) new algorithms for efficiently computing sparse length-constrained flows.

FOCS Conference 2024 Conference Paper

Universal Optimality of Dijkstra Via Beyond-Worst-Case Heaps

  • Bernhard Haeupler
  • Richard Hladík
  • Václav Rozhon
  • Robert Endre Tarjan
  • Jakub Tetek

This paper proves that Dijkstra's shortest-path algorithm is universally optimal in both its running time and number of comparisons when combined with a sufficiently efficient heap data structure. Universal optimality is a powerful beyond-worst-case performance guarantee for graph algorithms that informally states that a single algorithm performs as well as possible for every single graph topology. We give the first application of this notion to any sequential algorithm. We design a new heap data structure with a working-set property guaranteeing that the heap takes advantage of locality in heap operations. Our heap matches the optimal (worst-case) bounds of Fibonacci heaps but also provides the beyond-worst-case guarantee that the cost of extracting the minimum element is merely logarithmic in the number of elements inserted after it instead of logarithmic in the number of all elements in the heap. This makes the extraction of recently added elements cheaper. We prove that our working-set property guarantees universal optimality for the problem of ordering vertices by their distance from the source vertex: The sequence of heap operations generated by any run of Dijkstra's algorithm on a fixed graph possesses enough locality that one can couple the number of comparisons performed by any heap with our working-set bound to the minimum number of comparisons required to solve the distance ordering problem on this graph for a worst-case choice of arc lengths.
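To fix ideas about the heap-operation sequence the paper analyses, here is a textbook Dijkstra sketch using Python's binary heap with lazy deletion. This is only a baseline illustrating the insert/extract-min stream on which the paper's working-set property acts; it is emphatically not the beyond-worst-case heap constructed in the paper.

```python
import heapq

def dijkstra(adj, src):
    """Textbook Dijkstra with a binary heap and lazy deletion (a baseline
    sketch of the heap-operation sequence, not the paper's working-set heap).

    adj: {vertex: [(neighbor, weight), ...]} with non-negative weights.
    Returns (distances, extraction order of vertices).
    """
    dist = {src: 0}
    done = set()
    heap = [(0, src)]
    extraction_order = []            # vertices in order of extract-min
    while heap:
        d, v = heapq.heappop(heap)
        if v in done:
            continue                 # stale entry left by an implicit decrease-key
        done.add(v)
        extraction_order.append(v)
        for u, w in adj[v]:
            nd = d + w
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(heap, (nd, u))
    return dist, extraction_order
```

The `extraction_order` produced here is exactly the distance ordering whose comparison complexity the paper studies; the universal-optimality result says a heap with the working-set property makes this sequence of operations as cheap as the graph topology allows.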

STOC Conference 2023 Conference Paper

Maximum Length-Constrained Flows and Disjoint Paths: Distributed, Deterministic, and Fast

  • Bernhard Haeupler
  • D. Ellis Hershkowitz
  • Thatchaphol Saranurak

Computing routing schemes that support both high throughput and low latency is one of the core challenges of network optimization. Such routes can be formalized as $h$-length flows, which are defined as flows whose flow paths have length at most $h$. Many well-studied algorithmic primitives, such as maximal and maximum length-constrained disjoint paths, are special cases of $h$-length flows. Likewise, the optimal $h$-length flow is a fundamental quantity in network optimization, characterizing, up to poly-log factors, how quickly a network can accomplish numerous distributed primitives. In this work, we give the first efficient algorithms for computing $(1-\epsilon)$-approximate $h$-length flows that are nearly “as integral as possible.” We give deterministic algorithms that take $\tilde{O}(\operatorname{poly}(h, 1/\epsilon))$ parallel time and $\tilde{O}(\operatorname{poly}(h, 1/\epsilon) \cdot 2^{O(\sqrt{\log n})})$ distributed CONGEST time. We also give a CONGEST algorithm that succeeds with high probability and only takes $\tilde{O}(\operatorname{poly}(h, 1/\epsilon))$ time. Using our $h$-length flow algorithms, we give the first efficient deterministic CONGEST algorithms for the maximal disjoint paths problem with length constraints, settling an open question of Chang and Saranurak (FOCS 2020), as well as essentially-optimal parallel and distributed approximation algorithms for maximum length-constrained disjoint paths. The former greatly simplifies deterministic CONGEST algorithms for computing expander decompositions. We also use our techniques to give the first efficient and deterministic $(1-\epsilon)$-approximation algorithms for bipartite $b$-matching in CONGEST. Lastly, using our flow algorithms, we give the first algorithms to efficiently compute $h$-length cutmatches, an object at the heart of recent advances in length-constrained expander decompositions.

STOC Conference 2023 Conference Paper

Parallel Breadth-First Search and Exact Shortest Paths and Stronger Notions for Approximate Distances

  • Václav Rozhon
  • Bernhard Haeupler
  • Anders Martinsson
  • Christoph Grunau
  • Goran Zuzic

This paper introduces stronger notions for approximate single-source shortest-path distances and gives simple reductions to compute them from weaker standard notions of approximate distances. Strongly-approximate distances isolate, capture, and address the well-known barriers for using approximate distances algorithmically, and their reductions directly address these barriers in a clean and modular manner. The reductions are model-independent and require only $\log^{O(1)} n$ black-box approximate distance computations. They apply equally to parallel, distributed, and semi-streaming settings. Strongly $(1+\epsilon)$-approximate distances are equivalent to exact distances in a $(1+\epsilon)$-perturbed graph and approximately satisfy the subtractive triangle inequality. In directed graphs, this is sufficient to reduce even exact distance computation to arbitrary $(1+\epsilon)$-approximate ones.

FOCS Conference 2022 Conference Paper

Deterministic Low-Diameter Decompositions for Weighted Graphs and Distributed and Parallel Applications

  • Václav Rozhon
  • Michael Elkin
  • Christoph Grunau
  • Bernhard Haeupler

This paper presents new deterministic and distributed low-diameter decomposition algorithms for weighted graphs. In particular, we show that if one can efficiently compute approximate distances in a parallel or a distributed setting, one can also efficiently compute low-diameter decompositions. This consequently implies solutions to many fundamental distance-based problems using a polylogarithmic number of approximate distance computations. Our low-diameter decomposition generalizes and extends the line of work starting from [RG20] to weighted graphs in a very model-independent manner. Moreover, our clustering results have additional useful properties, including strong-diameter guarantees, separation properties, restricting cluster centers to specified terminals, and more. Applications include: – The first near-linear work and polylogarithmic depth randomized and deterministic parallel algorithm for low-stretch spanning trees (LSST) with polylogarithmic stretch. Previously, the best parallel LSST algorithm required $m \cdot n^{o(1)}$ work and $n^{o(1)}$ depth and was inherently randomized. No deterministic LSST algorithm with truly sub-quadratic work and sub-linear depth was known. – The first near-linear work and polylogarithmic depth deterministic algorithm for computing an $\ell_{1}$-embedding into polylogarithmic-dimensional space with polylogarithmic distortion. The best prior deterministic algorithms for $\ell_{1}$-embeddings either require large polynomial work or are inherently sequential. Even when we apply our techniques to the classical problem of computing a ball-carving with strong-diameter $O(\log^{2} n)$ in an unweighted graph, our new clustering algorithm still leads to an improvement in round complexity from $O(\log^{10} n)$ rounds [CG21] to $O(\log^{4} n)$.

SODA Conference 2022 Conference Paper

Universally-Optimal Distributed Shortest Paths and Transshipment via Graph-Based ℓ1-Oblivious Routing

  • Goran Zuzic
  • Gramoz Goranci
  • Mingquan Ye
  • Bernhard Haeupler
  • Xiaorui Sun

We provide universally-optimal distributed graph algorithms for $(1+\epsilon)$-approximate shortest path problems, including shortest-path-tree and transshipment. The universal optimality of our algorithms guarantees that, on any $n$-node network $G$, our algorithm completes in $T \cdot n^{o(1)}$ rounds whenever a $T$-round algorithm exists for $G$. This includes $D \cdot n^{o(1)}$-round algorithms for any planar or excluded-minor network. Our algorithms never require more than $(\sqrt{n} + D) \cdot n^{o(1)}$ rounds, resulting in the first sub-linear-round distributed algorithm for transshipment. The key technical contribution leading to these results is the first efficient $n^{o(1)}$-competitive linear $\ell_1$-oblivious routing operator that does not require the use of $\ell_1$-embeddings. Our construction is simple, solely based on low-diameter decompositions, and, in contrast to all known constructions, directly produces an oblivious flow instead of just an approximation of the optimal flow cost. This also has the benefit of simplifying the interaction with Sherman's multiplicative weight framework [SODA'17] in the distributed setting and its subsequent rounding procedures.

SODA Conference 2021 Conference Paper

A Time-Optimal Randomized Parallel Algorithm for MIS

  • Mohsen Ghaffari 0001
  • Bernhard Haeupler

We present a randomized parallel algorithm, in the Exclusive-Read Exclusive-Write (EREW) PRAM model, that computes a Maximal Independent Set (MIS) in $O(\log n)$ time and using $O(m \log^2 n)$ work, with high probability. Thus, MIS $\in$ RNC$^1$. This time complexity is optimal, and it improves on the celebrated $O(\log^2 n)$-time algorithms of Luby [STOC'85] and Alon, Babai, and Itai [JALG'86], which had remained the state of the art for the past 35 years.
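For readers who want the round structure this line of work builds on, here is a sequential Python simulation of the classic Luby-style random-priority MIS rounds. It is only a sketch of the standard template, not the paper's $O(\log n)$-time EREW PRAM algorithm; in the parallel model each round's priority draws and winner checks happen simultaneously.

```python
import random

def luby_mis(adj, seed=0):
    """Sequential simulation of random-priority MIS rounds (Luby-style sketch,
    not the paper's EREW PRAM algorithm).

    adj: {vertex: set of neighbor vertices} for an undirected graph.
    """
    rng = random.Random(seed)
    active = set(adj)
    mis = set()
    while active:
        # Each active vertex draws a random priority (done in parallel in PRAM).
        prio = {v: rng.random() for v in active}
        # A vertex joins the MIS if it beats every active neighbor.
        winners = {v for v in active
                   if all(prio[v] < prio[u] for u in adj[v] if u in active)}
        mis |= winners
        # Winners and their neighbors drop out of the graph.
        active -= winners | {u for v in winners for u in adj[v]}
    return mis
```

Each round removes a constant fraction of edges in expectation, which is what yields the classic $O(\log n)$ round bound that the paper's work/time analysis refines.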

SODA Conference 2021 Conference Paper

Efficient Linear and Affine Codes for Correcting Insertions/Deletions

  • Kuan Cheng
  • Venkatesan Guruswami
  • Bernhard Haeupler
  • Xin Li 0006

This paper studies linear and affine error-correcting codes for correcting synchronization errors such as insertions and deletions. We call such codes linear/affine insdel codes. Linear codes that can correct even a single deletion are limited to an information rate of at most 1/2 (achieved by the trivial 2-fold repetition code). Previously it was (erroneously) reported that, more generally, no non-trivial linear codes correcting $k$ deletions exist, i.e., that the $(k+1)$-fold repetition code and its rate of $1/(k+1)$ are basically optimal for any $k$. We disprove this and show the existence of binary linear codes of length $n$ and rate just below 1/2 capable of correcting $\Omega(n)$ insertions and deletions. This identifies rate 1/2 as a sharp threshold for recovery from deletions for linear codes, and reopens the quest for a better understanding of the capabilities of linear codes for correcting insertions/deletions. We prove novel outer bounds and existential inner bounds for the rate vs. (edit) distance trade-off of linear insdel codes. We complement our existential results with an efficient synchronization-string-based transformation that converts any asymptotically-good linear code for Hamming errors into an asymptotically-good linear code for insdel errors. Lastly, we show that the rate-1/2 limitation does not hold for affine codes by giving an explicit affine code of rate $1 - \epsilon$ which can efficiently correct a constant fraction of insdel errors.

FOCS Conference 2020 Conference Paper

Network Coding Gaps for Completion Times of Multiple Unicasts

  • Bernhard Haeupler
  • David Wajc
  • Goran Zuzic

We study network coding gaps for the problem of makespan minimization of multiple unicasts. In this problem, distinct packets at different nodes in a network need to be delivered to a destination specific to each packet, as fast as possible. The network coding gap specifies how much coding packets together in a network can help compared to the more natural approach of routing. While makespan minimization using routing has been intensely studied for the multiple unicasts problem, no bounds on network coding gaps for this problem are known. We develop new techniques which allow us to upper bound the network coding gap for the makespan of $k$ unicasts, proving this gap is at most polylogarithmic in $k$. Complementing this result, we show there exist instances of $k$ unicasts for which this coding gap is polylogarithmic in $k$. Our results also hold for average completion time, and more generally for any $\ell_p$ norm of completion times.
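The "coding packets together" idea can be seen in miniature in the classic two-way relay example (not an instance from this paper): two unicasts crossing at a relay need two relay transmissions under routing, but only one under coding.

```python
def xor_bytes(p, q):
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(p, q))

# Two unicasts through a shared relay R: node A wants to send `a` to B, and
# B wants to send `b` to A. Routing makes R forward two packets; network
# coding lets R broadcast the single packet a XOR b, and each endpoint
# decodes using the packet it already holds.
a, b = b"packetA", b"packetB"
coded = xor_bytes(a, b)            # one relay transmission instead of two
assert xor_bytes(coded, a) == b    # A recovers B's packet
assert xor_bytes(coded, b) == a    # B recovers A's packet
```

The paper's contribution is quantifying how large such savings can get for the makespan of $k$ unicasts in general networks; this toy example only shows why the gap can be strictly greater than 1.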

FOCS Conference 2019 Conference Paper

Optimal Document Exchange and New Codes for Insertions and Deletions

  • Bernhard Haeupler

We give the first communication-optimal document exchange protocol. For any $n$ and $k < n$, our randomized scheme takes any $n$-bit file $F$ and computes a $\Theta(k \log \frac{n}{k})$-bit summary from which one can reconstruct $F$, with high probability, given a related file $F'$ with edit distance $ED(F, F') \le k$. The size of our summary is information-theoretically order-optimal for all values of $k$, giving a randomized solution to a longstanding open question of [Orlitsky; FOCS'91]. It is also the first non-trivial solution for the interesting setting where a small constant fraction of symbols have been edited, producing an optimal summary of size $O(H(\delta) n)$ for $k = \delta n$. This concludes a long series of better-and-better protocols which produce larger summaries for sub-linear values of $k$ and sub-polynomial failure probabilities. In particular, the recent breakthrough of [Belazzougui, Zhang; FOCS'16] assumes that $k < n^{\epsilon}$, produces a summary of size $O(k \log^2 k + k \log n)$, and succeeds with probability $1 - (k \log n)^{-O(1)}$. We also give an efficient derandomized document exchange protocol with summary size $O(k \log^2 \frac{n}{k})$. This improves, for any $k$, over a deterministic document exchange protocol by Belazzougui with summary size $O(k^2 + k \log^2 n)$. Our deterministic document exchange directly provides new efficient systematic error-correcting codes for insertions and deletions. These (binary) codes correct any $\delta$ fraction of adversarial insertions/deletions while having a rate of $1 - O(\delta \log^2 \frac{1}{\delta})$ and improve over the codes of Guruswami and Li and of Haeupler, Shahrasbi, and Vitercik, which have rate $1 - \Theta(\sqrt{\delta} \log^{O(1)} \frac{1}{\delta})$.
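To make the notion of a "summary" concrete, here is a naive rsync-style block-hashing sketch. It is far weaker than the paper's $\Theta(k \log \frac{n}{k})$-bit scheme: it only handles substitutions (a single insertion or deletion shifts every later block and defeats it, which is exactly the difficulty the paper overcomes), and it needs a second round to fetch mismatched blocks.

```python
import hashlib

def summary(F, block):
    """Short hashes of fixed-size blocks of F (a naive baseline summary,
    not the paper's sketch)."""
    return [hashlib.sha256(F[i:i + block]).digest()[:4]
            for i in range(0, len(F), block)]

def reconstruct(Fp, sketch, block, fetch):
    """Receiver keeps blocks of its related file F' whose hashes match the
    summary and asks `fetch(j)` (a second communication round, hypothetical
    here) for each mismatched block index j."""
    out = bytearray()
    for j, hsh in enumerate(sketch):
        blk = Fp[j * block:(j + 1) * block]
        if hashlib.sha256(blk).digest()[:4] == hsh:
            out += blk              # local block already correct
        else:
            out += fetch(j)         # fetch the corrupted block
    return bytes(out)
```

With $k$ scattered substitutions this already gives a summary of roughly $(n/\text{block})$ hashes plus $k$ fetched blocks; the paper's protocol instead achieves the optimal one-round $\Theta(k \log \frac{n}{k})$ bits even under insertions and deletions.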

STOC Conference 2018 Conference Paper

Explicit binary tree codes with polylogarithmic size alphabet

  • Gil Cohen
  • Bernhard Haeupler
  • Leonard J. Schulman

This paper makes progress on the problem of explicitly constructing a binary tree code with constant distance and constant alphabet size. We give an explicit binary tree code with constant distance and alphabet size $\operatorname{poly}(\log n)$, where $n$ is the depth of the tree. This is the first improvement over a two-decade-old construction that has an exponentially larger alphabet of size $\operatorname{poly}(n)$. At the core of our construction is the first explicit tree code with constant rate and constant distance, though with non-constant arity, a result of independent interest. This construction adapts the polynomial interpolation framework to the online setting.

STOC Conference 2018 Conference Paper

Synchronization strings: explicit constructions, local decoding, and applications

  • Bernhard Haeupler
  • Amirbehshad Shahrasbi

This paper gives new results for synchronization strings, a powerful combinatorial object introduced by [Haeupler, Shahrasbi; STOC’17] that allows one to efficiently deal with insertions and deletions in various communication problems: - We give a deterministic, linear-time synchronization string construction, improving over an $O(n^5)$-time randomized construction. Independently of this work, a deterministic $O(n \log^2 \log n)$-time construction was proposed by Cheng, Li, and Wu. - We give a deterministic construction of an infinite synchronization string which outputs the first $n$ symbols in $O(n)$ time. Previously it was not known whether such a string was computable. - Both synchronization string constructions are highly explicit, i.e., the $i$-th symbol can be deterministically computed in $O(\log i)$ time. - This paper also introduces a generalized notion we call long-distance synchronization strings. Such strings allow for local and very fast decoding. In particular, only $O(\log^3 n)$ time and access to logarithmically many symbols is required to decode any index. The paper also provides several applications for these improved synchronization strings: - For any $\delta > 0$ we provide an insdel error-correcting block code with rate $1 - \delta - \epsilon$ which can correct any $\delta/3$ fraction of insertion and deletion errors in $O(n \log^3 n)$ time. This near-linear computational efficiency is surprising given that we do not even know how to compute the (edit) distance between the decoding input and output in sub-quadratic time. - We show that local decodability implies that error-correcting codes constructed with long-distance synchronization strings can not only efficiently recover from a $\delta$ fraction of insdel errors but, similar to [Schulman, Zuckerman; TransInf’99], also from any $O(\delta / \log n)$ fraction of block transpositions and block replications. These block corruptions allow arbitrarily long substrings to be swapped or replicated anywhere.
- We show that high explicitness and local decoding allow for infinite channel simulations with exponentially smaller memory and decoding time requirements. These simulations can then be used to give the first near-linear-time interactive coding scheme for insdel errors, similar to the result of [Brakerski, Naor; SODA’13] for Hamming errors.

SODA Conference 2017 Conference Paper

Bridging the Capacity Gap Between Interactive and One-Way Communication

  • Bernhard Haeupler
  • Ameya Velingker

We study the communication rate of coding schemes for interactive communication that transform any two-party interactive protocol into a protocol that is robust to noise. Recently, Haeupler [11] showed that if an $\epsilon > 0$ fraction of transmissions are corrupted, adversarially or randomly, then it is possible to achieve a communication rate of $1 - O(\sqrt{\epsilon \log \log (1/\epsilon)})$. Furthermore, Haeupler conjectured that this rate is optimal for general input protocols. This stands in contrast to the classical setting of one-way communication, in which error-correcting codes are known to achieve an optimal communication rate of $1 - \Theta(H(\epsilon))$. In this work, we show that the quadratically smaller rate loss of the one-way setting can also be achieved in interactive coding schemes for a very natural class of input protocols. We introduce the notion of average message length, or the average number of bits a party sends before receiving a reply, as a natural parameter for measuring the level of interactivity in a protocol. Moreover, we show that any protocol with average message length $\ell = \Omega(\operatorname{poly}(1/\epsilon))$ can be simulated by a protocol with optimal communication rate $1 - \Theta(H(\epsilon))$ over an oblivious adversarial channel with error fraction $\epsilon$. Furthermore, under the additional assumption of access to public shared randomness, the optimal communication rate is achieved ratelessly, i.e., the communication rate adapts automatically to the actual error rate $\epsilon$ without having to specify it in advance. This shows that the capacity gap between one-way and interactive communication can be bridged even for very small (constant in $\epsilon$) average message lengths, which are likely to be found in many applications.

SODA Conference 2017 Conference Paper

Parallel algorithms and concentration bounds for the Lovász Local Lemma via witness-DAGs

  • Bernhard Haeupler
  • David G. Harris 0001

The Lovász Local Lemma (LLL) is a cornerstone principle in the probabilistic method of combinatorics, and a seminal algorithm of Moser & Tardos (2010) provides an efficient randomized algorithm to implement it. This algorithm can be parallelized to give an algorithm that uses polynomially many processors and runs in $O(\log^3 n)$ time, stemming from $O(\log n)$ adaptive computations of a maximal independent set (MIS). Chung et al. (2014) developed faster local and parallel algorithms, potentially running in time $O(\log^2 n)$, but these algorithms work under significantly more stringent conditions than the LLL. We give a new parallel algorithm that works under essentially the same conditions as the original algorithm of Moser & Tardos but uses only a single MIS computation, thus running in $O(\log^2 n)$ time. This conceptually new algorithm also gives a clean combinatorial description of a satisfying assignment which might be of independent interest. Our techniques extend to the deterministic LLL algorithm given by Chandrasekaran et al. (2013), leading to an NC algorithm running in time $O(\log^2 n)$ as well. We also provide improved bounds on the runtimes of the sequential and parallel resampling-based algorithms originally developed by Moser & Tardos. Our bounds extend to any problem instance in which the tighter Shearer LLL criterion is satisfied. We also improve on the analysis of Kolipaka & Szegedy (2011) to give tighter concentration results.
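The sequential Moser-Tardos resampling loop that this line of work parallelizes is short enough to show in full, instantiated for $k$-SAT (a standard LLL application; the sketch below does not check the LLL condition itself and simply caps the number of resamples).

```python
import random

def moser_tardos_ksat(n_vars, clauses, seed=0, max_resamples=10**6):
    """Moser-Tardos resampling for k-SAT (sequential sketch, not the paper's
    parallel algorithm). Clauses are tuples of signed 1-indexed literals,
    e.g. (1, -3) means (x1 OR NOT x3)."""
    rng = random.Random(seed)
    x = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # 1-indexed values

    def violated(c):
        # A clause is a "bad event": it fails iff every literal is false.
        return all(x[abs(l)] != (l > 0) for l in c)

    for _ in range(max_resamples):
        bad = next((c for c in clauses if violated(c)), None)
        if bad is None:
            return x[1:]                       # satisfying assignment found
        for l in bad:                          # resample only the bad event's
            x[abs(l)] = rng.random() < 0.5     # own variables
    raise RuntimeError("resample budget exceeded")
```

Under the LLL condition the expected number of resamples is linear; the paper's contribution is resampling many independent bad events per round so that only $O(\log^2 n)$ parallel time, via a single MIS computation, is needed.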

STOC Conference 2017 Conference Paper

Synchronization strings: codes for insertions and deletions approaching the Singleton bound

  • Bernhard Haeupler
  • Amirbehshad Shahrasbi

We introduce synchronization strings, which provide a novel way of efficiently dealing with synchronization errors, i.e., insertions and deletions. Synchronization errors are strictly more general and much harder to deal with than the more commonly considered half-errors, i.e., symbol corruptions and erasures. For every $\epsilon > 0$, synchronization strings allow one to index a sequence with an $\epsilon^{-O(1)}$-size alphabet such that one can efficiently transform $k$ synchronization errors into $(1 + \epsilon)k$ half-errors. This powerful new technique has many applications. In this paper we focus on designing insdel codes, i.e., error-correcting block codes (ECCs) for insertion-deletion channels. While ECCs for both half-errors and synchronization errors have been intensely studied, the latter has largely resisted progress. As Mitzenmacher puts it in his 2009 survey: "Channels with synchronization errors ... are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels with erasures and errors ... our lack of understanding about channels with synchronization errors is truly remarkable." Indeed, it took until 1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size to be constructed, and only since 2016 are there constructions of constant-rate insdel codes for asymptotically large noise rates. Even in the asymptotically large or small noise regime these codes are polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes up to this work equivalent to what was known for regular ECCs after Forney introduced concatenated codes in his doctoral thesis 50 years ago. A straightforward application of our synchronization-string-based indexing method gives a simple black-box construction which transforms any ECC into an equally efficient insdel code with only a small increase in the alphabet size.
This instantly transfers much of the highly developed understanding for regular ECCs over large constant alphabets into the realm of insdel codes. Most notably, for the complete noise spectrum we obtain efficient "near-MDS" insdel codes which get arbitrarily close to the optimal rate-distance tradeoff given by the Singleton bound. In particular, for any $\delta \in (0,1)$ and $\epsilon > 0$ we give insdel codes achieving a rate of $1 - \delta - \epsilon$ over a constant-size alphabet that efficiently correct a $\delta$ fraction of insertions or deletions.

SODA Conference 2016 Conference Paper

Distributed Algorithms for Planar Networks II: Low-Congestion Shortcuts, MST, and Min-Cut

  • Mohsen Ghaffari 0001
  • Bernhard Haeupler

This paper introduces the concept of low-congestion shortcuts for (near-)planar networks, and demonstrates their power by using them to obtain near-optimal distributed algorithms for problems such as Minimum Spanning Tree (MST) or Minimum Cut in planar networks. Consider a graph $G = (V, E)$ and a partitioning of $V$ into subsets of nodes $S_1, \dots, S_N$, each inducing a connected subgraph $G[S_i]$. We define an $\alpha$-congestion shortcut with dilation $\beta$ to be a set of subgraphs $H_1, \dots, H_N \subseteq G$, one for each subset $S_i$, such that: 1. For each $i \in [1, N]$, the diameter of the subgraph $G[S_i] + H_i$ is at most $\beta$. 2. For each edge $e \in E$, the number of subgraphs $G[S_i] + H_i$ containing $e$ is at most $\alpha$. We prove that any partition of a $D$-diameter planar graph into individually-connected parts admits an $O(D \log D)$-congestion shortcut with dilation $O(D \log D)$, and we also present a distributed construction of it in $\tilde{O}(D)$ rounds. We moreover prove these parameters to be near-optimal; i.e., there are instances in which congestion or dilation $\tilde{\Omega}(D)$ is unavoidable. Finally, we use low-congestion shortcuts, and their efficient distributed construction, to derive $\tilde{O}(D)$-round distributed algorithms for MST and Min-Cut in planar networks. This complexity nearly matches the trivial lower bound of $\Omega(D)$. We remark that this is the first result bypassing the well-known existential lower bound of general graphs (see Peleg and Rubinovich [FOCS'99]; Elkin [STOC'04]; and Das Sarma et al. [STOC'11]) in a family of graphs of interest.

SODA Conference 2016 Conference Paper

Towards Optimal Deterministic Coding for Interactive Communication

  • Ran Gelles
  • Bernhard Haeupler
  • Gillat Kol
  • Noga Ron-Zewi
  • Avi Wigderson

We study efficient, deterministic interactive coding schemes that simulate any interactive protocol both under random and adversarial errors, and can achieve a constant communication rate independent of the protocol length. For channels that flip bits independently with probability $\epsilon < 1/2$, our coding scheme achieves a communication rate of $1 - O(\sqrt{H(\epsilon)})$ and a failure probability of $\exp(-n/\log n)$ in length-$n$ protocols. Prior to our work, all nontrivial deterministic schemes (either efficient or not) had a rate bounded away from 1. Furthermore, the best failure probability achievable by an efficient deterministic coding scheme with constant rate was only quasi-polynomial, i.e., of the form $\exp(-\log^{O(1)} n)$ (Braverman, ITCS 2012). For channels in which an adversary controls the noise pattern, our coding scheme can tolerate an $\Omega(1/\log n)$ fraction of errors with rate approaching 1. Once more, all previously known nontrivial deterministic schemes (either efficient or not) in the adversarial setting had a rate bounded away from 1, and no nontrivial efficient deterministic coding schemes were known with any constant rate. Essential to both results is an explicit, efficiently encodable and decodable systematic tree code of length $n$ that has relative distance $\Omega(1/\log n)$ and rate approaching 1, defined over an $O(\log n)$-bit alphabet. No nontrivial tree code (either efficient or not) was known to approach rate 1, and no nontrivial distance bound was known for any efficient constant-rate tree code. The fact that our tree code is systematic turns out to play an important role in obtaining our rate in the random error model, and in approaching rate 1 in the adversarial error model.

SODA Conference 2015 Conference Paper

Capacity of Interactive Communication over Erasure Channels and Channels with Feedback

  • Ran Gelles
  • Bernhard Haeupler

We consider interactive communication performed over two simple types of noisy channels: binary error channels with noiseless feedback and binary erasure channels. In both cases, the noise model is adversarial. Assuming at most an $\epsilon$ fraction of the bits can be corrupted, we show coding schemes that simulate any alternating interactive protocol with rate $1 - \Theta(H(\epsilon))$. All our simulations are simple, randomized, and computationally efficient. The rates of our coding schemes stand in contrast to the interactive communication rates supported by random or adversarial error channels without feedback, for which the best known coding schemes achieve rates of $1 - O(\sqrt{\epsilon})$ and $1 - O(\sqrt{\epsilon \log \log (1/\epsilon)})$, respectively. As these rates are conjectured to be optimal, our result implies a large asymptotic gap between interactive communication rates over noisy channels with and without feedback. Such a gap has no equivalent in the standard one-way communication setting.

SODA Conference 2014 Conference Paper

Broadcast Throughput in Radio Networks: Routing vs. Network Coding

  • Noga Alon
  • Mohsen Ghaffari 0001
  • Bernhard Haeupler
  • Majid Khabbazian

The broadcast throughput in a network is defined as the average number of messages that can be transmitted per unit time from a given source to all other nodes when time goes to infinity. Classical broadcast algorithms treat messages as atomic tokens and route them from the source to the receivers by making intermediate nodes store and forward messages. The more recent network coding approach, in contrast, prompts intermediate nodes to mix and code together messages. It has been shown that certain wired networks have an asymptotic network coding gap, that is, they have asymptotically higher broadcast throughput when using network coding compared to routing. Whether such a gap exists for wireless networks has been an open question of great interest. We approach this question by studying the broadcast throughput of the radio network model, which has been a standard mathematical model to study wireless communication. We show that there is a family of radio networks with a tight Θ(log log n) network coding gap, that is, networks in which the asymptotic throughput achievable via routing messages is a Θ(log log n) factor smaller than that of the optimal network coding algorithm. We also provide new tight upper and lower bounds showing that the asymptotic worst-case broadcast throughput over all networks with n nodes is messages-per-round for both routing and network coding.
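The routing-versus-coding distinction above is classically illustrated by the "butterfly" network (a standard textbook example, not one of this paper's radio networks): one use of a bottleneck link can carry the XOR of two messages, and each sink recovers the message it is missing by XOR-ing with what it already heard directly, whereas store-and-forward routing would need two uses of the bottleneck.

```python
# Hedged, classical illustration of coding at intermediate nodes:
# two messages a and b must both reach two sinks through a shared
# bottleneck.  The middle node forwards a XOR b; each sink combines
# the coded symbol with the message it received directly.

def butterfly_broadcast(a, b):
    coded = a ^ b                 # middle node mixes the two messages
    # Sink 1 hears `a` directly plus the coded symbol; sink 2 hears `b`.
    at_sink1 = (a, coded)
    at_sink2 = (b, coded)
    decoded1 = (at_sink1[0], at_sink1[0] ^ at_sink1[1])   # (a, a^(a^b)) = (a, b)
    decoded2 = (at_sink2[0] ^ at_sink2[1], at_sink2[0])   # ((a^b)^b, b) = (a, b)
    return decoded1, decoded2

d1, d2 = butterfly_broadcast(0b1011, 0b0110)
assert d1 == d2 == (0b1011, 0b0110)   # both sinks recover both messages
```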

FOCS Conference 2014 Conference Paper

Interactive Channel Capacity Revisited

  • Bernhard Haeupler

We provide the first capacity-approaching coding schemes that robustly simulate any interactive protocol over an adversarial channel that corrupts any ε fraction of the transmitted symbols. Our coding schemes achieve a communication rate of 1 − O(√(ε log log 1/ε)) over any adversarial channel. This can be improved to 1 − O(√ε) for random, oblivious, and computationally bounded channels, or if the parties have shared randomness unknown to the channel. Surprisingly, these rates exceed the 1 − Ω(√H(ε)) = 1 − Ω(√(ε log 1/ε)) interactive channel capacity bound which [Kol and Raz; STOC'13] recently proved for random errors. We conjecture 1 − Θ(√(ε log log 1/ε)) and 1 − Θ(√ε) to be the optimal rates for their respective settings and therefore to capture the interactive channel capacity for random and adversarial errors. In addition to being very communication efficient, our randomized coding schemes have multiple other advantages. They are computationally efficient, extremely natural, and significantly simpler than prior (non-capacity-approaching) schemes. In particular, our protocols do not employ any coding but allow the original protocol to be performed as-is, interspersed only by short exchanges of hash values. When hash values do not match, the parties backtrack. Our approach is, as we feel, by far the simplest and most natural explanation for why and how robust interactive communication in a noisy environment is possible.
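The "run the protocol as-is, exchange short hashes, backtrack on mismatch" idea can be sketched with a small simulation. This is a hedged toy, not the paper's scheme: the block size, the 4-byte hash truncation, and the single-corruption adversary model are illustrative assumptions, and for simplicity both parties rewind a full block whenever the hashes disagree.

```python
import hashlib

# Toy simulation of hash-and-backtrack interactive coding: the parties
# run blocks of the original protocol uncoded, then compare short hashes
# of their transcripts; on a mismatch, both rewind the last block and
# re-execute it.

def h(transcript):
    """Short hash of a 0/1 transcript (4-byte SHA-256 prefix)."""
    return hashlib.sha256(bytes(transcript)).digest()[:4]

def simulate(protocol, corrupt_time, block=4):
    alice, bob, t = [], [], 0     # alice's and bob's transcript copies
    while len(alice) < len(protocol):
        for _ in range(block):    # run one block of the protocol as-is
            if len(alice) >= len(protocol):
                break
            sym = protocol[len(alice)]
            alice.append(sym)
            # the adversary flips bob's copy in exactly one round
            bob.append(sym ^ 1 if t == corrupt_time else sym)
            t += 1
        if h(alice) != h(bob):    # hash exchange detects the divergence
            k = min(block, len(alice))
            del alice[-k:], bob[-k:]   # both parties backtrack and retry
    return bob

transcript = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0]
assert simulate(transcript, corrupt_time=5) == transcript
```

Because the adversary's budget is bounded, only a small fraction of blocks is ever re-executed, which is the intuition behind the rate approaching 1.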

FOCS Conference 2014 Conference Paper

Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding

  • Mohsen Ghaffari 0001
  • Bernhard Haeupler

We study coding schemes for error correction in interactive communications. Such interactive coding schemes simulate any n-round interactive protocol using N rounds over an adversarial channel that corrupts up to ρN transmissions. Important performance measures for a coding scheme are its maximum tolerable error rate ρ, communication complexity N, and computational complexity. We give the first coding scheme for the standard setting which performs optimally in all three measures: our randomized non-adaptive coding scheme has a near-linear computational complexity and tolerates any error rate δ < 1/4.

FOCS Conference 2010 Conference Paper

New Constructive Aspects of the Lovász Local Lemma

  • Bernhard Haeupler
  • Barna Saha
  • Aravind Srinivasan

The Lovász Local Lemma (LLL) is a powerful tool that gives sufficient conditions for avoiding all of a given set of "bad" events, with positive probability. A series of results have provided algorithms to efficiently construct structures whose existence is non-constructively guaranteed by the LLL, culminating in the recent breakthrough of Moser & Tardos. We show that the output distribution of the Moser-Tardos algorithm well-approximates the conditional LLL-distribution - the distribution obtained by conditioning on all bad events being avoided. We show how a known bound on the probabilities of events in this distribution can be used for further probabilistic analysis and give new constructive and non-constructive results. We also show that when an LLL application provides a small amount of slack, the number of resamplings of the Moser-Tardos algorithm is nearly linear in the number of underlying independent variables (not events!), and can thus be used to give efficient constructions in cases where the underlying proof applies the LLL to super-polynomially many events. Even in cases where finding a bad event that holds is computationally hard, we show that applying the algorithm to avoid a polynomial-sized "core" subset of bad events leads to a desired outcome with high probability. We demonstrate this idea on several applications. We give the first constant-factor approximation algorithm for the Santa Claus problem by making an LLL-based proof of Feige constructive. We provide Monte Carlo algorithms for acyclic edge coloring, non-repetitive graph colorings, and Ramsey-type graphs. In all these applications the algorithm falls directly out of the non-constructive LLL-based proof. Our algorithms are very simple, often provide better bounds than previous algorithms, and are in several cases the first efficient algorithms known. 
As a second type of application we consider settings beyond the critical dependency threshold of the LLL: avoiding all bad events is impossible in these cases. As the first (even non-constructive) result of this kind, we show that by sampling from the LLL-distribution of a selected smaller core, we can avoid a fraction of bad events that is higher than the expectation. MAX k-SAT is an example of this.
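The Moser–Tardos algorithm discussed in this abstract is simple enough to sketch directly: while some bad event holds, resample only the variables it depends on. The k-SAT encoding below (a clause is "bad" when all its literals are false) is a standard instantiation; the instance and helper names are illustrative, and the code shows the algorithm itself, not its LLL-based analysis.

```python
import random

# Sketch of the Moser-Tardos resampling algorithm on a small k-SAT
# instance.  Variables are sampled uniformly at random; whenever a
# violated clause (a "bad event") is found, only that clause's
# variables are resampled.

def moser_tardos(n_vars, clauses, rng, max_steps=10**6):
    # clauses: tuples of literals; literal +i / -i asks x_i true / false
    x = [rng.random() < 0.5 for _ in range(n_vars)]

    def violated(cl):
        # a clause is a bad event when every one of its literals is false
        return all(x[abs(l) - 1] != (l > 0) for l in cl)

    for _ in range(max_steps):
        bad = next((cl for cl in clauses if violated(cl)), None)
        if bad is None:
            return x                     # all bad events avoided
        for l in bad:                    # resample only this clause's variables
            x[abs(l) - 1] = rng.random() < 0.5
    raise RuntimeError("step budget exceeded")

rng = random.Random(1)
clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]
assignment = moser_tardos(3, clauses, rng)
# every clause has at least one satisfied literal
assert all(any(assignment[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)
```

The paper's result quoted above is that when the LLL condition holds with slack, the number of resampling steps in such a run is nearly linear in the number of variables, not the (possibly much larger) number of events.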