Arrow Research search

Author name cluster

Danupon Nanongkai

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

45 papers
2 author rows

Possible papers (45)

STOC Conference 2023 Conference Paper

Fast Algorithms via Dynamic-Oracle Matroids

  • Joakim Blikstad
  • Sagnik Mukhopadhyay
  • Danupon Nanongkai
  • Ta-Wei Tu

We initiate the study of matroid problems in a new oracle model called the dynamic oracle. Our algorithms in this model lead to new bounds for some classic problems, and a "unified" algorithm whose performance matches previous results developed in various papers for various problems. We also show a lower bound that answers some open problems from a few decades ago. Concretely, our results are as follows. Improved algorithms for matroid union and disjoint spanning trees. We show an algorithm with Õ_k(n + r√r) dynamic-rank-query and time complexities for the matroid union problem over k matroids, where n is the input size, r is the output size, and Õ_k hides poly(k, log n). This implies the following consequences. (i) An improvement over the Õ_k(n√r) bound implied by [Chakrabarty-Lee-Sidford-Singla-Wong FOCS'19] for matroid union in the traditional rank-query model. (ii) An Õ_k(|E| + |V|√|V|)-time algorithm for the k-disjoint spanning tree problem. This is nearly linear for moderately dense input graphs and improves the Õ_k(|V|√|E|) bounds of Gabow-Westermann [STOC'88] and Gabow [STOC'91]. Consequently, this gives improved bounds for, e.g., the Shannon Switching Game and Graph Irreducibility. Matroid intersection. We show a matroid intersection algorithm with Õ(n√r) dynamic-rank-query and time complexities. This implies new bounds for some problems (e.g. maximum forest with deadlines) and bounds that match the classic ones obtained in various papers for various problems, e.g. colorful spanning tree [Gabow-Stallmann ICALP'85], graphic matroid intersection [Gabow-Xu FOCS'89], simple job scheduling matroid intersection [Xu-Gabow ISAAC'94], and Hopcroft-Karp combinatorial bipartite matching. More importantly, this is done via a "unified" algorithm in the sense that an improvement over our dynamic-rank-query algorithm would imply improved bounds for all the above problems simultaneously. Lower bounds. We show simple super-linear (Ω(n log n)) query lower bounds for matroid intersection and union problems in our dynamic-rank-oracle and the traditional independence-query models; the latter improves the previous (log₂3)·n − o(n) bound by Harvey [SODA'08] and answers an open problem raised by, e.g., Welsh [1976] and CLSSW [FOCS'19].

FOCS Conference 2022 Conference Paper

Cut Query Algorithms with Star Contraction

  • Simon Apers
  • Yuval Efron
  • Pawel Gawrychowski
  • Troy Lee
  • Sagnik Mukhopadhyay
  • Danupon Nanongkai

We study the complexity of determining the edge connectivity of a simple graph with cut queries. We show that (i) there is a bounded-error randomized algorithm that computes edge connectivity with $O(n)$ cut queries, and (ii) there is a bounded-error quantum algorithm that computes edge connectivity with $\tilde{O}(\sqrt{n})$ cut queries. To prove these results we introduce a new technique, called star contraction, to randomly contract edges of a graph while preserving non-trivial minimum cuts. In star contraction, vertices randomly contract an edge incident to a small set of randomly chosen "center" vertices. In contrast to the related 2-out contraction technique of Ghaffari, Nowicki, and Thorup [SODA'20], star contraction only contracts vertex-disjoint star subgraphs, which allows it to be efficiently implemented via cut queries. The $O(n)$ bound from item (i) was not known even for the simpler problem of connectivity, and it improves the $O(n\log^{3}n)$ upper bound by Rubinstein, Schramm, and Weinberg [ITCS'18]. The bound is tight under the reasonable conjecture that the randomized communication complexity of connectivity is $\Omega(n\log n)$, an open question since the seminal work of Babai, Frankl, and Simon [FOCS'86]. The bound also excludes using edge connectivity on simple graphs to prove a superlinear randomized query lower bound for minimizing a symmetric submodular function. The quantum algorithm from item (ii) gives a nearly quadratic separation with the randomized complexity, and addresses an open question of Lee, Santha, and Zhang [SODA'21]. The algorithm can alternatively be viewed as computing the edge connectivity of a simple graph with $\tilde{O}(\sqrt{n})$ matrix-vector multiplication queries to its adjacency matrix. Finally, we demonstrate the use of star contraction outside of the cut query setting by designing a one-pass semi-streaming algorithm for computing edge connectivity in the complete vertex arrival setting. This contrasts with the edge arrival setting, where two passes are required.
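As a concrete (and heavily simplified) illustration of the star-contraction step, the sketch below samples a random set of centers and contracts each remaining vertex into a neighboring center when one exists. The function name and parameters are illustrative; the paper's actual algorithm implements this via cut queries and chooses the number of centers carefully.

```python
import random
from collections import defaultdict

def star_contraction(n, edges, num_centers, seed=0):
    """Toy star contraction on an n-vertex graph given as an edge list.
    Sample `num_centers` random centers; every other vertex contracts one
    incident edge into a neighboring center when it has one. The contracted
    subgraphs are vertex-disjoint stars by construction."""
    rng = random.Random(seed)
    centers = set(rng.sample(range(n), num_centers))
    neighbors = defaultdict(list)
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    label = {v: v for v in range(n)}          # component label after contraction
    for v in range(n):
        if v not in centers:
            adjacent_centers = [u for u in neighbors[v] if u in centers]
            if adjacent_centers:              # contract edge (v, chosen center)
                label[v] = rng.choice(adjacent_centers)
    # edges that survive between distinct contracted components
    contracted = [(label[u], label[v]) for u, v in edges if label[u] != label[v]]
    return centers, label, contracted
```

Because every vertex either is a center or contracts at most one incident edge into a center, the contracted pieces are vertex-disjoint stars, which is the structural property the abstract highlights.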

FOCS Conference 2022 Conference Paper

Nearly Optimal Communication and Query Complexity of Bipartite Matching

  • Joakim Blikstad
  • Jan van den Brand
  • Yuval Efron
  • Sagnik Mukhopadhyay
  • Danupon Nanongkai

We settle the complexities of the maximum-cardinality bipartite matching problem (BMM) up to polylogarithmic factors in five models of computation: the two-party communication, AND query, OR query, XOR query, and quantum edge query models. Our results answer open problems that have been raised repeatedly over at least the past three decades [Hajnal, Maass, and Turán STOC'88; Ivanyos, Klauck, Lee, Santha, and de Wolf FSTTCS'12; Dobzinski, Nisan, and Oren STOC'14; Nisan SODA'21] and tighten the lower bounds shown by Beniamini and Nisan [STOC'21] and Zhang [ICALP'04]. We also settle the communication complexity of generalizations of BMM, such as maximum-cost bipartite b-matching and transshipment, and the query complexity of unique bipartite perfect matching (answering an open question by Beniamini [2022]). Our algorithms and lower bounds follow from simple applications of known techniques such as the cutting plane method and set disjointness.

FOCS Conference 2022 Conference Paper

Negative-Weight Single-Source Shortest Paths in Near-linear Time

  • Aaron Bernstein
  • Danupon Nanongkai
  • Christian Wulff-Nilsen

We present a randomized algorithm that computes single-source shortest paths (SSSP) in $O\left(m \log^{8}(n) \log W\right)$ time when edge weights are integral and can be negative. This essentially resolves the classic negative-weight SSSP problem. The previous bounds are $\tilde{O}\left(\left(m+n^{1.5}\right) \log W\right)$ [BLNPSSSW FOCS'20] and $m^{4/3+o(1)} \log W$ [AMV FOCS'20]. Near-linear time algorithms were known previously only for the special case of planar directed graphs [Fakcharoenphol and Rao FOCS'01]. In contrast to all recent developments that rely on sophisticated continuous optimization methods and dynamic algorithms, our algorithm is simple: it requires only a simple graph decomposition and elementary combinatorial tools. In fact, ours is the first combinatorial algorithm for negative-weight SSSP to break through the classic $O(m \sqrt{n} \log W)$ bound from over three decades ago [Gabow and Tarjan SICOMP'89].
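For context, the problem considered here is classically solved by Bellman-Ford in O(mn) time; the paper's contribution is bringing this down to near-linear. A minimal baseline sketch (the textbook algorithm, not the paper's):

```python
def bellman_ford(n, edges, source):
    """Classic O(mn) negative-weight SSSP; edges are (u, v, w) triples.
    Returns the distance list, or None if a negative cycle is reachable."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                # n-1 rounds of relaxation suffice
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
    for u, v, w in edges:                 # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            return None
    return dist
```

For example, `bellman_ford(4, [(0, 1, 4), (0, 2, 1), (2, 1, -2), (1, 3, 2)], 0)` correctly routes through the negative edge and returns `[0, -1, 1, 1]`.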

STOC Conference 2021 Conference Paper

Breaking the quadratic barrier for matroid intersection

  • Joakim Blikstad
  • Jan van den Brand
  • Sagnik Mukhopadhyay
  • Danupon Nanongkai

The matroid intersection problem is a fundamental problem that has been extensively studied for half a century. In the classic version of this problem, we are given two matroids M₁ = (V, I₁) and M₂ = (V, I₂) on a common ground set V of n elements, and then we have to find the largest common independent set S ∈ I₁ ∩ I₂ by making independence oracle queries of the form "Is S ∈ I₁?" or "Is S ∈ I₂?" for S ⊆ V. The goal is to minimize the number of queries. Beating the existing Õ(n²) bound, known as the quadratic barrier, is an open problem that captures the limits of techniques from two lines of work. The first is the classic algorithm of Cunningham [SICOMP 1986], whose Õ(n²)-query implementations were shown by CLS+ [FOCS 2019] and Nguyen [2019] (more generally, these algorithms take Õ(nr) queries, where r denotes the rank, which can be as big as n). The other is the general cutting plane method of Lee, Sidford, and Wong [FOCS 2015]. The only progress towards breaking the quadratic barrier requires either approximation algorithms or a more powerful rank oracle query [CLS+ FOCS 2019]. No exact algorithm with o(n²) independence queries was known. In this work, we break the quadratic barrier with a randomized algorithm guaranteeing Õ(n^{9/5}) independence queries with high probability, and a deterministic algorithm guaranteeing Õ(n^{11/6}) independence queries. Our key insight is simple and fast algorithms for solving a graph reachability problem that arises in the standard augmenting path framework [Edmonds 1968]. Combining this with previous exact and approximation algorithms leads to our results.
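The augmenting-path framework mentioned above can be made concrete on small instances. The sketch below is the textbook Edmonds-style algorithm driven by independence oracles given as Python callables; it is a didactic baseline (names are illustrative), not the query-efficient algorithm of the paper.

```python
from collections import deque

def matroid_intersection(ground, indep1, indep2):
    """Edmonds' augmenting-path algorithm for a maximum common independent
    set. indep1/indep2 are independence oracles taking a set of elements."""
    S = set()
    while True:
        out = [x for x in ground if x not in S]
        X1 = {x for x in out if indep1(S | {x})}   # freely addable w.r.t. M1
        X2 = {x for x in out if indep2(S | {x})}   # freely addable w.r.t. M2
        if X1 & X2:
            S.add(next(iter(X1 & X2)))             # length-0 augmenting path
            continue
        # exchange graph: y -> x if S - y + x in I1, and x -> y if S - y + x in I2
        succ = {v: [] for v in ground}
        for y in S:
            for x in out:
                if indep1((S - {y}) | {x}):
                    succ[y].append(x)
                if indep2((S - {y}) | {x}):
                    succ[x].append(y)
        # BFS for a shortest augmenting path from X1 to X2
        parent = {x: None for x in X1}
        queue = deque(X1)
        end = None
        while queue:
            v = queue.popleft()
            if v in X2:
                end = v
                break
            for w in succ[v]:
                if w not in parent:
                    parent[w] = v
                    queue.append(w)
        if end is None:
            return S                               # no augmenting path: S is maximum
        while end is not None:                     # flip S along the path
            S ^= {end}
            end = parent[end]
```

With two partition matroids (one per side of a bipartite graph), this computes a maximum bipartite matching, the special case the abstract alludes to.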

FOCS Conference 2021 Conference Paper

Minimum Cuts in Directed Graphs via Partial Sparsification

  • Ruoxu Cen
  • Jason Li 0006
  • Danupon Nanongkai
  • Debmalya Panigrahi
  • Thatchaphol Saranurak
  • Kent Quanrud

We give an algorithm to find a minimum cut in an edge-weighted directed graph with $n$ vertices and $m$ edges in $\tilde{O}(n\cdot\max\{m^{2/3}, n\})$ time. This improves on the 30-year-old bound of $\tilde{O}(nm)$ obtained by Hao and Orlin for this problem. Using similar techniques, we also obtain $\tilde{O}(n^{2}/\epsilon^{2})$-time $(1+\epsilon)$-approximation algorithms for both the minimum edge and minimum vertex cuts in directed graphs, for any fixed $\epsilon$. Before our work, no $(1+\epsilon)$-approximation algorithm faster than the exact runtime of $\tilde{O}(nm)$ was known for either problem. Our algorithms follow a two-step template. In the first step, we employ a partial sparsification of the input graph to approximately preserve a critical subset of cut values. In the second step, we design algorithms to find the (edge/vertex) mincut among the preserved cuts from the first step. For the edge mincut, we give a new reduction to $\tilde{O}(\min\{n/m^{1/3}, \sqrt{n}\})$ calls of any maxflow subroutine, via packing arborescences in the sparsifier. For the vertex mincut, we develop new local flow algorithms to identify small unbalanced cuts in the sparsified graph.

STOC Conference 2021 Conference Paper

Vertex connectivity in poly-logarithmic max-flows

  • Jason Li 0006
  • Danupon Nanongkai
  • Debmalya Panigrahi
  • Thatchaphol Saranurak
  • Sorrachai Yingchareonthawornchai

The vertex connectivity of an m-edge n-vertex undirected graph is the smallest number of vertices whose removal disconnects the graph, or leaves only a singleton vertex. In this paper, we give a reduction from the vertex connectivity problem to a set of maxflow instances. Using this reduction, we can solve vertex connectivity in Õ(m^α) time for any α ≥ 1, if there is an m^α-time maxflow algorithm. Using the current best maxflow algorithm that runs in m^{4/3+o(1)} time (Kathuria, Liu and Sidford, FOCS 2020), this yields an m^{4/3+o(1)}-time vertex connectivity algorithm. This is the first improvement in the running time of the vertex connectivity problem in over 20 years, the previous best being an Õ(mn)-time algorithm due to Henzinger, Rao, and Gabow (FOCS 1996). Indeed, no algorithm with an o(mn) running time was known before our work, even assuming an Õ(m)-time maxflow algorithm. Our new technique is robust enough to also improve the best Õ(mn)-time bound for directed vertex connectivity to mn^{1−1/12+o(1)} time.

FOCS Conference 2020 Conference Paper

A Deterministic Algorithm for Balanced Cut with Applications to Dynamic Connectivity, Flows, and Beyond

  • Julia Chuzhoy
  • Yu Gao 0001
  • Jason Li 0006
  • Danupon Nanongkai
  • Richard Peng
  • Thatchaphol Saranurak

We consider the classical Minimum Balanced Cut problem: given a graph G, compute a partition of its vertices into two subsets of roughly equal volume, while minimizing the number of edges connecting the subsets. We present the first deterministic, almost-linear time approximation algorithm for this problem. Specifically, our algorithm, given an n-vertex m-edge graph G and any parameter 1 ≤ r ≤ O(log n), computes a (log m)^{r²}-approximation for Minimum Balanced Cut in G, in time O(m^{1+O(1/r)+o(1)} · (log m)^{O(r²)}). In particular, we obtain a (log m)^{1/ε}-approximation in time m^{1+O(√ε)} for any constant ε > 0, and a (log m)^{f(m)}-approximation in time m^{1+o(1)}, for any slowly growing function f(m). We obtain deterministic algorithms with similar guarantees for the Sparsest Cut and the Lowest-Conductance Cut problems. Our algorithm for the Minimum Balanced Cut problem in fact provides a stronger guarantee: it either returns a balanced cut whose value is close to a given target value, or it certifies that such a cut does not exist by exhibiting a large subgraph of G that has high conductance. We use this algorithm to obtain deterministic algorithms for dynamic connectivity and minimum spanning forest, whose worst-case update time on an n-vertex graph is n^{o(1)}, thus resolving a major open problem in the area of dynamic graph algorithms. Our work also implies deterministic algorithms for a host of additional problems, whose time complexities match, up to factors subpolynomial in n, those of known randomized algorithms. The implications include almost-linear time deterministic algorithms for solving Laplacian systems and for approximating maximum flows in undirected graphs.

FOCS Conference 2020 Conference Paper

Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs

  • Jan van den Brand
  • Yin Tat Lee
  • Danupon Nanongkai
  • Richard Peng
  • Thatchaphol Saranurak
  • Aaron Sidford
  • Zhao Song 0002
  • Di Wang 0005

We present an $\tilde{O}(m+n^{1.5})$-time randomized algorithm for maximum cardinality bipartite matching and related problems (e.g. transshipment, negative-weight shortest paths, and optimal transport) on m-edge, n-node graphs. For maximum cardinality bipartite matching on moderately dense graphs, i.e. $m=\Omega(n^{1.5})$, our algorithm runs in time nearly linear in the input size and constitutes the first improvement over the classic $O(m\sqrt{n})$-time [Dinic 1970; Hopcroft-Karp 1971; Karzanov 1973] and $\widetilde{O}(n^{\omega})$-time algorithms [Ibarra-Moran 1981] (where currently $\omega\approx 2.373$). On sparser graphs, i.e. when $m=n^{9/8+\delta}$ for any constant $\delta > 0$, our result improves upon the recent advances of [Madry 2013] and [Liu-Sidford 2020b, 2020a], which achieve an $\widetilde{O}(m^{4/3+o(1)})$ runtime. We obtain these results by combining and advancing recent lines of research in interior point methods (IPMs) and dynamic graph algorithms. First, we simplify and improve the IPM of [v.d. Brand-Lee-Sidford-Song 2020], providing a general primal-dual IPM framework and new sampling-based techniques for handling infeasibility induced by approximate linear system solvers. Second, we provide a simple sublinear-time algorithm for detecting and sampling high-energy edges in electric flows on expanders and show that, when combined with recent advances in dynamic expander decompositions, this yields efficient data structures for maintaining the iterates of both [v.d. Brand et al.] and our new IPMs. Combining this general machinery yields a simpler $\widetilde{O}(n\sqrt{m})$-time algorithm for matching based on the logarithmic barrier function, and our state-of-the-art $\widetilde{O}(m+n^{1.5})$-time algorithm for matching based on the [Lee-Sidford 2014] barrier (as regularized in [v.d. Brand et al.]).
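The classic $O(m\sqrt{n})$ baseline cited above is Hopcroft-Karp, which repeatedly augments along a maximal set of shortest augmenting paths found in BFS/DFS phases. A standard textbook sketch (for context only; it is unrelated to the IPM machinery of this paper):

```python
from collections import deque

def hopcroft_karp(adj, n_left, n_right):
    """Classic O(m sqrt(n)) maximum bipartite matching.
    adj[u] lists the right-side neighbors of left vertex u."""
    INF = float("inf")
    match_l = [-1] * n_left
    match_r = [-1] * n_right

    def bfs():
        # layer the free left vertices and report whether a free right vertex is reachable
        dist = [INF] * n_left
        queue = deque(u for u in range(n_left) if match_l[u] == -1)
        for u in queue:
            dist[u] = 0
        found = False
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                w = match_r[v]
                if w == -1:
                    found = True            # reached a free right vertex
                elif dist[w] == INF:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return dist, found

    def dfs(u, dist):
        # follow the BFS layering to augment along vertex-disjoint shortest paths
        for v in adj[u]:
            w = match_r[v]
            if w == -1 or (dist[w] == dist[u] + 1 and dfs(w, dist)):
                match_l[u], match_r[v] = v, u
                return True
        dist[u] = INF                       # dead end for the rest of this phase
        return False

    matching = 0
    while True:
        dist, found = bfs()
        if not found:
            return matching                 # no augmenting path: matching is maximum
        for u in range(n_left):
            if match_l[u] == -1 and dfs(u, dist):
                matching += 1
```

Each phase increases the shortest augmenting-path length, and only O(√n) phases are needed, which is where the √n factor in the classic bound comes from.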

FOCS Conference 2019 Conference Paper

A New Deterministic Algorithm for Dynamic Set Cover

  • Sayan Bhattacharya
  • Monika Henzinger
  • Danupon Nanongkai

We present a deterministic dynamic algorithm for maintaining a (1+ε)f-approximate minimum cost set cover with O(f log(Cn)/ε²) amortized update time, when the input set system is undergoing element insertions and deletions. Here, n denotes the number of elements, each element appears in at most f sets, and the cost of each set lies in the range [1/C, 1]. Our result, together with that of Gupta et al. [STOC'17], implies that there is a deterministic algorithm for this problem with O(f log(Cn)) amortized update time and O(min(log n, f))-approximation ratio, which nearly matches the polynomial-time hardness of approximation for minimum set cover in the static setting. Our update time is only O(log(Cn)) away from a trivial lower bound. Prior to our work, the best approximation ratio guaranteed by deterministic algorithms was O(f²), due to Bhattacharya et al. [ICALP'15]. In contrast, the only result that guaranteed O(f)-approximation was obtained very recently by Abboud et al. [STOC'19], who designed a dynamic algorithm with (1+ε)f-approximation ratio and O(f² log n/ε) amortized update time. Besides the extra O(f) factor in the update time compared to our and Gupta et al.'s results, the Abboud et al. algorithm is randomized, and works only when the adversary is oblivious and the sets are unweighted (each set has the same cost). We achieve our result via the primal-dual approach, by maintaining a fractional packing solution as a dual certificate. This approach was pursued previously by Bhattacharya et al. and Gupta et al., but not in the recent paper by Abboud et al. Unlike previous primal-dual algorithms that try to satisfy some local constraints for individual sets at all times, our algorithm basically waits until the dual solution changes significantly globally, and fixes the solution only where the fix is needed.
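In the static setting, the primal-dual approach mentioned above yields the classic f-approximation: raise the dual packing variable of each uncovered element until some containing set's constraint becomes tight, then buy every tight set. A textbook sketch (function and variable names are illustrative; this is not the paper's dynamic algorithm):

```python
def primal_dual_set_cover(sets, costs):
    """Classic primal-dual f-approximation for weighted set cover, where
    f is the maximum number of sets any single element appears in.
    sets: list of element-sets; costs: parallel list of positive costs."""
    universe = set().union(*sets)
    slack = list(costs)              # remaining slack of each set's dual constraint
    picked, covered = set(), set()
    for e in sorted(universe):       # process uncovered elements in any fixed order
        if e in covered:
            continue
        containing = [i for i, s in enumerate(sets) if e in s]
        delta = min(slack[i] for i in containing)   # raise y_e until some set is tight
        for i in containing:
            slack[i] -= delta
            if slack[i] == 0 and i not in picked:   # tight constraint: buy the set
                picked.add(i)
                covered |= sets[i]
    return picked
```

By LP duality the total cost of `picked` is at most f times the optimum: each purchased set's cost is fully charged to the dual variables of its elements, and each dual variable is charged at most f times.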

STOC Conference 2019 Conference Paper

Breaking quadratic time for small vertex connectivity and an approximation scheme

  • Danupon Nanongkai
  • Thatchaphol Saranurak
  • Sorrachai Yingchareonthawornchai

Vertex connectivity is a classic, extensively studied problem. Given an integer k, its goal is to decide if an n-node m-edge graph can be disconnected by removing k vertices. Although a linear-time algorithm has been postulated since 1974 [Aho, Hopcroft and Ullman], and despite its sibling problem of edge connectivity being resolved over two decades ago [Karger STOC'96], so far no vertex connectivity algorithm is faster than O(n²) time even for k = 4 and m = O(n). In the simplest case where m = O(n) and k = O(1), the O(n²) bound dates five decades back to [Kleitman IEEE Trans. Circuit Theory'69]. For higher m, O(m) time is known for k ≤ 3 [Tarjan FOCS'71; Hopcroft, Tarjan SICOMP'73], and the first O(n²)-time bounds are from [Kanevsky, Ramachandran FOCS'87] for k = 4 and from [Nagamochi, Ibaraki Algorithmica'92] for k = O(1). For general k and m, the best bound is Õ(min(kn², n^ω + nk^ω)) [Henzinger, Rao, Gabow FOCS'96; Linial, Lovász, Wigderson FOCS'86], where Õ hides polylogarithmic terms and ω < 2.38 is the matrix multiplication exponent. In this paper, we present a randomized Monte Carlo algorithm with Õ(m + k^{7/3}n^{4/3}) time for any k = O(√n). This gives the first subquadratic time bound for any 4 ≤ k ≤ o(n^{2/7}) (subquadratic time refers to O(m) + o(n²) time) and improves all the above classic bounds for all k ≤ n^{0.44}. We also present a new randomized Monte Carlo (1+ε)-approximation algorithm that is strictly faster than Henzinger's previous 2-approximation algorithm [J. Algorithms'97] and all previous exact algorithms. The story is the same for the directed case, where our exact Õ(min{km^{2/3}n, km^{4/3}})-time algorithm for any k = O(√n) and our (1+ε)-approximation algorithm improve classic bounds for small and large k, respectively. Additionally, ours is the first approximation algorithm for vertex connectivity on directed graphs.
The key to our results is to avoid computing single-source connectivity, which was needed by all previous exact algorithms and is not known to admit o(n²) time. Instead, we design the first local algorithm for computing vertex connectivity; without reading the whole graph, our algorithm can find a separator of size at most k or certify that there is no separator of size at most k "near" a given seed node.

STOC Conference 2019 Conference Paper

Distributed edge connectivity in sublinear time

  • Mohit Daga
  • Monika Henzinger
  • Danupon Nanongkai
  • Thatchaphol Saranurak

We present the first sublinear-time algorithm for a distributed message-passing network to compute its edge connectivity λ exactly in the CONGEST model, as long as there are no parallel edges. Our algorithm takes Õ(n^{1−1/353}D^{1/353} + n^{1−1/706}) time to compute λ and a cut of cardinality λ with high probability, where n and D are the number of nodes and the diameter of the network, respectively, and Õ hides polylogarithmic factors. This running time is sublinear in n (i.e. Õ(n^{1−ε})) whenever D is. Previous sublinear-time distributed algorithms can solve this problem either (i) exactly only when λ = O(n^{1/8−ε}) [Thurimella PODC'95; Pritchard, Thurimella ACM Trans. Algorithms'11; Nanongkai, Su DISC'14] or (ii) approximately [Ghaffari, Kuhn DISC'13; Nanongkai, Su DISC'14]. To achieve this we develop and combine several new techniques. First, we design the first distributed algorithm that can compute a k-edge connectivity certificate for any k = O(n^{1−ε}) in time Õ(√(nk) + D). The previous sublinear-time algorithm can do so only when k = o(√n) [Thurimella PODC'95]. In fact, our algorithm can be turned into the first parallel algorithm with polylogarithmic depth and near-linear work. Previous near-linear-work algorithms are essentially sequential, and previous polylogarithmic-depth algorithms require Ω(mk) work in the worst case (e.g. [Karger, Motwani STOC'93]).
Second, we show that by combining the recent distributed expander decomposition technique of [Chang, Pettie, Zhang SODA'19] with techniques from the sequential deterministic edge connectivity algorithm of [Kawarabayashi, Thorup STOC'15], we can decompose the network into a sublinear number of clusters with small average diameter and without any mincut separating a cluster (except the "trivial" ones). This leads to a simplification of the Kawarabayashi-Thorup framework (except that we are randomized while they are deterministic), which might make the framework more useful in other models of computation. Finally, by extending the tree packing technique from [Karger STOC'96], we can find the minimum cut in time proportional to the number of components. As a byproduct of this technique, we obtain an Õ(n)-time algorithm for computing the exact minimum cut of a weighted graph.

FOCS Conference 2019 Conference Paper

Dynamic Approximate Shortest Paths and Beyond: Subquadratic and Worst-Case Update Time

  • Jan van den Brand
  • Danupon Nanongkai

Consider the following distance query for an n-node graph G undergoing edge insertions and deletions: given two sets of nodes I and J, return the distances between every pair of nodes in I×J. This query is rather general and captures several versions of the dynamic shortest paths problem. In this paper, we develop an efficient (1+ε)-approximation algorithm for this query using fast matrix multiplication. Our algorithm leads to answers for some open problems for Single-Source and All-Pairs Shortest Paths (SSSP and APSP), as well as for Diameter, Radius, and Eccentricities. Below are some highlights. Note that all our algorithms guarantee worst-case update time and are randomized (Monte Carlo), but do not need the oblivious adversary assumption. Subquadratic update time for SSSP, Diameter, Centralities, etc.: When we want to maintain distances from a single node explicitly (without queries), a fundamental question is to beat trivially calling Dijkstra's static algorithm after each update, taking Θ(n²) update time on dense graphs. A better time complexity was not known even with amortization. It was known to be improbable for exact algorithms and for combinatorial any-approximation algorithms to polynomially beat the Ω(n²) bound (under some conjectures) [Roditty, Zwick ESA'04; Abboud, V. Williams FOCS'14]. (1) Our algorithm with I = {s} and J = V(G) implies a (1+ε)-approximation algorithm for this, guaranteeing Õ(n^{1.823}/ε²) worst-case update time for directed graphs with positive real weights in [1, W]. (2) With ideas from [Roditty, V. Williams STOC'13], we also obtain the first subquadratic worst-case update time for (5/3+ε)-approximating the eccentricities and (1.5+ε)-approximating the diameter and radius for unweighted graphs (with small additive errors). We also obtain the first subquadratic worst-case update time for (1+ε)-approximating the closeness centralities for undirected unweighted graphs.
Worst-case update time for APSP: When we want to maintain distances between all pairs of nodes explicitly, the Õ(n²) amortized update time by Demetrescu and Italiano [STOC'03] already matches the trivial Ω(n²) lower bound. A fundamental question is whether it can be made worst-case. The state-of-the-art algorithm takes Õ(n^{2+2/3}) worst-case update time to maintain the distances exactly [Abraham, Chechik, Krinninger SODA'17; Thorup STOC'05]. When it comes to (1+ε)-approximation, this bound is still higher than calling the Õ(n^ω/ε)-time static algorithm of Zwick [FOCS'98], where ω ≈ 2.373. Our algorithm with I = J = V(G) implies nearly tight bounds for this, namely Õ(n²/ε^{1+ω}) for undirected unweighted graphs and Õ(n^{2.045}/ε²) for directed graphs with positive real weights. Besides this, we also obtain the first dynamic APSP algorithm with subquadratic update time and sublinear query time.

FOCS Conference 2019 Conference Paper

Dynamic Matrix Inverse: Improved Algorithms and Matching Conditional Lower Bounds

  • Jan van den Brand
  • Danupon Nanongkai
  • Thatchaphol Saranurak

The dynamic matrix inverse problem is to maintain the inverse of a matrix undergoing element and column updates. It is the main subroutine behind the best algorithms for many dynamic problems whose complexity is not yet well understood, such as maintaining the largest eigenvalue, rank, and determinant of a matrix and maintaining reachability, distances, maximum matching size, and k-paths/cycles in a graph. Understanding the complexity of dynamic matrix inverse is a key to understanding these problems. In this paper, we present (i) improved algorithms for dynamic matrix inverse and their extensions to some incremental/look-ahead variants, and (ii) variants of the Online Matrix-Vector conjecture [Henzinger et al. STOC'15] that, if true, imply that these algorithms are tight. Our algorithms automatically lead to faster dynamic algorithms for the aforementioned problems, some of which are also tight under our conjectures, e.g. reachability and maximum matching size (closing the gaps for these two problems was in fact asked by Abboud and V. Williams [FOCS'14]). Prior best bounds for most of these problems date back more than a decade [Sankowski FOCS'04, COCOON'05, SODA'07; Kavitha FSTTCS'08; Mucha and Sankowski Algorithmica'10; Bosek et al. FOCS'14]. Our improvements stem mostly from the ability to use fast matrix multiplication "one more time", to maintain a certain transformation matrix that previously could be maintained only combinatorially (i.e. without fast matrix multiplication). Oddly, unlike other dynamic problems where this approach, once successful, could be repeated several times ("bootstrapping"), our conjectures imply that this is not the case for dynamic matrix inverse and some related problems. However, when a small amount of additional "look-ahead" information is provided, we can perform such repetition to drive the bounds down further.
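A single element update A[i][j] += δ is a rank-one change, so the maintained inverse can be repaired with the textbook Sherman-Morrison identity in O(n²) arithmetic operations instead of being recomputed from scratch. The exact-rational sketch below illustrates only this basic building block of dynamic matrix inverse data structures, not the paper's algorithms:

```python
from fractions import Fraction

def invert(A):
    """Gauss-Jordan inverse over exact rationals (A must be invertible)."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

def element_update_inverse(Ainv, i, j, delta):
    """Sherman-Morrison: inverse of A + delta * e_i e_j^T given Ainv.
    Requires 1 + delta * Ainv[j][i] != 0 (the update keeps A invertible)."""
    n = len(Ainv)
    d = Fraction(delta)
    denom = 1 + d * Ainv[j][i]
    return [[Ainv[r][c] - d * Ainv[r][i] * Ainv[j][c] / denom
             for c in range(n)] for r in range(n)]
```

For example, updating A[0][1] by +2 in A = [[2, 1], [1, 1]] via `element_update_inverse` reproduces exactly the inverse of [[2, 3], [1, 1]] recomputed from scratch.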

FOCS Conference 2018 Conference Paper

A Faster Distributed Single-Source Shortest Paths Algorithm

  • Sebastian Forster
  • Danupon Nanongkai

We devise new algorithms for the single-source shortest paths (SSSP) problem with non-negative edge weights in the CONGEST model of distributed computing. While close-to-optimal solutions, in terms of the number of rounds spent by the algorithm, have recently been developed for computing SSSP approximately, the fastest known exact algorithms are still far away from matching the lower bound of Ω̃(√n + D) rounds by Peleg and Rubinovich [SIAM Journal on Computing 2000], where n is the number of nodes in the network and D is its diameter. The state of the art is Elkin's randomized algorithm [STOC 2017] that performs Õ(n^{2/3}D^{1/3} + n^{5/6}) rounds. We significantly improve upon this upper bound with our two new randomized algorithms for polynomially bounded integer edge weights, the first performing Õ(√(nD)) rounds and the second performing Õ(√n·D^{1/4} + n^{3/5} + D) rounds. Our bounds also compare favorably to the independent result by Ghaffari and Li [STOC 2018]. As side results, we obtain a (1+ε)-approximate Õ((√n·D^{1/4} + D)/ε)-round algorithm for directed SSSP and a new work/depth trade-off for exact SSSP on directed graphs in the PRAM model.

FOCS Conference 2017 Conference Paper

Distributed Exact Weighted All-Pairs Shortest Paths in Õ(n^{5/4}) Rounds

  • Chien-Chung Huang 0001
  • Danupon Nanongkai
  • Thatchaphol Saranurak

We study computing all-pairs shortest paths (APSP) on distributed networks (the CONGEST model). The goal is for every node in the (weighted) network to know the distance from every other node using communication. The problem admits (1+o(1))-approximation Õ(n)-time algorithms [2], [3], which are matched with Ω(n)-time lower bounds [3], [4], [5]. No ω(n) lower bound or o(m) upper bound was known for exact computation. In this paper, we present an Õ(n^{5/4})-time randomized (Las Vegas) algorithm for exact weighted APSP; this provides the first improvement over the naive O(m)-time algorithm when the network is not so sparse. Our result also holds for the case where edge weights are asymmetric (a.k.a. the directed case, where communication is bidirectional). Our techniques also yield an Õ(n^{3/4}k^{1/2} + n)-time algorithm for the k-source shortest paths problem, where we want every node to know distances from k sources; this improves Elkin's recent bound [6] when k = ω̃(n^{1/4}). We achieve the above results by developing distributed algorithms on top of the classic scaling technique, which we believe is used for the first time for distributed shortest paths computation. One new algorithm which might be of independent interest is for the reversed r-sink shortest paths problem, where we want each of r sinks to know its distances from all other nodes, given that every node already knows its distance to every sink. We show an Õ(n√r)-time algorithm for this problem. Another new algorithm is called short-range extension, where we show that in Õ(n√h) time the knowledge about distances can be "extended" for additional h hops. For this, we use weight rounding to introduce small additive errors which can be later fixed.

FOCS Conference 2017 Conference Paper

Dynamic Minimum Spanning Forest with Subpolynomial Worst-Case Update Time

  • Danupon Nanongkai
  • Thatchaphol Saranurak
  • Christian Wulff-Nilsen

We present a Las Vegas algorithm for dynamically maintaining a minimum spanning forest of an n-node graph undergoing edge insertions and deletions. Our algorithm guarantees an O(n^{o(1)}) worst-case update time with high probability. This significantly improves the two recent Las Vegas algorithms by Wulff-Nilsen [2] with update time O(n^{0.5−ε}) for some constant ε > 0 and, independently, by Nanongkai and Saranurak [3] with update time O(n^{0.494}) (the latter works only for maintaining a spanning forest). Our result is obtained by identifying the common framework that both previous algorithms rely on, and then improving and combining the ideas from both works. There are two main algorithmic components of the framework that are newly improved and critical for obtaining our result. First, we improve the update time from O(n^{0.5−ε}) in [2] to O(n^{o(1)}) for decrementally removing all low-conductance cuts in an expander undergoing edge deletions. Second, by revisiting the "contraction technique" of Henzinger and King [4] and Holm et al. [5], we show a new approach for maintaining a minimum spanning forest in connected graphs with very few (at most (1+o(1))n) edges. This significantly improves the previous approach in [2], [3], which is based on Frederickson's 2-dimensional topology tree [6], and illustrates a new application of this old technique.

STOC Conference 2017 Conference Paper

Dynamic spanning forest with worst-case update time: adaptive, Las Vegas, and O(n^(1/2-ε))-time

  • Danupon Nanongkai
  • Thatchaphol Saranurak

We present two algorithms for dynamically maintaining a spanning forest of a graph undergoing edge insertions and deletions. Our algorithms guarantee worst-case update time and work against an adaptive adversary, meaning that an edge update can depend on previous outputs of the algorithms. We provide the first polynomial improvement over the long-standing O(√n) bound of [Frederickson STOC'84; Eppstein, Galil, Italiano and Nissenzweig FOCS'92] for this type of algorithm. The previously best improvement was O(√n (log log n)^2 / log n) [Kejlberg-Rasmussen, Kopelowitz, Pettie and Thorup ESA'16]. We note, however, that these bounds were obtained by deterministic algorithms, while our algorithms are randomized. Our first algorithm is Monte Carlo and guarantees an O(n^(0.4+o(1))) worst-case update time, where the o(1) term hides an O(√(log log n)/log n) factor. Our second algorithm is Las Vegas and guarantees an O(n^(0.49306)) worst-case update time with high probability. Algorithms with better update time either need to assume that the adversary is oblivious (e.g. [Kapron, King and Mountjoy SODA'13]) or can only guarantee an amortized update time. Our second result answers an open problem of Kapron et al. To the best of our knowledge, our algorithms are among the few non-trivial randomized dynamic algorithms that work against adaptive adversaries. The key to our results is a decomposition of graphs into subgraphs that either have high expansion or are sparse. This decomposition serves as an interface between recent developments on (static) flow computation and many old ideas in dynamic graph algorithms: on the one hand, we can combine previous dynamic graph techniques to get faster dynamic spanning forest algorithms if such a decomposition is given. On the other hand, we can adapt flow-related techniques (e.g. those from [Khandekar, Rao and Vazirani STOC'06], [Peng SODA'16], and [Orecchia and Zhu SODA'14]) to maintain such a decomposition.
To the best of our knowledge, this is the first time these flow techniques are used in fully dynamic graph algorithms.
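The decomposition described above can be illustrated on tiny graphs by brute force: a part is kept if it is sparse or admits no cut of conductance below φ (i.e., it is a φ-expander); otherwise it is split along such a cut and the process recurses. This sketch enumerates all cuts and so runs in exponential time (the paper uses flow-based techniques, not enumeration; names and thresholds here are ours):

```python
from itertools import combinations

def cut_conductance(adj, cut_side):
    # adj: {u: list of neighbors}; conductance = cut edges / min side volume
    s = set(cut_side)
    cut = sum(1 for u in s for v in adj[u] if v not in s)
    vol_s = sum(len(adj[u]) for u in s)
    vol_t = sum(len(adj[u]) for u in adj if u not in s)
    denom = min(vol_s, vol_t)
    return cut / denom if denom else float("inf")

def expander_or_sparse_decompose(adj, phi, c):
    # stop if the part is sparse (at most c*n edges) or has no cut of
    # conductance below phi; otherwise split along such a cut and recurse
    n = len(adj)
    m = sum(len(v) for v in adj.values()) // 2
    if m <= c * n:
        return [set(adj)]
    for r in range(1, n):
        for s in combinations(list(adj), r):
            if cut_conductance(adj, s) < phi:
                a, b = set(s), set(adj) - set(s)
                sub = lambda part: {u: [v for v in adj[u] if v in part]
                                    for u in part}
                return (expander_or_sparse_decompose(sub(a), phi, c)
                        + expander_or_sparse_decompose(sub(b), phi, c))
    return [set(adj)]  # no low-conductance cut: this part is a phi-expander
```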

FOCS Conference 2017 Conference Paper

From Gap-ETH to FPT-Inapproximability: Clique, Dominating Set, and More

  • Parinya Chalermsook
  • Marek Cygan
  • Guy Kortsarz
  • Bundit Laekhanukit
  • Pasin Manurangsi
  • Danupon Nanongkai
  • Luca Trevisan 0001

We consider questions that arise at the intersection of approximation algorithms, subexponential-time algorithms, and fixed-parameter tractable (FPT) algorithms. The questions, which have been asked several times (e.g., [1], [2], [3]), are whether there is a non-trivial FPT-approximation algorithm for the Maximum Clique (Clique) and Minimum Dominating Set (DomSet) problems parameterized by the size of the optimal solution. In particular, letting OPT be the optimum and N be the size of the input, is there an algorithm that runs in t(OPT)·poly(N) time and outputs a solution of size f(OPT), for any functions t and f that are independent of N (for Clique, we want f(OPT) = ω(1))? In this paper, we show that both Clique and DomSet admit no non-trivial FPT-approximation algorithm, i.e., there is no o(OPT)-FPT-approximation algorithm for Clique and no f(OPT)-FPT-approximation algorithm for DomSet for any function f (e.g., this holds even if f is exponential or the Ackermann function). In fact, our results imply something even stronger: the best way to solve Clique and DomSet, even approximately, is essentially to enumerate all possibilities. Our results hold under the Gap Exponential Time Hypothesis (Gap-ETH) [4], [5], which states that no 2^(o(n))-time algorithm can distinguish between a satisfiable 3SAT formula and one which is not even (1 - ε)-satisfiable for some constant ε > 0. Besides Clique and DomSet, we also rule out non-trivial FPT-approximation for Maximum Balanced Biclique, the problem of finding maximum subgraphs with hereditary properties (e.g., Maximum Induced Planar Subgraph), and Maximum Induced Matching in bipartite graphs. Previously, only exact versions of these problems were known to be W[1]-hard [6], [7], [8]. Additionally, we rule out a k^(o(1))-FPT-approximation algorithm for Densest k-Subgraph, although this ratio does not yet match the trivial O(k)-approximation algorithm.
To the best of our knowledge, prior results only rule out constant-factor approximation for Clique [9], [10] and log^(1/4+ε)(OPT) approximation for DomSet for any constant ε > 0 [11]. Our result on Clique significantly improves on [9], [10]. However, our result on DomSet is incomparable to [11], since their results hold under ETH while ours hold under Gap-ETH, which is a stronger assumption.
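The "enumerate all possibilities" baseline that the result shows is essentially optimal for Clique is just exhaustive search over size-k vertex subsets, i.e., roughly N^k time. A minimal sketch (names are ours):

```python
from itertools import combinations

def max_clique_brute(adj, k):
    # adj: {u: set of neighbors}; decide whether a clique of size k exists
    # by enumerating all size-k vertex subsets -- the N^k "enumerate
    # everything" baseline the hardness result says cannot be beaten
    for cand in combinations(adj, k):
        if all(v in adj[u] for u, v in combinations(cand, 2)):
            return True
    return False
```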

STOC Conference 2016 Conference Paper

New deterministic approximation algorithms for fully dynamic matching

  • Sayan Bhattacharya
  • Monika Henzinger
  • Danupon Nanongkai

We present two deterministic dynamic algorithms for the maximum matching problem. (1) An algorithm that maintains a (2+ε)-approximate maximum matching in general graphs with O(poly(log n, 1/ε)) update time. (2) An algorithm that maintains an α_K-approximation of the value of the maximum matching with O(n^(2/K)) update time in bipartite graphs, for every sufficiently large constant positive integer K. Here, 1 ≤ α_K < 2 is a constant determined by the value of K. Result (1) is the first deterministic algorithm that can maintain an o(log n)-approximate maximum matching with polylogarithmic update time, improving the seminal result of Onak et al. [STOC 2010]. Its approximation guarantee almost matches that of the best randomized polylogarithmic-update-time algorithm [Baswana et al. FOCS 2011]. Result (2) achieves a better-than-two approximation with arbitrarily small polynomial update time on bipartite graphs. Previously, the best update time for this problem was O(m^(1/4)) [Bernstein et al. ICALP 2015], where m is the current number of edges in the graph.
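For context on the factor of 2: in the static setting, any maximal matching is a 2-approximate maximum matching, which is the classical baseline that dynamic (2+ε)-approximation algorithms maintain under updates. A static greedy sketch (not the paper's algorithm; names are ours):

```python
def greedy_maximal_matching(edges):
    # scan edges once, adding any edge whose endpoints are both unmatched;
    # the result is maximal, hence a 2-approximate maximum matching
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

On the path with edges processed in the order (2,3), (1,2), (3,4), greedy keeps only {2,3} while the maximum matching has two edges, showing that the factor 2 is tight for this baseline.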

FOCS Conference 2014 Conference Paper

Decremental Single-Source Shortest Paths on Undirected Graphs in Near-Linear Total Update Time

  • Monika Henzinger
  • Sebastian Forster
  • Danupon Nanongkai

The decremental single-source shortest paths (SSSP) problem concerns maintaining the distances from a given source node s to every node in an n-node m-edge graph G undergoing edge deletions. While its static counterpart can easily be solved in near-linear time, this decremental problem is much more challenging, even in the undirected unweighted case. In this case, the classic O(mn) total update time of Even and Shiloach (JACM 1981) has been the fastest known algorithm for three decades. At the cost of a (1 + ε)-approximation factor, the running time was recently improved to O(n^(2+o(1))) by Bernstein and Roditty (SODA 2011), and more recently to O(n^(1.8+o(1)) + m^(1+o(1))) by Henzinger, Krinninger, and Nanongkai (SODA 2014). In this paper, we finally bring the running time of this case down to near-linear: we give a (1 + ε)-approximation algorithm with O(m^(1+o(1))) total update time. Moreover, we obtain O(m^(1+o(1)) log W) time for the weighted case, where the edge weights are integers from 1 to W. The only prior work on weighted graphs in o(mn log W) time is the O(mn^(0.986) log W)-time algorithm by Henzinger, Krinninger, and Nanongkai (STOC 2014), which works for the general weighted directed case. In contrast to the previous results, which rely on maintaining a sparse emulator, our algorithm relies on maintaining a so-called sparse (d, ε)-hop set, introduced by Cohen (JACM 2000) in the PRAM literature. A (d, ε)-hop set of a graph G = (V, E) is a set E' of weighted edges such that the distance between any pair of nodes in G can be (1 + ε)-approximated by their d-hop distance (given by a path containing at most d edges) on G' = (V, E ∪ E'). Our algorithm can maintain an (n^(o(1)), ε)-hop set of near-linear size in near-linear time under edge deletions. It is the first of its kind, to the best of our knowledge. To maintain the distances on this hop set, we develop a monotone bounded-hop Even-Shiloach tree.
It results from extending and combining the monotone Even-Shiloach tree of Henzinger, Krinninger, and Nanongkai (FOCS 2013) with the bounded-hop SSSP technique of Bernstein (STOC 2013). These two new tools might be of independent interest.
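The (d, ε)-hop set definition above is easy to check directly on small graphs: add the candidate edges E' to G and compare d-hop-limited distances on G' against true distances on G. A brute-force verifier sketch (names are ours, for illustration only; the paper maintains such a set dynamically):

```python
def hop_dist(adj, src, d):
    # distances from src restricted to paths with at most d edges
    # (d rounds of Bellman-Ford relaxation over {u: [(v, w), ...]})
    inf = float("inf")
    dist = {u: inf for u in adj}
    dist[src] = 0
    for _ in range(d):
        nxt = dict(dist)
        for u in adj:
            for v, w in adj[u]:
                if dist[u] + w < nxt[v]:
                    nxt[v] = dist[u] + w
        dist = nxt
    return dist

def is_hop_set(adj, extra, d, eps):
    # (d, eps)-hop set test: on G' = G + extra, every d-hop distance must
    # (1+eps)-approximate the true distance in G
    aug = {u: list(ns) for u, ns in adj.items()}
    for u, v, w in extra:
        aug[u].append((v, w))
        aug[v].append((u, w))
    n = len(adj)
    return all(
        hop_dist(aug, s, d)[t] <= (1 + eps) * hop_dist(adj, s, n - 1)[t]
        for s in adj for t in adj
    )
```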

FOCS Conference 2014 Conference Paper

Pre-reduction Graph Products: Hardnesses of Properly Learning DFAs and Approximating EDP on DAGs

  • Parinya Chalermsook
  • Bundit Laekhanukit
  • Danupon Nanongkai

The study of graph products is a major research topic and typically concerns the term f(G * H), e.g., aiming to show that f(G * H) = f(G)·f(H). In this paper, we study graph products in a non-standard form f(R[G * H]), where R is a "reduction", a transformation of any graph into an instance of an intended optimization problem. As applications, we resolve some open problems. The first problem is minimum consistent deterministic finite automaton (DFA). We show a tight n^(1-ϵ) approximation hardness, improving the n^(1/14-ϵ) hardness of [Pitt and Warmuth, STOC 1989 and JACM 1993], where n is the sample size. (In fact, we also give improved hardnesses for the cases of acyclic DFA and NFA.) By Board and Pitt [Theoretical Computer Science 1992], this implies the hardness of properly learning DFAs assuming NP ≠ RP (the weakest possible assumption). This affirmatively answers an open problem raised 25 years ago in the paper of Pitt and Warmuth and the survey of Pitt [All 1989]. Prior to our results, this hardness followed only from the stronger hardness of improperly learning DFAs, which requires stronger assumptions, i.e., either a cryptographic or an average-case complexity assumption [Kearns and Valiant STOC 1989 and J. ACM 1994; Daniely et al. STOC 2014]. The second problem is edge-disjoint paths (EDP) on directed acyclic graphs (DAGs). This problem admits an O(√n)-approximation algorithm [Chekuri, Khanna, and Shepherd, Theory of Computing 2006] and a matching Ω(√n) integrality gap, but so far only an n^(1/26-ϵ) hardness factor was known [Chuzhoy et al., STOC 2007]. (Here n denotes the number of vertices.) Our techniques give a tight n^(1/2-ϵ) hardness for EDP on DAGs, thus resolving its approximability status. As by-products of our techniques: (i) We give a tight hardness of packing vertex-disjoint k-cycles for large k, complementing [Guruswami and Lee, ECCC 2014] and matching [Krivelevich et al., SODA 2005 and ACM Transactions on Algorithms 2007].
(ii) We give an alternative (and perhaps simpler) proof for the hardness of properly learning DNF, CNF and intersections of halfspaces [Alekhnovich et al., FOCS 2004 and J. Comput. Syst. Sci. 2008]. Our new concept reduces the task of proving hardnesses to merely analyzing graph product inequalities, which are often as simple as textbook exercises. This concept was inspired by, and can be viewed as a generalization of, the graph product subadditivity technique we previously introduced in SODA 2013. This more general concept might be useful in proving other hardness results as well.
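The standard form f(G * H) = f(G)·f(H) mentioned at the start can be seen concretely with the independence number under the lexicographic product, where α(G[H]) = α(G)·α(H). A brute-force check on tiny graphs (function names are ours; this illustrates the standard multiplicative form only, not the paper's reduction-composed products):

```python
from itertools import combinations

def lex_product(g_nodes, g_edges, h_nodes, h_edges):
    # lexicographic product G[H]: (g1,h1) ~ (g2,h2) iff g1 ~ g2 in G,
    # or g1 == g2 and h1 ~ h2 in H; edges are sets of frozenset pairs
    nodes = [(g, h) for g in g_nodes for h in h_nodes]
    edges = set()
    for (g1, h1), (g2, h2) in combinations(nodes, 2):
        if frozenset((g1, g2)) in g_edges or \
                (g1 == g2 and frozenset((h1, h2)) in h_edges):
            edges.add(frozenset(((g1, h1), (g2, h2))))
    return nodes, edges

def alpha(nodes, edges):
    # brute-force independence number: largest k with an independent k-subset
    for k in range(len(nodes), 0, -1):
        for s in combinations(nodes, k):
            if all(frozenset((u, v)) not in edges
                   for u, v in combinations(s, 2)):
                return k
    return 0
```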

FOCS Conference 2013 Conference Paper

Dynamic Approximate All-Pairs Shortest Paths: Breaking the O(mn) Barrier and Derandomization

  • Monika Henzinger
  • Sebastian Forster
  • Danupon Nanongkai

We study dynamic (1 + ϵ)-approximation algorithms for the all-pairs shortest paths problem in unweighted undirected n-node m-edge graphs under edge deletions. The fastest algorithm for this problem is a randomized algorithm with a total update time of Õ(mn) and constant query time by Roditty and Zwick (FOCS 2004). The fastest deterministic algorithm is from a 1981 paper by Even and Shiloach (JACM 1981); it has a total update time of O(mn^2) and constant query time. We improve these results as follows: (1) We present an algorithm with a total update time of Õ(n^(5/2)) and constant query time that has an additive error of two in addition to the (1 + ϵ) multiplicative error. This beats the previous Õ(mn) time when m = Ω(n^(3/2)). Note that the additive error is unavoidable since, even in the static case, an O(n^(3-δ))-time (so-called truly subcubic) combinatorial algorithm with a (1 + ϵ) multiplicative error cannot have an additive error less than 2 - ϵ, unless we make a major breakthrough for Boolean matrix multiplication (Dor, Halperin and Zwick FOCS 1996) and many other long-standing problems (Vassilevska Williams and Williams FOCS 2010). The algorithm can also be turned into a (2 + ϵ)-approximation algorithm (without an additive error) with the same time guarantees, improving the recent (3 + ϵ)-approximation algorithm with Õ(n^(5/2+O(1/√(log n)))) running time of Bernstein and Roditty (SODA 2011) in terms of both approximation and time guarantees. (2) We present a deterministic algorithm with a total update time of Õ(mn) and a query time of O(log log n). The algorithm has a multiplicative error of 1 + ϵ and gives the first improved deterministic algorithm since 1981. It also answers an open question raised by Bernstein in his STOC 2013 paper. To achieve our results, we introduce two new techniques: (1) a lazy Even-Shiloach tree algorithm, which maintains a bounded-distance shortest-paths tree on a certain type of emulator called a locally persevering emulator;
(2) a derandomization technique based on moving Even-Shiloach trees, used to derandomize the standard random set argument. These techniques might be of independent interest.
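The "standard random set argument" that technique (2) derandomizes is the following classical fact: sampling each vertex independently with probability Θ(log n / d) produces, with high probability, a set that hits every run of d consecutive vertices on every shortest path. A sketch of the sampling step (ours; the constant c is a hypothetical choice):

```python
import math
import random

def random_hitting_set(vertices, d, c=3.0, seed=0):
    # sample each vertex independently with probability min(1, c*ln(n)/d);
    # w.h.p. any d consecutive vertices (e.g., on a shortest path) contain
    # at least one sampled vertex, and the set has size about c*n*ln(n)/d
    rng = random.Random(seed)
    n = len(vertices)
    p = min(1.0, c * math.log(max(n, 2)) / d)
    return {v for v in vertices if rng.random() < p}
```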

FOCS Conference 2013 Conference Paper

Independent Set, Induced Matching, and Pricing: Connections and Tight (Subexponential Time) Approximation Hardnesses

  • Parinya Chalermsook
  • Bundit Laekhanukit
  • Danupon Nanongkai

We present a series of almost-settled inapproximability results for three fundamental problems. The first in our series is the subexponential-time inapproximability of the independent set problem, a question studied in the area of parameterized complexity. The second is the hardness of approximating the bipartite induced matching problem on bounded-degree bipartite graphs. The last in our series is the tight hardness of approximating the k-hypergraph pricing problem, a fundamental problem arising from the area of algorithmic game theory. In particular, assuming the Exponential Time Hypothesis, our two main results are: (1) For any r larger than some constant, any r-approximation algorithm for the independent set problem must run in at least 2^(n^(1-ε)/r^(1+ε)) time. This nearly matches the upper bound of 2^(n/r) [23]. It also improves some hardness results in the domain of parameterized complexity (e.g., [26], [19]). (2) For any k larger than some constant, there is no polynomial-time min{k^(1-ε), n^(1/2-ε)}-approximation algorithm for the k-hypergraph pricing problem, where n is the number of vertices in an input graph. This almost matches the upper bound of min{O(k), Õ(√n)} (by Balcan and Blum [3] and an algorithm in this paper). We note the interesting fact that, in contrast to the n^(1/2-ε) hardness for polynomial-time algorithms, the k-hypergraph pricing problem admits an n^δ approximation for any δ > 0 in quasi-polynomial time. This puts the problem in a rare approximability class in which approximability thresholds can be improved significantly by allowing algorithms to run in quasi-polynomial time. The proofs of our hardness results rely on unexpectedly tight connections between the three problems. First, we establish a connection between the first and second problems by proving a new graph-theoretic property related to the induced matching number of dispersers.
Then, we show that the n^(1/2-ε) hardness of the last problem follows from the nearly tight subexponential-time inapproximability of the first problem, illustrating a rare application of the second type of inapproximability result to the first one. Finally, to prove the subexponential-time inapproximability of the first problem, we construct a new PCP with several properties: it is sparse and has nearly-linear size, large degree, and small free-bit complexity. Our PCP requires no ground-breaking ideas but rather a very careful assembly of existing ingredients from the PCP literature.