Arrow Research search

Author name cluster

Rasmus Kyng

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

26 papers
2 author rows

Possible papers

26

MFCS Conference 2025 Conference Paper

Almost-Linear Time Algorithms for Partially Dynamic Graphs (Invited Talk)

  • Rasmus Kyng

A partially dynamic graph is a graph that undergoes edge insertions or deletions, but not both. In this talk, I present a unifying framework that yields the first almost-optimal, almost-linear time algorithms for many well-studied problems on partially dynamic graphs [Chen-Kyng-Liu-Meierhans-Probst Gutenberg, STOC’24; Brand-Chen-Kyng-Liu-Meierhans-Probst Gutenberg-Sachdeva, FOCS’24]. These problems include cycle detection, strongly connected components, s-t distances, transshipment, bipartite matching, maximum flow, and minimum-cost flow. We achieve this unification by solving the partially dynamic threshold minimum-cost flow problem. We solve these problems by combining a partially dynamic ℓ₁ interior point method (Brand-Liu-Sidford, STOC'23) with powerful new data structures that solve fully dynamic APSP and min-cut with sub-polynomial approximation quality and sub-polynomial update and query time.

FOCS Conference 2025 Conference Paper

Deterministic Almost-Linear-Time Gomory-Hu Trees

  • Amir Abboud
  • Rasmus Kyng
  • Jason Li 0006
  • Debmalya Panigrahi
  • Maximilian Probst Gutenberg
  • Thatchaphol Saranurak
  • Weixuan Yuan
  • Wuwei Yuan

Given an undirected, weighted graph $G=(V, E, w)$, a Gomory-Hu tree or cut tree (Gomory and Hu, 1961) is a tree T over the vertex set V such that for every pair of vertices $s, t \in V$, the ($s, t$) min-cut in T is also an ($s, t$) min-cut in G and has the same value. In this article, we give the first deterministic almost-linear-time algorithm for constructing a Gomory-Hu tree. Our algorithm runs in $m^{1+o(1)}$ time, where m denotes the number of edges in the input graph G; this is clearly optimal up to the $m^{o(1)}$ term in the running time. Prior to our work, the best deterministic algorithm for this problem dated back to the original algorithm of Gomory and Hu, which runs in $n m^{1+o(1)}$ time using current maxflow algorithms. In fact, our algorithm is also the first almost-linear-time deterministic algorithm for even simpler problems, such as finding the k-edge-connected components of a graph. Our new result hinges on two separate and novel components that each introduce a distinct set of de-randomization tools of independent interest:
  • a deterministic reduction from the all-pairs min-cuts problem to the single-source min-cuts problem, incurring only sub-polynomial overhead, and
  • a deterministic almost-linear time algorithm for the single-source min-cuts problem.
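For context, the classic construction that this paper accelerates and derandomizes is simple to sketch: Gusfield's variant of the Gomory-Hu algorithm builds a cut tree from n−1 max-flow computations. Below is a minimal illustrative sketch (not the paper's algorithm), using a dense capacity matrix and Edmonds-Karp max-flow; all function names are ours:

```python
from collections import deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp; returns (flow value, vertices on s's side of a min cut)."""
    n = len(cap)
    res = [row[:] for row in cap]
    flow = 0
    while True:
        par = [-1] * n
        par[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if par[v] == -1 and res[u][v] > 0:
                    par[v] = u
                    q.append(v)
        if par[t] == -1:
            break
        # find the bottleneck along the augmenting path, then push flow
        b, v = float('inf'), t
        while v != s:
            b = min(b, res[par[v]][v]); v = par[v]
        v = t
        while v != s:
            res[par[v]][v] -= b; res[v][par[v]] += b; v = par[v]
        flow += b
    side = {v for v in range(n) if par[v] != -1}
    return flow, side

def gomory_hu(cap):
    """Gusfield's simplification: n-1 max-flow calls build an equivalent flow tree,
    returned as parent/weight arrays rooted at vertex 0."""
    n = len(cap)
    parent = [0] * n
    weight = [0] * n
    for i in range(1, n):
        f, side = max_flow_min_cut(cap, i, parent[i])
        weight[i] = f
        for j in range(i + 1, n):
            if parent[j] == parent[i] and j in side:
                parent[j] = i
    return parent, weight
```

On the resulting tree, the s-t min-cut value of the original graph is the minimum edge weight on the tree path between s and t.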

FOCS Conference 2025 Conference Paper

Random-Shift Revisited: Tight Approximations for Tree Embeddings and ℓ₁-Oblivious Routings

  • Rasmus Kyng
  • Maximilian Probst Gutenberg
  • Tim Rieder

We present a new and surprisingly simple analysis of random-shift decompositions, originally proposed by Miller, Peng, and Xu [SPAA’13]: we show that decompositions for exponentially growing scales $D=2^{0}, 2^{1}, \ldots, 2^{\log _{2}(\operatorname{diam}(G))}$ have a tight constant trade-off between distance-to-center and separation probability on average across the distance scales, as opposed to a necessary $\Omega(\log n)$ trade-off at any single scale. This almost immediately yields a way to compute a tree T for a graph G that preserves all graph distances with expected $O(\log n)$ stretch. This gives an alternative proof, with tight approximation bounds, of the seminal result by Fakcharoenphol, Rao, and Talwar [STOC’03], matching the $\Omega(\log n)$ lower bound of Bartal [FOCS’96]. Our insights can also be used to refine the analysis of a simple $\ell_{1}$-oblivious routing proposed in [FOCS’22], yielding a tight $O(\log n)$ competitive ratio. Our algorithms for constructing tree embeddings and $\ell_{1}$-oblivious routings can be implemented in the sequential, parallel, and distributed settings with optimal work, depth, and rounds, up to polylogarithmic factors. Previously, fast algorithms with tight guarantees were not known for tree embeddings in the parallel and distributed settings, and for $\ell_{1}$-oblivious routings, not even a fast sequential algorithm was known.
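The single-scale building block is short to state in code. Here is a toy sketch of a Miller-Peng-Xu-style random-shift clustering at one scale D on an unweighted graph (the rate choice beta = 2/D, the truncation at D, and all names are our own illustration, not the paper's analysis): each vertex draws an exponential shift, and every vertex joins the center minimizing dist(center, v) − shift(center). By construction, every vertex lands within distance D of its center.

```python
import random
from collections import deque

def bfs_dist(adj, s):
    """Unweighted single-source distances from s."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def random_shift_clusters(adj, D, seed=0):
    """Each vertex u draws an exponential shift (truncated at D); every vertex v
    joins the center u minimizing dist(u, v) - shift(u)."""
    rng = random.Random(seed)
    n = len(adj)
    beta = 2.0 / D  # illustrative rate for scale D
    shift = [min(rng.expovariate(beta), D) for _ in range(n)]
    dist = [bfs_dist(adj, u) for u in range(n)]
    cluster = []
    for v in range(n):
        c = min(range(n), key=lambda u: dist[u].get(v, float('inf')) - shift[u])
        cluster.append(c)
    return cluster, shift, dist
```

Since v itself is a candidate center with key −shift(v) ≤ 0, the chosen center c always satisfies dist(c, v) ≤ shift(c) ≤ D.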

STOC Conference 2024 Conference Paper

A Dynamic Shortest Paths Toolbox: Low-Congestion Vertex Sparsifiers and Their Applications

  • Rasmus Kyng
  • Simon Meierhans
  • Maximilian Probst Gutenberg

We present a general toolbox, based on new vertex sparsifiers, for designing data structures that maintain shortest paths in graphs undergoing edge insertions and/or deletions. In particular, we obtain the following results: the first data structure to maintain $m^{o(1)}$-approximate all-pairs shortest paths (APSP) in an m-edge graph undergoing edge insertions and deletions with worst-case update time $m^{o(1)}$ and query time Õ(1); a data structure to maintain a tree T whose diameter is at most a subpolynomial factor larger than that of the underlying graph G undergoing edge insertions and deletions, where each update is handled in amortized subpolynomial time; and a simpler and more efficient data structure to maintain a (1+ε)-approximate single-source shortest paths (SSSP) tree T in a graph undergoing edge deletions in amortized $m^{o(1)}$ time per update. All our data structures are deterministic. For the last two data structures, we further have that while the trees T are not subgraphs of G, they do embed into G with small edge congestion. This is in stark contrast to previous approaches and is particularly useful for algorithms that use these data structures internally to route flow along shortest paths. To illustrate the power of our new toolbox, we show that our SSSP data structure can be used directly to give a deterministic implementation of the classic MWU algorithm for approximate undirected minimum-cost flow running in time $m^{1+o(1)}$. Previously, Bernstein-Gutenberg-Saranurak [FOCS’21] had built a randomized data structure achieving $m^{1+o(1)}$ time w.h.p. By using our SSSP data structure in the recent almost-linear time algorithm for computing Gomory-Hu trees by Abboud-Li-Panigrahi-Saranurak [FOCS’23], we simplify their algorithm significantly and slightly improve its runtime.
To obtain our toolbox, we give the first algorithm that, given a graph G undergoing edge insertions and deletions and a dynamic terminal set A, maintains a vertex sparsifier H that approximately preserves distances between terminals in A, consists of at most $|A| m^{o(1)}$ vertices and edges, and can be updated in worst-case time $m^{o(1)}$. Crucially, our vertex sparsifier construction allows us to maintain a low edge-congestion embedding of H into G. This low-congestion embedding is needed when our toolbox is used inside data structures that in turn implement algorithms routing flow along shortest paths. A concurrent work, Chen-Kyng-Liu-Meierhans-Probst Gutenberg [STOC’24], takes our toolbox as the starting point for developing new data structures that solve min-ratio cycle problems on fully dynamic graphs, which in turn leads to the first almost-linear time algorithms for a host of problems on incremental graphs, including cycle detection, maintaining SCCs, s-t shortest paths, and minimum-cost flow.

FOCS Conference 2024 Conference Paper

Almost-Linear Time Algorithms for Decremental Graphs: Min-Cost Flow and More via Duality

  • Jan van den Brand
  • Li Chen 0028
  • Rasmus Kyng
  • Yang P. Liu
  • Simon Meierhans
  • Maximilian Probst Gutenberg
  • Sushant Sachdeva

We give the first almost-linear total time algorithm for deciding if a flow of cost at most $F$ still exists in a directed graph, with edge costs and capacities, undergoing decremental updates, i.e., edge deletions, capacity decreases, and cost increases. This implies almost-linear time algorithms for approximating the minimum-cost flow value and the s-t distance on such decremental graphs. Our framework additionally allows us to maintain decremental strongly connected components in almost-linear time deterministically. These algorithms also improve over the current best known runtimes for statically computing minimum-cost flow, in both the randomized and deterministic settings. We obtain our algorithms by taking the dual perspective, which yields cut-based algorithms. More precisely, our algorithm computes the flow via a sequence of $m^{1+o(1)}$ dynamic min-ratio cut problems, the dual analog of the dynamic min-ratio cycle problem that underlies recent fast algorithms for minimum-cost flow. Our main technical contribution is a new data structure that returns an approximately optimal min-ratio cut in amortized $m^{o(1)}$ time by maintaining a tree-cut sparsifier. This is achieved by devising a new algorithm to maintain the dynamic expander hierarchy of [Goranci-Räcke-Saranurak-Tan, SODA 2021] that also works in capacitated graphs. All our algorithms are deterministic, though they can be sped up further using randomized techniques while still working against an adaptive adversary.

STOC Conference 2024 Conference Paper

Almost-Linear Time Algorithms for Incremental Graphs: Cycle Detection, SCCs, s-t Shortest Path, and Minimum-Cost Flow

  • Li Chen 0028
  • Rasmus Kyng
  • Yang P. Liu
  • Simon Meierhans
  • Maximilian Probst Gutenberg

We give the first almost-linear time algorithms for several problems in incremental graphs, including cycle detection, strongly connected component maintenance, s-t shortest path, maximum flow, and minimum-cost flow. To solve these problems, we give a deterministic data structure that returns an $m^{o(1)}$-approximate minimum-ratio cycle in fully dynamic graphs in amortized $m^{o(1)}$ time per update. Combining this with the interior point method framework of Brand-Liu-Sidford (STOC 2023) gives the first almost-linear time algorithm for deciding the first update in an incremental graph after which the cost of the minimum-cost flow attains value at most some given threshold F. By rather direct reductions to minimum-cost flow, we are then able to solve the problems in incremental graphs mentioned above. Our new data structure also leads to a modular and deterministic almost-linear time algorithm for minimum-cost flow by removing the need for complicated modeling of a restricted adversary, in contrast to the recent randomized and deterministic algorithms for minimum-cost flow in Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva (FOCS 2022) and Brand-Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva-Sidford (FOCS 2023). At a high level, our algorithm dynamizes the ℓ₁-oblivious routing of Rozhoň-Grunau-Haeupler-Zuzic-Li (STOC 2022) and develops a method to extract an approximate minimum-ratio cycle from the structure of the oblivious routing. To maintain the oblivious routing, we use tools from the concurrent work of Kyng-Meierhans-Probst Gutenberg (STOC 2024), which designed vertex sparsifiers for shortest paths, in order to maintain a sparse neighborhood cover in fully dynamic graphs. To find a cycle, we first show that an approximate minimum-ratio cycle can be represented as a fundamental cycle on a small set of trees resulting from the oblivious routing. Then, we find a cycle whose quality is comparable to the best tree cycle.
This final cycle query step involves vertex and edge sparsification procedures reminiscent of the techniques introduced in Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva (FOCS 2022), but crucially requires a more powerful dynamic spanner, which can handle far more edge insertions than prior work. We build such a spanner via a construction that hearkens back to the classic greedy spanner algorithm of Althöfer-Das-Dobkin-Joseph-Soares (Discrete & Computational Geometry 1993).

FOCS Conference 2023 Conference Paper

A Deterministic Almost-Linear Time Algorithm for Minimum-Cost Flow

  • Jan van den Brand
  • Li Chen 0028
  • Richard Peng
  • Rasmus Kyng
  • Yang P. Liu
  • Maximilian Probst Gutenberg
  • Sushant Sachdeva
  • Aaron Sidford

We give a deterministic $m^{1+o(1)}$ time algorithm that computes exact maximum flows and minimum-cost flows on directed graphs with m edges and polynomially bounded integral demands, costs, and capacities. As a consequence, we obtain the first running time improvement for deterministic algorithms that compute maximum flow in graphs with polynomially bounded capacities since the work of Goldberg-Rao [J. ACM ’98]. Our algorithm builds on the framework of Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva [FOCS ’22] that computes an optimal flow by computing a sequence of $m^{1+o(1)}$ approximate undirected minimum-ratio cycles. We develop a deterministic dynamic graph data structure to compute such a sequence of minimum-ratio cycles in amortized $m^{o(1)}$ time per edge update. Our key technical contributions are deterministic analogues of the vertex sparsification and edge sparsification components of the data structure from Chen et al. For the vertex sparsification component, we give a method to avoid the randomness in Chen et al., which involved sampling random trees to recurse on. For the edge sparsification component, we design a deterministic algorithm that maintains an embedding of a dynamic graph into a sparse spanner. We also show how our dynamic spanner can be applied to give a deterministic data structure that maintains a fully dynamic low-stretch spanning tree on graphs with polynomially bounded edge lengths, with subpolynomial average stretch and subpolynomial amortized time per edge update.

SODA Conference 2023 Conference Paper

Maintaining Expander Decompositions via Sparse Cuts

  • Yiding Hua
  • Rasmus Kyng
  • Maximilian Probst Gutenberg
  • Zihang Wu

In this article, we show that the algorithm that maintains expander decompositions in graphs undergoing edge deletions directly, by repeatedly removing sparse cuts, can be made efficient. Formally, for an m-edge undirected graph G, we say a cut (S, V∖S) is ϕ-sparse if |E(S, V∖S)| < ϕ · min(vol(S), vol(V∖S)). A ϕ-expander decomposition of G is a partition of V into sets X₁, X₂, …, X_k such that each cluster G[X_i] contains no ϕ-sparse cut (meaning it is a ϕ-expander), with Õ(ϕm) edges crossing between clusters. A natural way to compute a ϕ-expander decomposition is to decompose clusters along ϕ-sparse cuts until no cluster contains such a cut. We show that even in graphs undergoing edge deletions, a slight relaxation of this meta-algorithm can be implemented efficiently with amortized update time m^{o(1)}/ϕ². Our approach naturally extends to maintaining directed ϕ-expander decompositions and ϕ-expander hierarchies, and thus gives a unifying framework while having simpler proofs than previous state-of-the-art work. In all settings, our algorithm matches the run-times of previous algorithms up to subpolynomial factors. Moreover, our algorithm provides stronger guarantees for ϕ-expander decompositions. For example, for graphs undergoing edge deletions, our approach is the first to maintain a dynamic expander decomposition where each updated decomposition is a refinement of the previous decomposition, and the first to guarantee a sublinear ϕ·m^{1+o(1)} bound on the total number of edges that cross between clusters across the entire sequence of dynamic updates. Our techniques also give by far the simplest deterministic algorithms for maintaining strongly connected components (SCCs) in directed graphs undergoing edge deletions, and for maintaining connectivity in undirected fully dynamic graphs, both matching the current state-of-the-art run-times up to subpolynomial factors. The full version of the paper can be accessed at https://arxiv.org/abs/2204.02519
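The static meta-algorithm itself is easy to state. Below is a brute-force sketch (exponential time, tiny graphs only; conductance is used as the sparsity measure, and the helper names are ours, not the paper's): repeatedly find a ϕ-sparse cut inside some cluster and split along it.

```python
from itertools import combinations

def sparse_cut(adj, cluster, phi):
    """Brute force: return a subset S of `cluster` whose cut inside the cluster
    has conductance < phi, or None if the cluster is a phi-expander."""
    cl = sorted(cluster)
    deg = {u: sum(1 for v in adj[u] if v in cluster) for u in cl}  # degrees inside the cluster
    for r in range(1, len(cl)):
        for S in combinations(cl, r):
            S = set(S)
            cross = sum(1 for u in S for v in adj[u] if v in cluster and v not in S)
            vol = min(sum(deg[u] for u in S), sum(deg[u] for u in cl if u not in S))
            if vol > 0 and cross < phi * vol:
                return S
    return None

def expander_decomposition(adj, phi):
    """Split clusters along phi-sparse cuts until every cluster is a phi-expander."""
    clusters, done = [set(range(len(adj)))], []
    while clusters:
        C = clusters.pop()
        S = sparse_cut(adj, C, phi)
        if S is None:
            done.append(C)
        else:
            clusters += [S, C - S]
    return done
```

On two triangles joined by a single bridge edge, the bridge cut is sparse and the decomposition returns the two triangles, each a good expander.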

FOCS Conference 2022 Conference Paper

Derandomizing Directed Random Walks in Almost-Linear Time

  • Rasmus Kyng
  • Simon Meierhans
  • Maximilian Probst Gutenberg

In this article, we present the first deterministic solver for directed Laplacian systems Lx = b that runs in time almost-linear in the number of non-zero entries of L. Previous reductions imply the first deterministic almost-linear time algorithms for computing various fundamental quantities on directed graphs, including stationary distributions, personalized PageRank, hitting times, and escape probabilities. We obtain these results by introducing partial symmetrization, a new technique that makes the Laplacian of an Eulerian directed graph “less directed” in a useful sense, which may be of independent interest. The usefulness of this technique comes from two key observations: firstly, the partially symmetrized Laplacian preconditions the original Eulerian Laplacian well in Richardson iteration, enabling us to construct a solver for the original matrix from a solver for the partially symmetrized one. Secondly, the undirected structure in the partially symmetrized Laplacian makes it possible to sparsify the matrix very crudely, i.e., with large spectral error, and still show that Richardson iteration converges when using the sparsified matrix as a preconditioner. This allows us to develop deterministic sparsification tools for the partially symmetrized Laplacian. Together with previous reductions from directed Laplacians to Eulerian Laplacians, our technique results in the first deterministic almost-linear time algorithm for solving linear equations in directed Laplacians. To emphasize the generality of our new technique, we show that two prominent existing (randomized) frameworks for solving linear equations in Eulerian Laplacians can be derandomized in this way: the squaring-based framework of [1] and the sparsified Cholesky-based framework of [2].
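The starting observation can be checked numerically. A small numpy sketch (the convention L = D_out − A, the example graph, and the mixing weight are our own illustration): for an Eulerian digraph, both row and column sums of L vanish, and the full symmetrization U = (L + Lᵀ)/2 is an undirected Laplacian, hence positive semidefinite; partial symmetrization mixes such an undirected part into L.

```python
import numpy as np

# An Eulerian digraph: every vertex has in-degree == out-degree.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]
n = 4
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] += 1.0

L = np.diag(A.sum(axis=1)) - A   # directed Laplacian, convention L = D_out - A
U = (L + L.T) / 2                # full symmetrization: an undirected Laplacian
L_alpha = L + 0.5 * U            # a partially symmetrized operator (illustrative mixing weight)
```

Row sums of L vanish by construction; column sums vanish exactly because the graph is Eulerian, which is what makes U a genuine undirected Laplacian.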

SODA Conference 2022 Conference Paper

Incremental SSSP for Sparse Digraphs Beyond the Hopset Barrier

  • Rasmus Kyng
  • Simon Meierhans
  • Maximilian Probst Gutenberg

Given a directed, weighted graph G = (V, E) undergoing edge insertions, the incremental single-source shortest paths (SSSP) problem asks for the maintenance of approximate distances from a dedicated source s while optimizing the total time required to process the insertion sequence of m edges. Recently, Gutenberg, Williams and Wein [STOC'20] introduced a deterministic Õ(n²) algorithm for this problem, achieving near-linear time for very dense graphs. For sparse graphs, Chechik and Zhang [SODA'21] recently presented a deterministic Õ(m^{5/3}) algorithm and a faster adaptive randomized algorithm. This algorithm is remarkable for two reasons: 1) in very sparse graphs it reaches the directed hopset barrier of Ω(n^{3/2}) that applies to all previous approaches for partially dynamic SSSP [STOC'14, SODA'20, FOCS'20], and 2) it does not resort to a directed hopset technique itself. In this article we introduce propagation synchronization, a new technique for controlling the error build-up on paths throughout batches of insertions. This leads us to a significant improvement of the approach in [SODA'21], yielding a deterministic O(m^{3/2}) algorithm for the problem. By a very careful combination of our new technique with the sampling approach from [SODA'21], we further obtain an adaptive randomized algorithm with total update time Õ(m^{4/3}). This is the first partially dynamic SSSP algorithm in sparse graphs to bypass the notorious directed hopset barrier, which is often seen as the fundamental challenge on the way to truly near-linear time algorithms.
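For contrast with these bounds, the exact folklore baseline is easy to write down: in an incremental graph distances only decrease, so each insertion can simply propagate the decrease it causes, Dijkstra-style. A sketch (class and method names are ours; total update time is Θ(nm) in the worst case, far from the bounds above):

```python
import heapq

class IncrementalSSSP:
    """Exact distances from a source under edge insertions: each insert
    propagates only the distance decreases it causes."""

    def __init__(self, n, source):
        self.adj = [[] for _ in range(n)]
        self.dist = [float('inf')] * n
        self.dist[source] = 0.0

    def insert(self, u, v, w):
        self.adj[u].append((v, w))
        if self.dist[u] + w < self.dist[v]:
            # the new edge improves v; propagate the improvement outward
            self.dist[v] = self.dist[u] + w
            heap = [(self.dist[v], v)]
            while heap:
                d, x = heapq.heappop(heap)
                if d > self.dist[x]:
                    continue  # stale heap entry
                for y, wy in self.adj[x]:
                    if d + wy < self.dist[y]:
                        self.dist[y] = d + wy
                        heapq.heappush(heap, (self.dist[y], y))
```

Insertions that do not improve any distance cost O(1); the hard part, which the paper addresses, is bounding the total work of the improving ones.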

FOCS Conference 2022 Conference Paper

Maximum Flow and Minimum-Cost Flow in Almost-Linear Time

  • Li Chen 0028
  • Rasmus Kyng
  • Yang P. Liu
  • Richard Peng
  • Maximilian Probst Gutenberg
  • Sushant Sachdeva

We give an algorithm that computes exact maximum flows and minimum-cost flows on directed graphs with m edges and polynomially bounded integral demands, costs, and capacities in $m^{1+o(1)}$ time. Our algorithm builds the flow through a sequence of $m^{1+o(1)}$ approximate undirected minimum-ratio cycles, each of which is computed and processed in amortized $m^{o(1)}$ time using a new dynamic graph data structure. Our framework extends to algorithms running in $m^{1+o(1)}$ time for computing flows that minimize general edge-separable convex functions to high accuracy. This gives almost-linear time algorithms for several problems including entropy-regularized optimal transport, matrix scaling, p-norm flows, and p-norm isotonic regression on arbitrary directed acyclic graphs.

SODA Conference 2022 Conference Paper

Scalar and Matrix Chernoff Bounds from ℓ ∞ -Independence

  • Tali Kaufman
  • Rasmus Kyng
  • Federico Soldà

We present new scalar and matrix Chernoff-style concentration bounds for a broad class of probability distributions over the binary hypercube {0,1}^n. Motivated by recent tools developed for the study of mixing times of Markov chains on discrete distributions, we say that a distribution is ℓ∞-independent when the infinity norm of its influence matrix is bounded by a constant. We show that any distribution which is ℓ∞-independent satisfies a matrix Chernoff bound that matches the matrix Chernoff bound for independent random variables due to Tropp. Our matrix Chernoff bound is a broad generalization and strengthening of the matrix Chernoff bound of Kyng and Song (FOCS'18). Using our bound, we can conclude as a corollary that a union of O(log |V|) random spanning trees gives a spectral sparsifier of a graph with |V| vertices with high probability, matching results for independent edge sampling and matching lower bounds from Kyng and Song.

SODA Conference 2020 Conference Paper

Packing LPs are Hard to Solve Accurately, Assuming Linear Equations are Hard

  • Rasmus Kyng
  • Di Wang 0005
  • Peng Zhang 0052

We study the complexity of approximately solving packing linear programs. In the Real RAM model, it is known how to solve packing LPs with N non-zeros in time Õ(N/ϵ). We investigate whether the ϵ dependence in the running time can be improved. Our first main result relates the difficulty of this problem to hardness assumptions for solving dense linear equations. We show that, in the Real RAM model, unless linear equations in n × n matrices with condition number O(n^{10}) can be solved to ϵ accuracy faster than Õ(n^{2.01} log(1/ϵ)), no algorithm (1−ϵ)-approximately solves an O(n) × O(n) packing LP (where N = O(n²)) in time Õ(n² ϵ^{−0.0003}). It would be surprising to solve linear equations in the Real RAM model this fast, as we currently cannot solve them faster than Õ(n^ω), where ω denotes the exponent in the running time for matrix multiplication in the Real RAM model (and equivalently matrix inversion). The current best bound on this exponent is roughly ω ≤ 2.372. Note, however, that a fast solver for linear equations does not directly imply faster matrix multiplication. But our reduction shows that if fast and accurate packing LP solvers exist, then either linear equations can be solved much faster than matrix multiplication or the matrix multiplication constant is very close to 2. Instantiating the same reduction with different parameters, we show that unless linear equations in matrices with condition number O(n^{1.5}) can be solved to ϵ accuracy faster than Õ(n^{2.372} log(1/ϵ)), no algorithm (1−ϵ)-approximately solves packing LPs in time Õ(n² ϵ^{−0.067}). Thus smaller improvements in the exponent for ϵ in the running time of packing LP solvers also imply improvements in the current state-of-the-art for solving linear equations.
Our second main result relates the difficulty of approximately solving packing linear programs to hardness assumptions for solving sparse linear equations: in the Real RAM model, unless well-conditioned sparse systems of linear equations can be solved faster than Õ((number of non-zeros of the matrix) · √(condition number)), no algorithm (1−ϵ)-approximately solves packing LPs with N non-zeros in time Õ(N ϵ^{−0.165}). This running time is obtained by the classical Conjugate Gradient algorithm with a standard analysis. Our reduction implies that if sufficiently good packing LP solvers exist, then this long-standing best-known bound on the running time for solving well-conditioned systems of linear equations is sub-optimal. While we prove results in the Real RAM model, our condition number assumptions ensure that our results can be translated to fixed point arithmetic with (log n)^{O(1)} bits per number.
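The Conjugate Gradient baseline referenced above is short to state. A minimal numpy sketch of plain CG for a symmetric positive definite system (the 20 × 20 test matrix with condition number 10 is our own illustration); its iteration count scales with the square root of the condition number, which is what gives the nnz · √κ total work mentioned in the abstract:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain CG for a symmetric positive definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD test matrix with eigenvalues in [1, 10], i.e. condition number 10.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((20, 20)))
A = Q @ np.diag(np.linspace(1.0, 10.0, 20)) @ Q.T
b = rng.standard_normal(20)
x = conjugate_gradient(A, b)
```

Each iteration costs one matrix-vector product, i.e. O(nnz) time for a sparse matrix.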

FOCS Conference 2018 Conference Paper

A Matrix Chernoff Bound for Strongly Rayleigh Distributions and Spectral Sparsifiers from a few Random Spanning Trees

  • Rasmus Kyng
  • Zhao Song 0002

Strongly Rayleigh distributions are a class of negatively dependent distributions of binary-valued random variables [Borcea, Brändén, Liggett JAMS 09]. Recently, these distributions have played a crucial role in the analysis of algorithms for fundamental graph problems, e.g., the Traveling Salesman Problem [Gharan, Saberi, Singh FOCS 11]. We prove a new matrix Chernoff bound for Strongly Rayleigh distributions. As an immediate application, we show that adding together the Laplacians of ε^{−2} log² n random spanning trees gives a (1±ε) spectral sparsifier of a graph Laplacian with high probability. Thus, we positively answer an open question posed in [Batson, Spielman, Srivastava, Teng JACM 13]. Our number of spanning trees for a spectral sparsifier matches the number of spanning trees required to obtain a cut sparsifier in [Fung, Hariharan, Harvey, Panigrahi STOC 11]. The previous best result came from naively applying a classical matrix Chernoff bound, which requires ε^{−2} n log n spanning trees. For the tree averaging procedure to agree with the original graph Laplacian in expectation, each edge of the tree should be reweighted by the inverse of the edge leverage score in the original graph. We also show that when using this reweighting of the edges, the Laplacian of a single random tree is bounded above in the PSD order by the original graph Laplacian times a factor of log n with high probability, i.e., L_T ⪯ O(log n)·L_G. We show a lower bound that almost matches our last result, namely that in some graphs, with high probability, the random spanning tree is not bounded above in the spectral order by (log n / log log n) times the original graph Laplacian. We also show a lower bound that ε^{−2} log n spanning trees are necessary to get a (1±ε) spectral sparsifier.
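The reweighting step mentioned above relies on edge leverage scores ℓ_e = w_e · R_eff(e). A small numpy sketch computing them from the Laplacian pseudoinverse (example graph and function name are ours); a quick sanity check is Foster's theorem, which says the scores of a connected n-vertex graph sum to n − 1:

```python
import numpy as np

def leverage_scores(n, edges):
    """Edge leverage scores l_e = w_e * R_eff(e), via the Laplacian pseudoinverse."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lp = np.linalg.pinv(L)
    scores = []
    for u, v, w in edges:
        chi = np.zeros(n)
        chi[u], chi[v] = 1.0, -1.0
        scores.append(w * chi @ Lp @ chi)  # w_e * effective resistance of (u, v)
    return scores

# Complete graph K4 with unit weights: every edge has R_eff = 2/n = 0.5.
edges = [(u, v, 1.0) for u in range(4) for v in range(u + 1, 4)]
scores = leverage_scores(4, edges)
```

In the averaging scheme above, a tree edge e gets weight w_e/ℓ_e, which makes the expected tree Laplacian equal to L_G.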

STOC Conference 2018 Conference Paper

Incomplete nested dissection

  • Rasmus Kyng
  • Richard Peng
  • Robert Schwieterman
  • Peng Zhang 0052

We present an asymptotically faster algorithm for solving linear systems in well-structured 3-dimensional truss stiffness matrices. These linear systems arise from linear elasticity problems and can be viewed as extensions of graph Laplacians into higher dimensions. Faster solvers for the 2-D variants of such systems have been studied using generalizations of tools for solving graph Laplacians [Daitch-Spielman CSC’07, Shklarski-Toledo SIMAX’08]. Given a 3-dimensional truss over n vertices which is formed from a union of k convex structures (tetrahedral meshes) with bounded aspect ratios, whose individual tetrahedrons are also in some sense well-conditioned, our algorithm solves a linear system in the associated stiffness matrix up to accuracy ε in time O(k^{1/3} n^{5/3} log(1/ε)). This asymptotically improves the O(n²) running time of Nested Dissection for all k ≪ n. We also give a result that improves on Nested Dissection even when we allow any aspect ratio for each of the k convex structures (but we still require well-conditioned individual tetrahedrons). In this regime, we improve on Nested Dissection for k ≪ n^{1/44}. The key idea of our algorithm is to combine nested dissection and support theory. Both of these techniques for solving linear systems are well studied, but usually separately. Our algorithm decomposes a 3-dimensional truss into separate and balanced regions with small boundaries. We then bound the spectrum of each such region separately, and utilize such bounds to obtain improved algorithms by preconditioning with partial states of separator-based Gaussian elimination.

FOCS Conference 2018 Conference Paper

Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations

  • Michael B. Cohen
  • Jonathan A. Kelner
  • Rasmus Kyng
  • John Peebles
  • Richard Peng
  • Anup B. Rao
  • Aaron Sidford

In this paper, we show how to solve directed Laplacian systems in nearly-linear time. Given a linear system in an n × n Eulerian directed Laplacian with m nonzero entries, we show how to compute an ε-approximate solution in time O(m log^{O(1)}(n) log(1/ε)). Through reductions from [Cohen et al. FOCS'16], this gives the first nearly-linear time algorithms for computing ε-approximate solutions to row or column diagonally dominant linear systems (including arbitrary directed Laplacians) and computing ε-approximations to various properties of random walks on directed graphs, including stationary distributions, personalized PageRank vectors, hitting times, and escape probabilities. These bounds improve upon the recent almost-linear algorithms of [Cohen et al. STOC'17], which gave an algorithm to solve Eulerian Laplacian systems in time O((m + n·2^{O(√(log n · log log n))}) log^{O(1)}(n ε^{−1})). To achieve our results, we provide a structural result that we believe is of independent interest. We show that Eulerian Laplacians (and therefore the Laplacians of all strongly connected directed graphs) have sparse approximate LU-factorizations. That is, for every such directed Laplacian there are lower and upper triangular matrices, each with at most Õ(n) nonzero entries, such that their product spectrally approximates the directed Laplacian in an appropriate norm. This claim can be viewed as an analog of recent work on sparse Cholesky factorizations of Laplacians of undirected graphs. We show how to construct such factorizations in nearly-linear time and prove that once constructed they yield nearly-linear time algorithms for solving directed Laplacian systems.
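The factorization claim can be illustrated densely on a toy example (the paper's point is that an *approximate* factorization can be kept sparse; this sketch is exact and dense, with our own L = D_out − A convention and the last vertex grounded so that elimination needs no pivoting):

```python
import numpy as np

def lu_no_pivot(M):
    """Doolittle LU without pivoting; safe here because the grounded
    Laplacian is weakly diagonally dominant."""
    n = len(M)
    U = M.astype(float).copy()
    Lo = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            Lo[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= Lo[i, k] * U[k, k:]
            U[i, k] = 0.0
    return Lo, U

# Eulerian digraph (in-degree == out-degree everywhere), L = D_out - A.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]
A = np.zeros((4, 4))
for u, v in edges:
    A[u, v] += 1.0
L = np.diag(A.sum(axis=1)) - A
Lg = L[:3, :3]                     # ground the last vertex to make L nonsingular
lower, upper = lu_no_pivot(Lg)
```

Given such a factorization, solving Lg x = b reduces to one forward and one backward triangular solve.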

SODA Conference 2017 Conference Paper

A Framework for Analyzing Resparsification Algorithms

  • Rasmus Kyng
  • Jakub Pachocki
  • Richard Peng
  • Sushant Sachdeva

A spectral sparsifier of a graph G is a sparser graph H that approximately preserves the quadratic form of G, i.e., for all vectors x, x^T L_G x ≈ x^T L_H x, where L_G and L_H denote the respective graph Laplacians. Spectral sparsifiers generalize cut sparsifiers and have found many applications in designing graph algorithms. In recent years, there has been interest in computing spectral sparsifiers in semi-streaming and dynamic settings. Natural algorithms in these settings often involve repeated sparsification of a graph, and in turn accumulation of errors across these steps. We present a framework for analyzing algorithms that perform repeated sparsifications but only incur the error corresponding to a single sparsification step, leading to better results for many of these resparsification-based algorithms. As an application, we show how to maintain a spectral sparsifier in the semi-streaming setting: we present a simple algorithm that, for a graph G on n vertices and m edges, computes a spectral sparsifier of G with O(n log n) edges in a single pass over G, using only O(n log n) space and O(m log² n) total time. This improves on previous best semi-streaming algorithms for both spectral and cut sparsifiers by a factor of log n in both space and runtime. The algorithm also extends to semi-streaming row sampling for general PSD matrices. As another application, we use this framework to combine a spectral sparsification algorithm by Koutis with improved spanner constructions to give a parallel algorithm for constructing O(n log² n log log n)-sized spectral sparsifiers in O(m log² n log log n) time. This is the best combinatorial graph sparsification algorithm to date, and the size of the sparsifiers produced is only a factor of log n · log log n more than ones produced by numerical routines.

FOCS Conference 2017 Conference Paper

Hardness Results for Structured Linear Systems

  • Rasmus Kyng
  • Peng Zhang 0052

We show that if the nearly-linear time solvers for Laplacian matrices and their generalizations can be extended to solve just slightly larger families of linear systems, then they can be used to quickly solve all systems of linear equations over the reals. This result can be viewed either positively or negatively: either we will develop nearly-linear time algorithms for solving all systems of linear equations over the reals, or progress on the families we can solve in nearly-linear time will soon halt.

STOC Conference 2017 Conference Paper

Sampling random spanning trees faster than matrix multiplication

  • David Durfee
  • Rasmus Kyng
  • John Peebles
  • Anup B. Rao
  • Sushant Sachdeva

We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due to Colbourn et al. that runs in matrix multiplication time, O(n^ω). For the special case of unweighted graphs, this improves upon the best previously known running time of Õ(min{n^ω, m√n, m^{4/3}}) for m ≫ n^{7/4} (Colbourn et al. '96, Kelner-Madry '09, Madry et al. '15). The effective resistance metric is essential to our algorithm, as in the work of Madry et al., but we eschew determinant-based and random walk-based techniques used by previous algorithms. Instead, our algorithm is based on Gaussian elimination, and the fact that effective resistance is preserved in the graph resulting from eliminating a subset of vertices (called a Schur complement). As part of our algorithm, we show how to compute ε-approximate effective resistances for a set S of vertex pairs via approximate Schur complements in Õ(m + (n + |S|)ε^{-2}) time, without using the Johnson-Lindenstrauss lemma, which requires Õ(min{(m + |S|)ε^{-2}, m + nε^{-4} + |S|ε^{-2}}) time. We combine this approximation procedure with an error-correction procedure for handling edges where our estimate isn't sufficiently accurate.
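Two classical facts the abstract leans on can be checked directly on a tiny dense example (a brute-force illustration, not the paper's almost-linear machinery): the matrix-tree theorem counts weighted spanning trees via a reduced Laplacian determinant, and a random spanning tree contains edge e with probability w_e times its effective resistance.

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A for weighted edges (u, v, w)."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

# Unit-weight triangle: 3 spanning trees, each edge appearing in 2 of them.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0)]
L = laplacian(3, edges)

# Matrix-tree theorem: the weighted spanning-tree count is the
# determinant of the Laplacian with one row and column deleted.
tree_count = np.linalg.det(L[1:, 1:])  # 3 trees for the triangle

# Effective resistance of an edge (u, v) via the pseudoinverse of L.
# A weight-proportional random spanning tree contains e with
# probability w_e * R_eff(e); here that is 1 * (2/3) = 2/3.
Lp = np.linalg.pinv(L)
u, v, w = edges[0]
chi = np.zeros(3); chi[u], chi[v] = 1.0, -1.0
r_eff = chi @ Lp @ chi
assert np.isclose(w * r_eff, 2 / 3)
```

The paper replaces these dense computations with approximate Schur complements, exploiting the fact that eliminating vertices preserves effective resistances among the remaining ones.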

FOCS Conference 2016 Conference Paper

Approximate Gaussian Elimination for Laplacians - Fast, Sparse, and Simple

  • Rasmus Kyng
  • Sushant Sachdeva

We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. Our algorithm performs a subsampled Cholesky factorization, which we analyze using matrix martingales. As part of the analysis, we give a proof of a concentration inequality for matrix martingales where the differences are sums of conditionally independent variables.
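One step of exact (unsampled) Gaussian elimination on a Laplacian illustrates the structure the paper's sampling preserves: eliminating a vertex replaces its star with a clique on its neighbors, and the resulting Schur complement is again a Laplacian. A small numpy sketch, for illustration only (the paper's contribution is subsampling this clique and analyzing it with matrix martingales):

```python
import numpy as np

# Laplacian of a unit-weight star on 4 vertices, centered at vertex 0.
L = np.array([[ 3., -1., -1., -1.],
              [-1.,  1.,  0.,  0.],
              [-1.,  0.,  1.,  0.],
              [-1.,  0.,  0.,  1.]])

# One step of Gaussian elimination, pivoting on vertex 0.
# The Schur complement S = L_11 - L_10 L_00^{-1} L_01 is the Laplacian
# of a graph on the remaining vertices: here, a triangle with weight 1/3.
col = L[1:, 0:1]
S = L[1:, 1:] - col @ col.T / L[0, 0]

# S is again a Laplacian: zero row sums, non-positive off-diagonals.
assert np.allclose(S.sum(axis=1), 0)
assert (S - np.diag(np.diag(S)) <= 1e-12).all()
```

Repeating this for every vertex yields an exact (but dense) Cholesky factorization; the algorithm instead keeps each intermediate graph sparse by sampling a few clique edges at each step.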

NeurIPS Conference 2015 Conference Paper

Fast, Provable Algorithms for Isotonic Regression in all L_p-norms

  • Rasmus Kyng
  • Anup Rao
  • Sushant Sachdeva

Given a directed acyclic graph $G$, and a set of values $y$ on the vertices, the Isotonic Regression of $y$ is a vector $x$ that respects the partial order described by $G$, and minimizes $\|x-y\|$, for a specified norm. This paper gives improved algorithms for computing the Isotonic Regression for all weighted $\ell_{p}$-norms with rigorous performance guarantees. Our algorithms are quite practical, and their variants can be implemented to run fast in practice.
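For intuition, here is the classical Pool Adjacent Violators algorithm for the simplest special case: the DAG is a chain (total order) and the norm is $\ell_2$. This sketch is not the paper's method, which handles general DAGs and all weighted $\ell_p$-norms.

```python
def isotonic_l2_chain(y):
    """L2 isotonic regression when the partial order is a chain
    v_1 <= v_2 <= ... <= v_n, via Pool Adjacent Violators (PAVA)."""
    # Each block stores [sum, count]; all points merged into a block
    # are fitted with the block's mean.
    blocks = []
    for val in y:
        blocks.append([val, 1])
        # Merge while the previous block's mean exceeds the last block's
        # mean (cross-multiplied to avoid division).
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out

# The violating prefix [3, 1, 2] is pooled to its mean 2; the 5 stands alone.
print(isotonic_l2_chain([3.0, 1.0, 2.0, 5.0]))  # → [2.0, 2.0, 2.0, 5.0]
```

On a chain PAVA runs in linear time; the difficulty addressed by the paper is achieving fast algorithms when the order constraints come from an arbitrary DAG.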