Arrow Research search

Author name cluster

Bart M. P. Jansen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers
1 author row

Possible papers (7)

STOC 2022 Conference Paper

Lossy planarization: a constant-factor approximate kernelization for planar vertex deletion

  • Bart M. P. Jansen
  • Michal Wlodarczyk 0001

In the F-minor-free deletion problem we are given an undirected graph G and the goal is to find a minimum vertex set that intersects all minor models of graphs from the family F. This captures numerous important problems including Vertex Cover, Feedback Vertex Set, Treewidth-η Modulator, and Vertex Planarization. In the latter, we ask for a minimum vertex set whose removal makes the graph planar. This is a special case of F-minor-free deletion for the family F = {K_5, K_{3,3}}. Whenever the family F contains at least one planar graph, F-minor-free deletion is known to admit a constant-factor approximation algorithm and a polynomial kernelization [Fomin, Lokshtanov, Misra, and Saurabh, FOCS'12]. A polynomial kernelization is a polynomial-time algorithm that, given a graph G and integer k, outputs a graph G′ on poly(k) vertices and an integer k′, so that OPT(G) ≤ k if and only if OPT(G′) ≤ k′. The Vertex Planarization problem is arguably the simplest setting for which F does not contain a planar graph, and the existence of a constant-factor approximation or a polynomial kernelization remains a major open problem. In this work we show that Vertex Planarization admits an algorithm which is a combination of both approaches. Namely, we present a polynomial α-approximate kernelization, for some constant α > 1, based on the framework of lossy kernelization [Lokshtanov, Panolan, Ramanujan, and Saurabh, STOC'17]. Simply speaking, when given a graph G and integer k, we show how to compute a graph G′ on poly(k) vertices so that any β-approximate solution to G′ can be lifted to an (α·β)-approximate solution to G, as long as OPT(G) ≤ k/(α·β). In order to achieve this, we develop a framework for sparsification of planar graphs which approximately preserves all separators and near-separators between subsets of the given terminal set. Our result yields an improvement over the state-of-the-art approximation algorithms for Vertex Planarization. The problem admits a polynomial-time O(n^ε)-approximation algorithm, for any ε > 0, and a quasi-polynomial-time (log n)^{O(1)}-approximation algorithm, where n is the input size, both randomized [Kawarabayashi and Sidiropoulos, FOCS'17]. By pipelining these algorithms with our approximate kernelization, we improve the approximation factors to O(OPT^ε) and (log OPT)^{O(1)}, respectively.
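The notion of a polynomial kernelization defined in this abstract can be made concrete with the classic Buss kernel for Vertex Cover, one of the problems the F-minor-free deletion framework captures. This is an illustrative sketch only (the function name and edge-set representation are ours), not the paper's algorithm:

```python
def buss_kernel(edges, k):
    """Classic Buss kernelization for Vertex Cover: returns a reduced
    edge set and a budget k', or None if (G, k) is a no-instance.
    Key observation: a vertex of degree > k must be in every cover of size <= k."""
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k:                        # v is forced into the cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    # A yes-instance has <= k^2 edges left: each of the <= k cover
    # vertices now covers at most k edges.
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k
```

For example, `buss_kernel([(1, 2), (1, 3), (1, 4), (1, 5), (2, 3)], 2)` forces the degree-4 vertex 1 into the cover and returns a single-edge instance with budget 1: an equivalent instance whose size depends only on k, which is exactly the guarantee the abstract describes.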

MFCS 2021 Conference Paper

On the Hardness of Compressing Weights

  • Bart M. P. Jansen
  • Shivesh K. Roy
  • Michal Wlodarczyk 0001

We investigate computational problems involving large weights through the lens of kernelization, which is a framework of polynomial-time preprocessing aimed at compressing the instance size. Our main focus is the weighted Clique problem, where we are given an edge-weighted graph and the goal is to detect a clique of total weight equal to a prescribed value. We show that the weighted variant, parameterized by the number of vertices n, is significantly harder than the unweighted problem by presenting an 𝒪(n^{3 - ε}) lower bound on the size of the kernel, under the assumption that NP ⊈ coNP/poly. This lower bound is essentially tight: we show that we can reduce the problem to the case with weights bounded by 2^{𝒪(n)}, which yields a randomized kernel of 𝒪(n³) bits. We generalize these results to the weighted d-Uniform Hyperclique problem, Subset Sum, and weighted variants of Boolean Constraint Satisfaction Problems (CSPs). We also study weighted minimization problems and show that weight compression is easier when we only want to preserve the collection of optimal solutions. Namely, we show that for node-weighted Vertex Cover on bipartite graphs it is possible to maintain the set of optimal solutions using integer weights from the range [1, n], but if we want to maintain the ordering of the weights of all inclusion-minimal solutions, then weights as large as 2^{Ω(n)} are necessary.
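To make the problem statement above concrete, here is a brute-force tester for the weighted Clique problem; it is exponential-time and purely illustrative (the function name and weight encoding are our assumptions, not the paper's kernel):

```python
from itertools import combinations

def has_weighted_clique(n, wt, target):
    """Brute-force check: does the graph on vertices 0..n-1 contain a clique
    whose edge weights sum to target?  wt maps frozenset({u, v}) to the
    weight of edge uv; pairs absent from wt are non-edges."""
    for r in range(2, n + 1):
        for verts in combinations(range(n), r):
            pairs = [frozenset(p) for p in combinations(verts, 2)]
            if all(p in wt for p in pairs):            # verts induce a clique
                if sum(wt[p] for p in pairs) == target:
                    return True
    return False
```

Since Python integers are unbounded, the weights here may be exponentially large in n, which is exactly the regime the kernel-size bounds in the abstract concern.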

SODA 2017 Conference Paper

Approximation and Kernelization for Chordal Vertex Deletion

  • Bart M. P. Jansen
  • Marcin Pilipczuk

The Chordal Vertex Deletion (ChVD) problem asks to delete a minimum number of vertices from an input graph to obtain a chordal graph. In this paper we develop a polynomial kernel for ChVD under the parameterization by the solution size. Using a new Erdős-Pósa type packing/covering duality for holes in nearly-chordal graphs, we present a polynomial-time algorithm that reduces any instance (G, k) of ChVD to an equivalent instance with poly(k) vertices. The existence of a polynomial kernel answers an open problem of Marx from 2006 [WG 2006, LNCS 4271, 37–48]. To obtain the kernelization, we develop the first poly(OPT)-approximation algorithm for ChVD, which is of independent interest. In polynomial time, it either decides that G has no chordal deletion set of size k, or outputs a solution of size O(k^4 log^2 k).
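Chordality itself is easy to test, which helps ground the problem: a graph is chordal iff it admits a perfect elimination ordering (Fulkerson-Gross), found greedily by repeatedly deleting a simplicial vertex. A minimal stdlib sketch for small graphs (ours, unrelated to the paper's kernelization):

```python
def is_chordal(adj):
    """Greedy perfect-elimination test: a graph is chordal iff we can
    repeatedly delete a simplicial vertex (one whose neighbourhood is a
    clique) until nothing remains.  adj: dict vertex -> set of neighbours."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    while adj:
        for v, nbrs in adj.items():
            if all(b in adj[a] for a in nbrs for b in nbrs if a != b):
                for u in nbrs:                 # v is simplicial: eliminate it
                    adj[u].discard(v)
                del adj[v]
                break
        else:
            return False    # no simplicial vertex: a hole (induced C>=4) exists
    return True
```

On the 4-cycle no vertex is ever simplicial, so the test reports a hole; ChVD asks for the fewest vertex deletions that make this test succeed.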

MFCS 2016 Conference Paper

Optimal Sparsification for Some Binary CSPs Using Low-Degree Polynomials

  • Bart M. P. Jansen
  • Astrid Pieterse

This paper analyzes to what extent it is possible to efficiently reduce the number of clauses in NP-hard satisfiability problems, without changing the answer. Upper and lower bounds are established using the concept of kernelization. Existing results show that if NP is not contained in coNP/poly, no efficient preprocessing algorithm can reduce n-variable instances of CNF-SAT with d literals per clause to equivalent instances with O(n^{d-ε}) bits for any ε > 0. For the Not-All-Equal SAT problem, a compression to size Õ(n^{d-1}) exists. We put these results in a common framework by analyzing the compressibility of binary CSPs. We characterize constraint types based on the minimum degree of multivariate polynomials whose roots correspond to the satisfying assignments, obtaining (nearly) matching upper and lower bounds in several settings. Our lower bounds show that not just the number of constraints, but also the encoding size of individual constraints plays an important role. For example, for Exact Satisfiability with unbounded clause length it is possible to efficiently reduce the number of constraints to n+1, yet no polynomial-time algorithm can reduce to an equivalent instance with O(n^{2-ε}) bits for any ε > 0, unless NP is contained in coNP/poly.
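The "n+1 constraints" claim for Exact Satisfiability can be illustrated with linear algebra: each exact-SAT clause is a linear equation over the rationals ("the listed literals sum to 1"), and equations in n variables live in an (n+1)-dimensional coefficient space, so a maximal linearly independent subset has at most n+1 members and implies all the rest. A hedged sketch (our encoding, not necessarily the paper's exact reduction):

```python
from fractions import Fraction

def independent_rows(rows):
    """Keep a maximal linearly independent subset of rows (over the
    rationals) via incremental Gaussian elimination.  Each row is a
    coefficient vector [a_1, ..., a_n, b] encoding the equation a.x = b."""
    basis, kept = [], []
    for row in rows:
        vec = [Fraction(c) for c in row]
        for pivot in basis:
            lead = next(i for i, c in enumerate(pivot) if c != 0)
            if vec[lead] != 0:
                factor = vec[lead] / pivot[lead]
                vec = [v - factor * p for v, p in zip(vec, pivot)]
        if any(c != 0 for c in vec):         # not implied by rows kept so far
            basis.append(vec)
            kept.append(row)
    return kept

# Exact-SAT clauses over x1..x3, written as "listed variables sum to 1":
# (x1, x2), (x2, x3), (x1, x3); the last row is the sum of the first two
# equations (x1 + 2*x2 + x3 = 2), hence redundant.
clauses = [[1, 1, 0, 1], [0, 1, 1, 1], [1, 0, 1, 1], [1, 2, 1, 2]]
```

Here `independent_rows(clauses)` drops the redundant fourth row and keeps only the first three; with n = 3 variables it can never keep more than n + 1 = 4 rows.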

SODA 2015 Conference Paper

Characterizing the easy-to-find subgraphs from the viewpoint of polynomial-time algorithms, kernels, and Turing kernels

  • Bart M. P. Jansen
  • Dániel Marx

We study two fundamental problems related to finding subgraphs: (1) given graphs G and H, Subgraph Test asks if H is isomorphic to a subgraph of G; (2) given graphs G, H, and an integer t, Packing asks if G contains t vertex-disjoint subgraphs isomorphic to H. For every graph class ℱ, let ℱ-Subgraph Test and ℱ-Packing be the special cases of the two problems where H is restricted to be in ℱ. Our goal is to study which classes ℱ make the two problems tractable in one of the following senses: (randomized) polynomial-time solvable, admits a polynomial (many-one) kernel (that is, has a polynomial-time preprocessing procedure that creates an equivalent instance whose size is polynomially bounded by the size of the solution), or admits a polynomial Turing kernel (that is, has an adaptive polynomial-time procedure that reduces the problem to a polynomial number of instances, each of which has size bounded polynomially by the size of the solution). To obtain a more robust setting, we restrict our attention to hereditary classes ℱ. It is known that if every component of every graph in ℱ has at most two vertices, then ℱ-Packing is polynomial-time solvable, and NP-hard otherwise. We identify a simple combinatorial property (every component of every graph in ℱ either has bounded size or is a bipartite graph with one of the sides having bounded size) such that if a hereditary class ℱ has this property, then ℱ-Packing admits a polynomial kernel, and has no polynomial (many-one) kernel otherwise, unless the polynomial hierarchy collapses. Furthermore, if ℱ does not have this property, then ℱ-Packing is either WK[1]-hard, W[1]-hard, or Long Path-hard, giving evidence that it does not admit polynomial Turing kernels either. 
For ℱ-Subgraph Test, we show that if every graph of a hereditary class ℱ satisfies the property that it is possible to delete a bounded number of vertices such that every remaining component has size at most two, then ℱ-Subgraph Test is solvable in randomized polynomial time, and it is NP-hard otherwise. We introduce a combinatorial property called (a, b, c, d)-splittability and show that if every graph in a hereditary class ℱ has this property, then ℱ-Subgraph Test admits a polynomial Turing kernel, and it is WK[1]-hard, W[1]-hard, or Long Path-hard otherwise. We do not give a complete characterization of the cases when ℱ-Subgraph Test admits polynomial many-one kernels, but show examples that this question is much more fragile than the characterization for Turing kernels.
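The Subgraph Test problem defined above can be made concrete with a brute-force checker that tries every injective map from V(H) into V(G); it is exponential-time, so purely illustrative (the graph representation is our assumption):

```python
from itertools import permutations

def subgraph_test(G, H):
    """Is H isomorphic to a subgraph of G?  G and H are pairs
    (vertex list, edge set); G's edges are frozensets, H's are tuples.
    Tries every injective map V(H) -> V(G): exponential, small inputs only."""
    gv, ge = G
    hv, he = H
    for image in permutations(gv, len(hv)):
        mapping = dict(zip(hv, image))
        # H embeds under this map iff every edge of H lands on an edge of G
        if all(frozenset({mapping[u], mapping[v]}) in ge for u, v in he):
            return True
    return False
```

A triangle contains a 3-vertex path as a subgraph, but a 3-vertex path contains no triangle; the hardness dichotomies above classify exactly which classes ℱ let this check be done efficiently.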

SODA 2014 Conference Paper

A Near-Optimal Planarization Algorithm

  • Bart M. P. Jansen
  • Daniel Lokshtanov
  • Saket Saurabh 0001

The problem of testing whether a graph is planar has been studied for over half a century, and is known to be solvable in O(n) time using a myriad of different approaches and techniques. Robertson and Seymour established the existence of a cubic algorithm for the more general problem of deciding whether an n-vertex graph can be made planar by at most k vertex deletions, for every fixed k. Of the known algorithms for k-Vertex Planarization, the algorithm of Marx and Schlotter (WG 2007, Algorithmica 2012) achieves the best running time dependence on k. The algorithm of Kawarabayashi (FOCS 2009), running in time f(k)·n for some function f that is not stated explicitly, achieves the best dependence on n. In this paper we present an algorithm for k-Vertex Planarization with running time 2^{O(k log k)} · n, significantly improving the running time dependence on k without compromising the linear dependence on n. Our main technical contribution is a novel scheme to reduce the treewidth of the input graph to O(k) in time 2^{O(k log k)} · n. It combines new insights into the structure of graphs that become planar after contracting a matching, with a Baker-type subroutine that reduces the number of disjoint paths through planar parts of the graph that are not affected by the sought solution. To solve the reduced instances we formulate a dynamic programming algorithm for Weighted Vertex Planarization on graphs of treewidth w with running time 2^{O(w log w)} · n, thereby improving over previous double-exponential algorithms. While Kawarabayashi's planarization algorithm relies heavily on deep results from the graph minors project, our techniques are elementary and practically self-contained. We expect them to be applicable to related edge-deletion and contraction variants of planarization problems.
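The planarity side of this abstract admits a one-line sanity check via Euler's formula: a simple planar graph on n ≥ 3 vertices has at most 3n - 6 edges. This is only a necessary condition (K_{3,3} passes it yet is non-planar), and the helper below is our illustration, unrelated to the paper's algorithm:

```python
def euler_certifies_nonplanar(n, edges):
    """True means the edge count alone proves non-planarity via Euler's
    formula (m <= 3n - 6 for simple planar graphs with n >= 3 vertices).
    False is inconclusive: e.g. K_{3,3} satisfies the bound but is non-planar."""
    m = len({frozenset(e) for e in edges})   # de-duplicate; ignore orientation
    return n >= 3 and m > 3 * n - 6
```

K_5 has 10 > 3·5 - 6 = 9 edges and is rejected immediately; a full O(n)-time planarity test is what handles the inconclusive cases.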