Arrow Research search

Author name cluster

Wolfgang Mulzer

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

9 papers
2 author rows

Possible papers

9

TCS Journal 2019 Journal Article

A time–space trade-off for computing the k-visibility region of a point in a polygon

  • Yeganeh Bahoo
  • Bahareh Banyassady
  • Prosenjit K. Bose
  • Stephane Durocher
  • Wolfgang Mulzer

Let P be a simple polygon with n vertices, and let q ∈ P be a point in P. Let k ∈ {0, …, n−1}. A point p ∈ P is k-visible from q if and only if the line segment pq crosses the boundary of P at most k times. The k-visibility region of q in P is the set of all points that are k-visible from q. We study the problem of computing the k-visibility region in the limited workspace model, where the input resides in a random-access read-only memory of O(n) words, each with Ω(log n) bits. The algorithm can read and write O(s) additional words of workspace, where s ∈ ℕ is a parameter of the model. The output is written to a write-only stream. Given a simple polygon P with n vertices and a point q ∈ P, we present an algorithm that reports the k-visibility region of q in P in O(cn/s + c log s + min{⌈k/s⌉n, n log_s n}) expected time using O(s) words of workspace. Here, c ∈ {1, …, n} is the number of critical vertices of P for q where the k-visibility region of q may change. We generalize this result to polygons with holes and to sets of non-crossing line segments.
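The crossing-count definition of k-visibility can be checked directly with a naive linear scan; this is a minimal baseline sketch, not the paper's workspace-efficient algorithm, and it assumes general position (the segment pq never touches a vertex of P or runs along an edge):

```python
def orient(a, b, c):
    # Sign of the cross product (b - a) x (c - a): +1 left turn, -1 right turn.
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def crossings(p, q, polygon):
    # Number of polygon edges properly crossed by the segment pq
    # (general position assumed: no touching or collinear cases).
    n, count = len(polygon), 0
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        if orient(p, q, a) != orient(p, q, b) and orient(a, b, p) != orient(a, b, q):
            count += 1
    return count

def k_visible(p, q, polygon, k):
    # p is k-visible from q iff pq crosses the boundary at most k times.
    return crossings(p, q, polygon) <= k
```

Running this for every vertex-anchored direction gives an O(n^2)-time, O(n)-workspace reference implementation against which the trade-off bound above can be compared.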

MFCS Conference 2019 Conference Paper

On the Stretch Factor of Polygonal Chains

  • Ke Chen 0011
  • Adrian Dumitrescu
  • Wolfgang Mulzer
  • Csaba D. Tóth

Let P = (p_1, p_2, …, p_n) be a polygonal chain. The stretch factor of P is the ratio between the total length of P and the distance of its endpoints, sum_{i=1}^{n-1} |p_i p_{i+1}| / |p_1 p_n|. For a parameter c >= 1, we call P a c-chain if |p_i p_j| + |p_j p_k| <= c|p_i p_k| for every triple (i, j, k), 1 <= i < j < k <= n. The stretch factor is a global property: it measures how close P is to a straight line, and it involves all the vertices of P; being a c-chain, on the other hand, is a fingerprint property: it only depends on subsets of O(1) vertices of the chain. We investigate how the c-chain property influences the stretch factor in the plane: (i) we show that for every epsilon > 0, there is a noncrossing c-chain that has stretch factor Omega(n^{1/2-epsilon}), for sufficiently large constant c = c(epsilon); (ii) on the other hand, the stretch factor of a c-chain P is O(n^{1/2}), for every constant c >= 1, regardless of whether P is crossing or noncrossing; and (iii) we give a randomized algorithm that can determine, for a polygonal chain P in R^2 with n vertices, the minimum c >= 1 for which P is a c-chain in O(n^{2.5} polylog n) expected time and O(n log n) space.
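Both quantities in the abstract are easy to evaluate by brute force. The sketch below contrasts the global stretch factor with a naive O(n^3) scan for the minimum chain constant c (the paper's randomized algorithm needs only O(n^{2.5} polylog n) expected time); it assumes distinct vertices:

```python
from itertools import combinations
from math import dist

def stretch_factor(chain):
    # Total length of the chain divided by the endpoint distance.
    total = sum(dist(chain[i], chain[i + 1]) for i in range(len(chain) - 1))
    return total / dist(chain[0], chain[-1])

def min_chain_constant(chain):
    # Smallest c >= 1 with |p_i p_j| + |p_j p_k| <= c |p_i p_k|
    # for all i < j < k; naive O(n^3) over all triples.
    best = 1.0
    for i, j, k in combinations(range(len(chain)), 3):
        ratio = (dist(chain[i], chain[j]) + dist(chain[j], chain[k])) / dist(chain[i], chain[k])
        best = max(best, ratio)
    return best
```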

SODA Conference 2017 Conference Paper

Dynamic Planar Voronoi Diagrams for General Distance Functions and their Algorithmic Applications

  • Haim Kaplan
  • Wolfgang Mulzer
  • Liam Roditty
  • Paul Seiferth
  • Micha Sharir

We describe a new data structure for dynamic nearest neighbor queries in the plane with respect to a general family of distance functions that includes L_p-norms and additively weighted Euclidean distances, and for general (convex, pairwise disjoint) sites that have constant description complexity (line segments, disks, etc.). Our data structure has polylogarithmic update and query time, improving an earlier data structure of Agarwal, Efrat and Sharir that required O(n^ε) time for an update and O(log n) time for a query [1]. Our data structure has numerous applications, and in all of them it gives faster algorithms, typically reducing an O(n^ε) factor in the bounds to polylogarithmic. To further demonstrate its effectiveness, we give here two new applications: an efficient construction of a spanner in a disk intersection graph, and a data structure for efficient connectivity queries in a dynamic disk graph. To obtain this data structure, we combine and extend various techniques and obtain several side results that are of independent interest. Our data structure depends on the existence and an efficient construction of "vertical" shallow cuttings in arrangements of bivariate algebraic functions. We prove that an appropriate level in an arrangement of a random sample of a suitable size provides such a cutting. To compute it efficiently, we develop a randomized incremental construction algorithm for finding the lowest k levels in an arrangement of bivariate algebraic functions (we mostly consider here collections of functions whose lower envelope has linear complexity, as is the case in the dynamic nearest-neighbor context). To analyze this algorithm, we improve a longstanding bound on the combinatorial complexity of the vertical decomposition of these levels. Finally, to obtain our structure, we plug our vertical shallow cutting construction into Chan's algorithm for efficiently maintaining the lower envelope of a dynamic set of planes in ℝ^3. While doing this, we also revisit Chan's technique and present a variant that uses a single binary counter, with a simpler analysis and an improved amortized deletion time.
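For contrast with the polylogarithmic bounds claimed above, the same operations have a trivial baseline with O(1) updates and O(n) queries. A minimal sketch for additively weighted Euclidean distances; the class name and interface are illustrative, not taken from the paper:

```python
from math import dist

class NaiveDynamicNN:
    """Baseline dynamic nearest-neighbor structure for additively
    weighted Euclidean distances: O(1) insert/delete, O(n) query.
    The paper's structure replaces the linear scan with polylogarithmic
    machinery (vertical shallow cuttings plus Chan's technique)."""

    def __init__(self):
        self.sites = {}  # label -> (point, additive weight)

    def insert(self, label, point, weight=0.0):
        self.sites[label] = (point, weight)

    def delete(self, label):
        del self.sites[label]

    def query(self, q):
        # Nearest site under d(q, s) = |q - p_s| + w_s.
        return min(self.sites,
                   key=lambda s: dist(q, self.sites[s][0]) + self.sites[s][1])
```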

SODA Conference 2017 Conference Paper

The Rainbow at the End of the Line - A PPAD Formulation of the Colorful Carathéodory Theorem with Applications

  • Frédéric Meunier
  • Wolfgang Mulzer
  • Pauline Sarrabezolles
  • Yannik Stein

Let C_1, …, C_{d+1} be d+1 point sets in ℝ^d, each containing the origin in its convex hull. A subset C of their union is called a colorful choice (or rainbow) for C_1, …, C_{d+1} if it contains exactly one point from each set C_i. The colorful Carathéodory theorem states that there always exists a colorful choice for C_1, …, C_{d+1} that has the origin in its convex hull. This theorem is very general and can be used to prove several other existence theorems in high-dimensional discrete geometry, such as the centerpoint theorem or Tverberg's theorem. The colorful Carathéodory problem (ColorfulCarathéodory) is the computational problem of finding such a colorful choice. Despite several efforts in the past, the computational complexity of ColorfulCarathéodory in arbitrary dimension is still open. We show that ColorfulCarathéodory lies in the intersection of the complexity classes PPAD and PLS. This makes it one of the few geometric problems in PPAD and PLS that are not known to be solvable in polynomial time. Moreover, it implies that the problems of computing centerpoints, computing Tverberg partitions, and computing points with large simplicial depth are contained in PPAD ∩ PLS. This is the first nontrivial upper bound on the complexity of these problems. Finally, we show that our PPAD formulation leads to a polynomial-time algorithm for a special case of ColorfulCarathéodory in which we have only two color classes C_1 and C_2 in d dimensions, each with the origin in its convex hull, and we would like to find a set with half the points from each color class that contains the origin in its convex hull.
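In the plane (d = 2) the search problem above can be solved by exhaustive enumeration: try every choice of one point per class and test whether the resulting triangle contains the origin. A minimal sketch; the enumeration is exponential in d in general, so this does not contradict the open complexity status:

```python
from itertools import product

def cross(o, a, b):
    # Cross product of (a - o) and (b - o).
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def origin_in_triangle(a, b, c):
    # The origin lies in the closed triangle abc iff it is on the same
    # side of (or on) all three directed edges.
    s = [cross(a, b, (0, 0)), cross(b, c, (0, 0)), cross(c, a, (0, 0))]
    return all(v >= 0 for v in s) or all(v <= 0 for v in s)

def colorful_choice(c1, c2, c3):
    # Exhaustive search for a colorful choice containing the origin;
    # the colorful Caratheodory theorem guarantees one exists when each
    # class contains the origin in its convex hull.
    for a, b, c in product(c1, c2, c3):
        if origin_in_triangle(a, b, c):
            return a, b, c
    return None
```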

STOC Conference 2015 Conference Paper

Approximate k-flat Nearest Neighbor Search

  • Wolfgang Mulzer
  • Huy L. Nguyên
  • Paul Seiferth
  • Yannik Stein

Let k ≥ 0 be an integer. In the approximate k-flat nearest neighbor (k-ANN) problem, we are given a set P ⊂ R^d of n points in d-dimensional space and a fixed approximation factor c > 1. Our goal is to preprocess P so that we can efficiently answer approximate k-flat nearest neighbor queries: given a k-flat F, find a point in P whose distance to F is within a factor c of the distance between F and the closest point in P. The case k = 0 corresponds to the well-studied approximate nearest neighbor problem, for which a plethora of results are known, both in low and high dimensions. The case k = 1 is called approximate line nearest neighbor. In this case, we are aware of only one provably efficient data structure, due to Andoni, Indyk, Krauthgamer, and Nguyen (AIKN) [2]. For k ≥ 2, we know of no previous results.
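The exact version of the query above has an obvious linear-scan solution: project the difference vector onto the flat and measure the residual. A minimal sketch of this O(ndk)-per-query baseline, which the data structures in the paper aim to beat by allowing approximation; it assumes the flat is given by a point and an orthonormal basis:

```python
def point_to_flat_dist(p, origin, basis):
    # Distance from p to the k-flat {origin + span(basis)}, assuming
    # the basis vectors are orthonormal: subtract the projection onto
    # each basis vector and take the norm of the residual.
    v = [p[i] - origin[i] for i in range(len(p))]
    for b in basis:
        t = sum(v[i] * b[i] for i in range(len(v)))
        v = [v[i] - t * b[i] for i in range(len(v))]
    return sum(x * x for x in v) ** 0.5

def exact_flat_nn(points, origin, basis):
    # Exact nearest neighbor to the flat by linear scan.
    return min(points, key=lambda p: point_to_flat_dist(p, origin, basis))
```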

SODA Conference 2011 Conference Paper

Triangulating the Square and Squaring the Triangle: Quadtrees and Delaunay Triangulations are Equivalent

  • Maarten Löffler
  • Wolfgang Mulzer

We show that Delaunay triangulations and compressed quadtrees are equivalent structures. More precisely, we give two algorithms: the first computes a compressed quadtree for a planar point set, given the Delaunay triangulation; the second finds the Delaunay triangulation, given a compressed quadtree. Both algorithms run in deterministic linear time on a pointer machine. Our work builds on and extends previous results by Krznaric and Levcopoulos [40] and Buchin and Mulzer [10]. Our main tool for the second algorithm is the well-separated pair decomposition (WSPD) [13], a structure that has been used previously to find Euclidean minimum spanning trees in higher dimensions [27]. We show that knowing the WSPD (and a quadtree) suffices to compute a planar EMST in linear time. With the EMST at hand, we can find the Delaunay triangulation in linear time [21]. As a corollary, we obtain deterministic versions of many previous algorithms related to Delaunay triangulations, such as splitting planar Delaunay triangulations [19, 20], preprocessing imprecise points for faster Delaunay computation [9, 42], and transdichotomous Delaunay triangulations [10, 15, 16].
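One ingredient mentioned above, the Euclidean minimum spanning tree, has a simple quadratic-time baseline via Prim's algorithm; the paper's point is that with a WSPD and quadtree in hand, the planar EMST drops to linear time. A minimal sketch of the baseline:

```python
from math import dist, inf

def emst(points):
    # O(n^2) Prim's algorithm for the Euclidean minimum spanning tree;
    # returns the tree edges as (parent, child) index pairs.
    n = len(points)
    in_tree = [False] * n
    best = [inf] * n      # cheapest connection cost to the growing tree
    best[0] = 0.0
    parent = [-1] * n
    edges = []
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] != -1:
            edges.append((parent[u], u))
        for v in range(n):
            if not in_tree[v] and dist(points[u], points[v]) < best[v]:
                best[v] = dist(points[u], points[v])
                parent[v] = u
    return edges
```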

SODA Conference 2010 Conference Paper

Self-improving Algorithms for Convex Hulls

  • Kenneth L. Clarkson
  • Wolfgang Mulzer
  • C. Seshadhri 0001

We describe an algorithm for computing planar convex hulls in the self-improving model: given a sequence I_1, I_2, … of planar n-point sets, the upper convex hull conv(I) of each set I is desired. We assume that there exists a probability distribution D on n-point sets, such that the inputs I_j are drawn independently according to D. Furthermore, D is such that the individual points are distributed independently of each other. In other words, the i-th point is distributed according to D_i. The D_i's can be arbitrary but are independent of each other. The distribution D is not known to the algorithm in advance. After a learning phase of n^ε rounds, the expected time to compute conv(I) is O(n + H(conv(I))). Here, H(conv(I)) is the entropy of the output, which is a lower bound for the expected running time of any algebraic computation tree that computes the convex hull. (More precisely, H(conv(I)) is the minimum entropy of any random variable that maps I to a description of conv(I) and to a labeling scheme that proves nonextremality for every point in I not on the hull.) Our algorithm is thus asymptotically optimal for D. (An erratum has been attached to the previously published proceedings.)
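The output demanded above, the upper convex hull, has a standard O(n log n) baseline (Andrew's monotone scan); the self-improving algorithm's point is to beat this bound, after learning, on inputs drawn from a favorable distribution D. A minimal sketch of the baseline:

```python
def cross(o, a, b):
    # Cross product of (a - o) and (b - o); > 0 means a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def upper_hull(points):
    # Andrew's monotone scan restricted to the upper hull: sweep the
    # points in x-order and pop while the last turn is not a right turn.
    hull = []
    for p in sorted(points):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull
```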

FOCS Conference 2009 Conference Paper

Delaunay Triangulations in O(sort(n)) Time and More

  • Kevin Buchin
  • Wolfgang Mulzer

We present several results about Delaunay triangulations (DTs) and convex hulls in transdichotomous and hereditary settings: (i) the DT of a planar point set can be computed in expected time O(sort(n)) on a word RAM, where sort(n) is the time to sort n numbers; we assume that the word RAM supports the shuffle operation in constant time; (ii) if we know the ordering of a planar point set in x- and in y-direction, its DT can be found by a randomized algebraic computation tree of expected linear depth; (iii) given a universe U of points in the plane, we construct a data structure D for Delaunay queries: for any P ⊆ U, D can find the DT of P in time O(|P| log log |U|); (iv) given a universe U of points in 3-space in general convex position, there is a data structure D for convex hull queries: for any P ⊆ U, D can find the convex hull of P in time O(|P| (log log |U|)^2); (v) given a convex polytope in 3-space with n vertices which are colored with χ ≥ 2 colors, we can split it into the convex hulls of the individual color classes in time O(n (log log n)^2). The results (i)–(iii) generalize to higher dimensions. We need a wide range of techniques. Most prominently, we describe a reduction from DTs to nearest-neighbor graphs that relies on a new variant of randomized incremental constructions using dependent sampling.
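The reduction mentioned in the last sentence targets nearest-neighbor graphs. By brute force such a graph costs O(n^2) time, which is exactly the kind of overhead the paper's dependent-sampling construction is designed to avoid. A minimal sketch of the brute-force version:

```python
from math import dist

def nn_graph(points):
    # Directed nearest-neighbor graph by brute force, O(n^2):
    # maps each point index to the index of its closest other point.
    return {i: min((j for j in range(len(points)) if j != i),
                   key=lambda j: dist(points[i], points[j]))
            for i in range(len(points))}
```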