SODA Conference 2024 Conference Paper
Near-Optimal Min-Sum Motion Planning for Two Square Robots in a Polygonal Environment
- Pankaj K. Agarwal
- Dan Halperin
- Micha Sharir
- Alex Steiger
Author name cluster
Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.
SODA Conference 2024 Conference Paper
SODA Conference 2024 Conference Paper
Vertical decomposition is a widely used general technique for decomposing the cells of arrangements of semi-algebraic sets in ℝ^d into constant-complexity subcells. In this paper, we settle in the affirmative a few long-standing open problems involving the vertical decomposition of substructures of arrangements for d = 3, 4: (i) Let S be a collection of n semi-algebraic sets of constant complexity in ℝ^3, and let U(m) be an upper bound on the complexity of the union U(S′) of any subset S′ ⊆ S of size at most m. We prove that the complexity of the vertical decomposition of the complement of U(S) is O*(n^2 + U(n)) (where the O*(·) notation hides subpolynomial factors). We also show that the complexity of the vertical decomposition of the entire arrangement A(S) is O*(n^2 + X), where X is the number of vertices in A(S). (ii) Let F be a collection of n trivariate functions whose graphs are semi-algebraic sets of constant complexity. We show that the complexity of the vertical decomposition of the portion of the arrangement A(F) in ℝ^4 lying below the lower envelope of F is O*(n^3). These results lead to efficient algorithms for a variety of problems involving these decompositions, including algorithms for constructing the decompositions themselves and for constructing (1/r)-cuttings of substructures of arrangements of the kinds considered above. One additional algorithm of interest is for output-sensitive point-enclosure queries amid semi-algebraic sets in three or four dimensions. In addition, as a main domain of applications, we study various proximity problems involving points and lines in ℝ^3: we first present a linear-size data structure for answering nearest-neighbor queries, with points, amid n lines in ℝ^3 in O*(n^{2/3}) time per query. We also study the converse problem, where we return the nearest neighbor of a query line amid n input points, or lines, in ℝ^3.
We obtain a data structure of O*(n^4) size that answers a nearest-neighbor query in O(log n) time.
* Work by Pankaj Agarwal has been partially supported by NSF grants IIS-18-14493, CCF-20-07556, and CCF-22-23870. Work by Esther Ezra has been partially supported by Israel Science Foundation grant 800/22, and also by US–Israel Binational Science Foundation grant 2022131. Work by Micha Sharir has been partially supported by Israel Science Foundation grant 260/18.
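As a point of reference for the nearest-neighbor queries amid lines described above, here is the naive baseline these data structures improve on: an exact linear scan over the lines, computing each point-to-line distance via a cross product. This is a sketch with illustrative helper names, not the paper's method; the paper's structure answers such queries in O*(n^{2/3}) time rather than the O(n) scan below.

```python
import math

def point_line_distance(p, a, d):
    """Distance from point p to the line {a + t*d} in R^3 (d a direction vector)."""
    ap = [p[i] - a[i] for i in range(3)]
    # cross product ap x d; its norm equals |ap| * |d| * sin(angle)
    c = [ap[1] * d[2] - ap[2] * d[1],
         ap[2] * d[0] - ap[0] * d[2],
         ap[0] * d[1] - ap[1] * d[0]]
    return math.sqrt(sum(x * x for x in c)) / math.sqrt(sum(x * x for x in d))

def nearest_line(p, lines):
    """lines: list of (anchor, direction) pairs. Returns (index, distance) of the nearest line."""
    best = min(range(len(lines)),
               key=lambda i: point_line_distance(p, *lines[i]))
    return best, point_line_distance(p, *lines[best])
```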
SODA Conference 2021 Conference Paper
Let C be a set of n axis-aligned cubes of arbitrary sizes in ℝ^3 in general position. Let U := U(C) be their union, and let κ be the number of vertices on ∂U; κ can vary between O(1) and O(n^2). We show that cl(ℝ^3 \ U) can be decomposed into O(κ log^4 n) axis-aligned boxes with pairwise-disjoint interiors. Given a boundary representation of U, such a decomposition can be computed in O(n log^2 n + κ log^6 n) time. We also show that a decomposition of size O(σ log^4 n + κ log^2 n), where σ is the number of input cubes that appear on ∂U, can be computed in O(n log^2 n + σ log^8 n + κ log^6 n) time. The complexity and runtime bounds improve to O(n log n) if all cubes in C are congruent.
SODA Conference 2018 Conference Paper
We present an efficient construction of additively weighted Voronoi diagrams on planar graphs. Let G be a planar graph with n vertices and b sites that lie on a constant number of faces. We show how to preprocess G in Õ(nb^2) time so that one can compute any additively weighted Voronoi diagram for these sites in Õ(b) time. We use this construction to compute the diameter of a directed planar graph with real arc lengths in Õ(n^{5/3}) time. This improves the recent breakthrough result of Cabello (SODA'17), both by improving the running time (from Õ(n^{11/6})), and by providing a deterministic algorithm. It is in fact the first truly subquadratic deterministic algorithm for this problem. Our use of Voronoi diagrams to compute the diameter follows that of Cabello, but he used abstract Voronoi diagrams, which makes his diameter algorithm more involved, more expensive, and randomized. As in Cabello's work, our algorithm can also compute the Wiener index of a planar graph (i.e., the sum of all pairwise distances) within the same bound. Our construction of Voronoi diagrams for planar graphs is of independent interest. It has already been used to obtain fast exact distance oracles for planar graphs [Cohen-Addad et al., FOCS'17].
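The object being constructed above can be illustrated at small scale: in an additively weighted Voronoi diagram on a graph, each site s carries a weight w(s), and a vertex v belongs to the site minimizing w(s) + dist(s, v). A single multi-source Dijkstra computes this assignment; this is a sketch of the definition, not of the paper's Õ(b)-time construction for planar graphs.

```python
import heapq

def graph_voronoi(adj, sites):
    """adj: {u: [(v, length), ...]} (symmetric for undirected graphs);
    sites: {s: additive_weight}.
    Returns {v: (owner_site, w(owner) + dist(owner, v))}."""
    dist = {}
    # seed the heap with every site at its additive weight
    heap = [(w, s, s) for s, w in sites.items()]
    heapq.heapify(heap)
    while heap:
        d, owner, u = heapq.heappop(heap)
        if u in dist:          # already settled by a closer (or equal) site
            continue
        dist[u] = (owner, d)
        for v, length in adj.get(u, []):
            if v not in dist:
                heapq.heappush(heap, (d + length, owner, v))
    return dist
```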
SODA Conference 2017 Conference Paper
We describe a new data structure for dynamic nearest neighbor queries in the plane with respect to a general family of distance functions that includes L_p-norms and additively weighted Euclidean distances, and for general (convex, pairwise disjoint) sites that have constant description complexity (line segments, disks, etc.). Our data structure has a polylogarithmic update and query time, improving an earlier data structure of Agarwal, Efrat and Sharir that required O(n^ε) time for an update and O(log n) time for a query [1]. Our data structure has numerous applications, and in all of them it gives faster algorithms, typically reducing an O(n^ε) factor in the bounds to polylogarithmic. To further demonstrate its effectiveness, we give here two new applications: an efficient construction of a spanner in a disk intersection graph, and a data structure for efficient connectivity queries in a dynamic disk graph. To obtain this data structure, we combine and extend various techniques and obtain several side results that are of independent interest. Our data structure depends on the existence and an efficient construction of "vertical" shallow cuttings in arrangements of bivariate algebraic functions. We prove that an appropriate level in an arrangement of a random sample of a suitable size provides such a cutting. To compute it efficiently, we develop a randomized incremental construction algorithm for finding the lowest k levels in an arrangement of bivariate algebraic functions (we mostly consider here collections of functions whose lower envelope has linear complexity, as is the case in the dynamic nearest-neighbor context). To analyze this algorithm, we improve a longstanding bound on the combinatorial complexity of the vertical decomposition of these levels. Finally, to obtain our structure, we plug our vertical shallow cutting construction into Chan's algorithm for efficiently maintaining the lower envelope of a dynamic set of planes in ℝ^3.
While doing this, we also revisit Chan's technique and present a variant that uses a single binary counter, with a simpler analysis and an improved amortized deletion time.
SODA Conference 2017 Conference Paper
SODA Conference 2017 Conference Paper
We study a wide spectrum of incidence problems involving points and curves or points and surfaces in ℝ^3. The current (and in fact the only viable) approach to such problems, pioneered by Guth and Katz [35, 36], requires a variety of tools from algebraic geometry, most notably (i) the polynomial partitioning technique, and (ii) the study of algebraic surfaces that are ruled by lines or, in more recent studies [37], by algebraic curves of some constant degree. By exploiting and refining these tools, we obtain new and improved bounds for numerous incidence problems in ℝ^3. In broad terms, we consider two kinds of problems, those involving points and constant-degree algebraic curves, and those involving points and constant-degree algebraic surfaces. In some variants we assume that the points lie on some fixed constant-degree algebraic variety, and in others we consider arbitrary sets of points in 3-space. The case of points and curves has been considered in several previous studies, starting with Guth and Katz's work on points and lines [36]. Our results, which are based on a recent work of Guth and Zahl [37] concerning surfaces that are doubly ruled by curves, provide a grand generalization of all previous results. We reconstruct the bound for points and lines, and improve, in certain significant ways, recent bounds involving points and circles (in [50]) and points and arbitrary constant-degree algebraic curves (in [49]). While in these latter instances the bounds are not known (and are strongly suspected not) to be tight, our bounds are, in a certain sense, the best that can be obtained with this approach, given the current state of knowledge. In the case of points and surfaces, the incidence graph between them can contain large complete bipartite graphs, each involving points on some curve and surfaces containing this curve (unlike earlier studies, we do not rule out this possibility, which makes our approach more general).
Our bounds estimate the total size of the vertex sets in such a complete bipartite graph decomposition of the incidence graph. In favorable cases, our bounds translate into actual incidence bounds. Overall, here too our results can be regarded as providing a "grand generalization" of most of the previous studies of (special instances of) this problem. As applications of our point-surface incidence bounds, we consider the problems of distinct and repeated distances determined by a set of n points in ℝ^3, two of the most celebrated open problems in combinatorial geometry. We obtain new and improved bounds for two special cases, one in which the points lie on some algebraic variety of constant degree, and one involving distances between pairs in P_1 × P_2, where P_1 is contained in a variety and P_2 is arbitrary.
STOC Conference 2016 Conference Paper
SODA Conference 2016 Conference Paper
Let H be a set of n non-vertical planes in three dimensions, and let r < n be a parameter. We give a simple alternative proof of the existence of a (1/r)-cutting of the first n/r levels of the arrangement A(H), which consists of O(r) semi-unbounded vertical triangular prisms. The same construction yields an approximation of the (n/r)-level by a terrain consisting of O(r/ε^3) triangular faces, which lies entirely between the levels (1 ± ε)n/r. The proof does not use sampling, and exploits techniques based on planar separators and various structural properties of levels in three-dimensional arrangements and of planar maps. The proof is constructive, and leads to a simple randomized algorithm that computes the terrain in O(n + r^2 ε^{-6} log^3 r) expected time. An application of this technique allows us to mimic Matoušek's construction of cuttings in the plane [36], to obtain a similar construction of "layered" (1/r)-cuttings of the entire arrangement A(H), of optimal size O(r^3). Another application is a simplified optimal approximate range counting algorithm in three dimensions, competing with that of Afshani and Chan [1].
FOCS Conference 2015 Conference Paper
We show that the number of incidences between m distinct points and n distinct lines in ℝ^4 is O(2^{c√(log m)}(m^{2/5}n^{4/5} + m) + m^{1/2}n^{1/2}q^{1/4} + m^{2/3}n^{1/3}s^{1/3} + n), for a suitable absolute constant c, provided that no 2-plane contains more than s input lines, and no hyperplane or quadric contains more than q lines. The bound holds without the extra factor 2^{c√(log m)} when m ≤ n^{6/7} or m ≥ n^{5/3}. Except for this possible factor, the bound is tight in the worst case. The context of this work is incidence geometry, a topic that has been widely studied for more than three decades, with strong connections to a variety of topics, from range searching in computational geometry to the Kakeya problem in harmonic analysis and geometric measure theory. The area has picked up considerable momentum in the past seven years, following the seminal works of Guth and Katz [12, 13], where the latter work solves the point-line incidence problem in three dimensions, using new tools and techniques from algebraic geometry. This work extends their result to four dimensions. In doing so, it had to overcome many new technical hurdles that arise from the higher-dimensional context, by developing and adapting more advanced tools from algebraic geometry.
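The quantity being bounded above, the number of point-line incidences, can be computed directly in small instances. A minimal brute-force counter, stated in the plane for compactness (the theorem above concerns ℝ^4); exact integer or rational coordinates keep the membership test exact:

```python
def incidences(points, lines):
    """Count pairs (p, l) with p on l, in O(|P| * |L|) time.
    points: (x, y) pairs; lines: (a, b, c) triples meaning ax + by + c = 0.
    Use ints or fractions.Fraction for exact arithmetic."""
    count = 0
    for (x, y) in points:
        for (a, b, c) in lines:
            if a * x + b * y + c == 0:
                count += 1
    return count
```

On a 3 x 3 integer grid with the three vertical and three horizontal grid lines, every point lies on exactly two of the lines, giving 18 incidences.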
SODA Conference 2013 Conference Paper
The Fréchet distance is a similarity measure between two curves A and B that takes into account the location and ordering of the points along the two curves: informally, it is the minimum length of a leash required to connect a dog, walking along A, and its owner, walking along B, as they walk without backtracking along their respective curves from one endpoint to the other. The discrete Fréchet distance replaces the dog and its owner by a pair of frogs that can only reside on n and m specific stones on the curves A and B, respectively. These frogs hop from one stone to the next without backtracking, and the discrete Fréchet distance is the minimum length of a "leash" that connects the frogs and allows them to execute such a sequence of hops. It can be computed in quadratic time by a straightforward dynamic programming algorithm. We present the first subquadratic algorithm for computing the discrete Fréchet distance between two sequences of points in the plane. Assuming m ≤ n, the algorithm runs in subquadratic time, in the standard RAM model, using O(n) storage. Our approach uses the geometry of the problem in a subtle way to encode legal positions of the frogs as states of a finite automaton.
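The straightforward quadratic dynamic program mentioned above (the baseline the paper improves on) can be sketched directly: d[i][j] is the discrete Fréchet distance between the prefixes A[..i] and B[..j].

```python
import math

def discrete_frechet(A, B):
    """Discrete Frechet distance between point sequences A and B,
    via the classic O(nm) dynamic program."""
    n, m = len(A), len(B)
    d = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = math.dist(A[i], B[j])  # leash length at this frog position
            if i == 0 and j == 0:
                reach = 0.0
            elif i == 0:
                reach = d[0][j - 1]
            elif j == 0:
                reach = d[i - 1][0]
            else:
                # either frog (or both) hopped to get here
                reach = min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
            d[i][j] = max(cost, reach)
    return d[n - 1][m - 1]
```

For two parallel unit-separated segments sampled at their endpoints, the distance is 1: the frogs can advance in lockstep.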
SODA Conference 2013 Conference Paper
FOCS Conference 2012 Conference Paper
Let P be a set of n points in ℝ^d. We present a linear-size data structure for answering range queries on P with constant-complexity semi-algebraic sets as ranges, in time close to O(n^{1-1/d}). It essentially matches the performance of similar structures for simplex range searching, and, for d ≥ 5, significantly improves earlier solutions by the first two authors obtained in 1994. This almost settles a long-standing open problem in range searching. The data structure is based on the polynomial-partitioning technique of Guth and Katz [arXiv:1011.4105], which shows that for a parameter r, 1 …
SODA Conference 2012 Conference Paper
SODA Conference 2011 Conference Paper
FOCS Conference 2010 Conference Paper
We show that the number of geometric permutations of an arbitrary collection of n pairwise disjoint convex sets in ℝ^d, for d ≥ 3, is O(n^{2d-3} log n), improving Wenger's 20-year-old bound of O(n^{2d-2}).
SODA Conference 2009 Conference Paper
STOC Conference 2009 Conference Paper
SODA Conference 2008 Conference Paper
FOCS Conference 2007 Conference Paper
We show that the combinatorial complexity of the union of n "fat" tetrahedra in 3-space (i.e., tetrahedra all of whose solid angles are at least some fixed constant) of arbitrary sizes is O(n^{2+ε}), for any ε > 0; the bound is almost tight in the worst case, thus almost settling a conjecture of Pach et al. [24]. Our result extends, in a significant way, the result of Pach et al. [24] for the restricted case of nearly congruent cubes. The analysis uses cuttings, combined with the Dobkin–Kirkpatrick hierarchical decomposition of convex polytopes, in order to partition space into subcells, so that, on average, the overwhelming majority of the tetrahedra intersecting a subcell Δ behave as fat dihedral wedges in Δ. As an immediate corollary, we obtain that the combinatorial complexity of the union of n cubes in ℝ^3 having arbitrary side lengths is O(n^{2+ε}), for any ε > 0, again significantly extending the result of [24]. Our analysis can easily be extended to yield a nearly-quadratic bound on the complexity of the union of arbitrarily oriented fat triangular prisms (whose cross-sections have arbitrary sizes) in ℝ^3. Finally, we show that a simple variant of our analysis implies a nearly-linear bound on the complexity of the union of fat triangles in the plane.
SODA Conference 2007 Conference Paper
FOCS Conference 2006 Conference Paper
We develop efficient (1 + ε)-approximation algorithms for generalized facility location problems. Such facilities are not restricted to being points in ℝ^d, and can represent more complex structures such as linear facilities (lines in ℝ^d, j-dimensional flats), etc. We introduce coresets for weighted (point) facilities. These prove to be useful for such generalized facility location problems, and we provide efficient algorithms for their construction. Applications include k-means and k-median generalizations, i.e., finding k lines that minimize the sum (or sum of squares) of the distances from each input point to its nearest line. Other applications are generalizations of linear regression problems to multiple regression lines, new SVD/PCA generalizations, and many more. The results significantly improve on previous work, which dealt efficiently only with special cases. Open-source code for the algorithms in this paper is also available.
SODA Conference 2006 Conference Paper
SODA Conference 2006 Conference Paper
SODA Conference 2005 Conference Paper
SODA Conference 2005 Conference Paper
SODA Conference 2005 Invited Paper
SODA Conference 2004 Conference Paper
SODA Conference 2004 Conference Paper
STOC Conference 2003 Conference Paper
STOC Conference 2003 Conference Paper
SODA Conference 2002 Conference Paper
SODA Conference 2002 Conference Paper
FOCS Conference 2002 Conference Paper
We obtain a near-tight bound of O(n^{3+ε}), for any ε > 0, on the complexity of the overlay of the minimization diagrams of two collections of surfaces in four dimensions. This settles a long-standing problem in the theory of arrangements, most recently cited by Agarwal and Sharir (2000), and substantially improves and simplifies a result previously published by the authors (2002). Our bound has numerous algorithmic and combinatorial applications, some of which are presented in this paper. Our result is obtained by introducing a new approach to the analysis of combinatorial structures arising in geometric arrangements of surfaces. This approach, which we call the 'partition technique', is based on k-fold divide and conquer, in which a given collection F of n surfaces is partitioned into k subcollections F_i of n/k surfaces each, and the complexity of the relevant combinatorial structure in F is recursively related to the complexities of the corresponding structures in each of the F_i's. We introduce this approach by applying it first to obtain a new simple proof of the known near-quadratic bound on the complexity of an overlay of two minimization diagrams of collections of surfaces in ℝ^3, thereby simplifying the previously available proof (1996).
FOCS Conference 2001 Conference Paper
We obtain improved bounds on the complexity of m distinct faces in an arrangement of n circles and in an arrangement of n unit circles. The bounds are worst-case tight for unit circles, and, for general circles, they nearly coincide with the best known bounds for the number of incidences between m points and n circles.
SODA Conference 2001 Conference Paper
SODA Conference 2000 Conference Paper
SODA Conference 1999 Conference Paper
SODA Conference 1997 Conference Paper
SODA Conference 1996 Conference Paper
TCS Journal 1995 Journal Article
SODA Conference 1995 Conference Paper
SODA Conference 1994 Conference Paper
FOCS Conference 1993 Conference Paper
We show that the combinatorial complexity of the lower envelope of n surfaces or surface patches in d-space (d ≥ 3), all algebraic of constant maximum degree, and bounded by algebraic surfaces of constant maximum degree, is O(n^{d-1+ε}), for any ε > 0; the constant of proportionality depends on ε, d, and the shape and degree of the surface patches and of their boundaries. This is the first nontrivial general upper bound for this problem, and it almost establishes a long-standing conjecture that the complexity of the envelope is O(n^{d-2} λ_q(n)) for some constant q depending on the shape and degree of the surfaces (where λ_q(n) is the maximum length of (n, q) Davenport–Schinzel sequences). We also present a randomized algorithm for computing the envelope in three dimensions, with expected running time O(n^{2+ε}), and give several applications of the new bounds.
STOC Conference 1993 Conference Paper
FOCS Conference 1993 Conference Paper
We consider the problem of planning the motion of an arbitrary k-sided polygonal robot B, free to translate and rotate in a polygonal environment V bounded by n edges. We show that the combinatorial complexity of a single connected component of the free configuration space of B is k^3 n^2 2^{O(log^{2/3} n)}. This is a significant improvement of the naive bound O((kn)^3); when k is constant, which is often the case in practice, this yields a near-quadratic bound on the complexity of such a component, which almost settles (in this special case) a long-standing conjecture regarding the complexity of a single cell in a three-dimensional arrangement of surfaces. We also present an algorithm that constructs a single component of the free configuration space of B in time O(n^{2+ε}), for any ε > 0, assuming B has a constant number of sides. This algorithm, combined with some standard techniques in motion planning, yields a solution to the underlying motion planning problem within the same asymptotic running time.
SODA Conference 1993 Conference Paper
FOCS Conference 1993 Conference Paper
We show that the number of vertices, edges, and faces of the union of k convex polyhedra in 3-space, having a total of n faces, is O(k^3 + kn log^2 k). This bound is almost tight in the worst case. We also describe a rather simple randomized incremental algorithm for computing the boundary of the union in O(k^3 + kn log^3 k) expected time.
SODA Conference 1992 Conference Paper
TCS Journal 1992 Journal Article
SODA Conference 1992 Conference Paper
TCS Journal 1991 Journal Article
SODA Conference 1991 Conference Paper
FOCS Conference 1991 Conference Paper
It is shown that for every fixed δ > 0 the following holds: if F is a union of n triangles, all of whose angles are at least δ, then the complement of F has O(n) connected components, and the boundary of F consists of O(n log log n) segments. This latter complexity becomes linear if all triangles are of roughly the same size or if they are all infinite wedges. A randomized algorithm that computes F in expected time O(n 2^{α(n)} log n) is given. Several applications of these results are presented.
SODA Conference 1991 Conference Paper
FOCS Conference 1990 Conference Paper
A number of rendering algorithms in computer graphics sort three-dimensional objects by depth and assume that there is no cycle that makes the sorting impossible. One way to resolve the problem caused by cycles is to cut the objects into smaller pieces. The problem of estimating how many such cuts are always sufficient is addressed, and a few related algorithmic and combinatorial geometry problems are considered.
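The acyclicity assumption behind depth sorting can be tested directly: the "must be drawn before" relation among objects forms a directed graph, and a painter's-algorithm order exists exactly when that graph is acyclic. A small sketch using Kahn's topological sort (the cutting question studied above arises precisely when this returns no order):

```python
from collections import deque

def depth_order(n, behind):
    """behind: list of (u, v) meaning object u must be drawn before v.
    Returns a valid drawing order of objects 0..n-1, or None if a
    cycle makes depth sorting impossible."""
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in behind:
        succ[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    # if some objects were never freed, they lie on a cycle
    return order if len(order) == n else None
```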
STOC Conference 1989 Conference Paper
FOCS Conference 1989 Conference Paper
Several output-sensitive algorithms for hidden surface removal in a collection of n horizontal triangles, viewed from a point at z = -∞, are derived. If k is the combinatorial complexity of the output visibility map, then the result is a simple (deterministic) algorithm that runs in time O(n √k log n), and several improved and more sophisticated algorithms that use randomization. One of these algorithms runs in time O(n^{4/3} log^γ n + k^{3/5} n^{4/5+δ}) for any δ > 0, where γ is some constant less than 3. The performance of the other algorithms is potentially even faster; it depends on other parameters of the structure of the given triangles, as well as on the output size. A variant of the simple algorithm performs hidden surface removal for n (nonintersecting) balls in time O(n^{3/2} log n + k).
FOCS Conference 1988 Conference Paper
The authors study both the incidence counting and the many-faces problem for various kinds of curves, including lines, pseudolines, unit circles, general circles, and pseudocircles. They also extend the analysis to three dimensions, where they concentrate on the case of spheres, which is relevant for the three-dimensional unit-distance problem. They obtain upper bounds for certain quantities. The authors believe that the techniques they use are of independent interest.
ICRA Conference 1988 Conference Paper
An approach to manipulation tasks involving dextrous hands is presented. This approach treats the gripped object as a virtual finger and hence reduces the description of the task objectives to target forces and torques at reference points on the object. Computations are carried out for the case of three fingers holding a planar object. The authors implemented these ideas on the Four Finger Manipulator at New York University. Experimental results are presented for door opening and wall following with a gripped tool.
ICRA Conference 1987 Conference Paper
FOCS Conference 1987 Conference Paper
We consider the problem of obtaining sharp (nearly quadratic) bounds for the combinatorial complexity of the lower envelope (i.e., pointwise minimum) of a collection of n bivariate (or generally multivariate) continuous and "simple" functions, and of designing efficient algorithms for the calculation of this envelope. This problem generalizes the well-studied univariate case (whose analysis is based on the theory of Davenport–Schinzel sequences), but appears to be much more difficult and still largely unsolved. It is a central problem that arises in many areas in computational and combinatorial geometry, and has numerous applications including generalized planar Voronoi diagrams, hidden surface elimination for intersecting surfaces, purely translational motion planning, finding common transversals of polyhedra, and more. In this abstract we provide several partial solutions and generalizations of this problem, and apply them to the problems mentioned above. The most significant of our results is that the lower envelope of n triangles in three dimensions has combinatorial complexity at most O(n^2 α(n)) (where α(n) is the extremely slowly growing inverse of Ackermann's function), that this bound is tight in the worst case, and that this envelope can be calculated in time O(n^2 α(n)).
FOCS Conference 1986 Conference Paper
We present efficient algorithms for the following geometric problems: (i) Preprocessing of a 2-D polyhedral terrain so as to support fast ray shooting queries from a fixed point. (ii) Determining whether two disjoint interlocking simple polygons can be separated from one another by a sequence of translations. (iii) Determining whether a given convex polygon can be translated and rotated so as to fit into another given polygonal region. (iv) Motion planning for a convex polygon in the plane amidst polygonal barriers. All our algorithms make use of Davenport–Schinzel sequences and some generalizations of them; these sequences are a powerful combinatorial tool applicable in contexts which involve the calculation of the pointwise maximum or minimum of a collection of functions.
FOCS Conference 1985 Conference Paper
This paper investigates the computational complexity of planning the motion of a body B in 2-D or 3-D space, so as to avoid collision with moving obstacles of known, easily computed, trajectories. Dynamic movement problems are of fundamental importance to robotics, but their computational complexity has not previously been investigated. We provide evidence that the 3-D dynamic movement problem is intractable even if B has only a constant number of degrees of freedom of movement. In particular, we prove the problem is PSPACE-hard if B is given a velocity modulus bound on its movements, and NP-hard even if B has no velocity modulus bound, where in both cases B has 6 degrees of freedom. To prove these results we use a unique method of simulation of a Turing machine which uses time to encode configurations (whereas previous lower bound proofs in robotics used the system position to encode configurations and so required an unbounded number of degrees of freedom). We also investigate a natural class of dynamic problems which we call asteroid avoidance problems: B, the object we wish to move, is a convex polyhedron which is free to move by translation with bounded velocity modulus, and the polyhedral obstacles have known translational trajectories but cannot rotate. This problem has many applications to robot, automobile, and aircraft collision avoidance. Our main positive results are polynomial-time algorithms for the 2-D asteroid avoidance problem with a bounded number of obstacles, as well as single-exponential-time and n^{O(log n)}-space algorithms for the 3-D asteroid avoidance problem with an unbounded number of obstacles. Our techniques for solving these asteroid avoidance problems are novel in the sense that they are completely unrelated to previous algorithms for planning movement in the case of static obstacles. We also give some additional positive results for various other dynamic movers problems, and in particular give polynomial-time algorithms for the case in which B has no velocity bounds and the movements of obstacles are algebraic in space-time.
FOCS Conference 1985 Conference Paper
We present several results related to the problem of estimating the complexity M(f_1, …, f_n) of the pointwise minimum of n continuous univariate or bivariate functions f_1, …, f_n under the assumption that no pair (resp. triple) of these functions intersects in more than some fixed number s of points. Our main result is that in the one-dimensional case M(f_1, …, f_n) = O(n α(n)^{O(α(n)^{s-3})}) (α(n) is the functional inverse of Ackermann's function). In the two-dimensional case the problem is substantially harder, and we have only some initial estimates on M, including a tight bound Θ(n^2) if s = 2, and a worst-case lower bound Ω(n^2 α(n)) for s ≥ 6. The treatment of the two-dimensional problem is based on certain properties of the intersection patterns of a collection of planar Jordan curves, which we also develop and prove here.
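The quantity studied here, the sequence of functions attaining the pointwise minimum across the domain, can be visualized by sampling. A small illustrative sketch (it approximates the envelope's attaining sequence on a grid rather than computing it exactly, so very close breakpoints can be missed):

```python
def envelope_sequence(funcs, lo, hi, steps=10000):
    """Sample the pointwise minimum of funcs on [lo, hi] and return the
    sequence of indices of the function attaining it, with repeats collapsed.
    The length of this sequence is (an approximation of) the envelope
    complexity M(f_1, ..., f_n) over the sampled interval."""
    seq = []
    for k in range(steps + 1):
        x = lo + (hi - lo) * k / steps
        i = min(range(len(funcs)), key=lambda j: funcs[j](x))
        if not seq or seq[-1] != i:   # record only transitions
            seq.append(i)
    return seq
```

For the parabola x^2 against the constant function 1 on [-2, 2], the constant is lowest on both flanks and the parabola in the middle, so the attaining sequence alternates.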
FOCS Conference 1984 Conference Paper
Davenport–Schinzel sequences are sequences that do not contain forbidden subsequences of alternating symbols. They arise in the computation of the envelope of a set of functions. We show that the maximal length of a Davenport–Schinzel sequence composed of n symbols is Θ(n α(n)), where α(n) is the functional inverse of Ackermann's function, and is thus very slowly growing. This is achieved by establishing an equivalence between such sequences and generalized path-compression schemes on rooted trees, and then by analyzing these schemes.
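The forbidden-alternation condition can be checked directly on small sequences: an order-s Davenport–Schinzel sequence contains no alternation a…b…a…b… of length s + 2 between any two symbols. A brute-force checker (illustrative only; quadratic in the alphabet size):

```python
def longest_alternation(seq):
    """Return the length of the longest alternating subsequence a,b,a,b,...
    over any ordered pair of distinct symbols in seq."""
    best = 0
    symbols = set(seq)
    for a in symbols:
        for b in symbols:
            if a == b:
                continue
            length, want = 0, a
            for x in seq:
                if x == want:          # greedy matching is optimal here
                    length += 1
                    want = b if want == a else a
            best = max(best, length)
    return best
```

For example, the sequence 1,2,1,2,1 realizes an alternation of length 5, so it is not an order-3 Davenport–Schinzel sequence (which forbids ababa).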
STOC Conference 1984 Conference Paper
STOC Conference 1984 Conference Paper
STOC Conference 1984 Conference Paper
STOC Conference 1983 Conference Paper