Arrow Research

Author name cluster

Bernard Chazelle

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

44 papers
2 author rows

Possible papers


JMLR 2019 Journal Article

Iterated Learning in Dynamic Social Networks

  • Bernard Chazelle
  • Chu Wang

A classic finding by Kalish et al. (2007) shows that no language can be learned iteratively by rational agents in a self-sustained manner. In other words, if $A$ teaches a foreign language to $B$, who then teaches what she learned to $C$, and so on, the language will quickly get lost and agents will wind up teaching their own common native language. If so, how can linguistic novelty ever be sustained? We address this apparent paradox by considering the case of iterated learning in a social network: we show that by varying the lengths of the learning sessions over time or by keeping the networks dynamic, it is possible for iterated learning to endure forever with arbitrarily small loss. © JMLR 2019.
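The Kalish-style convergence-to-the-prior phenomenon that motivates this paper can be caricatured in a few lines. The sketch below is not the paper's network model: it is a minimal deterministic toy in which each generation's learner forms a posterior-mean estimate of a single language parameter under a Beta(a, b) prior encoding the common native-language bias.

```python
def iterate_learning(p0, n, a, b, generations):
    """Deterministic caricature of an iterated-learning chain: each learner
    sees n expected utterances from the previous teacher and adopts the
    posterior-mean estimate under a Beta(a, b) prior."""
    p = p0
    history = [p]
    for _ in range(generations):
        p = (n * p + a) / (n + a + b)   # posterior mean with expected counts
        history.append(p)
    return history

# Start from a "foreign" language (p0 = 0.9) with a native-biased prior
# whose mean is a / (a + b) = 0.1: the chain forgets the foreign language.
hist = iterate_learning(p0=0.9, n=20, a=1, b=9, generations=200)
```

The fixed point a/(a+b) is the prior mean, independent of p0 and n: iterated learning converges to the learners' bias, which is exactly the paradox the paper addresses by varying session lengths or making the network dynamic.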

FOCS 2012 Conference Paper

The Dynamics of Influence Systems

  • Bernard Chazelle

Influence systems form a large class of multiagent systems designed to model how influence, broadly defined, spreads across a dynamic network. We build a general analytical framework which we then use to prove that, while Turing-complete, influence dynamics of the diffusive type is almost surely asymptotically periodic. Besides resolving the dynamics of a popular family of multiagent systems, the other contribution of this work is to introduce a new type of renormalization-based bifurcation analysis for multiagent systems.

STOC 2006 Conference Paper

Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform

  • Nir Ailon
  • Bernard Chazelle

We introduce a new low-distortion embedding of $\ell_2^d$ into $\ell_p^{O(\log n)}$ ($p = 1, 2$), called the Fast Johnson-Lindenstrauss Transform (FJLT). The FJLT is faster than standard random projections and just as easy to implement. It is based upon the preconditioning of a sparse projection matrix with a randomized Fourier transform. Sparse random projections are unsuitable for low-distortion embeddings. We overcome this handicap by exploiting the "Heisenberg principle" of the Fourier transform, i.e., its local-global duality. The FJLT can be used to speed up search algorithms based on low-distortion embeddings in $\ell_1$ and $\ell_2$. We consider the case of approximate nearest neighbors in $\ell_2^d$. We provide a faster algorithm using classical projections, which we then further speed up by plugging in the FJLT. We also give a faster algorithm for searching over the hypercube.
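For intuition, here is a hedged sketch of the FJLT recipe: random signs, a fast orthogonal transform to spread energy, then a sparse projection. This sketch substitutes a Walsh-Hadamard transform and plain coordinate sampling for the paper's sparse projection matrix, so it is a close cousin (a subsampled randomized Hadamard transform), not the paper's exact construction.

```python
import numpy as np

def fwht(x):
    """Orthonormal fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x / np.sqrt(len(x))

def srht_embed(x, k, rng):
    """FJLT-style embedding sketch: random signs (D), a fast orthonormal
    transform (H) to spread energy, then k sampled coordinates rescaled
    by sqrt(d/k) as a stand-in for the sparse projection (P)."""
    d = len(x)
    signs = rng.choice([-1.0, 1.0], size=d)
    mixed = fwht(signs * x)
    rows = rng.choice(d, size=k, replace=False)
    return np.sqrt(d / k) * mixed[rows]
```

Because the normalized Hadamard matrix is symmetric and orthonormal, `fwht` preserves norms exactly and is its own inverse; the sign-flip step is what rules out the pathological sparse vectors the abstract alludes to.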

STOC 2004 Conference Paper

Lower bounds for linear degeneracy testing

  • Nir Ailon
  • Bernard Chazelle

In the late nineties, Erickson proved a remarkable lower bound on the decision tree complexity of one of the central problems of computational geometry: given n numbers, do any r of them add up to 0? His lower bound of Ω(n^⌈r/2⌉), for any fixed r, is optimal if the polynomials at the nodes are linear and at most r-variate. We generalize his bound to s-variate polynomials for s ≫ r. Erickson's bound decays quickly as r grows and never reaches above pseudo-polynomial: we provide an exponential improvement. Our arguments are based on three ideas: (i) a geometrization of Erickson's proof technique; (ii) the use of error-correcting codes; and (iii) a tensor product construction for permutation matrices.
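The problem whose decision-tree complexity is bounded here (r-linear degeneracy testing, i.e., r-SUM) is easy to state in code. A naive O(n^r) tester, for contrast with the lower bound (not from the paper):

```python
from itertools import combinations

def has_degeneracy(nums, r):
    """Naive r-SUM test: do any r of the given numbers add up to 0?
    Enumerates all C(n, r) subsets, so it runs in O(n^r) time."""
    return any(sum(combo) == 0 for combo in combinations(nums, r))

has_degeneracy([-3, 1, 2, 5], 3)   # True: -3 + 1 + 2 == 0
has_degeneracy([1, 2, 4, 8], 3)    # False: no three entries sum to 0
```

The lower bounds in the paper say that even much cleverer decision trees cannot do dramatically better than this exponent suggests.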

FOCS 1997 Conference Paper

A Faster Deterministic Algorithm for Minimum Spanning Trees

  • Bernard Chazelle

A deterministic algorithm for computing a minimum spanning tree of a connected graph is presented. Its running time is O(m α log α), where α = α(m, n) is a functional inverse of Ackermann's function and n (resp. m) is the number of vertices (resp. edges). This improves on the previous, ten-year-old bound of (roughly) O(m log log* m).
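Chazelle's algorithm itself relies on soft heaps and is intricate. As a hedged point of reference only (this is not the paper's algorithm), the inverse-Ackermann function α(m, n) already appears in the classical union-find analysis, e.g. inside Kruskal's algorithm:

```python
def mst_kruskal(n, edges):
    """Kruskal's MST on vertices 0..n-1; edges are (weight, u, v) triples.
    After the O(m log m) sort, union-find with path compression and union
    by rank processes all edges in O(m alpha(m, n)) time."""
    parent = list(range(n))
    rank = [0] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            if rank[ru] < rank[rv]:
                ru, rv = rv, ru
            parent[rv] = ru                # union by rank
            rank[ru] += rank[ru] == rank[rv]
            total += w
            chosen.append((u, v, w))
    return total, chosen

# Four vertices; the MST picks weights 1, 2, 7 for a total of 10.
total, chosen = mst_kruskal(4, [(1, 0, 1), (2, 1, 2), (4, 0, 2), (7, 2, 3)])
```

The point of the paper is to get α into the bound for the whole MST computation, without the sorting bottleneck this sketch inherits.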

FOCS 1994 Conference Paper

A Spectral Approach to Lower Bounds

  • Bernard Chazelle

We establish a nonlinear lower bound for halfplane range searching over a group. Specifically, we show that summing up the weights of n (weighted) points within n halfplanes requires Ω(n log n) additions and subtractions. This is the first nontrivial lower bound for range searching over a group. By contrast, range searching over a semigroup (which forbids subtractions) is almost completely understood. Our proof has two parts: first, we develop a general, entropy-based method for relating the linear circuit complexity of a linear map A to the spectrum of A^T A. In the second part of the proof, we design a "high-spectrum" geometric set system and, using techniques from discrepancy theory, we estimate the median eigenvalue of its associated map. Interestingly, the method also shows that using up to a linear number of help gates cannot help; these are gates that can compute any bivariate function. The best feature of our method is that it is very general: with any instance of range searching we associate a quadratic form, and any lower bound on the mid-range of its spectrum implies a lower bound on the complexity of that range searching problem. The main drawback of our approach is that it (probably) yields weak lower bounds. Another shortcoming is that the method does not seem to generalize to range searching over rings or fields.
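The proof strategy, relating a range-searching instance to the spectrum of A^T A for its incidence matrix A, can be played with numerically. A toy illustration with random points and halfplanes (purely illustrative; the paper's "high-spectrum" set system is a careful construction, not a random one):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(64, 2))          # 64 points in the square
H = rng.uniform(-1, 1, size=(64, 3))            # halfplanes a*x + b*y <= c

# Incidence matrix: A[i, j] = 1 iff point j lies in halfplane i.
A = ((pts @ H[:, :2].T) <= H[:, 2]).astype(float).T

eig = np.linalg.eigvalsh(A.T @ A)   # spectrum of A^T A (all eigenvalues >= 0)
median_eig = np.median(eig)
```

In the paper's terms, a lower bound on the mid-range of this spectrum (here just inspected numerically) translates into a lower bound on the circuit complexity of the corresponding range-searching problem.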

FOCS 1993 Conference Paper

Geometric Discrepancy Revisited

  • Bernard Chazelle

Discrepancy theory addresses the general issue of approximating one measure by another one. Originally an offshoot of diophantine approximation theory, the area has expanded into applied mathematics, and now, computer science. Besides providing the theoretical foundation for sampling, it holds some of the keys to understanding the computational power of randomization. A few applications of discrepancy theory are listed. We give elementary algorithms for estimating the discrepancy between various measures arising in practice. We also present a general technique for proving discrepancy lower bounds.

FOCS 1993 Conference Paper

Product Range Spaces, Sensitive Sampling, and Derandomization

  • Hervé Brönnimann
  • Bernard Chazelle
  • Jirí Matousek 0001

We introduce the concept of a sensitive ε-approximation, and use it to derive a more efficient algorithm for computing ε-nets. We define and investigate product range spaces, for which we establish sampling theorems analogous to the standard finite VC-dimensional case. This generalizes and simplifies results from previous works. We derive a simpler optimal deterministic convex hull algorithm, and by extending the method to the intersection of a set of balls with the same radius, we obtain an O(n log^3 n) deterministic algorithm for computing the diameter of an n-point set in 3-dimensional space.

FOCS 1991 Conference Paper

An Optimal Convex Hull Algorithm and New Results on Cuttings (Extended Abstract)

  • Bernard Chazelle

An optimal algorithm for computing hyperplane cuttings is given. It results in a new kind of cutting, which enjoys all the properties of the previous ones and, in addition, can be refined by composition. An optimal algorithm for computing the convex hull of a finite point set in any fixed dimension is also given.
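The paper's hull algorithm works in any fixed dimension. For the planar intuition behind "optimal convex hull", here is the standard O(n log n) monotone-chain sketch (Andrew's algorithm, not the paper's method):

```python
def convex_hull(points):
    """Andrew's monotone chain: planar convex hull in O(n log n),
    returned in counterclockwise order starting from the lowest-leftmost point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()                     # drop points making a non-left turn
            h.append(p)
        return h

    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]          # endpoints shared by both chains

convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])
# The interior point (0.5, 0.5) is discarded; the four corners remain.
```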

FOCS 1990 Conference Paper

Counting and Cutting Cycles of Lines and Rods in Space

  • Bernard Chazelle
  • Herbert Edelsbrunner
  • Leonidas J. Guibas
  • Richard Pollack
  • Raimund Seidel
  • Micha Sharir
  • Jack Snoeyink

A number of rendering algorithms in computer graphics sort three-dimensional objects by depth and assume that there is no cycle that makes the sorting impossible. One way to resolve the problem caused by cycles is to cut the objects into smaller pieces. The problem of estimating how many such cuts are always sufficient is addressed. A few related algorithmic and combinatorial geometry problems are considered.

FOCS 1990 Conference Paper

Triangulating a Simple Polygon in Linear Time

  • Bernard Chazelle

A linear-time deterministic algorithm for triangulating a simple polygon is developed. The algorithm is elementary in that it does not require the use of any complicated data structures; in particular, it does not need dynamic search trees, finger trees, or fancy point location structures.
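Chazelle's linear-time algorithm is famously involved. As a hedged contrast, the elementary O(n^2) ear-clipping method below shows what the algorithm computes (a triangulation of a simple polygon, given counterclockwise) without matching its running time:

```python
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def point_in_triangle(p, a, b, c):
    """p inside or on triangle abc, assuming a, b, c counterclockwise."""
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def ear_clip(poly):
    """O(n^2) ear clipping of a simple CCW polygon given as (x, y) tuples.
    Returns n - 2 triangles as index triples into poly."""
    idx = list(range(len(poly)))
    tris = []
    while len(idx) > 3:
        for k in range(len(idx)):
            i, j, l = idx[k - 1], idx[k], idx[(k + 1) % len(idx)]
            a, b, c = poly[i], poly[j], poly[l]
            if cross(a, b, c) <= 0:
                continue          # reflex vertex: not an ear
            if any(point_in_triangle(poly[m], a, b, c)
                   for m in idx if m not in (i, j, l)):
                continue          # another vertex inside: not an ear
            tris.append((i, j, l))
            idx.pop(k)            # clip the ear and restart the scan
            break
    tris.append(tuple(idx))
    return tris
```

The two-ears theorem guarantees the inner scan always finds an ear, so the loop clips one vertex per pass; the paper's contribution is doing all of this in O(n) total.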

FOCS 1989 Conference Paper

An Optimal Algorithm for Intersecting Three-Dimensional Convex Polyhedra (Detailed Abstract)

  • Bernard Chazelle

A linear algorithm for intersecting two convex polyhedra in 3-space is described. The algorithm is quite simple; it does not require any complicated data structure and should be practical. A number of optimal algorithms for other problems are obtained directly from this result. These include intersecting several polytopes at once or computing the convex hull of their union, merging Voronoi diagrams in the plane in linear time, and computing three-dimensional convex hulls in linear expected time.

FOCS 1988 Conference Paper

A Deterministic View of Random Sampling and its Use in Geometry

  • Bernard Chazelle
  • Joel Friedman

A number of efficient probabilistic algorithms based on the combination of divide-and-conquer and random sampling have been recently discovered. It is shown that all those algorithms can be derandomized with only polynomial overhead. In the process, results of independent interest concerning the covering of hypergraphs are established, and various probabilistic bounds in geometric complexity are improved. For example, given n hyperplanes in d-space and any large enough integer r, it is shown how to compute, in polynomial time, a simplicial packing of size O(r^d) that covers d-space, each of whose simplices intersects O(n/r) hyperplanes. It is also shown how to locate a point among n hyperplanes in d-space in O(log n) query time, using O(n^d) storage and polynomial preprocessing.

FOCS 1988 Conference Paper

An Optimal Algorithm for Intersecting Line Segments in the Plane

  • Bernard Chazelle
  • Herbert Edelsbrunner

The authors present the first optimal algorithm for the following problem: given n line segments in the plane, compute all k pairwise intersections in O(n log n+k) time. Within the same asymptotic cost the algorithm will also compute the adjacencies of the planar subdivision induced by the segments, which is a useful data structure for contour-filling on raster devices.
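The problem statement is easy to make concrete. A brute-force O(n^2) counter using the standard orientation-based pair test, for contrast with the paper's O(n log n + k) output-sensitive bound (this sketch is not the paper's sweep algorithm):

```python
def segments_intersect(p1, p2, p3, p4):
    """Exact test for whether segments p1p2 and p3p4 share a point
    (orientation tests plus collinear bounding-box checks)."""
    def orient(a, b, c):
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

    def on_segment(a, b, c):   # c is collinear with ab; is it between them?
        return (min(a[0], b[0]) <= c[0] <= max(a[0], b[0]) and
                min(a[1], b[1]) <= c[1] <= max(a[1], b[1]))

    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    if d1 * d2 < 0 and d3 * d4 < 0:
        return True                      # proper crossing
    return ((d1 == 0 and on_segment(p3, p4, p1)) or
            (d2 == 0 and on_segment(p3, p4, p2)) or
            (d3 == 0 and on_segment(p1, p2, p3)) or
            (d4 == 0 and on_segment(p1, p2, p4)))

def count_intersections(segs):
    """Brute-force count of all k pairwise intersections in O(n^2) time."""
    return sum(segments_intersect(*segs[i], *segs[j])
               for i in range(len(segs)) for j in range(i + 1, len(segs)))
```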

FOCS 1986 Conference Paper

Lower Bounds on the Complexity of Multidimensional Searching (Extended Abstract)

  • Bernard Chazelle

We establish new lower bounds on the complexity of several searching problems. We show that the time for solving the partial sum problem on n points in d dimensions is at least proportional to (log n / log(2m/n))^(d-1) in both the worst and average cases; m denotes the amount of storage used. This bound is provably tight for m = Ω(n log^c n) and any c ≫ d-1. We also prove a lower bound of Ω(n (log n / log log n)^d) on the time required for executing n inserts and queries. Other results include a lower bound on the complexity of orthogonal range searching in d dimensions (in report mode). We show that on a pointer machine a query time of O(s + polylog(n)) can only be achieved at the expense of Ω(n (log n / log log n)^(d-1)) space, which is optimal; n and s denote respectively the input and output sizes.
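For orientation, the one-dimensional, unrestricted-storage relative of the partial-sum problem bounded above is solved by the classical Fenwick tree in O(log n) per operation; the paper's lower bounds concern the d-dimensional, bounded-storage versions. A sketch (not from the paper):

```python
class Fenwick:
    """Dynamic 1-d partial sums: point updates and prefix-sum queries,
    each in O(log n) time with O(n) storage."""
    def __init__(self, n):
        self.t = [0] * (n + 1)

    def add(self, i, delta):      # add delta at 0-indexed position i
        i += 1
        while i < len(self.t):
            self.t[i] += delta
            i += i & -i           # jump to the next covering node

    def prefix(self, i):          # sum of elements 0 .. i-1
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i           # strip the lowest set bit
        return s

f = Fenwick(8)
for i, v in enumerate([5, 3, 7, 1, 4, 2, 6, 8]):
    f.add(i, v)
f.prefix(4)   # 5 + 3 + 7 + 1 = 16
```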

FOCS 1985 Conference Paper

Slimming Down Search Structures: A Functional Approach to Algorithm Design

  • Bernard Chazelle

We establish new upper bounds on the complexity of several "rectangle" problems. Our results include, for instance, optimal algorithms for range counting and rectangle searching in two dimensions. These involve linear space implementations of range trees and segment trees. The algorithms we give are simple and practical; they can be dynamized and taken into higher dimensions. Also of interest is the nonstandard approach which we follow to obtain these results: it involves transforming data structures on the basis of functional specifications.

FOCS 1984 Conference Paper

Computing on a Free Tree via Complexity-Preserving Mappings

  • Bernard Chazelle

It appears that in a number of cases computing over free trees is no more difficult than computing over linear lists. In the arithmetic model, we have established a strict equivalence between the interval query problem and its generalization on a tree structure. In the reference machine model, the concept of efficiency is traditionally captured by the class of retrieval problems which can be solved in O(n polylog(n)) space and O(polylog(n)) time. We have shown that in a number of examples this class is closed under transformations from lists to trees. Characterizing the set of problems and techniques for which this holds is an interesting open problem.

FOCS 1983 Conference Paper

Filtering Search: A New Approach to Query-Answering

  • Bernard Chazelle

We introduce a new technique for solving problems of the following form: preprocess a set of objects so that those satisfying a given property with respect to a query object can be listed very effectively. Among well-known problems to fall into this category we find range query, point enclosure, intersection, and near-neighbor problems. The approach which we take is very general and rests on a new concept called filtering search. We show on a number of examples how it can be used to improve the complexity of known algorithms and simplify their implementations as well. In particular, filtering search allows us to improve on the worst-case complexity of the best algorithms known so far for solving the problems mentioned above.

FOCS 1983 Conference Paper

The Power of Geometric Duality

  • Bernard Chazelle
  • Leonidas J. Guibas
  • D. T. Lee

This paper uses a new formulation of the notion of duality that allows the unified treatment of a number of geometric problems. In particular, we are able to apply our approach to solve two long-standing problems of computational geometry: one is to obtain a quadratic algorithm for computing the minimum-area triangle with vertices chosen among n points in the plane; the other is to produce an optimal algorithm for the half-plane range query problem. This problem is to preprocess n points in the plane, so that given a test half-plane, one can efficiently determine all points lying in the half-plane. We describe an optimal O(k + log n) time algorithm for answering such queries, where k is the number of points to be reported. The algorithm requires O(n) space and O(n log n) preprocessing time. Both of these results represent significant improvements over the best methods previously known. In addition, we give a number of new combinatorial results related to the computation of line arrangements.
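The kind of duality at work here can be illustrated with the standard point-line transform (the paper's exact formulation may differ). A minimal sketch showing that incidences are preserved, with all names and the example values chosen for illustration:

```python
def dual_of_point(p):
    """Point (a, b) maps to the line y = a*x - b, stored as (slope, intercept)."""
    a, b = p
    return (a, -b)

def dual_of_line(line):
    """Line y = m*x + c maps back to the point (m, -c)."""
    m, c = line
    return (m, -c)

def on_line(p, line):
    m, c = line
    x, y = p
    return y == m * x + c

# Incidence is preserved: p lies on line L  <=>  dual(L) lies on dual(p).
p = (2, 3)
line1 = (4, -5)               # y = 4x - 5 passes through (2, 3)
assert on_line(p, line1)
assert on_line(dual_of_line(line1), dual_of_point(p))
```

This transform also preserves above/below relationships, which is what lets point problems (like minimum-area triangle) be restated as line-arrangement problems and vice versa.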

FOCS 1982 Conference Paper

A Theorem on Polygon Cutting with Applications

  • Bernard Chazelle

Let P be a simple polygon with N vertices, each being assigned a weight ∈ {0, 1}, and let C, the weight of P, be the total weight of all vertices. We prove that it is possible, in O(N) time, to find two vertices a, b in P such that the segment ab lies entirely inside the polygon P and partitions it into two polygons, each with a weight not exceeding 2C/3. This computation assumes that all the vertices have been sorted along some axis, which can be done in O(N log N) time. We use this result to derive a number of efficient divide-and-conquer algorithms for: 1. triangulating an N-gon in O(N log N) time; 2. decomposing an N-gon into (few) convex pieces in O(N log N) time; 3. given O(N log N) preprocessing, computing the shortest distance between two arbitrary points inside an N-gon (i.e., the internal distance) in O(N) time; 4. computing the longest internal path in an N-gon in O(N^2) time. In all cases, the algorithms achieve significant improvements over previously known methods, either by displaying better performance or by gaining in simplicity. In particular, the best algorithms known so far for Problems 2, 3, and 4 performed in O(N^2), O(N^2), and O(N^4) time, respectively.