Arrow Research search

Author name cluster

Richard Cole 0001

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

39 papers
1 author row

Possible papers

39

STOC Conference 2013 Conference Paper

Tatonnement beyond gross substitutes?: gradient descent to the rescue

  • Yun Kuen Cheung
  • Richard Cole 0001
  • Nikhil R. Devanur

Tatonnement is a simple and natural rule for updating prices in Exchange (Arrow-Debreu) markets. In this paper we define a class of markets for which tatonnement is equivalent to gradient descent. This is the class of markets for which there is a convex potential function whose gradient is always equal to the negative of the excess demand; we call these Convex Potential Function (CPF) markets. We show the following results:

  • CPF markets contain the class of Eisenberg-Gale (EG) markets, defined previously by Jain and Vazirani.
  • The subclass of CPF markets for which the demand is a differentiable function contains exactly those markets whose demand function has a symmetric negative semi-definite Jacobian.
  • We define a family of continuous versions of tatonnement based on gradient descent using a Bregman divergence. As we show, all processes in this family converge to an equilibrium for any CPF market. This is analogous to the classic result for markets satisfying the Weak Gross Substitutes property.
  • A discrete version of tatonnement converges toward the equilibrium for the following markets of complementary goods; its convergence rate for these settings is analyzed using a common potential function:
      • Fisher markets in which all buyers have Leontief utilities: the tatonnement process reduces the distance to the equilibrium, as measured by the potential function, to an ε fraction of its initial value in O(1/ε) rounds of price updates.
      • Fisher markets in which all buyers have complementary CES utilities: here, the distance to the equilibrium is reduced to an ε fraction of its initial value in O(log(1/ε)) rounds of price updates.

This shows that tatonnement converges for the entire range of Fisher markets when buyers have complementary CES utilities, in contrast to prior work, which could analyze only the substitutes range, together with a small portion of the complementary range.
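
The equivalence between tatonnement and gradient descent on a convex potential can be illustrated on a toy market (an assumption for illustration, not the paper's construction): with Cobb-Douglas buyers spending an aggregate budget b[j] on good j and unit supply, excess demand is z[j] = b[j]/p[j] - 1, and Φ(p) = Σ_j (p[j] - b[j]·ln p[j]) is a convex potential with gradient -z, so the additive tatonnement step is exactly a gradient-descent step.

```python
# Toy Fisher market: one unit of each good; buyers with Cobb-Douglas
# utilities spend a fixed aggregate budget b[j] on good j, so excess
# demand at prices p is z[j] = b[j]/p[j] - 1 (equilibrium at p[j] = b[j]).
# Phi(p) = sum_j (p[j] - b[j]*ln p[j]) is convex with gradient -z, so
# the tatonnement step p <- p + eta*z is gradient descent on Phi.
# The market, budgets, and step size are illustrative assumptions.

def tatonnement(b, p, eta=0.1, rounds=500):
    p = list(p)
    for _ in range(rounds):
        z = [bj / pj - 1.0 for bj, pj in zip(b, p)]  # excess demand
        p = [pj + eta * zj for pj, zj in zip(p, z)]  # gradient-descent step
    return p

b = [0.5, 1.5, 2.0]
p = tatonnement(b, [1.0, 1.0, 1.0])
assert all(abs(pj - bj) < 1e-6 for pj, bj in zip(p, b))
```

The assertion checks that prices converge to the (known) equilibrium p[j] = b[j] of this toy market.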

STOC Conference 2011 Conference Paper

Inner product spaces for MinSum coordination mechanisms

  • Richard Cole 0001
  • José Correa 0001
  • Vasilis Gkatzelis
  • Vahab Mirrokni
  • Neil Olver

We study coordination mechanisms aiming to minimize the weighted sum of completion times of jobs in the context of selfish scheduling problems. Our goal is to design local policies that achieve a good price of anarchy in the resulting equilibria for unrelated machine scheduling. To obtain these approximation bounds, we introduce a new technique that, while conceptually simple, seems to be quite powerful. The method entails mapping strategy vectors into a carefully chosen inner product space; costs are shown to correspond to the norm in this space, and the Nash condition also has a simple description. With this structure in place, we are able to prove a number of results, as follows. First, we consider Smith's Rule, which orders the jobs on a machine in ascending order of processing-time-to-weight ratio, and show that it achieves an approximation ratio of 4. We also demonstrate that this is the best possible for deterministic non-preemptive strongly local policies. Since Smith's Rule is always optimal for a given fixed assignment, this may seem unsurprising, but we then show that better approximation ratios can be obtained if either preemption or randomization is allowed. We prove that ProportionalSharing, a preemptive strongly local policy, achieves an approximation ratio of 2.618 for the weighted sum of completion times, and an approximation ratio of 2.5 in the unweighted case. We also observe that these bounds are tight. Next, we consider Rand, a natural non-preemptive but randomized policy. We show that it achieves an approximation ratio of at most 2.13; moreover, if the sum of the weighted completion times is negligible compared to the cost of the optimal solution, this improves to π/2. Finally, we show that both ProportionalSharing and Rand induce potential games, and thus always have a pure Nash equilibrium (unlike Smith's Rule).
This allows us to design the first combinatorial constant-factor approximation algorithm minimizing weighted completion time for unrelated machine scheduling. It achieves a factor of 2+ε for any ε > 0, and involves imitating best response dynamics using a variant of ProportionalSharing as the policy.
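
On a single machine, the optimality of Smith's Rule for a fixed assignment can be checked by brute force: scheduling jobs in ascending processing-time-to-weight ratio minimizes the weighted sum of completion times over all orders. A minimal sketch with made-up job data:

```python
from itertools import permutations

# Each job is (processing_time, weight). Smith's Rule schedules jobs in
# ascending processing-time-to-weight ratio; on one machine this order
# minimizes the weighted sum of completion times. The job data below is
# illustrative.

def weighted_completion(order):
    t, total = 0.0, 0.0
    for proc, w in order:
        t += proc          # completion time of this job
        total += w * t     # weighted contribution
    return total

jobs = [(3, 1), (1, 2), (4, 3), (2, 2)]
smith = sorted(jobs, key=lambda j: j[0] / j[1])

# Smith's order is at least as good as every permutation.
best = min(weighted_completion(list(o)) for o in permutations(jobs))
assert weighted_completion(smith) == best
```

The brute-force check is only feasible for tiny instances, but it makes the exchange-argument optimality of the ordering concrete.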

STOC Conference 2008 Conference Paper

Fast-converging tatonnement algorithms for one-time and ongoing market problems

  • Richard Cole 0001
  • Lisa Fleischer

Why might markets tend toward and remain near equilibrium prices? In an effort to shed light on this question from an algorithmic perspective, this paper formalizes the setting of Ongoing Markets, by contrast with the classic market scenario, which we term One-Time Markets. The Ongoing Market allows trade at non-equilibrium prices and, as its name suggests, continues over time. As such, it appears to be a more plausible model of actual markets. For both market settings, this paper defines and analyzes variants of a simple tatonnement algorithm that differs from previous algorithms that have been subject to asymptotic analysis in three significant respects:

  • the price update for a good depends only on the price, demand, and supply for that good, and on no other information;
  • the price update for each good occurs distributively and asynchronously;
  • the algorithms work (and the analyses hold) from an arbitrary starting point.

Our algorithm introduces a new and natural update rule. We show that this update rule leads to fast convergence toward equilibrium prices in a broad class of markets that satisfy the weak gross substitutes property. These are the first analyses for computationally and informationally distributed algorithms that demonstrate polynomial convergence. Our analysis identifies three parameters characterizing the markets, which govern the rate of convergence of our protocols. These parameters are, broadly speaking:

  1. A bound on the fractional rate of change of demand for each good with respect to fractional changes in its price.
  2. A bound on the fractional rate of change of demand for each good with respect to fractional changes in wealth.
  3. The closeness of the market to a Fisher market (a market with buyers starting with money alone).

We give two types of protocols. The first type assumes global knowledge of only (an upper bound on) the first parameter.
For this protocol, we also provide a matching lower bound in terms of these parameters for the One-Time Market. Our second protocol, which is analyzed for the One-Time Market alone, assumes no global knowledge whatsoever.
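
The distributed, asynchronous flavor of the update, each good adjusting its own price from its own demand and supply alone, can be sketched with a toy multiplicative rule on a simple Cobb-Douglas market. Both the market and the concrete capped update rule are illustrative stand-ins, not the paper's exact protocol:

```python
import random

# Toy asynchronous tatonnement: at each step one good, chosen at random,
# updates its own price using only its own excess demand z = b/p - 1
# (Cobb-Douglas demand b/p against unit supply; equilibrium at p = b).
# The multiplicative, capped update below is an illustrative stand-in
# for the paper's rule, not a transcription of it.

def async_tatonnement(b, p, lam=0.25, steps=4000, seed=0):
    rng = random.Random(seed)
    p = list(p)
    for _ in range(steps):
        j = rng.randrange(len(p))                   # only good j updates
        z = b[j] / p[j] - 1.0                       # local excess demand
        p[j] *= 1.0 + lam * max(-1.0, min(1.0, z))  # capped local step
    return p

b = [0.5, 1.5, 2.0]
p = async_tatonnement(b, [1.0, 1.0, 1.0])
assert all(abs(pj - bj) / bj < 1e-3 for pj, bj in zip(p, b))
```

No coordination between goods is needed: each price converges using purely local information, which is the property the abstract emphasizes.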

FOCS Conference 1995 Conference Paper

Routing on Butterfly Networks with Random Faults

  • Richard Cole 0001
  • Bruce M. Maggs
  • Ramesh K. Sitaraman

We show that even if every node or edge in an N-node butterfly network fails independently with some constant probability p, it is still possible to identify a set of Θ(N) nodes between which packets can be routed in any permutation in O(logN) steps, with high probability. Although the analysis is complicated, the routing algorithm itself is relatively simple.

FOCS Conference 1993 Conference Paper

Optimally fast parallel algorithms for preprocessing and pattern matching in one and two dimensions

  • Richard Cole 0001
  • Maxime Crochemore
  • Zvi Galil
  • Leszek Gasieniec
  • Ramesh Hariharan
  • S. Muthukrishnan 0001
  • Kunsoo Park
  • Wojciech Rytter

All algorithms below are optimal alphabet-independent parallel CRCW PRAM algorithms. In one dimension: given a pattern string of length m for the string-matching problem, we design an algorithm that computes a deterministic sample of a sufficiently long substring in constant time. This problem used to be a bottleneck in the pattern preprocessing for one- and two-dimensional pattern matching; the best previous time bound was O(log^2 m / log log m). We use this algorithm to obtain the following results:

  1. Improving the preprocessing of the constant-time text search algorithm from O(log^2 m / log log m) to O(log log m), which is now best possible.
  2. A constant-time deterministic string-matching algorithm in the case that the text length n satisfies n = Ω(m^(1+ε)) for a constant ε > 0.
  3. A simple probabilistic string-matching algorithm that runs in constant time with high probability for random input.
  4. A constant expected time Las Vegas algorithm for computing the period of the pattern and all witnesses, and thus string matching itself, solving the main open problem remaining in string matching.

FOCS Conference 1992 Conference Paper

Tighter Bounds on the Exact Complexity of String Matching (Extended Abstract)

  • Richard Cole 0001
  • Ramesh Hariharan

The paper considers how many character comparisons are needed to find all occurrences of a pattern of length m in a text of length n. The main contribution is to show an upper bound of the form n + O(n/m) character comparisons, following preprocessing. Specifically, the authors show an upper bound of n + (8/3)·(n-m)/(m+1) character comparisons. This bound is achieved by an online algorithm which performs O(n) work in total and requires O(m) space and O(m^2) time for preprocessing. In addition, the following lower bounds are shown: for online algorithms, a bound of n + (11/5)·(n-m)/(m+1) character comparisons for m = 10 + 11k, for any integer k ≥ 1; and for general algorithms, a bound of n + 2(n-m)/(m+3) character comparisons, for m = 2k + 1, for any integer k ≥ 1.

FOCS Conference 1990 Conference Paper

Online Algorithms for Finger Searching (Extended Abstract)

  • Richard Cole 0001
  • Arvind Raghunathan

The technique of speeding up access into search structures by maintaining fingers that point to various locations of the search structure is considered. The problem of choosing, in a large search structure, locations at which to maintain fingers is treated. In particular, a server problem in which k servers move along a line segment of length m, where m is the number of keys in the search structure, is addressed. Since fingers may be arbitrarily copied, a server is allowed to jump, or fork, to a location currently occupied by another server. Online algorithms are presented and their competitiveness analyzed. It is shown that the case in which k = 2 behaves differently from the case in which k ≥ 3: there is a four-competitive algorithm for k = 2 that never forks its fingers, whereas for k ≥ 3 any online algorithm that does not fork its fingers can be no better than Ω(m^(1/2))-competitive. The main result is that for k = 3 there is an online algorithm that forks and is constant competitive (independent of m, the size of the search structure). The algorithm is simple and implementable.

FOCS Conference 1987 Conference Paper

Cascading Divide-and-Conquer: A Technique for Designing Parallel Algorithms

  • Mikhail J. Atallah
  • Richard Cole 0001
  • Michael T. Goodrich

We present techniques for parallel divide-and-conquer, resulting in improved parallel algorithms for a number of problems. The problems for which we give improved algorithms include intersection detection, trapezoidal decomposition (hence, polygon triangulation), and planar point location (hence, Voronoi diagram construction). We also give efficient parallel algorithms for fractional cascading, 3-dimensional maxima, 2-set dominance counting, and visibility from a point. All of our algorithms run in O(log n) time with either a linear or sub-linear number of processors in the CREW PRAM model.

FOCS Conference 1986 Conference Paper

Approximate and Exact Parallel Scheduling with Applications to List, Tree and Graph Problems

  • Richard Cole 0001
  • Uzi Vishkin

We study two parallel scheduling problems and their use in designing parallel algorithms. First, we define a novel scheduling problem; it is solved by repeated, rapid, approximate reschedulings. This leads to a first optimal PRAM algorithm for list ranking, which runs in logarithmic time. Our second scheduling result is for computing prefix sums of logn bit numbers. We give an optimal parallel algorithm for the problem which runs in sublogarithmic time. These two scheduling results together lead to logarithmic time PRAM algorithms for the connectivity, biconnectivity and minimum spanning tree problems. The connectivity and biconnectivity algorithms are optimal unless m = o(nlog*n), in graphs of n vertices and m edges.
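
The prefix-sums primitive the abstract builds on can be illustrated with a sequentially simulated Hillis-Steele scan, in which round d has every position add the value d slots to its left. This generic O(log n)-round scan is a stand-in for exposition only, not the paper's sublogarithmic algorithm for log n-bit numbers:

```python
# Hillis-Steele inclusive scan, simulated sequentially: in round d each
# position i >= d adds the value d slots to its left; all positions
# could do this simultaneously, giving O(log n) parallel rounds. This
# generic scan only illustrates the prefix-sums primitive; the paper's
# algorithm for log n-bit numbers runs in sublogarithmic time.

def prefix_sums(a):
    a = list(a)
    d = 1
    while d < len(a):
        # one "parallel" round: read all old values, then write
        a = [a[i] + a[i - d] if i >= d else a[i] for i in range(len(a))]
        d *= 2
    return a

assert prefix_sums([3, 1, 4, 1, 5, 9, 2, 6]) == [3, 4, 8, 9, 14, 23, 25, 31]
```

Each round doubles the stride, so an array of length n is fully scanned after ⌈log2 n⌉ rounds.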

FOCS Conference 1986 Conference Paper

Geometric Applications of Davenport-Schinzel Sequences

  • Micha Sharir
  • Richard Cole 0001
  • Klara Kedem
  • Daniel Leven
  • Richard Pollack
  • Shmuel Sifrony

We present efficient algorithms for the following geometric problems: (i) Preprocessing of a 2-D polyhedral terrain so as to support fast ray shooting queries from a fixed point. (ii) Determining whether two disjoint interlocking simple polygons can be separated from one another by a sequence of translations. (iii) Determining whether a given convex polygon can be translated and rotated so as to fit into another given polygonal region. (iv) Motion planning for a convex polygon in the plane amidst polygonal barriers. All our algorithms make use of Davenport-Schinzel sequences and of some generalizations of them; these sequences are a powerful combinatorial tool applicable in contexts which involve the calculation of the pointwise maximum or minimum of a collection of functions.

FOCS Conference 1986 Conference Paper

Parallel Merge Sort

  • Richard Cole 0001

We give a parallel implementation of merge sort on a CREW PRAM that uses n processors and O(logn) time; the constant in the running time is small. We also give a more complex version of the algorithm for the EREW PRAM; it also uses n processors and O(logn) time. The constant in the running time is still moderate, though not as small.
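
The merging step that drives a parallel merge sort can be made processor-friendly by rank computation: each element's output position is its index in its own array plus its cross-rank in the other array, found by binary search, so every element can be placed independently. A sequential sketch of that basic primitive (not Cole's pipelined merging):

```python
from bisect import bisect_left, bisect_right

# Rank-based merge of two sorted arrays: an element's position in the
# output is its index in its own array plus the number of elements of
# the other array that precede it. Every element is placed
# independently, so the merge parallelizes naturally; this is only the
# basic primitive behind parallel merge sort, not Cole's pipelined
# O(logn) algorithm.

def parallel_merge(a, b):
    out = [None] * (len(a) + len(b))
    for i, x in enumerate(a):
        out[i + bisect_left(b, x)] = x    # elements of a win ties
    for j, y in enumerate(b):
        out[j + bisect_right(a, y)] = y   # placed after equal a-elements
    return out

assert parallel_merge([1, 3, 5, 7], [2, 3, 4, 8]) == [1, 2, 3, 3, 4, 5, 7, 8]
```

The asymmetric tie-breaking (bisect_left for one side, bisect_right for the other) guarantees every output slot is filled exactly once, even with duplicates across the two arrays.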

FOCS Conference 1985 Conference Paper

On Information Flow and Sorting: New Upper and Lower Bounds for VLSI Circuits (Extended Abstract)

  • Richard Cole 0001
  • Alan Siegel

This work comprises two parts: lower bounds and upper bounds in VLSI circuits. The upper bounds are for the sorting problem: we describe a large number of constructions for sorting N numbers in the range [0, M] for the standard VLSI bit model. Among other results, we attain:

  • VLSI sorter constructions that are within a constant factor of optimal size for almost all number ranges M (including M = N) and running times T.
  • A fundamentally new merging network for sorting numbers in a bit model.
  • New organizational approaches for optimal tuning of merging networks and the proper management of data flow.

The lower bounds apply to a variety of problems. We present two new techniques for establishing lower bounds on the information flow in VLSI circuits. They are:

  • An averaging technique, which is easy to apply to a variety of problems, including a long-standing question regarding the AT^2 complexity of sorting.
  • A technique for constructing fooling sets in instances where our averaging method is unlikely to provide an adequate bound.

FOCS Conference 1984 Conference Paper

River Routing Every Which Way, but Loose (Extended Abstract)

  • Richard Cole 0001
  • Alan Siegel

A solution to the 'Detailed Routing given a Homotopy' (DRH) problem is given in O(n + mlogm + D(m)) operations. The solution uses n + mlogm homotopy queries that are elementary; they are answerable based solely on "local properties" of modules, terminals, and wire connections. In addition, we need O(m) more complex queries, which are represented in the D(m) term. These queries must account for the total number of crossings occurring for selected test segments.

FOCS Conference 1984 Conference Paper

Slowing Down Sorting Networks to Obtain Faster Sorting Algorithms

  • Richard Cole 0001

Megiddo introduced a technique for using a parallel algorithm for one problem to construct an efficient serial algorithm for a second problem. We give a general method that trims a factor of O(logn) time (or more) from many applications of this technique.

FOCS Conference 1983 Conference Paper

Geometric Retrieval Problems

  • Richard Cole 0001
  • Chee-Keng Yap

A large class of geometric retrieval problems has the following form: given a set X of geometric objects, preprocess to obtain a data structure D(X); now use D(X) to rapidly answer queries on X. We say an algorithm for such a problem has (worst-case) space-time complexity O(f(n), g(n)) if the space requirement for D(X) is O(f) and the 'locate run-time' required for each retrieval is O(g). We show three techniques which can consistently be exploited in solving such problems. For instance, using our techniques, we obtain an O(n^(2+ε), logn log(1/ε)) space-time algorithm for the polygon retrieval problem, for arbitrarily small ε, improving on the previous solution having complexity O(n^7, logn).