Arrow Research

Author name cluster

Neal E. Young

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

22 papers
1 author row

Possible papers (22)

SODA 2021 · Conference Paper

Competitive Data-Structure Dynamization

  • Claire Mathieu
  • Rajmohan Rajaraman
  • Neal E. Young
  • Arman Yousefi

Data-structure dynamization is a general approach for making static data structures dynamic. It is used extensively in geometric settings and in the guise of so-called merge (or compaction) policies in big-data databases such as Google Bigtable and LevelDB (our focus). Previous theoretical work is based on worst-case analyses for uniform inputs — insertions of one item at a time and constant read rate. In practice, merge policies must not only handle batch insertions and varying read/write ratios, they can take advantage of such non-uniformity to reduce cost on a per-input basis. To model this, we initiate the study of data-structure dynamization through the lens of competitive analysis, via two new online set-cover problems. For each, the input is a sequence of disjoint sets of weighted items. The sets are revealed one at a time. The algorithm must respond to each with a set cover that covers all items revealed so far. It obtains the cover incrementally from the previous cover by adding one or more sets and optionally removing existing sets. For each new set the algorithm incurs build cost equal to the weight of the items in the set. In the first problem the objective is to minimize total build cost plus total query cost, where the algorithm incurs a query cost at each time t equal to the current cover size. In the second problem, the objective is to minimize the build cost while keeping the query cost from exceeding k (a given parameter) at any time. We give deterministic online algorithms for both variants, with competitive ratios of Θ(log∗ n ) and k, respectively. The latter ratio is optimal for the second variant.
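
The cost model in this abstract is easy to make concrete with a small simulation. The sketch below is a hypothetical illustration only (the names run_policy and max_components, and the summary of each disjoint batch by its total item weight, are assumptions; this is not the paper's Θ(log* n)-competitive algorithm). It tracks the two costs of the first variant: build cost paid whenever a new component (set) of the cover is built, and query cost equal to the cover size at each time step, under a naive "merge everything when the cover gets too large" policy.

```python
# Minimal sketch of the first online dynamization variant described above.
# Hypothetical policy for illustration only; not the paper's competitive algorithm.

def run_policy(batches, max_components=4):
    """batches: list of batch weights (each batch is a disjoint set of new items,
    summarized here by its total item weight). Returns (build_cost, query_cost)."""
    components = []      # current cover: list of component weights
    build_cost = 0.0
    query_cost = 0.0
    for batch_weight in batches:
        # Build a new component holding just the new batch.
        components.append(batch_weight)
        build_cost += batch_weight
        # Illustrative rule: if the cover has too many components,
        # rebuild them all into one component (paying its full weight again).
        if len(components) > max_components:
            merged = sum(components)
            components = [merged]
            build_cost += merged
        # Query cost at this time step equals the current cover size.
        query_cost += len(components)
    return build_cost, query_cost

if __name__ == "__main__":
    batches = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]   # arbitrary example batch weights
    print(run_policy(batches))
```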

FOCS 2007 · Conference Paper

Beating Simplex for Fractional Packing and Covering Linear Programs

  • Christos Koufogiannakis
  • Neal E. Young

We give an approximation algorithm for packing and covering linear programs (linear programs with non-negative coefficients). Given a constraint matrix with n non-zeros, r rows, and c columns, the algorithm (with high probability) computes feasible primal and dual solutions whose costs are within a factor of 1+ε of OPT (the optimal cost) in time O(n + (r + c) log(n)/ε²). For dense problems (with r, c = O(√n)) the time is O(n), linear even as ε → 0. In comparison, previous Lagrangian-relaxation algorithms generally take at least Ω(n log(n)/ε²) time, while (for small ε) the Simplex algorithm typically takes at least Ω(n min(r, c)) time.
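
The guarantee described here is a near-optimal primal/dual pair. As a concrete illustration of that object (not of the paper's algorithm), the sketch below sets up a tiny covering LP with non-negative data, solves it and its packing dual with an off-the-shelf LP solver, and reports the duality gap; the instance and the use of scipy.optimize.linprog are assumptions made for the example.

```python
# Illustration of a feasible primal/dual pair for a covering LP with non-negative data.
# Toy instance solved with scipy's LP solver; not the paper's Lagrangian-relaxation algorithm.
import numpy as np
from scipy.optimize import linprog

# Covering primal:  min c^T x  s.t.  A x >= b,  x >= 0
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0, 3.0])
c = np.array([3.0, 2.0, 5.0])

# linprog minimizes c^T x subject to A_ub x <= b_ub, so negate the covering rows.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 3, method="highs")

# Packing dual:  max b^T y  s.t.  A^T y <= c,  y >= 0  (written as minimizing -b^T y).
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 3, method="highs")

primal_cost = primal.fun
dual_value = -dual.fun
print("primal cost :", primal_cost)
print("dual value  :", dual_value)
print("duality gap :", primal_cost - dual_value)   # near zero at optimality
```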

FOCS 2002 · Conference Paper

On-Line End-to-End Congestion Control

  • Naveen Garg 0001
  • Neal E. Young

Congestion control in the current Internet is accomplished mainly by TCP/IP. To understand the macroscopic network behavior that results from TCP/IP and similar end-to-end protocols, one main analytic technique is to show that the protocol maximizes some global objective function of the network traffic. We analyze a particular end-to-end MIMD (multiplicative-increase, multiplicative-decrease) protocol. We show that if all users of the network use the protocol, and all connections last for at least logarithmically many rounds, then the total weighted throughput (value of all packets received) is near the maximum possible. Our analysis includes round-trip times, and (in contrast to most previous analyses) gives explicit convergence rates, allows connections to start and stop, and allows capacities to change.
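
For intuition about the MIMD rule itself, here is a toy single-link simulation (the function name, parameter values, and single-link setting are assumptions for illustration; this is not the specific protocol or analysis from the paper): each connection multiplies its rate up while the link is uncongested and multiplies it down when total demand exceeds capacity.

```python
# Toy MIMD (multiplicative-increase, multiplicative-decrease) simulation on one link.
# Hypothetical parameters, for intuition only; not the protocol analyzed in the paper.

def simulate_mimd(num_connections=3, capacity=10.0, rounds=50,
                  increase=1.2, decrease=0.5):
    rates = [0.1] * num_connections          # each connection starts at a small rate
    throughput = 0.0
    for _ in range(rounds):
        total = sum(rates)
        if total <= capacity:
            # No congestion: all offered traffic is delivered; increase multiplicatively.
            throughput += total
            rates = [r * increase for r in rates]
        else:
            # Congestion signal: back off multiplicatively (no useful throughput counted).
            rates = [r * decrease for r in rates]
    return throughput

if __name__ == "__main__":
    print("delivered over 50 rounds:", simulate_mimd())
```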

FOCS 2001 · Conference Paper

Sequential and Parallel Algorithms for Mixed Packing and Covering

  • Neal E. Young

We describe sequential and parallel algorithms that approximately solve linear programs with no negative coefficients (aka mixed packing and covering problems). For explicitly given problems, our fastest sequential algorithm returns a solution satisfying all constraints within a 1 ± ε factor in O(md log(m)/ε²) time, where m is the number of constraints and d is the maximum number of constraints any variable appears in. Our parallel algorithm runs in time polylogarithmic in the input size times 1/ε⁴ and uses a total number of operations comparable to the sequential algorithm. The main contribution is that the algorithms solve mixed packing and covering problems (in contrast to pure packing or pure covering problems, which have only "≤" or only "≥" inequalities, but not both) and run in time independent of the so-called width of the problem.
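
The flavor of such width-independent Lagrangian-relaxation methods can be sketched for the normalized feasibility question "is there x ≥ 0 with Px ≤ 1 and Cx ≥ 1?" (all coefficients non-negative). The sketch below is a rough illustration only: the name mixed_greedy, the normalization, the step rule, and the stopping target are assumptions, and no claim is made that it matches the guarantees or running time stated in the abstract. It keeps exponential penalties on packing rows and exponential rewards on under-covered rows, and greedily increments the most promising coordinate.

```python
# Rough multiplicative-weights-style greedy for mixed packing/covering feasibility:
#   find x >= 0 with  P x <= 1 (packing)  and  C x >= 1 (covering),
# all entries of P and C non-negative. Illustration only; no guarantee claimed.
import numpy as np

def mixed_greedy(P, C, eps=0.1, max_iters=100000):
    n = P.shape[1]
    x = np.zeros(n)
    px = np.zeros(P.shape[0])   # current value of P x
    cx = np.zeros(C.shape[0])   # current value of C x
    target = np.log(len(px) + len(cx) + 1) / eps**2   # coverage target before rescaling
    for _ in range(max_iters):
        if cx.min() >= target:
            break
        pack_w = np.exp(px - px.max())    # penalize nearly tight packing rows
        cov_w = np.exp(cx.min() - cx)     # reward badly covered rows
        gain = cov_w @ C                  # per-coordinate covering "benefit"
        cost = pack_w @ P + 1e-12         # per-coordinate packing "damage"
        j = int(np.argmax(gain / cost))
        # Step so the largest single-row increase (packing or covering) is about 1.
        step = 1.0 / max(P[:, j].max(), C[:, j].max(), 1e-12)
        x[j] += step
        px += step * P[:, j]
        cx += step * C[:, j]
    # Rescale so the covering constraints are met, then report the packing violation.
    if cx.min() > 0:
        x /= cx.min()
    return x, (P @ x).max(), (C @ x).min()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.random((5, 8))
    C = rng.random((6, 8))
    x, pmax, cmin = mixed_greedy(P, C)
    print("max packing row:", round(pmax, 3), " min covering row:", round(cmin, 3))
```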

FOCS 2001 · Conference Paper

Tight Approximation Results for General Covering Integer Programs

  • Stavros G. Kolliopoulos
  • Neal E. Young

In this paper we study approximation algorithms for solving a general covering integer program. An n-vector x of nonnegative integers is sought, which minimizes cᵀ·x, subject to Ax ≥ b, x ≤ d. The entries of A, b, c are nonnegative. Let m be the number of rows of A. Covering problems have been heavily studied in combinatorial optimization. We focus on the effect of the multiplicity constraints, x ≤ d, on approximability. Two longstanding open questions remain for this general formulation with upper bounds on the variables. (i) The integrality gap of the standard LP relaxation is arbitrarily large. Existing approximation algorithms that achieve the well-known O(log m)-approximation with respect to the LP value do so at the expense of violating the upper bounds on the variables by the same O(log m) multiplicative factor. What is the smallest possible violation of the upper bounds that still achieves cost within O(log m) of the standard LP optimum? (ii) The best known approximation ratio for the problem has been O(log(max_j Σ_i A_ij)) since 1982. This bound can be as bad as polynomial in the input size. Is an O(log m)-approximation, like the one known for the special case of Set Cover, possible? We settle these two open questions. To answer the first question we give an algorithm based on the relatively simple new idea of randomly rounding variables to smaller-than-integer units. To settle the second question we give a reduction from approximating the problem while respecting multiplicity constraints to approximating the problem with a bounded violation of the multiplicity constraints.
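
A standard way to see why the multiplicity constraints x ≤ d make rounding delicate is plain randomized rounding of the LP relaxation: scale the fractional optimum up by a logarithmic factor and round each coordinate randomly, clipping at d. The sketch below shows those pieces on a toy instance; the instance, the scale factor ln(m)+1, and the use of scipy are assumptions, and this is the standard scheme, not the paper's refined idea of rounding to smaller-than-integer units.

```python
# Standard randomized rounding for a small covering integer program
#   min c^T x  s.t.  A x >= b,  0 <= x <= d,  x integer,
# via its LP relaxation. Toy instance; not the paper's refined rounding scheme.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 6, 8
A = rng.integers(1, 4, size=(m, n)).astype(float)
b = np.full(m, 5.0)
c = rng.integers(1, 5, size=n).astype(float)
d = np.full(n, 2.0)                      # multiplicity (upper-bound) constraints

# LP relaxation: Ax >= b becomes -Ax <= -b for linprog; bounds enforce 0 <= x <= d.
lp = linprog(c, A_ub=-A, b_ub=-b, bounds=list(zip(np.zeros(n), d)), method="highs")
x_frac = lp.x

# Scale up by about ln(m) and round each coordinate randomly.
# Clipping at d is exactly where the difficulty discussed in the abstract arises.
scale = np.log(m) + 1.0
scaled = np.minimum(scale * x_frac, d)
floor = np.floor(scaled)
x_int = floor + (rng.random(n) < scaled - floor)
x_int = np.minimum(x_int, d)

print("LP cost          :", lp.fun)
print("rounded cost     :", float(c @ x_int))
print("constraints met? :", bool(np.all(A @ x_int >= b)))
```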