SODA Conference 2006 Conference Paper
Improved approximation algorithms for broadcast scheduling
- Nikhil Bansal 0001
- Don Coppersmith
- Maxim Sviridenko
Author name cluster
Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.
SODA Conference 2006 Conference Paper
SODA Conference 2006 Conference Paper
SODA Conference 2005 Conference Paper
SODA Conference 2004 Conference Paper
SODA Conference 2003 Conference Paper
STOC Conference 2003 Conference Paper
SODA Conference 2003 Conference Paper
SODA Conference 2002 Conference Paper
SODA Conference 1997 Conference Paper
FOCS Conference 1995 Conference Paper
Let Dist(f, g) = Pr_u[f(u) ≠ g(u)] denote the relative distance between functions f, g mapping from a group G to a group H, and let Dist(f) denote the minimum, over all linear functions (homomorphisms) g, of Dist(f, g). Given a function f: G → H we let Err(f) = Pr_{u,v}[f(u) + f(v) ≠ f(u + v)] denote the rejection probability of the BLR (Blum-Luby-Rubinfeld) linearity test. Linearity testing is the study of the relationship between Err(f) and Dist(f), and in particular the study of lower bounds on Err(f) in terms of Dist(f). The case we are interested in is when the underlying groups are G = GF(2)^n and H = GF(2). The corresponding test is used in the construction of efficient PCPs and thence in the derivation of hardness-of-approximation results, and, in this context, improved analyses translate into better non-approximability results. However, while several analyses of the relation of Err(f) to Dist(f) are known, none is tight. We present a description of the relationship between Err(f) and Dist(f) which is nearly complete in all its aspects, and entirely complete (i.e., tight) in some. In particular we present functions L, U: [0, 1] → [0, 1] such that for all x ∈ [0, 1] we have L(x) ≤ Err(f) ≤ U(x) whenever Dist(f) = x, with the upper bound being tight on the whole range, and the lower bound tight on a large part of the range and close on the rest. Part of our strengthening is obtained by showing a new connection between the linearity testing problem and Fourier analysis, a connection which may be of independent interest. Our results are used by M. Bellare et al. (1995) to present the best known hardness results for Max3SAT and other MaxSNP problems.
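The two quantities in this abstract are easy to estimate empirically and, for small n, to compute exactly. A minimal Python sketch, assuming f: GF(2)^n → GF(2) is encoded as a function on n-bit integers with + on inputs being bitwise XOR; the function names and the brute-force Dist computation are illustrative choices, not from the paper:

```python
import random

def blr_reject_rate(f, n, trials=10000, seed=0):
    # Estimate Err(f) = Pr_{u,v}[f(u) + f(v) != f(u + v)] over GF(2)^n,
    # where addition of inputs is bitwise XOR and f maps n-bit ints to {0, 1}.
    rng = random.Random(seed)
    rejects = sum(
        f(u) ^ f(v) != f(u ^ v)
        for u, v in ((rng.getrandbits(n), rng.getrandbits(n)) for _ in range(trials))
    )
    return rejects / trials

def dist_to_linear(f, n):
    # Dist(f): minimum fraction of inputs on which f disagrees with some
    # linear map u -> <a, u> mod 2 (brute force over all 2^n candidates a,
    # so only feasible for small n).
    total = 1 << n
    def chi(a, u):
        return bin(a & u).count("1") % 2
    return min(
        sum(f(u) != chi(a, u) for u in range(total)) / total
        for a in range(total)
    )
```

A linear f such as the parity map gives Err(f) = 0 and Dist(f) = 0, while a nonlinear f such as the AND of all n bits makes both quantities strictly positive.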
SODA Conference 1994 Conference Paper
SODA Conference 1994 Conference Paper
STOC Conference 1994 Conference Paper
FOCS Conference 1992 Conference Paper
Consider an arithmetic expression of length n involving only the operations (+, *) and non-negative constants. The authors prove lower bounds on the depth of any binary computation tree over the same set of operations and constants that computes such an expression. In their main result they exhibit a family of arithmetic expressions that requires computation trees of depth at least 1.5 log₂ n - O(1). The authors also consider the family of arithmetic expressions defined by alternating 5-3 trees. For this family they show a tight bound of (5/log₂ 15) log₂ n + O(1) on the depth of any computation tree. This is the best known tight bound for any family of arithmetic expressions.
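For contrast with the 1.5 log₂ n lower bound above: an expression built from a single associative operation can always be restructured into a balanced tree of depth ceil(log₂ n), so the interesting gap is for mixed (+, *) expressions. A small sketch of that balanced upper bound, with an illustrative pairing scheme (not from the paper):

```python
def balanced_sum_depth(values):
    # Combine n leaves pairwise, level by level, and return (result, depth).
    # For one associative operation the depth achieved is ceil(log2 n),
    # whereas the paper shows mixed (+, *) expressions can force
    # depth 1.5 * log2(n) - O(1).
    level = [(v, 0) for v in values]
    while len(level) > 1:
        nxt = [
            (level[i][0] + level[i + 1][0],
             max(level[i][1], level[i + 1][1]) + 1)
            for i in range(0, len(level) - 1, 2)
        ]
        if len(level) % 2:          # odd element carries over unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

For example, summing 8 leaves yields depth 3 rather than the depth 7 of a left-to-right chain.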
STOC Conference 1990 Conference Paper
STOC Conference 1987 Conference Paper
FOCS Conference 1987 Conference Paper
The following three problems concerning random graphs can be solved in (log n)^O(1) expected time using linearly many processors: (1) finding the lexicographically first maximal independent set, (2) coloring the vertices using a number of colors that is almost surely within twice the chromatic number, and (3) finding a Hamiltonian circuit.
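Problem (1) above is defined by a simple sequential greedy scan; the paper's contribution is matching that definition with polylogarithmic expected parallel time on random graphs. A minimal sketch of the sequential definition only (the adjacency-set representation is an illustrative choice):

```python
def lex_first_mis(adj):
    # adj[v] is the set of neighbors of vertex v; vertices are 0..n-1.
    # Scan in index order: keep v unless an already-kept neighbor blocks it.
    # The greedy order makes the result the lexicographically first
    # maximal independent set, the object problem (1) asks for.
    mis = set()
    for v in range(len(adj)):
        if not (adj[v] & mis):
            mis.add(v)
    return mis
```

On the path 0-1-2-3 this keeps vertices {0, 2}. In the worst case the lexicographically first MIS is hard to parallelize, which is what makes the random-graph result interesting.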
FOCS Conference 1981 Conference Paper
The main results of this paper have the following flavor: given one algorithm for multiplying matrices, there exists another, better, algorithm. A consequence of these results is that ω, the exponent for matrix multiplication, is a limit point, that is, cannot be realized by any single algorithm. We also use these results to construct a new algorithm which shows that ω < 2.495364.
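The exponent ω enters through a classical recursion: an algorithm multiplying m×m matrices with t scalar multiplications, applied recursively to block matrices, shows ω ≤ log_m t. A one-line sketch of that bookkeeping (this illustrates the standard bound, not the paper's construction):

```python
import math

def omega_bound(m, t):
    # An m x m matrix product using t scalar multiplications, applied
    # recursively to blocks, multiplies N x N matrices in O(N ** log_m(t))
    # arithmetic operations, so it witnesses omega <= log_m(t).
    return math.log(t, m)
```

The naive 2×2 algorithm (8 multiplications) gives the bound 3, while Strassen's 7-multiplication algorithm gives log₂ 7 ≈ 2.807; the paper's point is that any such witness can always be improved, so ω itself is never attained.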