STOC Conference 2023 Conference Paper
Approximating Iterated Multiplication of Stochastic Matrices in Small Space
- Gil Cohen
- Dean Doron
- Ori Sberlo
- Amnon Ta-Shma
Author name cluster
Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.
STOC Conference 2023 Conference Paper
FOCS Conference 2023 Conference Paper
More than twenty years ago, Capalbo, Reingold, Vadhan and Wigderson gave the first (and to date only) explicit construction of a bipartite expander with almost full combinatorial expansion. The construction incorporates zig-zag ideas together with extractor technology, and is rather complicated. We give an alternative construction that builds upon recent constructions of hyper-regular, high-dimensional expanders. The new construction is, in our opinion, simple and elegant. Beyond demonstrating a new, surprising, and intriguing application of high-dimensional expanders, the construction employs entirely new ideas which we hope may lead to progress on the remaining open problems in the area.
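For reference, "almost full combinatorial expansion" can be stated precisely; the display below is our own paraphrase of the standard definition, with illustrative parameter names not taken from the paper:

```latex
% A D-left-regular bipartite graph G = (L, R, E) is a lossless
% (K, 1 - \epsilon) expander if every small left set expands by
% almost the full degree:
\[
  \forall S \subseteq L,\ |S| \le K
  \;\Longrightarrow\;
  |\Gamma(S)| \;\ge\; (1 - \epsilon)\, D\, |S|,
\]
% where \Gamma(S) denotes the set of right-hand neighbors of S.
```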
STOC Conference 2021 Conference Paper
SODA Conference 2019 Conference Paper
We develop the notion of double samplers, first introduced by Dinur and Kaufman [DK17], which are samplers with additional combinatorial properties, and whose existence we prove using high dimensional expanders. We show how double samplers give a generic way of amplifying distance in a way that enables efficient list-decoding. Many error correcting code constructions achieve large distance by starting with a base code C with moderate distance and then amplifying the distance using a sampler; the ABNNR code construction [ABN+92] is one example. We show that if the sampler is part of a larger double sampler, then the construction has an efficient list-decoding algorithm, and the list-decoding algorithm is oblivious to the base code C (i.e., it runs the unique decoder for C in a black-box way). Our list-decoding algorithm works as follows: it uses a local voting scheme from which it constructs a unique games constraint graph. The constraint graph is an expander, so we can solve unique games efficiently. These solutions are the output of the list decoder. This is a novel use of a unique games algorithm as a subroutine in a decoding procedure, as opposed to the more common situation in which unique games are used for demonstrating hardness results. Double samplers and high dimensional expanders are akin to pseudorandom objects in their utility, but they greatly exceed random objects in their combinatorial properties. We believe that these objects hold significant potential for coding theoretic constructions and view this work as demonstrating the power of double samplers in this context.
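The sampler-based distance amplification mentioned above (in the style of ABNNR) is easy to sketch. The toy Python version below is ours and illustrates only the aggregation step, not the paper's double-sampler machinery:

```python
def amplify_distance(codeword, right_neighborhoods):
    """Toy ABNNR-style distance amplification.

    Each right vertex of a D-regular bipartite graph reads the codeword
    symbols on its D left neighbors and emits them as one aggregated symbol
    over a larger alphabet. When the graph is a good sampler, any two
    distinct codewords disagree on almost all aggregated symbols.
    """
    return [tuple(codeword[v] for v in nbrs) for nbrs in right_neighborhoods]

# Example: 4 codeword symbols, each right vertex reads 2 of them.
base = [0, 1, 1, 0]
graph = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(amplify_distance(base, graph))  # [(0, 1), (1, 1), (1, 0), (0, 0)]
```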
STOC Conference 2017 Conference Paper
The breakthrough result of Chattopadhyay and Zuckerman (2016) gives a reduction from the construction of explicit two-source extractors to the construction of explicit non-malleable extractors. However, even assuming the existence of optimal explicit non-malleable extractors only gives a two-source extractor (or a Ramsey graph) for poly(log n) entropy, rather than the optimal O(log n). In this paper we modify the construction to overcome the above barrier. Using the currently best explicit non-malleable extractors, we get explicit bipartite Ramsey graphs for sets of size 2^k, for k = O(log n · log log n). Any further improvement in the construction of non-malleable extractors would immediately yield a corresponding two-source extractor. Intuitively, Chattopadhyay and Zuckerman use an extractor as a sampler, and we observe that one could use a weaker object: a somewhere-random condenser with a small entropy gap and a very short seed. We also show how to explicitly construct this weaker object using the error reduction technique of Raz, Reingold and Vadhan (1999), and the constant-degree dispersers of Zuckerman (2006) that work even against extremely small tests.
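For a concrete instance of the target object: the classical inner-product construction of Chor and Goldreich is a two-source extractor for min-entropy k > n/2, far above the O(log n) regime this paper targets. A minimal sketch, not taken from the paper:

```python
def two_source_inner_product(x: int, y: int) -> int:
    """Chor-Goldreich two-source extractor: output <x, y> mod 2.

    If X and Y are independent n-bit sources, each with min-entropy
    k > n/2, the output bit is statistically close to uniform.
    """
    return bin(x & y).count("1") % 2

print(two_source_inner_product(0b1011, 0b0110))  # 1, since <x, y> = 1 mod 2
```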
STOC Conference 2017 Conference Paper
STOC Conference 2013 Conference Paper
TCS Journal 2012 Journal Article
We construct a strong extractor against quantum storage that works for every min-entropy k, has logarithmic seed length, and outputs Ω(k) bits, provided that the quantum adversary has at most βk qubits of memory, for any β < 1/2. The construction works by first condensing the source (with minimal entropy loss) and then applying an extractor that works well against quantum adversaries when the source is close to uniform. We also obtain an improved construction of a strong quantum-proof extractor in the high min-entropy regime. Specifically, we construct an extractor that uses a logarithmic seed length and extracts Ω(n) bits from any source over {0,1}^n, provided that the min-entropy of the source conditioned on the quantum adversary's state is at least (1 − β)n, for any β < 1/2.
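Schematically, the two-stage construction described above composes a condenser with an extractor; the seed splitting and names below are our notation:

```latex
% Split the seed into y_1 (driving the condenser) and y_2 (driving the
% extractor, which is good against quantum adversaries once the source
% is close to uniform):
\[
  \mathrm{Ext}'\big(x,\,(y_1, y_2)\big)
  \;=\;
  \mathrm{Ext}\big(\mathrm{Cond}(x, y_1),\; y_2\big).
\]
```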
FOCS Conference 2010 Conference Paper
Efremenko gave locally decodable codes of subexponential length that can handle close to a 1/6 fraction of errors. In this paper we show that the same codes can be locally unique-decoded from error rate 1/2 − α for any α > 0, and locally list-decoded from error rate 1 − α for any α > 0, with only a constant number of queries and a constant alphabet size. This gives the first subexponential-length codes that can be locally list-decoded with a constant number of queries.
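To recall what local decoding with a constant number of queries looks like, here is the textbook two-query local decoder for the (exponential-length) Hadamard code. It is far weaker than Efremenko's codes and is included only as a self-contained illustration:

```python
import random

def hadamard_local_decode(query, i: int, n: int, trials: int = 31) -> int:
    """Two-query local decoder for the Hadamard code.

    The code maps x in {0,1}^n to the table of all parities r -> <x, r>.
    To recover bit x_i from a corrupted table (read via `query`), XOR the
    entries at r and r XOR e_i for a random r; since <x, r> ^ <x, r ^ e_i>
    = x_i, a majority vote over independent trials succeeds with high
    probability whenever the error rate is below 1/4.
    """
    e_i = 1 << i
    votes = sum(query(r) ^ query(r ^ e_i)
                for r in (random.randrange(1 << n) for _ in range(trials)))
    return int(votes > trials // 2)

# Example: x = 0b101; an uncorrupted table is w(r) = <x, r> mod 2.
x, n = 0b101, 3
w = lambda r: bin(x & r).count("1") % 2
print([hadamard_local_decode(w, i, n) for i in range(n)])  # [1, 0, 1]
```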
FOCS Conference 2009 Conference Paper
We give an explicit construction of an ε-biased set over k bits of size O((k/(ε^2 log(1/ε)))^{5/4}). This improves upon previous explicit constructions when ε is roughly (ignoring logarithmic factors) in the range [k^{-1.5}, k^{-0.5}]. The construction builds on an algebraic-geometric code. However, unlike previous constructions, we use low-degree divisors whose degree is significantly smaller than the genus. Studying the limits of our technique, we arrive at a hypothesis that, if true, implies the existence of ε-biased sets with parameters nearly matching the lower bound, and in particular giving binary error correcting codes beating the Gilbert-Varshamov bound.
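As a reminder of the definition in play: a multiset S ⊆ {0,1}^k is ε-biased if every nonzero linear test is nearly balanced on S. The following brute-force checker (exponential in k, ours purely for illustration) computes the bias exactly:

```python
def bias(S, k: int) -> float:
    """Maximum bias of a multiset S of k-bit strings over all nonzero
    linear tests t: max over t != 0 of |E_{x in S} (-1)^<t, x>|.
    S is epsilon-biased exactly when this value is at most epsilon."""
    worst = 0.0
    for t in range(1, 1 << k):
        total = sum((-1) ** (bin(t & x).count("1") % 2) for x in S)
        worst = max(worst, abs(total) / len(S))
    return worst

print(bias([0b00, 0b01, 0b10, 0b11], 2))  # 0.0: the whole cube has zero bias
```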
STOC Conference 2008 Conference Paper
SODA Conference 2007 Conference Paper
FOCS Conference 2006 Conference Paper
Lossless condensers are unbalanced expander graphs with expansion close to optimal. Equivalently, they may be viewed as functions that use a short random seed to map a source on n bits to a source on many fewer bits while preserving all of the min-entropy. It is known how to build lossless condensers when the graphs are slightly unbalanced, from the work of M. Capalbo et al. (2002). The highly unbalanced case is also important, but the only known construction does not condense the source well. We give explicit constructions of lossless condensers that condense close to optimally and use near-optimal seed length. Our main technical contribution is a randomness-efficient method for sampling F^D (where F is a field) with low-degree curves. This problem was addressed before in the works of E. Ben-Sasson et al. (2003) and D. Moshkovitz and R. Raz (2006), but those solutions apply only to degree-one curves, i.e., lines. Our technique is new and elegant. We use sub-sampling and obtain our curve samplers by composing a sequence of low-degree manifolds, starting with high-dimensional, low-degree manifolds and proceeding through lower- and lower-dimensional manifolds with (moderately) growing degrees, until we finish with dimension-one, low-degree manifolds, i.e., curves. The technique may be of independent interest.
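A fully random degree-d curve sampler, the naive baseline whose randomness cost the paper reduces, can be sketched as follows; the function and parameter names are ours:

```python
import random

def random_curve(p: int, D: int, d: int):
    """Return a uniformly random degree-d curve c: F_p -> F_p^D, i.e.
    c(t) = sum_{j=0}^{d} coeffs[j] * t^j (mod p), coordinate-wise."""
    coeffs = [[random.randrange(p) for _ in range(D)] for _ in range(d + 1)]
    def c(t: int):
        return tuple(
            sum(coeffs[j][i] * pow(t, j, p) for j in range(d + 1)) % p
            for i in range(D))
    return c

# Sample F_5^3 along a random degree-2 curve; the sampler property says the
# p sample points hit any fixed dense subset with roughly its density.
curve = random_curve(p=5, D=3, d=2)
print([curve(t) for t in range(5)])
```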
STOC Conference 2003 Conference Paper
FOCS Conference 2001 Conference Paper
Finding explicit extractors is an important derandomization goal that has received a lot of attention in the past decade. Previous research has focused on two approaches, one related to hashing and the other to pseudorandom generators. A third view, regarding extractors as good error correcting codes, was noticed before; yet researchers had failed to build extractors directly from a good code without using other tools from pseudorandomness. We succeed in constructing an extractor directly from a Reed-Muller code. To do this, we develop a novel proof technique. Furthermore, our construction is the first to achieve a degree close to linear. In contrast, the best previous constructions brought the log of the degree to within a constant factor of optimal, which gives polynomial degree. This improvement is important for certain applications. For example, it follows that approximating the VC dimension to within a factor of N^{1-δ} is AM-hard for any positive δ.
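For context, a Reed-Muller codeword is simply the evaluation table of a low-degree multivariate polynomial. A toy encoder (ours, omitting the pseudorandom machinery the extractor adds on top):

```python
from itertools import product

def monomial(point, exps, p):
    """Evaluate one monomial prod_i point[i]^exps[i] over F_p."""
    out = 1
    for x, e in zip(point, exps):
        out = out * pow(x, e, p) % p
    return out

def rm_encode(coeffs, p: int, m: int):
    """Encode a message, given as the coefficients of an m-variate
    polynomial over F_p (a dict keyed by exponent tuples), as its
    evaluation table over all of F_p^m -- a Reed-Muller codeword."""
    return [sum(c * monomial(pt, e, p) for e, c in coeffs.items()) % p
            for pt in product(range(p), repeat=m)]

# Message: f(x, y) = 1 + 2xy over F_3; the codeword lists f on all of F_3^2.
print(rm_encode({(0, 0): 1, (1, 1): 2}, p=3, m=2))  # [1, 1, 1, 1, 0, 2, 1, 2, 0]
```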
STOC Conference 2001 Conference Paper
STOC Conference 2001 Conference Paper
An extractor is a procedure which extracts randomness from a defective random source using a few additional random bits. Explicit extractor constructions have numerous applications, and obtaining such constructions is an important derandomization goal. Trevisan recently introduced an elegant extractor construction, but the number of truly random bits required is suboptimal when the input source has low min-entropy. Significant progress toward overcoming this bottleneck has been made, but so far it has required complicated recursive techniques that lose the simplicity of Trevisan's construction. We give a clean method for overcoming this bottleneck by constructing loss-less condensers, which compress the n-bit input source without losing any min-entropy, using O(log n) additional random bits. Our condensers are built using a simple modification of Trevisan's construction, and yield the best extractor constructions to date. Loss-less condensers also produce unbalanced bipartite expander graphs with small (polylogarithmic) degree D and very strong expansion of (1 − ε)D. We give other applications of our construction, including dispersers with entropy loss O(log n), depth-two super-concentrators whose size is within a polylog of optimal, and an improved hardness of approximation result.
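In functional form, the loss-less condensers above can be specified as follows (our paraphrase, with standard parameter names):

```latex
% C : {0,1}^n x {0,1}^d -> {0,1}^m is a (k, \epsilon) loss-less condenser
% if it preserves all k bits of min-entropy while shrinking the source:
\[
  H_\infty(X) \ge k
  \;\Longrightarrow\;
  C(X, U_d) \text{ is } \epsilon\text{-close to some } Y
  \text{ with } H_\infty(Y) \ge k,
\]
% here with seed length d = O(\log n), as in the construction above.
```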
STOC Conference 2000 Conference Paper
STOC Conference 2000 Conference Paper
STOC Conference 1999 Conference Paper
FOCS Conference 1998 Conference Paper
Sampling is an important primitive in probabilistic and quantum algorithms. In the spirit of communication complexity, given a function f: X × Y → {0,1} and a probability distribution D over X × Y, we define the sampling complexity of (f, D) as the minimum number of bits Alice and Bob must communicate for Alice to pick x ∈ X and Bob to pick y ∈ Y, as well as a value z, such that the resulting distribution of (x, y, z) is close to the distribution (D, f(D)). In this paper we initiate the study of sampling complexity, in both the classical and quantum models. We give several variants of the definition. We completely characterize some of these tasks, and give upper and lower bounds on others. In particular, this allows us to establish an exponential gap between quantum and classical sampling complexity for the set disjointness function. This is the first exponential gap for any task where the classical probabilistic algorithm is allowed to err.
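Written compactly (in our notation), the requirement on the sampled triple is:

```latex
% Alice outputs x, Bob outputs (y, z); writing dist(x, y, z) for the
% distribution of the protocol's outputs, the correctness requirement is
\[
  \big\|\, \mathrm{dist}(x, y, z)
  \;-\;
  \mathrm{dist}\big((X, Y),\, f(X, Y)\big) \,\big\|_1 \;\le\; \epsilon,
  \qquad (X, Y) \sim D,
\]
% and the sampling complexity of (f, D) is the least communication
% achieving this.
```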
FOCS Conference 1997 Conference Paper
We show that the minimum size of a depth-two N-superconcentrator is Θ(N log^2 N / log log N). Before this work, optimal bounds were known for all depths except two. For the upper bound, we build superconcentrators by putting together a small number of disperser graphs; these disperser graphs are obtained using a probabilistic argument. We present two different methods for showing lower bounds. First, we show that superconcentrators contain several disjoint disperser graphs. When combined with the lower bound for disperser graphs due to Kővári, Sós and Turán, this gives an almost optimal lower bound of Ω(N (log N / log log N)^2) on the size of N-superconcentrators. The second method, based on the work of Hansel (1964), gives the optimal lower bound. The method of Kővári, Sós and Turán can be extended to give tight lower bounds for extractors, both in terms of the number of truly random bits needed to extract one additional bit and in terms of the unavoidable entropy loss in the system. If the input is an n-bit source with min-entropy k and the output is required to be within distance ε of the uniform distribution, then to extract even a constant number of additional bits, one must invest at least log(n − k) + 2 log(1/ε) − O(1) truly random bits; to obtain m output bits, one must invest at least m − k + 2 log(1/ε) − O(1). Thus, there is a loss of 2 log(1/ε) bits during the extraction. Interestingly, in the case of dispersers this loss in entropy is only about log log(1/ε).
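In display form, the extractor lower bounds from the last paragraph read (k is the min-entropy, ε the allowed distance from uniform, d the seed length):

```latex
\[
  d \;\ge\; \log(n - k) + 2\log(1/\epsilon) - O(1)
  \qquad\text{and}\qquad
  d \;\ge\; m - k + 2\log(1/\epsilon) - O(1),
\]
% so the entropy loss (k + d) - m is at least 2\log(1/\epsilon) - O(1);
% for dispersers the corresponding loss is only about \log\log(1/\epsilon).
```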
STOC Conference 1996 Conference Paper
STOC Conference 1995 Conference Paper