Arrow Research search

Author name cluster

Dmitry Gavinsky

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

9 papers
2 author rows

Possible papers

9

STOC Conference 2016 Conference Paper

Entangled simultaneity versus classical interactivity in communication complexity

  • Dmitry Gavinsky

In 1999 Raz demonstrated a partial function that had an efficient quantum two-way communication protocol but no efficient classical two-way protocol, and asked whether there existed a function with an efficient quantum one-way protocol but still no efficient classical two-way protocol. In 2010 Klartag and Regev demonstrated such a function and asked whether there existed a function with an efficient quantum simultaneous-messages protocol but still no efficient classical two-way protocol. In this work we answer the latter question affirmatively and present a partial function, Shape, which can be computed by a protocol sending entangled simultaneous messages of poly-logarithmic size, and whose classical two-way complexity is lower bounded by a polynomial.

STOC Conference 2014 Conference Paper

Toward better formula lower bounds: an information complexity approach to the KRW composition conjecture

  • Dmitry Gavinsky
  • Or Meir
  • Omri Weinstein
  • Avi Wigderson

One of the major open problems in complexity theory is proving super-logarithmic lower bounds on the depth of circuits (i.e., P ⊈ NC^1). This problem is interesting for two reasons: first, it is tightly related to understanding the power of parallel computation and of small-space computation; second, it is one of the first milestones toward proving super-polynomial circuit lower bounds. Karchmer, Raz, and Wigderson [21] suggested approaching this problem by proving the following conjecture: given two Boolean functions f and g, the depth complexity of the composed function g ∘ f is roughly the sum of the depth complexities of f and g. They showed that the validity of this conjecture would imply that P ⊈ NC^1. As a starting point for studying the composition of functions, they introduced a relation called "the universal relation" and suggested studying the composition of universal relations. This suggestion proved fruitful, and an analogue of the KRW conjecture for the universal relation was proved by Edmonds et al. [12]. An alternative proof was given later by Håstad and Wigderson [18]. However, studying the composition of functions seems more difficult, and the KRW conjecture is still wide open. In this work, we take a natural step in this direction, which lies between what is known and the original conjecture: we show that an analogue of the conjecture holds for the composition of a function with a universal relation. We also suggest a candidate for the next step and provide initial results toward it. Our main technical contribution is developing an approach based on the notion of information complexity for analyzing KW relations -- communication problems that are closely related to questions on circuit depth and formula complexity. Recently, information complexity has proved to be a powerful tool, and underlies some major progress on several long-standing open problems in communication complexity. In this work, we develop general tools for analyzing the information complexity of KW relations, which may be of independent interest.
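The block composition g ∘ f at the heart of the KRW conjecture can be made concrete with a tiny sketch: on an m×n bit matrix, f is applied to each row and g to the vector of results. The choice of f = parity and g = majority below is purely illustrative, not taken from the paper.

```python
# Block composition (g ∘ f)(X) = g(f(X_1), ..., f(X_m)),
# where X_i is the i-th row of an m×n bit matrix X.
# Toy instantiation (an assumption for illustration):
# f = parity on n bits, g = majority on m bits.

def compose(g, f, rows):
    """Apply f to each row, then g to the vector of results."""
    return g([f(row) for row in rows])

def parity(bits):
    return sum(bits) % 2

def majority(bits):
    return 1 if sum(bits) * 2 > len(bits) else 0
```

The KRW conjecture states, roughly, that computing such a composition in small depth requires the depth of f plus the depth of g; no shortcut should exist.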

STOC Conference 2007 Conference Paper

Exponential separations for one-way quantum communication complexity, with applications to cryptography

  • Dmitry Gavinsky
  • Julia Kempe
  • Iordanis Kerenidis
  • Ran Raz
  • Ronald de Wolf

We give an exponential separation between one-way quantum and classical communication protocols for two partial Boolean functions, both of which are variants of the Boolean Hidden Matching Problem of Bar-Yossef et al. Earlier, such an exponential separation was known only for a relational version of the Hidden Matching Problem. Our proofs use the Fourier coefficients inequality of Kahn, Kalai, and Linial. We give a number of applications of this separation. In particular, in the bounded-storage model of cryptography we exhibit a scheme that is secure against adversaries with a certain amount of classical storage, but insecure against adversaries with a similar (or even much smaller) amount of quantum storage; in the setting of privacy amplification, we show that there are strong extractors that yield a classically secure key, but are insecure against a quantum adversary.
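For context, one common formulation of the Boolean Hidden Matching promise problem (after Bar-Yossef et al.) can be sketched as a checker; the exact partial-function variants separated in this paper may differ in details, so treat this as an illustrative sketch only.

```python
# Boolean Hidden Matching (one common formulation, sketched for illustration):
# Alice holds x in {0,1}^n; Bob holds a perfect matching (a list of index
# pairs) and a string w with one bit per edge. The promise is that either
# x_i XOR x_j == w_e on every edge (answer 0), or the complement of w holds
# on every edge (answer 1).

def bhm_answer(x, matching, w):
    """Return 0 or 1 according to the promise; None if it is violated."""
    vals = [x[i] ^ x[j] for (i, j) in matching]
    if vals == list(w):
        return 0
    if vals == [1 - b for b in w]:
        return 1
    return None
```

The separation says that deciding this with one-way quantum communication is exponentially cheaper than with one-way classical communication.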

STOC Conference 2006 Conference Paper

Bounded-error quantum state identification and exponential separations in communication complexity

  • Dmitry Gavinsky
  • Julia Kempe
  • Oded Regev 0001
  • Ronald de Wolf

We consider the problem of bounded-error quantum state identification: given either state α_0 or state α_1, we are required to output '0', '1' or 'DONO' ("don't know"), such that conditioned on outputting '0' or '1', our guess is correct with high probability. The goal is to maximize the probability of not outputting 'DONO'. We prove a direct product theorem: if we're given two such problems, with optimal probabilities a and b, respectively, and the states in the first problem are pure, then the optimal probability for the joint bounded-error state identification problem is O(ab). Our proof is based on semidefinite programming duality and may be of wider interest. Using this result, we present two exponential separations in the simultaneous message passing model of communication complexity. First, we describe a relation that can be computed with O(log n) classical bits of communication in the presence of shared randomness, but needs Ω(n^(1/3)) communication if the parties don't share randomness, even if communication is quantum. This shows the optimality of Yao's recent exponential simulation of shared-randomness protocols by quantum protocols without shared randomness. Second, we describe a relation that can be computed with O(log n) classical bits of communication in the presence of shared entanglement, but needs Ω((n/log n)^(1/3)) communication if the parties share randomness but no entanglement, even if communication is quantum. This is the first example in communication complexity where entanglement buys you much more than quantum communication does.

JMLR Journal 2003 Journal Article

Optimally-Smooth Adaptive Boosting and Application to Agnostic Learning

  • Dmitry Gavinsky

We describe a new boosting algorithm that is the first such algorithm to be both smooth and adaptive. These two features make possible performance improvements for many learning tasks whose solutions use a boosting technique. The boosting approach was originally suggested for the standard PAC model; we analyze possible applications of boosting in the context of agnostic learning, which is more realistic than the PAC model. We derive a lower bound on the final error achievable by boosting in the agnostic model and show that our algorithm actually achieves that accuracy (within a constant factor). We note that the idea of applying boosting in the agnostic model was first suggested by Ben-David, Long and Mansour (2001), and the present paper improves the solution they gave. The accuracy we achieve is exponentially better with respect to the standard agnostic accuracy parameter β. We also describe the construction of a boosting "tandem" whose asymptotic number of iterations is the lowest possible (in both γ and ε) and whose smoothness is optimal in terms of Õ(·). This allows adaptively solving problems whose solution is based on smooth boosting (like noise-tolerant boosting and DNF membership learning), while preserving the original (non-adaptive) solution's complexity.

JMLR Journal 2002 Journal Article

On Boosting with Polynomially Bounded Distributions

  • Nader H. Bshouty
  • Dmitry Gavinsky

We construct a framework which allows an algorithm to turn the distributions produced by some boosting algorithms into polynomially smooth distributions (w.r.t. the PAC oracle's distribution), with minimal performance loss. Further, we explore the case of Freund and Schapire's AdaBoost algorithm, bounding its distributions to be polynomially smooth. The main advantage of AdaBoost over other boosting techniques is that it is adaptive, i.e., it is able to take advantage of weak hypotheses that are more accurate than assumed a priori. We show that the feature of adaptiveness is preserved and improved by our technique. Our scheme allows the execution of AdaBoost in the on-line boosting mode (i.e., to perform boosting "by filtering"). Executed this way (and now possessing smoothness), AdaBoost may be efficiently applied to a wider range of learning problems than before. In particular, we demonstrate AdaBoost's application to the task of DNF learning using membership queries. This application results in an algorithm that chooses the number of boosting iterations adaptively, and, consequently, adaptively chooses the size of the produced final hypothesis. This answers affirmatively a question posed by Jackson.
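The interplay of adaptive reweighting and smoothness described above can be illustrated with a minimal sketch: standard AdaBoost rounds over decision stumps, with each round's distribution capped relative to uniform and renormalised. The cap value and the toy stump learner are assumptions for illustration, not the paper's construction.

```python
import math

def adaboost_smooth(xs, ys, rounds=10, cap=10.0):
    """Minimal AdaBoost over 1-D threshold stumps, with the distribution
    capped at cap/n per example (a stand-in for polynomial smoothness
    w.r.t. the uniform distribution). Labels ys are in {-1, +1}."""
    n = len(xs)
    d = [1.0 / n] * n                      # uniform initial distribution
    hyps = []                              # (threshold, sign, alpha) triples
    for _ in range(rounds):
        # Weak learner: best threshold stump under the current distribution.
        best = None
        for t in sorted(set(xs)):
            for s in (1, -1):
                err = sum(w for x, y, w in zip(xs, ys, d)
                          if (s if x >= t else -s) != y)
                if best is None or err < best[0]:
                    best = (err, t, s)
        err, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # adaptive weight
        hyps.append((t, s, alpha))
        # Reweight: upweight misclassified points, then cap and renormalise.
        d = [w * math.exp(-alpha * y * (s if x >= t else -s))
             for x, y, w in zip(xs, ys, d)]
        d = [min(w, cap / n) for w in d]
        z = sum(d)
        d = [w / z for w in d]
    def predict(x):
        vote = sum(a * (s if x >= t else -s) for t, s, a in hyps)
        return 1 if vote >= 0 else -1
    return predict
```

The cap is the crux: without it AdaBoost can concentrate almost all weight on a few hard examples, which breaks applications (such as boosting "by filtering") that require the boosted distribution to stay polynomially close to the oracle's distribution.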

FOCS Conference 2002 Conference Paper

PAC = PAExact and Other Equivalent Models in Learning

  • Nader H. Bshouty
  • Dmitry Gavinsky

The probably almost exact model (PAExact) can be viewed as the Exact model relaxed so that: 1. The counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen. 2. The output hypothesis is equal to the target with negligible error (1/ω(poly) for any poly). This model allows studying (almost) exact learnability of infinite classes and is in some sense analogous to the Exact-learning model for finite classes. It is known that PAExact-learnable ⇒ PAC-learnable [BJT02]. In this paper we show that if a class is PAC-learnable (in polynomial time) then it is PAExact-learnable (in polynomial time). Therefore, PAExact-learnable = PAC-learnable. It follows from this result that if a class is PAC-learnable then it is learnable in the probabilistic prediction model from examples with an algorithm that runs in polynomial time for each prediction (polynomial in log(the number of trials)) and that after a polynomial number of mistakes achieves a hypothesis that predicts the target with probability 1 − 1/2^poly. We also show that if a class is PAC-learnable in parallel then it is PAExact-learnable in parallel.