Arrow Research search

Author name cluster

Scott Aaronson

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

22 papers
2 author rows

Possible papers

22

ICML Conference 2023 Conference Paper

Learning Distributions over Quantum Measurement Outcomes

  • Weiyuan Gong
  • Scott Aaronson

Shadow tomography for quantum states provides a sample-efficient approach to predicting the measurement outcomes of quantum systems. However, these shadow tomography procedures yield poor bounds when there are more than two outcomes per measurement. In this paper, we consider a general problem of learning properties of quantum states: given an unknown $d$-dimensional quantum state $\rho$ and $M$ unknown quantum measurements $\mathcal{M}_1, \ldots, \mathcal{M}_M$, each with $K \geq 2$ outcomes, estimate the probability distribution obtained by applying each $\mathcal{M}_i$ to $\rho$, to within total variation distance $\epsilon$. Compared to the special case $K=2$, we must learn unknown distributions rather than single values. Here, we propose an online shadow tomography procedure that solves this problem with high success probability using $\tilde{O}(K\log^2 M\log d/\epsilon^4)$ copies of $\rho$. We further prove an information-theoretic lower bound showing that at least $\Omega(\min\{d^2, K+\log M\}/\epsilon^2)$ copies of $\rho$ are required to solve this problem with high success probability. Our procedure's sample complexity depends only logarithmically on $M$ and $d$, and its dependence on $K$ is sample-optimal.
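The $K$-outcome setting above has a simple classical analogue: estimating a $K$-outcome distribution to total variation distance $\epsilon$ from samples, which already takes on the order of $K/\epsilon^2$ samples. A minimal sketch of that analogue (not the paper's quantum procedure; the distribution, sample budget, and constant are demo assumptions):

```python
import numpy as np

def empirical_tv_error(p, num_samples, rng):
    """Sample from the K-outcome distribution p, form the empirical
    estimate, and return the total variation distance to p."""
    counts = rng.multinomial(num_samples, p)
    p_hat = counts / num_samples
    return 0.5 * np.abs(p_hat - p).sum()

rng = np.random.default_rng(0)
K, eps = 8, 0.1
p = rng.dirichlet(np.ones(K))      # an arbitrary 8-outcome distribution
num = int(4 * K / eps**2)          # O(K/eps^2) samples, constant chosen for the demo
err = empirical_tv_error(p, num, rng)
```

With $K = 8$ and $\epsilon = 0.1$ this draws 3,200 samples; the quantum procedure must additionally cope with the measurements themselves being unknown and the copies of $\rho$ being consumed by measurement.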

FOCS Conference 2019 Conference Paper

A Quantum Query Complexity Trichotomy for Regular Languages

  • Scott Aaronson
  • Daniel Grier
  • Luke Schaeffer

We present a trichotomy theorem for the quantum query complexity of regular languages. Every regular language has quantum query complexity Θ(1), Θ̃(√n), or Θ(n). The extreme uniformity of regular languages prevents them from taking any other asymptotic complexity. This is in contrast to even the context-free languages, which we show can have query complexity Θ(n^c) for all computable c ∈ [1/2, 1]. Our result implies an equivalent trichotomy for the approximate degree of regular languages, and a dichotomy (either Θ(1) or Θ(n)) for sensitivity, block sensitivity, certificate complexity, deterministic query complexity, and randomized query complexity. The heart of the classification theorem is an explicit quantum algorithm which decides membership in any star-free language in Õ(√n) time. This well-studied family of regular languages admits many interesting characterizations, for instance, as those languages expressible as sentences in first-order logic over the natural numbers with the less-than relation. Therefore, not only do the star-free languages capture functions such as OR, they can also express functions such as "there exists a pair of 2's such that everything between them is a 0." Thus, we view the algorithm for star-free languages as a nontrivial generalization of Grover's algorithm which extends the quantum quadratic speedup to a much wider range of string-processing algorithms than was previously known. We show a variety of applications, all obtained as immediate consequences of our classification theorem: new quantum algorithms for dynamic constant-depth Boolean formulas, balanced parentheses nested constantly many levels deep, binary addition, a restricted word break problem, and path discovery in narrow grids.
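The quoted star-free property ("a pair of 2's with only 0's between them") is easy to decide classically with a single left-to-right scan; that Θ(n) scan is the baseline the paper's Õ(√n) quantum algorithm improves quadratically. A minimal sketch (the function name and alphabet handling are our own):

```python
def has_zero_bridged_twos(s):
    """Linear-time scan for the star-free property from the abstract:
    some pair of '2' characters with only '0's strictly between them."""
    last2 = None  # index of the most recent '2' reachable through only '0's
    for i, c in enumerate(s):
        if c == '2':
            if last2 is not None:
                return True
            last2 = i
        elif c != '0':
            last2 = None  # any non-'0' character breaks the all-zero bridge
    return False
```

For example, "2002" satisfies the property while "2102" does not.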

NeurIPS Conference 2018 Conference Paper

Online Learning of Quantum States

  • Scott Aaronson
  • Xinyi Chen
  • Elad Hazan
  • Satyen Kale
  • Ashwin Nayak

Suppose we have many copies of an unknown $n$-qubit state $\rho$. We measure some copies of $\rho$ using a known two-outcome measurement $E_1$, then other copies using a measurement $E_2$, and so on. At each stage $t$, we generate a current hypothesis $\omega_t$ about the state $\rho$, using the outcomes of the previous measurements. We show that it is possible to do this in a way that guarantees that $|\mathrm{Tr}(E_i \omega_t) - \mathrm{Tr}(E_i\rho)|$, the error in our prediction for the next measurement, is at least $\epsilon$ at most $O(n/\epsilon^2)$ times. Even in the non-realizable setting---where there could be arbitrary noise in the measurement outcomes---we show how to output hypothesis states that incur at most $O(\sqrt{Tn})$ excess loss over the best possible state on the first $T$ measurements. These results generalize a 2007 theorem by Aaronson on the PAC-learnability of quantum states to the online and regret-minimization settings. We give three different ways to prove our results---using convex optimization, quantum postselection, and sequential fat-shattering dimension---which have different advantages in terms of parameters and portability.
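One of the paper's three proof routes goes through convex optimization; an online learner in that spirit can be sketched with a matrix-multiplicative-weights-style update. This is an illustration, not the paper's exact algorithm: the learning rate, the squared loss, the noiseless "observed" values, and the random rank-2 projective measurements are all demo assumptions.

```python
import numpy as np

def mat_exp_herm(H):
    """Matrix exponential of a Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(w)) @ V.conj().T

def gibbs_state(grad_sum, eta):
    """Matrix-multiplicative-weights iterate: exp(-eta*G) / Tr(exp(-eta*G))."""
    X = mat_exp_herm(-eta * grad_sum)
    return X / np.trace(X).real

rng = np.random.default_rng(1)
d, eta, T = 4, 0.5, 200
# hidden "true" state: a random pure state (demo assumption)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

grad_sum = np.zeros((d, d), dtype=complex)
omega = gibbs_state(grad_sum, eta)   # starts at the maximally mixed state I/d
errs = []
for t in range(T):
    # a random two-outcome measurement: projector onto a random 2-dim subspace
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, _ = np.linalg.qr(A)
    E = Q[:, :2] @ Q[:, :2].conj().T
    pred = np.trace(E @ omega).real
    truth = np.trace(E @ rho).real       # stand-in for an observed frequency
    errs.append(abs(pred - truth))
    grad_sum += 2 * (pred - truth) * E   # gradient of the squared loss at omega
    omega = gibbs_state(grad_sum, eta)
```

Each iterate stays a valid density matrix (Hermitian, positive semidefinite, unit trace), which is what makes the mirror-descent view work.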

STOC Conference 2018 Conference Paper

Shadow tomography of quantum states

  • Scott Aaronson

We introduce the problem of *shadow tomography*: given an unknown D-dimensional quantum mixed state ρ, as well as known two-outcome measurements E_1, …, E_M, estimate the probability that each E_i accepts ρ, to within additive error ε. How many copies of ρ are needed to achieve this, with high probability? Surprisingly, we give a procedure that solves the problem by measuring only O(ε^{-5} · log^4 M · log D) copies. This means, for example, that we can learn the behavior of an arbitrary n-qubit state, on *all* accepting/rejecting circuits of some fixed polynomial size, by measuring only n^{O(1)} copies of the state. This resolves an open problem of the author, which arose from his work on private-key quantum money schemes, but which also has applications to quantum copy-protected software, quantum advice, and quantum one-way communication. Recently, building on this work, Brandão et al. have given a different approach to shadow tomography using semidefinite programming, which achieves a savings in computation time.
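To see why the polylogarithmic dependence on M matters, compare against the naive strategy of spending roughly 1/ε² fresh copies per measurement (a Chernoff-bound estimate). The circuit-counting numbers below are illustrative assumptions, and all constants are dropped:

```python
import math

# Back-of-envelope copy counts, all constants dropped.
def log2_naive_copies(log2_M, eps):
    """Naive strategy: ~1/eps^2 fresh copies per measurement, so ~M/eps^2
    copies in total; returned on a log2 scale to avoid overflow."""
    return log2_M + math.log2(1.0 / eps**2)

def log2_shadow_copies(log2_M, log2_D, eps):
    """Shadow tomography bound from the abstract, constants dropped:
    eps^-5 * log^4(M) * log(D)."""
    return math.log2((1.0 / eps**5) * log2_M**4 * log2_D)

# An n-qubit state judged against all ~2^(n^3) circuits of size n^3
# (illustrative numbers only).
n, eps = 50, 0.1
log2_M, log2_D = float(n**3), float(n)
naive = log2_naive_copies(log2_M, eps)
shadow = log2_shadow_copies(log2_M, log2_D, eps)
```

On these numbers the naive copy count has log₂ above 10⁵ while the shadow-tomography count stays below 2^100; for modest M the ε^{-5} factor can make the naive strategy cheaper, so the win is specifically at astronomically large M, such as "all circuits of fixed polynomial size."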

STOC Conference 2015 Conference Paper

Forrelation: A Problem that Optimally Separates Quantum from Classical Computing

  • Scott Aaronson
  • Andris Ambainis

We achieve essentially the largest possible separation between quantum and classical query complexities. We do so using a property-testing problem called Forrelation, where one needs to decide whether one Boolean function is highly correlated with the Fourier transform of a second function. This problem can be solved using 1 quantum query, yet we show that any randomized algorithm needs Ω(√N/log N) queries (improving an Ω(N^{1/4}) lower bound of Aaronson). Conversely, we show that this 1 versus Ω(√N) separation is optimal: indeed, any t-query quantum algorithm whatsoever can be simulated by an O(N^{1−1/2t})-query randomized algorithm. Thus, resolving an open question of Buhrman et al. from 2002, there is no partial Boolean function whose quantum query complexity is constant and whose randomized query complexity is linear. We conjecture that a natural generalization of Forrelation achieves the optimal t versus Ω(N^{1−1/2t}) separation for all t. As a bonus, we show that this generalization is BQP-complete. This yields what's arguably the simplest BQP-complete problem yet known, and gives a second sense in which Forrelation "captures the maximum power of quantum computation."
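The Forrelation quantity itself is simple to write down: Φ_{f,g} = 2^{-3n/2} Σ_{x,y} f(x)(−1)^{x·y} g(y). The promise problem asks to distinguish pairs where Φ is bounded away from zero from pairs where Φ is near zero; one quantum query estimates Φ, while classically near-√N queries are needed. A brute-force evaluator for small n (our own construction, exponential time by design):

```python
from itertools import product

def forrelation(f, g, n):
    """Brute-force Phi_{f,g} = 2^(-3n/2) * sum_{x,y} f(x) * (-1)^(x.y) * g(y)
    for f, g mapping n-bit tuples to +/-1 (small n only)."""
    N = 2 ** n
    total = 0.0
    for x in product((0, 1), repeat=n):
        for y in product((0, 1), repeat=n):
            dot = sum(a * b for a, b in zip(x, y))
            total += f(x) * (-1) ** dot * g(y)
    return total / N ** 1.5
```

As a sanity check, constant f = g = 1 gives Φ = 1/√N, since the inner sum over y vanishes except at x = 0.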

STOC Conference 2010 Conference Paper

A full characterization of quantum advice

  • Scott Aaronson
  • Andrew Drucker

We prove the following surprising result: given any quantum state rho on n qubits, there exists a local Hamiltonian H on poly(n) qubits (e.g., a sum of two-qubit interactions), such that any ground state of H can be used to simulate rho on all quantum circuits of fixed polynomial size. In terms of complexity classes, this implies that BQP/qpoly is contained in QMA/poly, which supersedes the previous result of Aaronson that BQP/qpoly is contained in PP/poly. Indeed, we can exactly characterize quantum advice, as equivalent in power to untrusted quantum advice combined with trusted classical advice. Proving our main result requires combining a large number of previous tools -- including a result of Alon et al. on learning of real-valued concept classes, a result of Aaronson on the learnability of quantum states, and a result of Aharonov and Regev on "QMA+ super-verifiers" -- and also creating some new ones. The main new tool is a so-called majority-certificates lemma, which is closely related to boosting in machine learning, and which seems likely to find independent applications. In its simplest version, this lemma says the following. Given any set S of Boolean functions on n variables, any function f in S can be expressed as the pointwise majority of m=O(n) functions f1,...,fm in S, such that each fi is the unique function in S compatible with O(log|S|) input/output constraints.

STOC Conference 2008 Conference Paper

Algebrization: a new barrier in complexity theory

  • Scott Aaronson
  • Avi Wigderson

Any proof of P ≠ NP will have to overcome two barriers: relativization and natural proofs. Yet over the last decade, we have seen circuit lower bounds (for example, that PP does not have linear-size circuits) that overcome both barriers simultaneously. So the question arises of whether there is a third barrier to progress on the central questions in complexity theory. In this paper we present such a barrier, which we call algebraic relativization or algebrization. The idea is that, when we relativize some complexity class inclusion, we should give the simulating machine access not only to an oracle A, but also to a low-degree extension of A over a finite field or ring. We systematically go through basic results and open problems in complexity theory to delineate the power of the new algebrization barrier. First, we show that all known non-relativizing results based on arithmetization -- both inclusions such as IP=PSPACE and MIP=NEXP, and separations such as MAEXP not in P/poly -- do indeed algebrize. Second, we show that almost all of the major open problems -- including P versus NP, P versus RP, and NEXP versus P/poly -- will require non-algebrizing techniques. In some cases algebrization seems to explain exactly why progress stopped where it did: for example, why we have superlinear circuit lower bounds for PromiseMA but not for NP. Our second set of results follows from lower bounds in a new model of algebraic query complexity, which we introduce in this paper and which is interesting in its own right. Some of our lower bounds use direct combinatorial and algebraic arguments, while others stem from a surprising connection between our model and communication complexity. Using this connection, we are also able to give an MA-protocol for the Inner Product function with O(√n log n) communication (essentially matching a lower bound of Klauck).

FOCS Conference 2008 Conference Paper

The Polynomial Method in Quantum and Classical Computing

  • Scott Aaronson

In 1889, A. A. Markov proved a powerful result about low-degree real polynomials: roughly speaking, that such polynomials cannot have a sharp jump followed by a long, relatively flat part. A century later, this result - as well as other results from the field of approximation theory - came to play a surprising role in classical and quantum complexity theory. In this article, the author tries to tell this story in an elementary way, beginning with classic results in approximation theory and ending with some recent applications.

STOC Conference 2005 Conference Paper

The complexity of agreement

  • Scott Aaronson

A celebrated 1976 theorem of Aumann asserts that Bayesian agents with common priors can never "agree to disagree": if their opinions about any topic are common knowledge, then those opinions must be equal. But two key questions went unaddressed: first, can the agents reach agreement after a conversation of reasonable length? Second, can the computations needed for that conversation be performed efficiently? This paper answers both questions in the affirmative, thereby strengthening Aumann's original conclusion. We show that for two agents with a common prior to agree within ε about the expectation of a [0,1] variable with high probability over their prior, it suffices for them to exchange O(1/ε^2) bits. This bound is completely independent of the number of bits n of relevant knowledge that the agents have. We also extend the bound to three or more agents; and we give an example where the "standard protocol" (which consists of repeatedly announcing one's current expectation) nearly saturates the bound, while a new "attenuated protocol" does better. Finally, we give a protocol that would cause two Bayesians to agree within ε after exchanging O(1/ε^2) messages, and that can be simulated by agents with limited computational resources. By this we mean that, after examining the agents' knowledge and a transcript of their conversation, no one would be able to distinguish the agents from perfect Bayesians. The time used by the simulation procedure is exponential in 1/ε^6 but not in n.
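The "standard protocol" can be simulated directly on a toy finite state space: each agent in turn announces its conditional expectation, and everyone then conditions on the set of states consistent with that announcement. The uniform prior, partitions, and random variable below are demo assumptions:

```python
from fractions import Fraction

def expectation(states, X):
    """E[X | states] under a uniform common prior (demo assumption)."""
    return sum(X[s] for s in states) / len(states) if states else None

def announce(partition, public, X, true_state):
    """One step of the standard protocol: the agent announces its conditional
    expectation; everyone then restricts the common-knowledge set to the
    states consistent with that announcement."""
    cell = next(c for c in partition if true_state in c)
    value = expectation(cell & public, X)
    consistent = set()
    for c in partition:
        if expectation(c & public, X) == value:
            consistent |= c & public
    return value, consistent

# Toy example: 8 equally likely states; agent A's partition reveals the
# high bit of the state, agent B's the low bit; the true state is 3.
X = [Fraction(v) for v in (0, 1, 1, 1, 0, 0, 1, 0)]
part_A = [{0, 1, 2, 3}, {4, 5, 6, 7}]
part_B = [{0, 2, 4, 6}, {1, 3, 5, 7}]
true_state, public = 3, set(range(8))

history = []
for partition in [part_A, part_B] * 4:   # agents alternate announcements
    value, public = announce(partition, public, X, true_state)
    history.append(value)
    if len(history) >= 2 and history[-1] == history[-2]:
        break   # expectations are now equal: the agents agree
```

Here agreement is reached after three announcements (3/4, then 1, then 1); Aumann's theorem guarantees eventual agreement, while the paper bounds how much communication that takes.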

STOC Conference 2004 Conference Paper

Lower bounds for local search by quantum arguments

  • Scott Aaronson

The problem of finding a local minimum of a black-box function is central for understanding local search as well as quantum adiabatic algorithms. For functions on the Boolean hypercube {0,1}^n, we show a lower bound of Ω(2^{n/4}/n) on the number of queries needed by a quantum computer to solve this problem. More surprisingly, our approach, based on Ambainis's quantum adversary method, also yields a lower bound of Ω(2^{n/2}/n^2) on the problem's classical randomized query complexity. This improves and simplifies a 1983 result of Aldous. Finally, in both the randomized and quantum cases, we give the first nontrivial lower bounds for finding local minima on grids of constant dimension d ≥ 3.

FOCS Conference 2003 Conference Paper

Quantum Search of Spatial Regions

  • Scott Aaronson
  • Andris Ambainis

Can Grover's quantum search algorithm speed up search of a physical region, for example a 2D grid of size √n × √n? The problem is that √n time seems to be needed for each query, just to move amplitude across the grid. Here we show that this problem can be surmounted, refuting a claim to the contrary by Benioff. In particular, we show how to search a d-dimensional hypercube in time O(√n) for d ≥ 3, or O(√n log^3 n) for d = 2. More generally, we introduce a model of quantum query complexity on graphs, motivated by fundamental physical limits on information storage, particularly the holographic principle from black hole thermodynamics. Our results in this model include almost-tight upper and lower bounds for many search tasks; a generalized algorithm that works for any graph with good expansion properties, not just hypercubes; and relationships among several notions of locality for unitary matrices acting on graphs. As an application of our results, we give an O(√n)-qubit communication protocol for the disjointness problem, which improves an upper bound of Hoyer and de Wolf and matches a lower bound of Razborov.

STOC Conference 2002 Conference Paper

Quantum lower bound for the collision problem

  • Scott Aaronson

The collision problem is to decide whether a function X : {1, …, n} → {1, …, n} is one-to-one or two-to-one, given that one of these is the case. We show a lower bound of Ω(n^{1/5}) on the number of queries needed by a quantum computer to solve this problem with bounded error probability. The best known upper bound is O(n^{1/3}), but obtaining any lower bound better than Ω(1) was an open problem since 1997. Our proof uses the polynomial method augmented by some new ideas. We also give a lower bound of Ω(n^{1/7}) for the problem of deciding whether two sets are equal or disjoint on a constant fraction of elements. Finally, we give implications of these results for quantum complexity theory.
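Classically, the collision problem is solved by the "birthday" strategy with Θ(√n) queries, which the O(n^{1/3}) quantum upper bound mentioned above beats. A seeded sketch of the classical strategy (the example functions and query budget are demo assumptions):

```python
import random

def finds_collision(f, domain_size, num_queries, rng):
    """Classical birthday strategy: query distinct random inputs and
    check whether any two share an output."""
    seen = {}
    for x in rng.sample(range(domain_size), num_queries):
        y = f(x)
        if y in seen:
            return True
        seen[y] = x
    return False

def one_to_one(x):   # a permutation: no collisions exist
    return x

def two_to_one(x):   # every output has exactly two preimages
    return x // 2

n = 10_000
q = 8 * int(n ** 0.5)   # ~O(sqrt(n)) queries; factor 8 for high success probability
found_perm = finds_collision(one_to_one, n, q, random.Random(0))
found_two = finds_collision(two_to_one, n, q, random.Random(0))
```

With ~8√n distinct queries into a two-to-one function, the expected number of colliding pairs among the samples is large, so the strategy succeeds with overwhelming probability; against a one-to-one function it can never find a collision.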