Arrow Research search

Author name cluster

Jordan Cotler

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
1 author row

Possible papers

FOCS 2024 Conference Paper

Computational Dynamical Systems

  • Jordan Cotler
  • Semon Rezchikov

We study the computational complexity theory of smooth, finite-dimensional dynamical systems. Building on previous work, we give definitions for what it means for a smooth dynamical system to simulate a Turing machine. We then show that ‘chaotic’ dynamical systems (more precisely, Axiom A systems) and ‘integrable’ dynamical systems (more generally, measure-preserving systems) cannot robustly simulate universal Turing machines, although such machines can be robustly simulated by other kinds of dynamical systems. Subsequently, we show that any Turing machine that can be encoded into a structurally stable one-dimensional dynamical system must have a decidable halting problem, and moreover an explicit time complexity bound in instances where it does halt. More broadly, our work elucidates what it means for one ‘machine’ to simulate another, and emphasizes the necessity of defining low-complexity ‘encoders’ and ‘decoders’ to translate between the dynamics of the simulation and the system being simulated. We highlight how the notion of a computational dynamical system leads to questions at the intersection of computational complexity theory, dynamical systems theory, and real algebraic geometry.
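The encoder/decoder idea from the abstract can be illustrated with a toy example (this is not the paper's construction): encode a finite binary tape into a point of $[0,1)$; the doubling map then acts as a left shift on the binary expansion, so a low-complexity decoder (reading the leading bit) recovers one tape symbol per step of the dynamics.

```python
# Toy illustration, not the paper's construction: a binary tape is
# encoded as a real number in [0, 1); the doubling map x -> 2x mod 1
# shifts the binary expansion left, so reading the leading bit at each
# step plays the role of a low-complexity "decoder".

def encode(tape):
    """Encoder: map a finite binary tape to a point in [0, 1)."""
    return sum(bit / 2 ** (i + 1) for i, bit in enumerate(tape))

def doubling_map(x):
    """One step of the dynamical system: the shift on binary expansions."""
    return (2 * x) % 1.0

def decode(x):
    """Decoder: read the symbol currently at the head of the expansion."""
    return int(x >= 0.5)

tape = [1, 0, 1, 1, 0]
x = encode(tape)
read = []
for _ in tape:
    read.append(decode(x))
    x = doubling_map(x)

print(read)  # recovers the original tape: [1, 0, 1, 1, 0]
```

The dyadic encoding keeps every intermediate value exactly representable in binary floating point, so the decoded symbols match the tape exactly; a genuine simulation of a universal Turing machine would of course need a far more careful (and robust) encoding, which is precisely what the paper analyzes.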

FOCS 2021 Conference Paper

Exponential Separations Between Learning With and Without Quantum Memory

  • Sitan Chen
  • Jordan Cotler
  • Hsin-Yuan Huang
  • Jerry Li 0001

We study the power of quantum memory for learning properties of quantum systems and dynamics, which is of great importance in physics and chemistry. Many state-of-the-art learning algorithms require access to an additional external quantum memory. While such a quantum memory is not required a priori, in many cases algorithms that do not utilize quantum memory require much more data than those that do. We show that this trade-off is inherent in a wide range of learning problems. Our results include the following:

  • We show that to perform shadow tomography on an $n$-qubit state $\rho$ with $M$ observables, any algorithm without quantum memory requires $\tilde{\Omega}(\min(M, 2^{n}))$ samples of $\rho$ in the worst case. Up to log factors, this matches the upper bound of [1], and completely resolves an open question in [2], [3].
  • We establish exponential separations between algorithms with and without quantum memory for purity testing, distinguishing scrambling and depolarizing evolutions, and uncovering symmetry in physical dynamics. Our separations improve and generalize prior work of [4] by allowing for a broader class of algorithms without quantum memory.
  • We give the first trade-off between quantum memory and sample complexity. More precisely, we prove that to estimate the absolute values of all $n$-qubit Pauli observables, algorithms with $k < n$ qubits of quantum memory require at least $\Omega(2^{(n-k)/3})$ samples, but there is an algorithm using $n$-qubit quantum memory that requires only $\mathcal{O}(n)$ samples.

The separations we show are large enough that they could already be evident with tens of qubits. This provides a concrete path towards demonstrating a real-world advantage for learning algorithms with quantum memory.
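The "no quantum memory" regime in the abstract restricts a learner to measuring each copy of the state independently and discarding it. A minimal classical simulation of that regime (a toy sketch, not any of the paper's protocols) is estimating a single-qubit Pauli expectation $\langle Z \rangle$ from repeated single-copy measurements, where accuracy improves only as $O(1/\sqrt{\text{samples}})$:

```python
# Toy sketch, not the paper's protocols: a memoryless learner estimating
# the Pauli expectation <Z> of a single qubit. Each copy is measured in
# the Z basis and discarded, so the estimate is just a sample mean of
# the +1/-1 outcomes, converging at the standard O(1/sqrt(n)) rate.
import numpy as np

rng = np.random.default_rng(0)

def sample_z(p0, n):
    """Measure n fresh copies in the Z basis; p0 = Pr(outcome 0)."""
    outcomes = rng.random(n) < p0          # True -> outcome 0
    return np.where(outcomes, 1.0, -1.0)   # Z eigenvalues +1 / -1

p0 = 0.8  # a state with <Z> = 2*p0 - 1 = 0.6
estimate = sample_z(p0, 100_000).mean()
print(estimate)  # close to the true value 0.6
```

Estimating all $2^n$ Pauli observables this way requires exponentially many such rounds; the point of the paper's third result is that $n$ qubits of quantum memory collapse that cost to $\mathcal{O}(n)$ samples, which no memoryless strategy can match.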