Arrow Research search

Author name cluster

Edward Pyne

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

8 papers
1 author row

Possible papers (8)

FOCS Conference 2025 Conference Paper

Collapsing Catalytic Classes

  • Michal Koucký 0001
  • Ian Mertz
  • Edward Pyne
  • Sasha Sami

A catalytic machine is a space-bounded Turing machine with additional access to a second, much larger work tape, with the caveat that this tape is full, and its contents must be preserved by the computation. Catalytic machines were defined by Buhrman et al. (STOC 2014), who, alongside many follow-up works, exhibited the power of catalytic space (CSPACE) and, in particular, catalytic logspace machines (CL) beyond that of traditional space-bounded machines. Several variants of CL have been proposed, including nondeterministic and co-nondeterministic catalytic computation by Buhrman et al. (STACS 2016) and randomized catalytic computation by Datta et al. (CSR 2020). These and other works raised several questions, such as catalytic analogues of the theorems of Savitch and of Immerman and Szelepcsényi. Catalytic computation was recently derandomized by Cook et al. (STOC 2025), but only in certain parameter regimes. We settle almost all questions regarding randomized and nondeterministic catalytic computation by giving an optimal reduction from catalytic space with additional resources to the corresponding non-catalytic space classes. With regard to nondeterminism, our main result is that CL = CNL, and with regard to randomness we show CL = CPrL, where CPrL denotes randomized catalytic logspace in which the accepting probability can be arbitrarily close to 1/2. We also obtain a number of near-optimal partial results for nondeterministic and randomized catalytic computation with less catalytic space. We show catalytic versions of Savitch's theorem, of Immerman–Szelepcsényi, and of the derandomization results of Nisan and of Saks and Zhou, all of which are unconditional and hold for all parameter settings. Our results build on the compress-or-compute framework of Cook et al. (STOC 2025). Despite proving broader and stronger results, our framework is simpler and more modular.

STOC Conference 2025 Conference Paper

When Connectivity Is Hard, Random Walks Are Easy with Non-determinism

  • Dean Doron
  • Edward Pyne
  • Roei Tell
  • R. Ryan Williams

Two fundamental problems on directed graphs are to decide s-t connectivity, and to estimate the behavior of random walks. Currently, there is no known algorithm for s-t connectivity running in polynomial time and $n^{o(1)}$ space, and no known algorithm for estimating the n-step random walk matrix running in non-deterministic logspace. We show that for every directed graph, at least one of these problems is solvable in time and space that significantly improve on the respective state-of-the-art. In particular, there is a pair of algorithms $A_1$ and $A_2$ such that for every graph G, either:

  • $A_1(G)$ outputs the transitive closure of G in polynomial time and polylogarithmic space, or
  • $A_2(G)$ outputs an approximation of the n-step random walk matrix of G in non-deterministic logspace.

As one application, we show surprisingly tight win-win results for space-bounded complexity. For example, for certain parameter regimes, either Savitch's theorem can be non-trivially sped up, or randomized space can be almost completely derandomized. We also apply our techniques to significantly weaken the assumptions required to derandomize space-bounded computation, and to make non-deterministic space-bounded computation unambiguous. Specifically, we deduce such conclusions from lower bounds against uniform circuits of polynomial size, which is an exponential improvement on the required hardness in previous works (Doron–Pyne–Tell STOC 2024, Li–Pyne–Tell FOCS 2024). We further show similar results for minimal-memory derandomization (Doron–Tell CCC 2024). To prove these results, we substantially improve the array of technical tools introduced in recent years for studying hardness-vs.-randomness for bounded-space computation.
In particular, we develop derandomized distinguish-to-predict transformations for new types of distinguishers (corresponding to compositions of PRGs with weak distinguishers), we construct a derandomized logspace reconstruction procedure for the Shaltiel–Umans generator (JACM 2005) that can compress hard truth-tables to polylogarithmic size, and we design a version of the Chen–Tell generator (FOCS 2021) that is particularly suitable for the space-bounded setting.
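For context on the object this abstract refers to, a minimal sketch of the n-step random walk matrix of a directed graph (this is the standard definition, not the paper's algorithm; the toy graph `adj` is an illustrative assumption):

```python
import numpy as np

# The n-step random walk matrix of a directed graph is W^n, where W is the
# transition matrix with W[u, v] = 1/outdeg(u) for each edge (u, v).
# Toy graph: a directed 3-cycle 0 -> 1 -> 2 -> 0.
adj = {0: [1], 1: [2], 2: [0]}
n = len(adj)

W = np.zeros((n, n))
for u, nbrs in adj.items():
    for v in nbrs:
        W[u, v] = 1.0 / len(nbrs)

# Exact computation by matrix powering; the paper concerns *approximating*
# this matrix in non-deterministic logspace, which is open in general.
steps = 3
Wn = np.linalg.matrix_power(W, steps)
print(Wn)  # after 3 steps the cycle returns to its start: the identity matrix
```

Exact powering takes polynomial time and linear space in the size of the matrix; the win-win result above is about doing far better on one of the two problems for every fixed graph.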

FOCS Conference 2024 Conference Paper

Distinguishing, Predicting, and Certifying: On the Long Reach of Partial Notions of Pseudorandomness

  • Jiatu Li
  • Edward Pyne
  • Roei Tell

This paper revisits the study of two classical technical tools in theoretical computer science: Yao's transformation of distinguishers to next-bit predictors (FOCS 1982), and the "reconstruction paradigm" in pseudorandomness (e.g., as in Nisan and Wigderson, JCSS 1994). Recent works of Pyne, Raz, and Zhan (FOCS 2023) and Doron, Pyne, and Tell (STOC 2024) showed that both of these tools can be derandomized in the specific context of read-once branching programs (ROBPs), but left open the question of derandomizing them in more general settings. Our main contributions give appealing evidence that derandomization of the two tools is possible in general settings, show surprisingly strong consequences of such derandomization, and reveal several new settings where such derandomization is unconditionally possible for algorithms stronger than ROBPs (with useful consequences). Specifically:

  • We show that derandomizing these tools is equivalent to general derandomization. Specifically, we show that derandomizing distinguish-to-predict transformations is equivalent to prBPP = prP, and that derandomizing reconstruction procedures (in a more general sense that we introduce) is equivalent to prBPP = prZPP. These statements hold even when scaled down to weak circuit classes and to algorithms that run in super-polynomial time.
  • Our main technical contributions are unconditional constructions of derandomized versions of Yao's transformation (or reductions of this task to other problems) for classes and for algorithms beyond ROBPs. Consequently, we deduce new results: a significant relaxation of the hypotheses required to derandomize the isolation lemma for logspace algorithms and deduce that NL = UL; and proofs that derandomization necessitates targeted PRGs in catalytic logspace (unconditionally) and in logspace (conditionally).
In addition, we introduce a natural subclass of prZPP that has been implicitly studied in recent works (Korten FOCS 2021, CCC 2022): The class of problems reducible to a problem called “Lossy Code”. We provide a structural characterization for this class in terms of derandomized reconstruction procedures, and show that this characterization is robust to several natural variations. Lastly, we present alternative proofs for classical results in the theory of pseudorandomness (such as two-sided derandomization reducing to one-sided), relying on the notion of deterministically transforming distinguishers to predictors as the main technical tool.

FOCS Conference 2023 Conference Paper

Certified Hardness vs. Randomness for Log-Space

  • Edward Pyne
  • Ran Raz
  • Wei Zhan

Let $\mathcal{L}$ be a language that can be decided in linear space and let $\epsilon > 0$ be any constant. Let $\mathcal{A}$ be the exponential hardness assumption that for every n, membership in $\mathcal{L}$ for inputs of length n cannot be decided by circuits of size smaller than $2^{\epsilon n}$. We prove that for every function $f: \{0,1\}^{*} \rightarrow \{0,1\}$, computable by a randomized logspace algorithm R, there exists a deterministic logspace algorithm D (attempting to compute f), such that on every input x of length n, the algorithm D outputs one of the following:

  1) The correct value $f(x)$.
  2) The string: "I am unable to compute $f(x)$ because the hardness assumption $\mathcal{A}$ is false", followed by a (provenly correct) circuit of size smaller than $2^{\epsilon n'}$ for membership in $\mathcal{L}$ for inputs of length $n'$, for some $n' = \Theta(\log n)$; that is, a circuit that refutes $\mathcal{A}$.

Moreover, D is explicitly constructed, given R. We note that previous works on the hardness-versus-randomness paradigm give derandomized algorithms that rely blindly on the hardness assumption. If the hardness assumption is false, the algorithms may output incorrect values, and thus a user cannot trust that an output given by the algorithm is correct. Instead, our algorithm D verifies the computation so that it never outputs an incorrect value. Thus, if D outputs a value for $f(x)$, that value is certified to be correct. Moreover, if D does not output a value for $f(x)$, it alerts that the hardness assumption was found to be false, and refutes the assumption. Our next result is a universal derandomizer for BPL (the class of problems solvable by bounded-error randomized logspace algorithms)[1]: we give a deterministic algorithm U that takes as input a randomized logspace algorithm R and an input x, and simulates the computation of R on x deterministically. Under the widely believed assumption $\mathbf{BPL} = \mathbf{L}$, the space used by U is at most $C_{R} \cdot \log n$ (where $C_{R}$ is a constant depending on R). Moreover, for every constant $c \geq 1$, if $\mathbf{BPL} \subseteq \mathbf{SPACE}[(\log n)^{c}]$ then the space used by U is at most $C_{R} \cdot (\log n)^{c}$. Finally, we prove that if optimal hitting sets for ordered branching programs exist, then there is a deterministic logspace algorithm that, given black-box access to an ordered branching program B of size n, estimates the probability that B accepts on a uniformly random input. This extends the result of Cheng and Hoza (CCC 2020), who proved that an optimal hitting set implies a white-box two-sided derandomization.

[1] Our result is stated and proved for promise-BPL, but we ignore this difference in the abstract.

FOCS Conference 2023 Conference Paper

Singular Value Approximation and Sparsifying Random Walks on Directed Graphs

  • AmirMahdi Ahmadinejad
  • John Peebles
  • Edward Pyne
  • Aaron Sidford
  • Salil P. Vadhan

In this paper, we introduce a new, spectral notion of approximation between directed graphs, which we call singular value (SV) approximation. SV-approximation is stronger than previous notions of spectral approximation considered in the literature, including spectral approximation of Laplacians for undirected graphs [ST04], standard approximation for directed graphs [CKP+17], and unit-circle (UC) approximation for directed graphs [AKM+20]. Further, SV-approximation enjoys several useful properties not possessed by previous notions of approximation, e.g., it is preserved under products of random-walk matrices and bounded matrices. We provide a nearly linear-time algorithm for SV-sparsifying (and hence UC-sparsifying) Eulerian directed graphs, as well as $\ell$-step random walks on such graphs, for any $\ell \leq \operatorname{poly}(n)$. Combined with the Eulerian scaling algorithms of [CKK+18], given an arbitrary (not necessarily Eulerian) directed graph and a set S of vertices, we can approximate the stationary probability mass of the $(S, S^{c})$ cut in an $\ell$-step random walk to within a multiplicative error of $1/\operatorname{polylog}(n)$ and an additive error of $1/\operatorname{poly}(n)$ in nearly linear time. As a starting point for these results, we provide a simple black-box reduction from SV-sparsifying Eulerian directed graphs to SV-sparsifying undirected graphs; such a directed-to-undirected reduction was not known for previous notions of spectral approximation.
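To make the cut quantity above concrete, here is a hedged sketch of one natural reading of the stationary $(S, S^c)$ cut mass of an $\ell$-step walk: $\sum_{u \in S, v \notin S} \pi(u)\, W^{\ell}[u, v]$, computed exactly by brute force on a toy Eulerian digraph (the graph, the set S, and this exact formalization are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# For an Eulerian digraph (in-degree = out-degree at every vertex), the
# stationary distribution of the random walk is proportional to out-degree.
# Toy Eulerian graph: the 4-cycle 0->1->2->3->0 plus the chord pair 0->2, 2->0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 0)]
n = 4

outdeg = np.zeros(n)
for u, _ in edges:
    outdeg[u] += 1

W = np.zeros((n, n))
for u, v in edges:
    W[u, v] += 1.0 / outdeg[u]

pi = outdeg / outdeg.sum()        # stationary distribution (Eulerian case)
assert np.allclose(pi @ W, pi)

# Mass crossing from S to its complement after l steps, started from pi.
l = 2
Wl = np.linalg.matrix_power(W, l)
S = {0, 1}
cut_mass = sum(pi[u] * Wl[u, v] for u in S for v in range(n) if v not in S)
print(cut_mass)  # equals 1/3 for this toy graph
```

The brute-force computation takes $\Theta(n^3 \log \ell)$ time; the abstract's point is that sparsification brings approximating this quantity down to nearly linear time.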