
Author name cluster

Roei Tell

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

15 papers

Possible papers


STOC 2025 · Conference Paper

Fiat-Shamir in the Plain Model from Derandomization (Or: Do Efficient Algorithms Believe that NP = PSPACE?)

  • Lijie Chen 0001
  • Ron D. Rothblum
  • Roei Tell

A classical challenge in complexity theory and cryptography is to simulate interactive proof systems by non-interactive proof systems. In this work we leverage approaches from recent works in derandomization to address this challenge, focusing on non-interactive simulations that are sound against uniform adversarial algorithms. Our results concern fundamental questions in complexity theory, such as the NP vs. PSPACE question, as well as in cryptography, such as the question of constructing non-interactive zero-knowledge arguments for NP from unstructured assumptions. Relying on strong complexity-theoretic hardness assumptions (described below), we show:

1. *Complexity theory.* We prove that PSPACE is contained in the “computationally sound” version of NP. Specifically, for every $L \in PSPACE$, membership in $L$ can be verified by an NP-type (deterministic, polynomial-time) verifier $V$ with the following guarantee: the verifier accepts every $x \in L$ when given a proof $\pi$ from an honest prover that runs in fixed exponential time $T_P$; and every uniform adversary running in probabilistic time $\mathrm{poly}(T_P)$ cannot find $x \notin L$ and $\pi$ such that $V(x,\pi)=1$, except with probability negligible in $T_P$. As a corollary in the area of bounded arithmetic, under the same assumptions, we deduce that $NP \neq PSPACE$ is not provable in the theory $APC_1$. This is a strong theory, which captures many of the major results in complexity.

2. *Cryptography.* We construct new cryptographic protocols, including succinct non-interactive arguments (SNARGs) for NC in the plain model, as well as non-interactive zero-knowledge and witness-indistinguishable (NIZK and NIWI) proof systems for NP, all with computational soundness against uniform adversaries. The SNARG relies solely on the aforementioned complexity-theoretic assumption, whereas the NIZK and NIWI also require a sub-exponentially secure one-way function (which should be injective in the case of the NIWI). These are the first constructions of the above protocols that do not rely on highly structured cryptographic primitives.

Roughly speaking, following Chen and Tell (FOCS 2021, STOC 2023), the complexity-theoretic hardness assumptions throughout our paper assert the existence of functions $f \colon \{0,1\}^n \to \{0,1\}^k$ that are computable in polynomial time and hard for bounded-space machines (say, linear space) in a strong average-case sense: no efficient algorithm can find an input $x$ on which the bounded-space machine computes $f$, except with negligible probability.
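
As a rough formalization of the guarantee in item 1 (a paraphrase of the abstract above, not the paper's exact theorem; $P$ denotes the honest prover and $A$ the uniform adversary):

\[
\begin{aligned}
\textit{Completeness:}\ & \forall x \in L:\ V\bigl(x, P(x)\bigr) = 1, \ \text{where } P \text{ runs in time } T_P;\\
\textit{Uniform soundness:}\ & \forall A \text{ running in probabilistic time } \mathrm{poly}(T_P):\ \Pr_{(x,\pi) \leftarrow A}\bigl[\,x \notin L \,\wedge\, V(x,\pi) = 1\,\bigr] \;\le\; \mathrm{negl}(T_P).
\end{aligned}
\]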

STOC 2025 · Conference Paper

When Connectivity Is Hard, Random Walks Are Easy with Non-determinism

  • Dean Doron
  • Edward Pyne
  • Roei Tell
  • R. Ryan Williams

Two fundamental problems on directed graphs are to decide $s$-$t$ connectivity, and to estimate the behavior of random walks. Currently, there is no known algorithm for $s$-$t$ connectivity running in polynomial time and $n^{o(1)}$ space, and no known algorithm for estimating the $n$-step random walk matrix running in non-deterministic logspace. We show that for every directed graph, at least one of these problems is solvable in time and space that significantly improve on the respective state of the art. In particular, there is a pair of algorithms $A_1$ and $A_2$ such that for every graph $G$, either:

  • $A_1(G)$ outputs the transitive closure of $G$ in polynomial time and polylogarithmic space; or
  • $A_2(G)$ outputs an approximation of the $n$-step random walk matrix of $G$ in non-deterministic logspace.

As one application, we show surprisingly tight win-win results for space-bounded complexity. For example, for certain parameter regimes, either Savitch’s theorem can be non-trivially sped up, or randomized space can be almost completely derandomized. We also apply our techniques to significantly weaken the assumptions required to derandomize space-bounded computation, and to make non-deterministic space-bounded computation unambiguous. Specifically, we deduce such conclusions from lower bounds against uniform circuits of polynomial size, which is an exponential improvement on the hardness required in previous works (Doron–Pyne–Tell STOC 2024, Li–Pyne–Tell FOCS 2024). We further show similar results for minimal-memory derandomization (Doron–Tell CCC 2024). To prove these results, we substantially improve the array of technical tools introduced in recent years for studying hardness vs. randomness for bounded-space computation. In particular, we develop derandomized distinguish-to-predict transformations for new types of distinguishers (corresponding to compositions of PRGs with weak distinguishers), we construct a derandomized logspace reconstruction procedure for the Shaltiel–Umans generator (J. ACM 2005) that can compress hard truth tables to polylogarithmic size, and we design a version of the Chen–Tell generator (FOCS 2021) that is particularly suitable for the space-bounded setting.
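
As a condensed schematic of the dichotomy above (not the paper's formal theorem), write $\mathrm{TC}(G)$ for the transitive closure of $G$ and $W_G$ for its random-walk transition matrix; then for every $n$-vertex digraph $G$:

\[
\Bigl[\,A_1(G) = \mathrm{TC}(G)\ \text{in poly}(n)\ \text{time and polylog}(n)\ \text{space}\,\Bigr] \;\vee\; \Bigl[\,A_2(G) \approx W_G^{\,n}\ \text{in non-deterministic logspace}\,\Bigr].
\]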

FOCS 2024 · Conference Paper

Distinguishing, Predicting, and Certifying: On the Long Reach of Partial Notions of Pseudorandomness

  • Jiatu Li
  • Edward Pyne
  • Roei Tell

This paper revisits the study of two classical technical tools in theoretical computer science: Yao's transformation of distinguishers to next-bit predictors (FOCS 1982), and the “reconstruction paradigm” in pseudorandomness (e.g., as in Nisan and Wigderson, JCSS 1994). Recent works of Pyne, Raz, and Zhan (FOCS 2023) and Doron, Pyne, and Tell (STOC 2024) showed that both of these tools can be derandomized in the specific context of read-once branching programs (ROBPs), but left open the question of derandomizing them in more general settings. Our main contributions give appealing evidence that derandomization of the two tools is possible in general settings, show surprisingly strong consequences of such derandomization, and reveal several new settings where such derandomization is unconditionally possible for algorithms stronger than ROBPs (with useful consequences). Specifically:

  • We show that derandomizing these tools is equivalent to general derandomization. Specifically, derandomizing distinguish-to-predict transformations is equivalent to prBPP = prP, and the existence of derandomized reconstruction procedures (in a more general sense that we introduce) is equivalent to prBPP = prZPP. These statements hold even when scaled down to weak circuit classes and to algorithms that run in super-polynomial time.
  • Our main technical contributions are unconditional constructions of derandomized versions of Yao's transformation (or reductions of this task to other problems) for classes and for algorithms beyond ROBPs. Consequently, we deduce new results: a significant relaxation of the hypotheses required to derandomize the isolation lemma for logspace algorithms and deduce that NL = UL; and proofs that derandomization necessitates targeted PRGs in catalytic logspace (unconditionally) and in logspace (conditionally).

In addition, we introduce a natural subclass of prZPP that has been implicitly studied in recent works (Korten FOCS 2021, CCC 2022): the class of problems reducible to a problem called “Lossy Code”. We provide a structural characterization for this class in terms of derandomized reconstruction procedures, and show that this characterization is robust to several natural variations. Lastly, we present alternative proofs for classical results in the theory of pseudorandomness (such as two-sided derandomization reducing to one-sided), relying on the notion of deterministically transforming distinguishers to predictors as the main technical tool.
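
For reference, the classical randomized form of Yao's transformation being derandomized here (a standard statement, not quoted from the paper): if a distinguisher $D$ has advantage $\bigl|\Pr[D(X)=1] - \Pr[D(U_n)=1]\bigr| \ge \varepsilon$ against a distribution $X$ over $\{0,1\}^n$, then a hybrid argument yields an index $i \in [n]$ and a next-bit predictor $P$, built by running $D$ on the known prefix padded with random bits, such that

\[
\Pr_{x \sim X}\bigl[\,P(x_1, \dots, x_{i-1}) = x_i\,\bigr] \;\ge\; \frac{1}{2} + \frac{\varepsilon}{n}.
\]

The randomness enters in choosing $i$ and the padding; the derandomization question is whether these choices can be made deterministically.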

STOC 2023 · Conference Paper

Depth-d Threshold Circuits vs. Depth-(d+1) AND-OR Trees

  • Pooya Hatami
  • William M. Hoza
  • Avishay Tal
  • Roei Tell

For any $n \in \mathbb{N}$ and $d = o(\log\log n)$, we prove that there is a Boolean function $F$ on $n$ bits and a value $\gamma = 2^{-\Theta(d)}$ such that $F$ can be computed by a uniform depth-$(d+1)$ $AC^0$ circuit with $O(n)$ wires, but $F$ cannot be computed by any depth-$d$ $TC^0$ circuit with $n^{1+\gamma}$ wires. This bound matches the current state-of-the-art lower bounds for computing explicit functions by threshold circuits of depth $d > 2$, which were previously known only for functions outside $AC^0$, such as the parity function. Furthermore, in our result, the $AC^0$ circuit computing $F$ is a monotone *read-once formula* (i.e., an AND-OR tree), and the lower bound holds even in the average-case setting, with respect to advantage $n^{-\gamma}$. At a high level, our proof strategy combines two prominent approaches in circuit complexity from the last decade: the celebrated *random projections* method of Håstad, Rossman, Servedio, and Tan (J. ACM 2017), which was previously used to show a tight average-case depth hierarchy for $AC^0$; and the line of works analyzing the effect of *random restrictions* on threshold circuits. We show that under a modified version of Håstad, Rossman, Servedio, and Tan’s projection procedure, any depth-$d$ threshold circuit with $n^{1+\gamma}$ wires simplifies to a near-trivial function, whereas an appropriately parameterized AND-OR tree of depth $d+1$ maintains structure.
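
Concretely, the average-case form of the separation reads as follows (my restatement of “advantage $n^{-\gamma}$”, under the usual convention for agreement with a Boolean function): for every depth-$d$ $TC^0$ circuit $C$ with at most $n^{1+\gamma}$ wires,

\[
\Pr_{x \sim \{0,1\}^n}\bigl[\,C(x) = F(x)\,\bigr] \;\le\; \frac{1}{2} + n^{-\gamma}, \qquad \gamma = 2^{-\Theta(d)}.
\]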

FOCS 2023 · Conference Paper

Derandomization vs Refutation: A Unified Framework for Characterizing Derandomization

  • Lijie Chen 0001
  • Roei Tell
  • R. Ryan Williams

We establish an equivalence between two algorithmic tasks: derandomization, the deterministic simulation of probabilistic algorithms; and refutation, the deterministic construction of inputs on which a given probabilistic algorithm fails to compute a certain hard function. We prove that refuting low-space probabilistic streaming algorithms which attempt to compute functions $f \in \mathcal{FP}$ is equivalent to proving that $\mathrm{pr}\mathcal{BPP} = \mathrm{pr}\mathcal{P}$, even in cases where a lower bound for $f$ against such streaming algorithms (without a refuter) is already unconditionally known. We also demonstrate the generality of our connection between refutation and derandomization by establishing connections between refuting classes of constant-depth circuits of sublinear size and derandomizing constant-depth circuits of polynomial size with threshold gates (i.e., $\mathcal{TC}^0$). Our connection generalizes and strengthens recent work on the characterization of derandomization. In particular, the refuter framework allows us to directly compare several recent works to each other and to our work, as well as to chart a path for further progress. Along the way, we also improve the targeted hitting-set generator of Chen and Tell (FOCS 2021), showing that its translation of hardness to pseudorandomness scales down to $\mathcal{TC}^0$.
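
As a loose formalization of the refutation task (my paraphrase; the paper's definition carries additional parameters): a refuter for a probabilistic algorithm $A$ that attempts to compute $f \in \mathcal{FP}$ is a deterministic algorithm $R$ such that, for every sufficiently large $n$,

\[
x = R(1^n) \in \{0,1\}^n \quad \text{satisfies} \quad \Pr\bigl[\,A(x) = f(x)\,\bigr] < \frac{2}{3},
\]

i.e., $R$ efficiently constructs a concrete input witnessing the failure of $A$.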

FOCS 2022 · Conference Paper

Unstructured Hardness to Average-Case Randomness

  • Lijie Chen 0001
  • Ron D. Rothblum
  • Roei Tell

The leading technical approach in uniform hardness-to-randomness in the last two decades faced several well-known barriers that caused results to rely on overly strong hardness assumptions, and yet still yield suboptimal conclusions. In this work we show uniform hardness-to-randomness results that simultaneously break through all of the known barriers. Specifically, consider any one of the following three assumptions:

  1) For some $\epsilon > 0$ there exists a function $f$ computable by uniform circuits of size $2^{O(n)}$ and depth $2^{o(n)}$ such that $f$ is hard for probabilistic time $2^{\epsilon n}$.
  2) For every $c \in \mathbb{N}$ there exists a function $f$ computable by logspace-uniform circuits of polynomial size and depth $n^2$ such that every probabilistic algorithm running in time $n^c$ fails to compute $f$ on a $(1/n)$-fraction of the inputs.
  3) For every $c \in \mathbb{N}$ there exists a logspace-uniform family of arithmetic formulas of degree $n^2$ over a field of size $\mathrm{poly}(n)$ such that no algorithm running in probabilistic time $n^c$ can evaluate the family on a worst-case input.

Assuming any of these hypotheses, where the hardness is for every sufficiently large input length $n \in \mathbb{N}$, we deduce that $\mathcal{RP}$ can be derandomized in polynomial time and on all input lengths, on average. Furthermore, under the first assumption we also show that $\mathcal{BPP}$ can be derandomized in polynomial time, on average and on all input lengths, with logarithmically many advice bits. On the way to these results we also resolve two related open problems. First, we obtain an optimal worst-case to average-case reduction for computing problems in linear space by uniform probabilistic algorithms; this result builds on a new instance checker based on the doubly efficient proof system of Goldwasser, Kalai, and Rothblum (J. ACM 2015). Second, we resolve the main open problem in the work of Carmosino, Impagliazzo, and Sabin (ICALP 2018), by deducing derandomization from weak and general fine-grained hardness hypotheses. The full version of this paper is available online [5].
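
Spelled out, one standard reading of the failure condition in assumption 2) (the probability taken over both the input and the algorithm's coins): for every probabilistic algorithm $A$ running in time $n^c$ and every sufficiently large $n$,

\[
\Pr_{x \sim \{0,1\}^n,\; \text{coins of } A}\bigl[\,A(x) \ne f(x)\,\bigr] \;\ge\; \frac{1}{n}.
\]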

FOCS 2021 · Conference Paper

Fooling Constant-Depth Threshold Circuits (Extended Abstract)

  • Pooya Hatami
  • William M. Hoza
  • Avishay Tal
  • Roei Tell

We present new constructions of pseudorandom generators (PRGs) for two of the most widely studied non-uniform circuit classes in complexity theory. Our main result is a construction of the first non-trivial PRG for linear threshold (LTF) circuits of arbitrary constant depth and super-linear size. This PRG fools circuits with depth $d\in\mathbb{N}$ and $n^{1+\delta}$ wires, where $\delta=2^{-O(d)}$, using seed length $O(n^{1-\delta})$ and with error $2^{-n^{\delta}}$. This tightly matches the best known lower bounds for this circuit class. As a consequence of our result, all the known hardness for LTF circuits has now effectively been translated into pseudorandomness. This brings the extensive effort in the last decade to construct PRGs and deterministic circuit-analysis algorithms for this class to the point where any subsequent improvement would yield breakthrough lower bounds. Our second contribution is a PRG for De Morgan formulas of size $s$ whose seed length is $s^{1/3+o(1)}\cdot\text{polylog}(1/\epsilon)$ for error $\epsilon$. In particular, our PRG can fool formulas of sub-cubic size $s=n^{3-\Omega(1)}$ with an exponentially small error $\epsilon=\exp(-n^{\Omega(1)})$. This significantly improves the inverse-polynomial error of the previous state-of-the-art for such formulas by Impagliazzo, Meka, and Zuckerman (FOCS 2012, JACM 2019), and again tightly matches the best currently-known lower bounds for this class. In both settings, a key ingredient in our constructions is a pseudorandom restriction procedure that has tiny failure probability, but simplifies the function to a non-natural “hybrid computational model” that combines several computational models. As part of our proofs we also construct “extremely low-error” PRGs for related circuit classes; for example, we construct a PRG for arbitrary functions of $s$ LTFs that can handle even the extreme setting of parameters $s=n/\text{polylog}(n)$ and $\epsilon=2^{-n/\text{polylog}(n)}$.
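
For concreteness, the main PRG guarantee in standard notation (a restatement of the stated parameters under the usual definition of $\epsilon$-fooling): there is an explicit $G \colon \{0,1\}^{O(n^{1-\delta})} \to \{0,1\}^n$ with $\delta = 2^{-O(d)}$ such that every depth-$d$ LTF circuit $C$ with at most $n^{1+\delta}$ wires satisfies

\[
\Bigl|\,\Pr_{s}\bigl[C(G(s)) = 1\bigr] - \Pr_{x \sim U_n}\bigl[C(x) = 1\bigr]\,\Bigr| \;\le\; 2^{-n^{\delta}}.
\]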

FOCS 2021 · Conference Paper

Hardness vs Randomness, Revised: Uniform, Non-Black-Box, and Instance-Wise

  • Lijie Chen 0001
  • Roei Tell

We propose a new approach to the hardness-to-randomness framework and to the $\text{promise-}\mathcal{BPP} = \text{promise-}\mathcal{P}$ conjecture. Classical results rely on non-uniform hardness assumptions to construct derandomization algorithms that work in the worst case, or rely on uniform hardness assumptions to construct derandomization algorithms that work only in the average case. In both types of results, the derandomization algorithm is “black-box” and uses the standard PRG approach. In this work we present results that closely relate new and natural uniform hardness assumptions to worst-case derandomization of $\text{promise-}\mathcal{BPP}$, where the algorithms underlying the latter derandomization are non-black-box. In our main result, we show that $\text{promise-}\mathcal{BPP} = \text{promise-}\mathcal{P}$ if the following holds: there exists a multi-output function $f$ computable by logspace-uniform circuits of polynomial size and depth $n^2$ that cannot be computed by uniform probabilistic algorithms in time $n^c$, for some universal constant $c > 1$, on almost all inputs. The required failure on “almost all inputs” is stronger than the standard requirement of failing on one input of each length; however, the same assumption without the depth restriction on $f$ is necessary for the conclusion. This suggests a potential equivalence between worst-case derandomization of $\text{promise-}\mathcal{BPP}$ of any form (i.e., not necessarily by a black-box algorithm) and the existence of efficiently computable functions that are hard for probabilistic algorithms on almost all inputs. In our second result, we introduce a new and uniform hardness-to-randomness tradeoff for the setting of superfast average-case derandomization; prior to this work, superfast average-case derandomization was known only under non-uniform hardness assumptions. In an extreme instantiation of our new tradeoff, under appealing uniform hardness assumptions, we show that for every polynomial $T(n)$ and constant $\epsilon > 0$ it holds that $\mathcal{BPTIME}[T] \subseteq \mathrm{heur}\text{-}\mathcal{DTIME}[T \cdot n^{\epsilon}]$, where the “heur” prefix means that no polynomial-time algorithm can find, with non-negligible probability, an input on which the deterministic simulation errs. Technically, our approach is to design targeted PRGs and HSGs, as introduced by Goldreich (LNCS, 2011). The targeted PRGs/HSGs “produce randomness from the input”, as suggested by Goldreich and Wigderson (RANDOM 2002); and their analysis relies on non-black-box versions of the reconstruction procedure of Impagliazzo and Wigderson (FOCS 1998). Our main reconstruction procedure crucially relies on the ideas underlying the proof system of Goldwasser, Kalai, and Rothblum (J. ACM 2015).
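
For orientation, the notion of a targeted PRG in the spirit of Goldreich's definition (my paraphrase, with $\varepsilon$ a constant error parameter): a deterministic generator $G$ that may use the actual input $x$ to produce pseudorandomness, fooling distinguishers that also receive $x$:

\[
\forall x \in \{0,1\}^n\ \ \forall \text{ circuits } D \text{ of size } m: \quad \Bigl|\,\Pr_{s}\bigl[D\bigl(x, G(x,s)\bigr) = 1\bigr] - \Pr_{r \sim U_m}\bigl[D(x,r) = 1\bigr]\,\Bigr| \;\le\; \varepsilon.
\]

This is the sense in which the generator “produces randomness from the input”.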

STOC 2021 · Conference Paper

Simple and fast derandomization from very hard functions: eliminating randomness at almost no cost

  • Lijie Chen 0001
  • Roei Tell

Extending the classical “hardness-to-randomness” line of work, Doron, Moshkovitz, Oh, and Zuckerman (FOCS 2020) recently proved that derandomization with near-quadratic time overhead is possible, under the assumption that there exists a function in $\mathrm{DTIME}[2^n]$ that cannot be computed by randomized SVN circuits of size $2^{(1-\epsilon)\cdot n}$ for a small $\epsilon$. In this work we extend their inquiry and answer several open questions that arose from their work. For a time function $T(n)$, consider the following assumption: non-uniformly secure one-way functions exist, and for $\delta = \delta(\epsilon)$ and $k = k_T(\epsilon)$ there exists a problem in $\mathrm{DTIME}[2^{k \cdot n}]$ that is hard for algorithms that run in time $2^{(k-\delta)\cdot n}$ and use $2^{(1-\delta)\cdot n}$ bits of advice. Under this assumption, we show that:

  1. (Worst-case derandomization.) Probabilistic algorithms that run in time $T(n)$ can be deterministically simulated in time $n \cdot T(n)^{1+\epsilon}$.
  2. (Average-case derandomization.) For polynomial time functions $T(n) = \mathrm{poly}(n)$, we can improve the derandomization time to $n^{\epsilon} \cdot T(n)$ if we allow the derandomization to succeed only on average, rather than in the worst case.
  3. (Conditional optimality.) For worst-case derandomization, the multiplicative time overhead of $n$ is essentially optimal, conditioned on a counting version of the non-deterministic strong exponential-time hypothesis (i.e., on #NSETH).

Lastly, we present an alternative proof for the result of Doron, Moshkovitz, Oh, and Zuckerman that is simpler and more versatile. In fact, we show how to simplify the analysis not only of their construction, but of any construction that “extracts randomness from a pseudoentropic string”.
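
To make the overhead concrete (simple arithmetic on the stated bounds, with the illustrative choice $T(n) = n^2$): the worst-case simulation of item 1 runs in time

\[
n \cdot T(n)^{1+\epsilon} \;=\; n \cdot n^{2(1+\epsilon)} \;=\; n^{3+2\epsilon},
\]

compared with roughly $T(n)^2 = n^4$ under the near-quadratic overhead of Doron, Moshkovitz, Oh, and Zuckerman, and with $n^{\epsilon} \cdot T(n) = n^{2+\epsilon}$ in the average-case setting of item 2.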

FOCS 2020 · Conference Paper

On Exponential-Time Hypotheses, Derandomization, and Circuit Lower Bounds: Extended Abstract

  • Lijie Chen 0001
  • Ron D. Rothblum
  • Roei Tell
  • Eylon Yogev

The Exponential-Time Hypothesis (ETH) is a strengthening of the $P \neq NP$ conjecture, stating that 3-SAT on $n$ variables cannot be solved in (uniform) time $2^{\epsilon \cdot n}$, for some $\epsilon > 0$. In recent years, analogous hypotheses that are “exponentially-strong” forms of other classical complexity conjectures (such as $NP \not\subseteq BPP$ or $coNP \not\subseteq NP$) have also been introduced, and have become widely influential. In this work, we focus on the interaction of exponential-time hypotheses with the fundamental and closely related questions of derandomization and circuit lower bounds. We show that even relatively mild variants of exponential-time hypotheses have far-reaching implications for derandomization, circuit lower bounds, and the connections between the two. Specifically, we prove that:

  1) The Randomized Exponential-Time Hypothesis (rETH) implies that $BPP$ can be simulated in the “average-case” sense in deterministic (nearly-)polynomial time (i.e., in time $2^{\tilde{O}(\log n)} = n^{(\log\log n)^{O(1)}}$). The derandomization relies on a conditional construction of a pseudorandom generator with near-exponential stretch (i.e., with seed length $\tilde{O}(\log n)$); this significantly improves the state of the art in uniform “hardness-to-randomness” results, which previously only yielded pseudorandom generators with sub-exponential stretch from such hypotheses.
  2) The Non-Deterministic Exponential-Time Hypothesis (NETH) implies that derandomization of $BPP$ is completely equivalent to circuit lower bounds against $E$, and in particular that pseudorandom generators are necessary for derandomization. In fact, we show that the foregoing equivalence follows from a very weak version of NETH, and we also show that this very weak version is necessary to prove a slightly stronger conclusion that we deduce from it.

Lastly, we show that disproving certain exponential-time hypotheses requires proving breakthrough circuit lower bounds. In particular, if CircuitSAT for circuits over $n$ bits of size $\mathrm{poly}(n)$ can be solved by probabilistic algorithms in time $2^{n/\mathrm{polylog}(n)}$, then $BPE$ does not have circuits of quasilinear size.
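
The arithmetic behind the simulation time in item 1) (an unpacking of the stated bound, writing the seed length as $\ell(n) = \tilde{O}(\log n) = \log n \cdot (\log\log n)^{O(1)}$): enumerating all seeds of the pseudorandom generator costs

\[
2^{\ell(n)} \;=\; 2^{\log n \cdot (\log\log n)^{O(1)}} \;=\; n^{(\log\log n)^{O(1)}},
\]

which is exactly the stated nearly-polynomial running time.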

STOC 2019 · Conference Paper

Bootstrapping results for threshold circuits "just beyond" known lower bounds

  • Lijie Chen 0001
  • Roei Tell

The best known lower bounds for the circuit class $TC^0$ are only slightly super-linear. Similarly, the best known algorithm for derandomization of this class is an algorithm for quantified derandomization (i.e., a weak type of derandomization) of circuits of slightly super-linear size. In this paper we show that even very mild quantitative improvements of either of the two foregoing results would already imply super-polynomial lower bounds for $TC^0$. Specifically:

(1) If for every $c > 1$ and sufficiently large $d \in \mathbb{N}$ it holds that $n$-bit $TC^0$ circuits of depth $d$ require $n^{1+c^{-d}}$ wires to compute certain $NC^1$-complete functions, then $TC^0 \neq NC^1$. In fact, even lower bounds for $TC^0$ circuits of size $n^{1+c^{-d}}$ against these functions when $c > 1$ is fixed and sufficiently small would yield lower bounds for polynomial-sized circuits. Lower bounds of the form $n^{1+c^{-d}}$ against these functions are already known, but for a fixed $c \approx 2.41$ that is too large to yield new lower bounds via our results.
(2) If there exists a deterministic algorithm that gets as input an $n$-bit $TC^0$ circuit of depth $d$ and $n^{1+(1.61)^{-d}}$ wires, runs in time $2^{n^{o(1)}}$, and distinguishes circuits that accept at most $B(n) = 2^{n^{1-(1.61)^{-d}}}$ inputs from circuits that reject at most $B(n)$ inputs, then $NEXP \not\subseteq TC^0$. An algorithm for this “quantified derandomization” task is already known, but it works only when the number of wires is $n^{1+c^{-d}}$, for $c > 30$, and with a smaller $B(n) \approx 2^{n^{1-(30/c)^d}}$.

Intuitively, the “takeaway” message from our work is that the gap between currently known results and results that would suffice to get super-polynomial lower bounds for $TC^0$ boils down to the precise constant $c > 1$ in the bound $n^{1+c^{-d}}$ on the number of wires. Our results improve previous results of Allender and Koucký (2010) and of the second author (2018), respectively, whose hypotheses referred to circuits with $n^{1+c/d}$ wires (rather than $n^{1+c^{-d}}$ wires). We also prove results similar to the two results above for other circuit classes (i.e., $ACC^0$ and $CC^0$).
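
To illustrate the gap numerically (my arithmetic on the constants stated above, at the illustrative depth $d = 5$): the known lower bound with $c \approx 2.41$ addresses circuits with roughly

\[
n^{1+2.41^{-5}} \approx n^{1.012} \ \text{wires}, \qquad \text{versus} \qquad n^{1+1.61^{-5}} \approx n^{1.092} \ \text{wires}
\]

in the hypothesis of item (2); the entire gap is a small constant shift in the base $c$ of the exponent.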

STOC 2018 · Conference Paper

Quantified derandomization of linear threshold circuits

  • Roei Tell

One of the prominent current challenges in complexity theory is the attempt to prove lower bounds for $TC^0$, the class of constant-depth, polynomial-size circuits with majority gates. Relying on the results of Williams (2013), an appealing approach to prove such lower bounds is to construct a non-trivial derandomization algorithm for $TC^0$. In this work we take a first step towards the latter goal, by proving the first positive results regarding the derandomization of $TC^0$ circuits of depth $d > 2$. Our first main result is a quantified derandomization algorithm for $TC^0$ circuits with a super-linear number of wires. Specifically, we construct an algorithm that gets as input a $TC^0$ circuit $C$ over $n$ input bits with depth $d$ and $n^{1+\exp(-d)}$ wires, runs in almost-polynomial time, and distinguishes between the case that $C$ rejects at most $2^{n^{1-1/5^d}}$ inputs and the case that $C$ accepts at most $2^{n^{1-1/5^d}}$ inputs. In fact, our algorithm works even when the circuit $C$ is a linear threshold circuit, rather than just a $TC^0$ circuit (i.e., $C$ is a circuit with linear threshold gates, which are stronger than majority gates). Our second main result is that even a modest improvement of our quantified derandomization algorithm would yield a non-trivial algorithm for standard derandomization of all of $TC^0$, and would consequently imply that $NEXP \not\subseteq TC^0$. Specifically, if there exists a quantified derandomization algorithm that gets as input a $TC^0$ circuit with depth $d$ and $n^{1+O(1/d)}$ wires (rather than $n^{1+\exp(-d)}$ wires), runs in time at most $2^{n^{\exp(-d)}}$, and distinguishes between the case that $C$ rejects at most $2^{n^{1-1/5^d}}$ inputs and the case that $C$ accepts at most $2^{n^{1-1/5^d}}$ inputs, then there exists an algorithm with running time $2^{n^{1-\Omega(1)}}$ for standard derandomization of $TC^0$.
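
For reference, the underlying notion (the standard definition of quantified derandomization, paraphrased from the Goldreich–Wigderson framework rather than quoted from this paper): a quantified derandomization algorithm for a circuit class $\mathcal{C}$ with threshold $B(n)$ is a deterministic algorithm that, given an $n$-bit circuit $C \in \mathcal{C}$, satisfies

\[
\bigl|\{x : C(x) = 0\}\bigr| \le B(n) \;\Longrightarrow\; \text{output } 1, \qquad \bigl|\{x : C(x) = 1\}\bigr| \le B(n) \;\Longrightarrow\; \text{output } 0,
\]

with arbitrary output otherwise. Standard derandomization corresponds to the much larger threshold $B(n) = 2^n/3$, i.e., to promise circuits that either accept or reject at least a $2/3$ fraction of their inputs.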