Arrow Research search

Author name cluster

Eylon Yogev

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

9 papers
1 author row

Possible papers (9)

FOCS 2023 · Conference Paper

IOPs with Inverse Polynomial Soundness Error

  • Gal Arnon
  • Alessandro Chiesa
  • Eylon Yogev

We show that every language in NP has an Interactive Oracle Proof (IOP) with inverse polynomial soundness error and small query complexity. This achieves parameters that surpass all previously known PCPs and IOPs. Specifically, we construct an IOP with perfect completeness, soundness error $1/n$, round complexity $O(\log \log n)$, proof length $\operatorname{poly}(n)$ over an alphabet of size $O(n)$, and query complexity $O(\log \log n)$. This is a step forward in the quest to establish the sliding-scale conjecture for IOPs (which would additionally require query complexity $O(1)$). Our main technical contribution is a high-soundness small-query proximity test for the Reed-Solomon code. We construct an IOP of proximity for Reed-Solomon codes, over a field $\mathbb{F}$ with evaluation domain $L$ and degree $d$, with perfect completeness, soundness error (roughly) $\max\{1-\delta, O(\rho^{1/4})\}$ for $\delta$-far functions, round complexity $O(\log \log d)$, proof length $O(|L|/\rho)$ over $\mathbb{F}$, and query complexity $O(\log \log d)$; here $\rho = (d+1)/|L|$ is the code rate. En route, we obtain a new high-soundness proximity test for bivariate Reed-Muller codes. The IOP for NP is then obtained via a high-soundness reduction from NP to Reed-Solomon proximity testing with rate $\rho = 1/\operatorname{poly}(n)$ and distance $\delta = 1 - 1/\operatorname{poly}(n)$ (and applying our proximity test). Our constructions are direct and efficient, and hold the potential for practical realizations that would improve the state of the art in real-world applications of IOPs.
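
For quick reference, the Reed-Solomon proximity-test parameters stated in the abstract can be collected in one display (this is only a restatement in the abstract's notation, not an additional claim):

```latex
% IOPP for Reed-Solomon codes over F, evaluation domain L, degree d,
% rate rho = (d+1)/|L|, as stated in the abstract above.
\begin{aligned}
\text{soundness error} &\approx \max\{\,1-\delta,\; O(\rho^{1/4})\,\}
    \quad \text{for } \delta\text{-far functions},\\
\text{rounds, queries} &= O(\log\log d),\\
\text{proof length} &= O(|L|/\rho) \text{ elements of } \mathbb{F}.
\end{aligned}
```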

STOC 2021 · Conference Paper

Adversarial laws of large numbers and optimal regret in online classification

  • Noga Alon
  • Omri Ben-Eliezer
  • Yuval Dagan
  • Shay Moran
  • Moni Naor
  • Eylon Yogev

Laws of large numbers guarantee that given a large enough sample from some population, the measure of any fixed sub-population is well-estimated by its frequency in the sample. We study laws of large numbers in sampling processes that interact with, and can affect, the environment they act upon. Specifically, we consider the sequential sampling model proposed by Ben-Eliezer and Yogev (2020), and characterize the classes which admit a uniform law of large numbers in this model: these are exactly the classes that are online learnable. Our characterization may be interpreted as an online analogue to the equivalence between learnability and uniform convergence in statistical (PAC) learning. The sample-complexity bounds we obtain are tight for many parameter regimes, and as an application, we determine the optimal regret bounds in online learning, stated in terms of Littlestone's dimension, thus resolving the main open question from Ben-David, Pál, and Shalev-Shwartz (2009), which was also posed by Rakhlin, Sridharan, and Tewari (2015).
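
To make the sampling model concrete, here is a minimal Python sketch of sequential Bernoulli sampling against an adaptive adversary, in the spirit of the Ben-Eliezer and Yogev (2020) model; the `adversary` interface and the `empirical_gap` helper are illustrative assumptions, not the paper's formalism.

```python
import random

def adversarial_bernoulli_sampling(adversary, n, p):
    """Sequential sampling sketch: at each step the adversary sees the
    history (including the sample kept so far) and picks the next item;
    the sampler keeps it independently with probability p."""
    stream, sample = [], []
    for _ in range(n):
        item = adversary(stream, sample)  # adaptive, hypothetical interface
        stream.append(item)
        if random.random() < p:
            sample.append(item)
    return stream, sample

def empirical_gap(stream, sample, subset):
    """Gap between a sub-population's frequency in the stream and in the
    sample; a uniform law of large numbers asks this gap to be small for
    every set in the class simultaneously."""
    f_stream = sum(x in subset for x in stream) / max(len(stream), 1)
    f_sample = sum(x in subset for x in sample) / max(len(sample), 1)
    return abs(f_stream - f_sample)
```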

FOCS 2020 · Conference Paper

On Exponential-Time Hypotheses, Derandomization, and Circuit Lower Bounds: Extended Abstract

  • Lijie Chen 0001
  • Ron D. Rothblum
  • Roei Tell
  • Eylon Yogev

The Exponential-Time Hypothesis (ETH) is a strengthening of the P ≠ NP conjecture, stating that 3-SAT on $n$ variables cannot be solved in (uniform) time $2^{\varepsilon \cdot n}$, for some constant $\varepsilon > 0$. In recent years, analogous hypotheses that are “exponentially-strong” forms of other classical complexity conjectures (such as NP ⊄ BPP or coNP ⊄ NP) have also been introduced, and have become widely influential. In this work, we focus on the interaction of exponential-time hypotheses with the fundamental and closely related questions of derandomization and circuit lower bounds. We show that even relatively mild variants of exponential-time hypotheses have far-reaching implications for derandomization, circuit lower bounds, and the connections between the two. Specifically, we prove that: 1) The Randomized Exponential-Time Hypothesis (rETH) implies that BPP can be simulated in the “average-case” in deterministic (nearly-)polynomial time (i.e., in time $2^{\tilde{O}(\log n)} = n^{(\log\log n)^{O(1)}}$). The derandomization relies on a conditional construction of a pseudorandom generator with near-exponential stretch (i.e., with seed length $\tilde{O}(\log n)$); this significantly improves the state of the art in uniform “hardness-to-randomness” results, which previously only yielded pseudorandom generators with sub-exponential stretch from such hypotheses. 2) The Non-Deterministic Exponential-Time Hypothesis (NETH) implies that derandomization of BPP is completely equivalent to circuit lower bounds against E, and in particular that pseudorandom generators are necessary for derandomization. In fact, we show that the foregoing equivalence follows from a very weak version of NETH, and we also show that this very weak version is necessary to prove a slightly stronger conclusion that we deduce from it. Lastly, we show that disproving certain exponential-time hypotheses requires proving breakthrough circuit lower bounds. In particular, if CircuitSAT for circuits over $n$ bits of size $\operatorname{poly}(n)$ can be solved by probabilistic algorithms in time $2^{n/\operatorname{polylog}(n)}$, then BPE does not have circuits of quasilinear size.
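
In symbols, result (1) above reads roughly as follows (a paraphrase of the abstract; avg-DTIME here informally denotes average-case deterministic simulation):

```latex
% rETH: there is a constant eps > 0 such that probabilistic algorithms
% cannot solve 3-SAT on n variables in time 2^{eps n}.
\mathrm{rETH} \;\Longrightarrow\;
  \mathrm{BPP} \subseteq \text{avg-}\mathrm{DTIME}\!\big[2^{\tilde{O}(\log n)}\big],
\qquad 2^{\tilde{O}(\log n)} = n^{(\log\log n)^{O(1)}}.
```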

SODA 2020 · Conference Paper

The Power of Distributed Verifiers in Interactive Proofs

  • Moni Naor
  • Merav Parter
  • Eylon Yogev

We explore the power of interactive proofs with a distributed verifier. In this setting, the verifier consists of n nodes and a graph G that defines their communication pattern. The prover is a single entity that communicates with all nodes by short messages. The goal is to verify that the graph G belongs to some language in a small number of rounds, and with small communication bound, i.e., the proof size. This interactive model was introduced by Kol, Oshman and Saxena (PODC 2018) as a generalization of non-interactive distributed proofs. They demonstrated the power of interaction in this setting by constructing protocols for problems such as Graph Symmetry and Graph Non-Isomorphism, both of which require proofs of Ω(n²) bits without interaction. In this work, we provide a new general framework for distributed interactive proofs that allows one to translate standard interactive protocols (i.e., with a centralized verifier) to ones where the verifier is distributed, with a proof size that depends on the computational complexity of the verification algorithm run by the centralized verifier. We show the following: Every (centralized) computation performed in time O(n) on a RAM can be translated into a three-round distributed interactive protocol with O(log n) proof size. This implies that many graph problems for sparse graphs have succinct proofs (e.g., testing planarity). Every (centralized) computation implemented in either small space or by a uniform NC circuit can be translated into a distributed protocol with O(1) rounds and O(log n) proof size in the small-space case, and polylog(n) rounds and proof size for NC (see the summary after this paragraph). We show that for Graph Non-Isomorphism, one of the striking demonstrations of the power of interaction, there is a 4-round protocol with O(log n) proof size, improving upon the O(n log n) proof size of Kol et al. For many problems, we show how to reduce the proof size below the seemingly natural barrier of log n. By employing our RAM compiler, we get a 5-round protocol with proof size O(log log n) for a family of problems including Fixed Automorphism, Clique and Leader Election (for the latter two problems we actually get O(1) proof size). Finally, we discuss how to make these proofs non-interactive arguments via random oracles. Our compilers capture many natural problems and demonstrate the difficulty of showing lower bounds in these regimes.
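
The compiler trade-offs listed above can be summarized in one display (a restatement of the abstract's parameters):

```latex
% Centralized verifier class -> distributed interactive protocol (n nodes):
\begin{aligned}
\text{RAM time } O(n) &\;\longrightarrow\; 3 \text{ rounds},\ O(\log n)\text{ proof size},\\
\text{small space} &\;\longrightarrow\; O(1) \text{ rounds},\ O(\log n)\text{ proof size},\\
\text{uniform } \mathrm{NC} &\;\longrightarrow\; \operatorname{polylog}(n)\text{ rounds and proof size}.
\end{aligned}
```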

SODA 2019 · Conference Paper

Low Congestion Cycle Covers and Their Applications

  • Merav Parter
  • Eylon Yogev

A cycle cover of a bridgeless graph G is a collection of simple cycles in G such that each edge e appears on at least one cycle. The common objective in cycle cover computation is to minimize the total length of all cycles. Motivated by applications to distributed computation, we introduce the notion of low-congestion cycle covers, in which all cycles in the collection are both short and nearly edge-disjoint. Formally, a (d, c)-cycle cover of a graph G is a collection of cycles in G in which each cycle is of length at most d and each edge participates in at least one cycle and at most c cycles. A priori, it is not clear that cycle covers that enjoy both a small overlap and a short cycle length even exist, nor whether it is possible to find them efficiently. Perhaps quite surprisingly, we prove the following: Every bridgeless graph of diameter D admits a (d, c)-cycle cover where d = Õ(D) and c = Õ(1). That is, the edges of G can be covered by cycles such that each cycle is of length at most Õ(D) and each edge participates in at most Õ(1) cycles. These parameters are existentially tight up to polylogarithmic terms. Furthermore, we show how to extend our result to achieve universally optimal cycle covers. Let C_e be the shortest cycle that covers e, and let OPT(G) = max_{e∈G} |C_e|. We show that every bridgeless graph admits a (d, c)-cycle cover where d = Õ(OPT(G)) and c = Õ(1). We demonstrate the usefulness of low-congestion cycle covers in different settings of resilient computation. For instance, we consider a Byzantine fault model where in each round, the adversary chooses a single message and corrupts it in an arbitrary manner. We provide a compiler that turns any r-round distributed algorithm for a graph G with diameter D into an equivalent fault-tolerant algorithm with r·poly(D) rounds.
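
As a concrete illustration of the definition, here is a minimal Python checker for the (d, c)-cycle cover property; the graph representation and the function name are illustrative choices, not code from the paper.

```python
from collections import Counter

def is_dc_cycle_cover(edges, cycles, d, c):
    """Check that `cycles` is a (d, c)-cycle cover of the graph with edge
    set `edges`: every cycle has length at most d, and every edge lies on
    at least one and at most c cycles. Each cycle is a vertex list
    [v0, ..., vk-1] traversed cyclically; edges are frozensets {u, v}."""
    load = Counter()
    for cyc in cycles:
        if len(cyc) > d:          # cycle length = number of its edges
            return False
        for i, u in enumerate(cyc):
            e = frozenset((u, cyc[(i + 1) % len(cyc)]))
            if e not in edges:
                return False      # cycle traverses a non-edge
            load[e] += 1
    return all(1 <= load[e] <= c for e in edges)

# Example: a 4-cycle graph is covered by the single cycle itself, a (4, 1)-cover.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
assert is_dc_cycle_cover(edges, [[0, 1, 2, 3]], d=4, c=1)
```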

SODA 2017 · Conference Paper

Hardness of Continuous Local Search: Query Complexity and Cryptographic Lower Bounds

  • Pavel Hubáček
  • Eylon Yogev

Local search has proved to be an extremely useful tool when facing hard optimization problems (e.g., via the simplex algorithm, simulated annealing, or genetic algorithms). Although powerful, it has its limitations: there are functions for which exponentially many queries are needed to find a local optimum. In many contexts the optimization problem is defined by a continuous function, which might offer an advantage when performing the local search. This leads us to study the following natural question: how hard is continuous local search? The computational complexity of such search problems is captured by the complexity class CLS (Daskalakis and Papadimitriou, SODA ’11), which is contained in the intersection of PLS and PPAD, two important subclasses of TFNP (the class of NP search problems with a guaranteed solution). In this work, we show the first hardness results for CLS (the smallest non-trivial class among the currently defined subclasses of TFNP). Our hardness results are in terms of black-box hardness (where only oracle access to the function is given) and white-box hardness (where the function is represented succinctly by a circuit). In the black-box case, we show instances for which any (computationally unbounded) randomized algorithm must perform exponentially many queries in order to find a local optimum. In the white-box case, we show hardness for computationally bounded algorithms under cryptographic assumptions. Our results demonstrate a strong conceptual barrier precluding the design of efficient algorithms for solving local search problems, even over continuous domains. As our main technical contribution we introduce a new total search problem which we call End-of-Metered-Line. The special structure of End-of-Metered-Line enables us to: (1) show that it is contained in CLS, and (2) prove hardness for it both in the black-box and the white-box setting.
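
To illustrate the black-box query model in which the exponential lower bound is stated, here is a minimal sketch of generic local search with oracle access to the function; `f`, `neighbors`, and `x0` are caller-supplied placeholders, and nothing here is specific to the paper's construction.

```python
def local_search(f, neighbors, x0, max_queries):
    """Follow improving neighbors until a local optimum (minimization).
    On hard instances, any algorithm in this query model needs
    exponentially many evaluations of f."""
    x, best = x0, f(x0)
    queries = 1
    while queries < max_queries:
        for y in neighbors(x):
            queries += 1
            fy = f(y)
            if fy < best:         # improving move found
                x, best = y, fy
                break
        else:
            return x              # no improving neighbor: local optimum
    return x                      # query budget exhausted
```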

FOCS 2017 · Conference Paper

White-Box vs. Black-Box Complexity of Search Problems: Ramsey and Graph Property Testing

  • Ilan Komargodski
  • Moni Naor
  • Eylon Yogev

Ramsey theory assures us that in any graph there is a clique or independent set of a certain size, roughly logarithmic in the graph size. But how difficult is it to find the clique or independent set? If the graph is given explicitly, then it is possible to do so while examining a linear number of edges. If the graph is given as a black box, where the box must be queried to determine whether a given edge exists, then a large number of queries must be issued. But what if one is given a program or circuit for computing the existence of an edge? This problem was raised by Buss and by Goldberg and Papadimitriou in the context of TFNP, search problems with a guaranteed solution. We examine the relationship between black-box complexity and white-box complexity for search problems with a guaranteed solution, such as the above Ramsey problem. We show that under the assumption that collision-resistant hash functions exist (which follows from the hardness of problems such as factoring, discrete log, and learning with errors), the white-box Ramsey problem is hard, and this holds even if one is looking for a much smaller clique or independent set than the theorem guarantees. In general, one cannot hope to translate all black-box hardness for TFNP into white-box hardness: we show this by adapting results concerning the random-oracle methodology and the impossibility of instantiating it. Another model we consider is the succinct black box, where there is a known upper bound on the size of the black box (but no limit on the computation time). In this case we show that for all TFNP problems there is an upper bound on the number of queries proportional to the description size of the box times the solution size. On the other hand, for promise problems this is not the case. Finally, we consider the complexity of graph property testing in the white-box model. We show a property which is hard to test even when one is given the program for computing the graph. The hard property is whether the graph is a two-source extractor.
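
For the explicit-graph case mentioned above, the classical constructive Ramsey argument is a short greedy procedure examining O(n) edges; this is the textbook algorithm, not code from the paper.

```python
def ramsey_clique_or_indep(adj, vertices):
    """Greedily find a clique or an independent set of size roughly
    (1/2) log2 n in any n-vertex graph, where adj(u, v) reports whether
    {u, v} is an edge. At each step, keep the larger of the current
    vertex's neighbors or non-neighbors among the candidates."""
    kept_as_nbr, kept_as_non = [], []
    cand = list(vertices)
    while cand:
        v = cand.pop()
        nbrs = [u for u in cand if adj(v, u)]
        nons = [u for u in cand if not adj(v, u)]
        if len(nbrs) >= len(nons):
            kept_as_nbr.append(v)   # v is adjacent to all later picks
            cand = nbrs
        else:
            kept_as_non.append(v)   # v is non-adjacent to all later picks
            cand = nons
    # kept_as_nbr is a clique; kept_as_non is an independent set.
    return max(kept_as_nbr, kept_as_non, key=len)

# Example: on the complete graph, the procedure returns all vertices as a clique.
assert len(ramsey_clique_or_indep(lambda u, v: True, range(8))) == 8
```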

FOCS 2014 · Conference Paper

One-Way Functions and (Im)Perfect Obfuscation

  • Ilan Komargodski
  • Tal Moran
  • Moni Naor
  • Rafael Pass
  • Alon Rosen
  • Eylon Yogev

A program obfuscator takes a program and outputs a "scrambled" version of it, where the goal is that the obfuscated program will not reveal much about its structure beyond what is apparent from executing it. There are several ways of formalizing this goal. Specifically, in indistinguishability obfuscation, first defined by Barak et al. (CRYPTO 2001), the requirement is that the results of obfuscating any two functionally equivalent programs (circuits) will be computationally indistinguishable. Recently, a fascinating candidate construction for indistinguishability obfuscation was proposed by Garg et al. (FOCS 2013). This has led to a flurry of intriguing constructions of primitives and protocols whose existence was not previously known (for instance, fully deniable encryption by Sahai and Waters, STOC 2014). Most of them explicitly rely on additional hardness assumptions, such as one-way functions. Our goal is to get rid of this extra assumption. We cannot argue that indistinguishability obfuscation of all polynomial-time circuits implies the existence of one-way functions, since if P = NP, then program obfuscation (under the indistinguishability notion) is possible even though one-way functions do not exist. Instead, the ultimate goal is to argue that if P ≠ NP and program obfuscation is possible, then one-way functions exist. Our main result is that if NP ⊈ io-BPP and there is an efficient (even imperfect) indistinguishability obfuscator, then there are one-way functions. In addition, we show that the existence of an indistinguishability obfuscator implies (unconditionally) the existence of SZK-arguments for NP. This, in turn, provides an alternative version of our main result, based on the assumption of hard-on-average NP problems. To get some of our results we need obfuscators for simple programs such as 3CNF formulas.
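
In symbols, the main result above is:

```latex
% Main theorem of the paper, as stated in the abstract:
\mathrm{NP} \nsubseteq \mathrm{io}\text{-}\mathrm{BPP}
  \;\wedge\; \text{an efficient (even imperfect) indistinguishability obfuscator exists}
  \;\Longrightarrow\; \text{one-way functions exist}.
```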