Arrow Research: search results

Author name cluster

Rahul Ilango

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

12 papers
1 author row

Possible papers (12)

FOCS 2025 · Conference Paper

Cryptography Meets Worst-case Complexity: Optimal Security and More From iO and Worst-case Assumptions

  • Rahul Ilango
  • Alex Lombardi

We study several problems in the intersection of cryptography and complexity theory based on the following high-level thesis. 1) Obfuscation can serve as a general-purpose worst-case to average-case reduction, reducing the existence of various forms of cryptography to corresponding worst-case assumptions. 2) We can therefore hope to overcome barriers in cryptography and average-case complexity by (i) making worst-case hardness assumptions beyond $\mathrm{P} \neq \mathrm{NP}$, and (ii) leveraging worst-case hardness reductions, either proved by traditional complexity-theoretic methods or facilitated further by cryptography. Concretely, our results include:

  • Optimal Hardness. Assuming sub-exponential indistinguishability obfuscation, we give fine-grained worst-case to average-case reductions for circuit-SAT. In particular, if finding an NP-witness requires nearly brute-force time in the worst case, then the same is true for some efficiently samplable distribution. In fact, we show that under these assumptions, there exist families of one-way functions with optimal time-probability security tradeoffs. Under an additional, stronger assumption (the optimal non-deterministic hardness of refuting circuit-SAT), we construct additional cryptographic primitives such as PRGs and public-key encryption that have such optimal time-advantage security tradeoffs.
  • Direct Product Hardness. Again assuming iO and optimal non-deterministic hardness of SAT refutation, we show that the “(search) k-fold SAT problem”, the computational task of finding satisfying assignments to k circuit-SAT instances simultaneously, has (optimal) hardness roughly $\left(T / 2^{n}\right)^{k}$ for time-T algorithms. In fact, we build “optimally secure one-way product functions” (Holmgren-Lombardi, FOCS ’18), demonstrating that optimal direct product theorems hold for some choice of one-way function family.
  • Single-Input Correlation Intractability. Assuming either iO or LWE, we show a worst-case to average-case reduction for strong forms of single-input correlation intractability. That is, powerful forms of correlation-intractable hash functions exist provided that a collection of worst-case “correlation-finding” problems are hard.
  • Non-interactive Proof of Quantumness. Assuming subexponential iO and OWFs, we give a non-interactive proof of quantumness based on the worst-case hardness of the white-box Simon problem. In particular, this proof of quantumness result does not explicitly assume quantum advantage for an average-case task.

To help prove our first two results, we show along the way how to improve the Goldwasser-Sipser “set lower bound” protocol to have communication complexity quadratically smaller in the multiplicative approximation error $\varepsilon$.
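
The final result mentions the Goldwasser-Sipser “set lower bound” protocol, whose core mechanism is a random pairwise-independent hash: if |S| ≥ 2^k, a random hash h : {0,1}^m → {0,1}^k likely maps some element of S to a fixed target, while a much smaller set is likely missed entirely. The snippet below is a minimal statistical simulation of that gap (an illustrative sketch only: the interactive prover, the membership certificates, and the paper's quadratic improvement are all omitted, and the set sizes and parameters are arbitrary choices for the demo):

```python
import random

def rand_matrix(k, m):
    # random k x m matrix over GF(2), one m-bit int per row
    return [random.getrandbits(m) for _ in range(k)]

def hash_gf2(A, b, x, k):
    # pairwise-independent hash h(x) = Ax XOR b over GF(2); x is an m-bit int
    out = 0
    for i in range(k):
        bit = bin(A[i] & x).count("1") & 1  # parity of <row_i, x>
        out |= (bit ^ ((b >> i) & 1)) << i
    return out

def gs_round(S, m, k):
    """One round of the toy set-lower-bound test for the claim |S| >= 2^k:
    accept iff some element of S hashes to the all-zero target."""
    A = rand_matrix(k, m)
    b = random.getrandbits(k)
    return any(hash_gf2(A, b, x, k) == 0 for x in S)

random.seed(0)
m, k, trials = 16, 8, 300
big = set(random.sample(range(2**m), 4 * 2**k))      # |S| = 4 * 2^k
small = set(random.sample(range(2**m), 2**k // 16))  # |S| = 2^k / 16
p_big = sum(gs_round(big, m, k) for _ in range(trials)) / trials
p_small = sum(gs_round(small, m, k) for _ in range(trials)) / trials
print(p_big, p_small)  # large acceptance gap separates big from small sets
```

Pairwise independence is enough here: for the big set, Chebyshev's inequality already bounds the rejection probability by 2^k/|S|, while a union bound caps the small set's acceptance probability by |S|/2^k.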

FOCS 2025 · Conference Paper

Gödel in Cryptography: Effectively Zero-Knowledge Proofs for NP with No Interaction, No Setup, and Perfect Soundness

  • Rahul Ilango

A zero-knowledge proof demonstrates that a fact (like that a Sudoku puzzle has a solution) is true while, counterintuitively, revealing nothing else (like what the solution actually is). This remarkable guarantee is extremely useful in cryptographic applications, but it comes at a cost. A classical impossibility result by Goldreich and Oren [J. Cryptol. ’94] shows that zero-knowledge proofs must necessarily sacrifice basic properties of traditional mathematical proofs, namely perfect soundness (that no proof of a false statement exists) and non-interactivity (that a proof can be transmitted in a single message). Contrary to this impossibility, we show that zero-knowledge with perfect soundness and no interaction is effectively possible. We do so by defining and constructing a powerful new relaxation of zero-knowledge. Intuitively, while the classical zero-knowledge definition requires that an object called a simulator actually exists, our new definition only requires that one cannot rule out that a simulator exists (in a particular logical sense). Using this, we show that every falsifiable security property of (classical) zero-knowledge can be achieved with no interaction, no setup, and perfect soundness. This enables us to remove interaction and setup from (classical) zero-knowledge in essentially all of its applications in the literature, at the relatively mild cost that such applications now have security that is “game-based” instead of “simulation-based.” Our construction builds on the work of Kuykendall and Zhandry [TCC ’20] and relies on two central, longstanding, and well-studied assumptions that we show are also necessary. The first is the existence of non-interactive witness indistinguishable proofs, which follows from standard assumptions in cryptography. The second is Krajíček and Pudlák’s 1989 conjecture that no optimal proof system exists. This is one of the main conjectures in the field of proof complexity and is the natural finitistic analogue of the impossibility of Hilbert’s second problem (and, hence, also Gödel’s incompleteness theorem). Our high-level idea is to use these assumptions to construct a prover and verifier where no simulator exists, but the non-existence of a simulator is independent (in the logical sense of unprovability) of an arbitrarily strong logical system. One such logical system is the standard axioms of mathematics: ZFC.

FOCS 2025 · Conference Paper

NP-hardness of the Minimum Circuit Size Problem from Well-Studied Assumptions

  • Shuichi Hirahara
  • Rahul Ilango

Whether the Minimum Circuit Size Problem (MCSP) is NP-hard or not is a long-standing open question. Indeed, Levin delayed the publication of his fundamental work on the theory of NP-completeness because he hoped to prove NP-completeness of MCSP. In this paper, we present the first plausible assumptions under which MCSP is NP-hard. Specifically, we prove that MCSP is NP-hard under deterministic quasi-polynomial-time nonadaptive reductions, assuming:

  • subexponentially-secure non-interactive witness indistinguishable proof systems for SAT exist,
  • coNP requires subexponential-size non-deterministic circuits, and
  • P^NP/poly requires circuits of size Ω(2^n/n).

This is arguably the first evidence that MCSP is not in coNP, which indicates that there is no short proof that witnesses the hardness of a function.

STOC 2023 · Conference Paper

Indistinguishability Obfuscation, Range Avoidance, and Bounded Arithmetic

  • Rahul Ilango
  • Jiatu Li
  • R. Ryan Williams

The range avoidance problem (denoted by Avoid) asks to find a string outside of the range of a given circuit C : {0,1}^n → {0,1}^m, where m > n. Although at least half of the strings of length m are correct answers, it is not clear how to deterministically find one. Recent results of Korten (FOCS’21) and Ren, Santhanam, and Wang (FOCS’22) show that efficient deterministic algorithms for Avoid would have far-reaching consequences, including strong circuit lower bounds and explicit constructions of combinatorial objects (e.g., Ramsey graphs, extractors, rigid matrices). This strongly motivates the question: does an efficient deterministic algorithm for Avoid actually exist? In this work, we prove under the existence of subexponentially secure indistinguishability obfuscation (iO) that polynomial-time deterministic algorithms for Avoid imply NP = coNP. Combining this with Jain, Lin, and Sahai’s recent breakthrough construction of iO from well-founded assumptions (STOC’21, EUROCRYPT’22), we provide the first plausible evidence that Avoid has no efficient deterministic algorithm. Moreover, we also prove the hardness of Avoid based on polynomially-secure iO and a weaker variant of the Nondeterministic Exponential Time Hypothesis (NETH). Extending our techniques, we prove a surprising separation in bounded arithmetic, conditioned on similar assumptions. Assuming subexponentially secure iO and coNP is not infinitely often in, we show that Avoid has no deterministic polynomial-time algorithm even when we are allowed O(1) queries to an oracle that can invert the given input circuit on an arbitrarily chosen m-bit string. It follows that the dual Weak Pigeonhole Principle, the combinatorial principle underlying Avoid, is not provable in Cook’s theory PV_1. This gives (under plausible assumptions) the first separation of Cook’s theory PV_1 for polynomial-time reasoning and Jeřábek’s theory APV_1 for probabilistic polynomial-time reasoning.
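
The problem statement can be made concrete with a brute-force sketch (illustrative only: the circuit C here is a hypothetical toy, and the 2^n-time enumeration is precisely the kind of inefficiency the cited results suggest is unavoidable for deterministic algorithms):

```python
from itertools import product

def avoid_bruteforce(C, n, m):
    """Brute-force range avoidance: given C : {0,1}^n -> {0,1}^m with m > n,
    return an m-bit string outside the range of C. This takes 2^n circuit
    evaluations, so it is only feasible for tiny n; the open question is
    whether an *efficient* deterministic algorithm exists."""
    assert m > n
    image = {C(x) for x in product((0, 1), repeat=n)}
    for y in product((0, 1), repeat=m):
        if y not in image:
            return y  # guaranteed to exist since |image| <= 2^n < 2^m

# Hypothetical toy circuit: copy the input and duplicate its last bit.
C = lambda x: x + (x[-1],)
y = avoid_bruteforce(C, n=3, m=4)
print(y)  # lexicographically first 4-bit string outside the range
```

Note the pigeonhole guarantee in the comment: the image has at most 2^n elements, so at least 2^m - 2^n ≥ 2^{m-1} strings are valid answers, yet the search above still inspects the whole image.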

STOC 2023 · Conference Paper

NP-Hardness of Approximating Meta-Complexity: A Cryptographic Approach

  • Yizhi Huang 0001
  • Rahul Ilango
  • Hanlin Ren

It is a long-standing open problem whether the Minimum Circuit Size Problem (MCSP) and related meta-complexity problems are NP-complete. Even for the rare cases where the NP-hardness of meta-complexity problems is known, we only know very weak hardness of approximation. In this work, we prove NP-hardness of approximating meta-complexity with nearly-optimal approximation gaps. Our key idea is to use cryptographic constructions in our reductions, where the security of the cryptographic construction implies the correctness of the reduction. We present both conditional and unconditional hardness of approximation results, as follows.
1. Assuming subexponentially-secure witness encryption exists, we prove essentially optimal NP-hardness of approximating conditional time-bounded Kolmogorov complexity (K^t(x|y)) in the regime where t ≫ |y|. Previously, the best hardness of approximation known was a |x|^{1/(log log |x|)} factor, and only in the sublinear regime (t ≪ |y|).
2. Unconditionally, we show that for a constant c > 1, the Minimum Oracle Circuit Size Problem (MOCSP) is NP-hard to approximate, where Yes instances have circuit complexity at most s, and No instances have circuit complexity at least s^c. Our reduction builds on a witness encryption construction proposed by Garg, Gentry, Sahai, and Waters (STOC’13). Previously, it was unknown whether it is NP-hard to distinguish between oracle circuit complexity s versus 10s log N.
3. Finally, we define a “multi-valued” version of MCSP, called mvMCSP, and show that with probability 1 over a random oracle O, it is NP-hard to approximate mvMCSP^O under quasi-polynomial-time reductions with an O oracle. Intriguingly, this result follows almost directly from the security of Micali’s CS Proofs (Micali, SICOMP’00).
In conclusion, we give three results convincingly demonstrating the power of cryptographic techniques in proving NP-hardness of approximating meta-complexity.

FOCS 2023 · Conference Paper

SAT Reduces to the Minimum Circuit Size Problem with a Random Oracle

  • Rahul Ilango

The Minimum Circuit Size Problem (MCSP) is the task of deciding, given the truth table of a Boolean function f and a size parameter s, whether there is a circuit computing f of size at most s. It has been an open question since Levin’s seminal work on NP-completeness (1973) whether MCSP is NP-complete. This question has drawn further interest in light of recent connections between MCSP, learning theory, average-case complexity, and cryptography. We show that, with probability one, there is a black-box $\mathrm{P}/\mathrm{poly}$ (as well as a $\mathrm{P}^{\mathcal{O}}$) many-one reduction from (unrelativized) SAT to MCSP on circuits with access to a random oracle $\mathcal{O}$. This resolves an open question of Huang, Ilango, and Ren (STOC 2023), who conjectured the existence of such a reduction. Two important ingredients in our proof are 1) a relaxation of symmetry of information that we call pseudo-symmetry of information and 2) a subroutine of the reduction that essentially is a cryptographic proof of work. Our reduction yields additive hardness of approximation that is optimal up to a constant factor and extends to a variety of other meta-complexity problems, including computing time-bounded Kolmogorov complexity $\left(\mathrm{K}^{t}\right)$. Applying the random oracle heuristic from cryptography, where one heuristically “instantiates” $\mathcal{O}$ with a real-world cryptographic hash function, we get a plethora of candidate deterministic polynomial-time many-one reductions from SAT to MCSP and $\mathrm{K}^{t}$ in the standard unrelativized world. To our knowledge, no candidate reduction from SAT to MCSP or $\mathrm{K}^{t}$ was known previously. Moreover, the hardness of approximation in these candidate reductions would imply the NP-hardness of the gap version of $\mathrm{K}^{t}$ that Hirahara (FOCS 2018) shows has a non-black-box worst-case to average-case reduction. Intriguingly, as a consequence we get that the existence of sufficiently “unstructured” functions implies that a problem with a known (non-black-box) worst-case to average-case reduction is NP-complete.
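
Time-bounded Kolmogorov complexity K^t(x) is the length of the shortest program printing x within t steps. Computing it exactly requires enumerating programs, but an upper bound can be illustrated with any fixed compressor, since the compressed string plus a constant-size decompressor is itself a program printing x. This is a standard teaching device, unrelated to the paper's reductions:

```python
import random
import zlib

def kt_upper_bound(x: bytes) -> int:
    # The zlib output plus a constant-size decompressor is one program
    # printing x, so this upper-bounds the description length of x
    # up to an additive constant.
    return len(zlib.compress(x, 9))

rng = random.Random(0)
structured = b"01" * 500                                   # highly patterned
unstructured = bytes(rng.getrandbits(8) for _ in range(1000))  # looks random
print(kt_upper_bound(structured), kt_upper_bound(unstructured))
```

The patterned string compresses to a few dozen bytes while the pseudorandom one does not shrink at all, mirroring the intuition that "unstructured" strings have near-maximal complexity.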

FOCS 2023 · Conference Paper

Towards Separating Computational and Statistical Differential Privacy

  • Badih Ghazi
  • Rahul Ilango
  • Pritish Kamath
  • Ravi Kumar 0001
  • Pasin Manurangsi

Computational differential privacy (CDP) is a natural relaxation of the standard notion of (statistical) differential privacy (SDP) proposed by Beimel, Nissim, and Omri (CRYPTO 2008) and Mironov, Pandey, Reingold, and Vadhan (CRYPTO 2009). In contrast to SDP, CDP only requires privacy guarantees to hold against computationally-bounded adversaries rather than computationally-unbounded statistical adversaries. Despite the question being raised explicitly in several works (e.g., Bun, Chen, and Vadhan, TCC 2016), it has remained tantalizingly open whether there is any task achievable with the CDP notion but not the SDP notion. Even a candidate for such a task is unknown. Indeed, it is even unclear what the truth could be! In this work, we give the first construction of a task achievable with the CDP notion but not the SDP notion, under the following strong but plausible cryptographic assumptions:

  • Non-Interactive Witness Indistinguishable Proofs,
  • Laconic Collision-Resistant Keyless Hash Functions,
  • Differing-Inputs Obfuscation for Public-Coin Samplers.

In particular, we construct a task for which there exists an $\varepsilon$-CDP mechanism with $\varepsilon=O(1)$ achieving $1-o(1)$ utility, but any $(\varepsilon, \delta)$-SDP mechanism, including computationally-unbounded ones, that achieves a constant utility must use either a super-constant $\varepsilon$ or an inverse-polynomially large $\delta$. To prove this, we introduce a new approach for showing that a mechanism satisfies CDP: first we show that a mechanism is “private” against a certain class of decision-tree adversaries, and then we use cryptographic constructions to “lift” this into privacy against computationally bounded adversaries. We believe this approach could be useful to devise further tasks separating CDP from SDP.
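
As background for the SDP notion being relaxed here: an ε-SDP mechanism M must satisfy Pr[M(b) = o] ≤ e^ε · Pr[M(b') = o] for every output o and neighboring inputs b, b', even against unbounded adversaries. The classic randomized-response mechanism below (a standard textbook example, not a construction from this paper) satisfies this with ε = ln 3 while still supporting a debiased estimate of the population mean:

```python
import math
import random

EPS = math.log(3)  # randomized response with p = 3/4 gives eps = ln 3

def randomized_response(bit, rng):
    """Report the true bit with probability 3/4, the flipped bit otherwise."""
    return bit if rng.random() < 0.75 else 1 - bit

# eps-SDP check: the two output distributions on neighboring inputs 0 and 1
# are (3/4, 1/4) and (1/4, 3/4), so the worst-case likelihood ratio is
# exactly 3 = e^eps.
ratio = 0.75 / 0.25
assert ratio <= math.exp(EPS) + 1e-12

# Utility: debias the noisy mean of many reports.
rng = random.Random(0)
true_bits = [rng.randint(0, 1) for _ in range(20000)]
noisy = [randomized_response(b, rng) for b in true_bits]
# E[noisy mean] = 0.75*mean + 0.25*(1 - mean), so invert the affine map:
est = (sum(noisy) / len(noisy) - 0.25) / 0.5
print(round(est, 3), round(sum(true_bits) / len(true_bits), 3))
```

The separation result above says that for the constructed task no such statistical mechanism can match the utility of the computational one at constant ε.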

STOC 2022 · Conference Paper

Robustness of average-case meta-complexity via pseudorandomness

  • Rahul Ilango
  • Hanlin Ren
  • Rahul Santhanam

We show broad equivalences in the average-case complexity of many different meta-complexity problems, including Kolmogorov complexity, time-bounded Kolmogorov complexity, and the Minimum Circuit Size Problem. These results hold for a wide range of parameters (various thresholds, approximation gaps, weak or strong average-case hardness, etc.) and complexity notions, showing the theory of meta-complexity is very *robust* in the average-case setting. Our results are shown by establishing new and generic connections between meta-complexity and the theory of pseudorandomness and one-way functions. Using these connections, we give the first unconditional characterization of one-way functions based on the average-case hardness of the Minimum Circuit Size Problem. We also give a surprising and clean characterization of one-way functions based on the average-case hardness of (the worst-case uncomputable) Kolmogorov complexity. Moreover, the latter is the first characterization of one-way functions based on the average-case hardness of a fixed problem on *any* samplable distribution. We give various applications of these results to the foundations of cryptography and the theory of meta-complexity. For example, we show that the average-case hardness of deciding k-SAT or Clique on any samplable distribution of high enough entropy implies the existence of one-way functions. We also use our results to unconditionally solve various meta-complexity problems in CZK (computational zero-knowledge) on average, and give implications of our results for the classic question of proving NP-hardness for the Minimum Circuit Size Problem.
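
A key notion above is a *samplable* distribution of instances: one produced by an efficient randomized algorithm. The snippet below sketches one standard example, planted 3-SAT, an efficiently samplable distribution that is conjectured hard for suitable parameters. It illustrates the definition only; it is not the distributions or parameter regimes used in the paper:

```python
import random

def sample_planted_3sat(n, m, rng):
    """Sample a 3-SAT instance from an efficiently samplable distribution:
    plant a random assignment, then draw m clauses uniformly among those
    the planted assignment satisfies. A literal (v, s) is satisfied when
    variable v takes value s."""
    planted = [rng.randint(0, 1) for _ in range(n)]
    clauses = []
    while len(clauses) < m:
        vars_ = rng.sample(range(n), 3)
        clause = [(v, rng.randint(0, 1)) for v in vars_]
        if any(planted[v] == s for v, s in clause):  # rejection sampling
            clauses.append(clause)
    return clauses, planted

def satisfies(clauses, assignment):
    return all(any(assignment[v] == s for v, s in c) for c in clauses)

rng = random.Random(1)
clauses, planted = sample_planted_3sat(n=30, m=120, rng=rng)
print(satisfies(clauses, planted))  # instance is satisfiable by construction
```

By design every sampled instance is satisfiable, so hardness here would be hardness of *search*; the result quoted above concerns decision hardness on any samplable distribution of high enough entropy.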

FOCS 2021 · Conference Paper

The Minimum Formula Size Problem is (ETH) Hard

  • Rahul Ilango

A longstanding open question is whether the Minimum Circuit Size Problem (MCSP) is NP-complete. In fact, even determining whether MCSP has a search-to-decision reduction has been open for over twenty years. We show that, under the Exponential Time Hypothesis, the Minimum (DeMorgan) Formula Size Problem, MFSP, is not in P. Building on this, we show that MFSP has a polynomial-time (exact) search-to-decision reduction, a result that does not relativize. Our main technique relates the formula complexity of a partial function with the formula complexity of an associated total function and is proved using the “leaf weighting” technique of Buchfuhrer and Umans.

FOCS 2020 · Conference Paper

Constant Depth Formula and Partial Function Versions of MCSP are Hard

  • Rahul Ilango

Attempts to prove the intractability of the Minimum Circuit Size Problem (MCSP) date as far back as the 1950s and are well-motivated by connections to cryptography, learning theory, and average-case complexity. In this work, we make progress, on two fronts, towards showing MCSP is intractable under worst-case assumptions. While Masek showed in the late 1970s that the version of MCSP for DNF formulas is NP-hard, extending this result to the case of depth-3 AND/OR formulas was open. We show that determining the minimum size of a depth-d formula computing a given Boolean function is NP-hard under quasipolynomial-time randomized reductions for all constant d ≥ 2. Our approach is based on a method to “lift” depth-d formula lower bounds to depth-(d+1). This method also implies the existence of a function with a $2^{\Omega_d(n^{1/5})}$ additive gap between its depth-d and depth-(d+1) formula complexity. We also make progress in the case of general, unrestricted circuits. We show that the version of MCSP where the input is a partial function (represented by a string in {0, 1, ?}*) is not in P under the Exponential Time Hypothesis (ETH). Intriguingly, we formulate a notion of lower bound statements being (P/poly)-recognizable that is closely related to Razborov and Rudich’s definition of being (P/poly)-constructive. We show that unless there are subexponential-sized circuits computing SAT, the lower bound statements used to prove the correctness of our reductions cannot be (P/poly)-recognizable.
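
Both problems in this abstract, minimum formula size and the partial-function version of MCSP, can be made concrete at toy scale. The sketch below (an illustrative brute force for tiny n, nothing like the reductions in the paper) computes the minimum DeMorgan formula size, measured in literal leaves, of every 2-variable function by dynamic programming, and then answers the partial-function variant for a pattern over {0, 1, ?}:

```python
from itertools import product

def min_formula_sizes(n, max_leaves=8):
    """DP over DeMorgan formulas (AND/OR gates over literal leaves; size =
    number of leaves). best[t] is the minimum size of a formula whose
    truth table is the bitmask t over all 2^n input rows."""
    rows = list(product((0, 1), repeat=n))
    full = (1 << len(rows)) - 1
    best = {}
    by_size = {1: set()}
    for v in range(n):
        t = sum(1 << i for i, r in enumerate(rows) if r[v])
        for lit in (t, full ^ t):  # the literal x_v and its negation
            best[lit] = 1
            by_size[1].add(lit)
    for s in range(2, max_leaves + 1):
        by_size[s] = set()
        for a in range(1, s):  # split the s leaves between two subformulas
            for ta in by_size[a]:
                for tb in by_size.get(s - a, ()):
                    for t in (ta & tb, ta | tb):
                        if t not in best or best[t] > s:
                            best[t] = s
                            by_size[s].add(t)
    return best, rows

def min_consistent_size(pattern, best, rows):
    """Partial-function variant: pattern over {0,1,?} gives the desired
    value on each input row; return the minimum formula size among total
    functions consistent with the cares."""
    care1 = sum(1 << i for i, c in enumerate(pattern) if c == "1")
    care0 = sum(1 << i for i, c in enumerate(pattern) if c == "0")
    return min(s for t, s in best.items()
               if t & care1 == care1 and t & care0 == 0)

best, rows = min_formula_sizes(2)
xor = sum(1 << i for i, r in enumerate(rows) if r[0] ^ r[1])
print(best[xor])                                # 4: (x & ~y) | (~x & y)
print(min_consistent_size("1??0", best, rows))  # 1: the literal ~x suffices
```

The DP is sound because any formula splits into two subformulas whose tables it combines with AND or OR, and replacing each subformula by a minimal one for the same table never increases size; the partial variant then just minimizes over all total extensions of the cares.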