Arrow Research search

Author name cluster

Mikito Nanashima

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers

Possible papers


STOC 2024 · Conference Paper

One-Way Functions and Zero Knowledge

  • Shuichi Hirahara
  • Mikito Nanashima

The fundamental theorem of Goldreich, Micali, and Wigderson (J. ACM 1991) shows that the existence of a one-way function is sufficient for constructing computational zero-knowledge (CZK) proofs for all languages in NP. We prove its converse, thereby establishing characterizations of one-way functions based on the worst-case complexities of zero knowledge. Specifically, we prove that the following are equivalent:

  • A one-way function exists.
  • NP ⊆ CZK and NP is hard in the worst case.
  • CZK is hard in the worst case and the problem GapMCSP of approximating circuit complexity is in CZK.

The characterization above also holds for statistical and computational zero-knowledge argument systems. We further extend this characterization to proof systems with knowledge complexity O(log n). In particular, we show that the existence of a one-way function is characterized by the worst-case hardness of CZK if GapMCSP has a proof system with knowledge complexity O(log n). We complement this result by showing that NP admits an interactive proof system with knowledge complexity ω(log n) under the existence of an exponentially hard auxiliary-input one-way function (a weaker primitive than an exponentially hard one-way function). We also characterize the existence of a robustly-often nonuniformly computable one-way function by the nondeterministic hardness of CZK under the weak assumption that PSPACE ⊈ AM. We present two applications of our results. First, we simplify the proof of the recent characterization of a one-way function by the NP-hardness of a meta-computational problem and the worst-case hardness of NP given by Hirahara (STOC'23). Second, we show that if NP has a laconic zero-knowledge argument system, then there exists a public-key encryption scheme whose security can be based on the worst-case hardness of NP. This improves previous results, which assume the existence of indistinguishability obfuscation.

FOCS 2024 · Conference Paper

Optimal Coding for Randomized Kolmogorov Complexity and Its Applications

  • Shuichi Hirahara
  • Zhenjian Lu
  • Mikito Nanashima

The coding theorem for Kolmogorov complexity states that any string sampled from a computable distribution has a description length close to its information content. A coding theorem for resource-bounded Kolmogorov complexity is the key to obtaining fundamental results in average-case complexity, yet whether every samplable distribution admits a coding theorem for randomized time-bounded Kolmogorov complexity ($\text{rK}^{\text{poly}}$) remains open and is a common bottleneck in the recent meta-complexity literature. Previous works bypassed this issue by considering probabilistic Kolmogorov complexity ($\text{pK}^{\text{poly}}$), in which public random bits are assumed to be available. In this paper, we present an efficient coding theorem for randomized Kolmogorov complexity under the non-existence of one-way functions, thereby removing the common bottleneck. This enables us to prove $\text{rK}^{\text{poly}}$ counterparts of virtually all the average-case results that were previously proved only for $\text{pK}^{\text{poly}}$, and resolves the following concrete open problems.

  1. The existence of a one-way function is characterized by the failure of average-case symmetry of information for randomized time-bounded Kolmogorov complexity, as well as by a conditional coding theorem for randomized time-bounded Kolmogorov complexity. This resolves the open problem of Hirahara, Ilango, Lu, Nanashima, and Oliveira (STOC'23).
  2. Hirahara, Kabanets, Lu, and Oliveira (CCC'24) showed that randomized time-bounded Kolmogorov complexity admits search-to-decision reductions in the errorless average-case setting over any samplable distribution, and left open whether a similar result holds in the error-prone setting. We resolve this question affirmatively and, as a consequence, characterize the existence of a one-way function by the average-case hardness of computing $\text{rK}^{\text{poly}}$ with respect to an arbitrary samplable distribution, an $\text{rK}^{\text{poly}}$ analogue of the $\text{pK}^{\text{poly}}$ characterization of Liu and Pass (CRYPTO'23).

The key technical lemma is that any distribution whose next bits are efficiently predictable admits an efficient encoding and decoding scheme, which could be of independent interest to data compression.
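For orientation, the classical (resource-unbounded) coding theorem that the abstract takes as its starting point can be written as follows; the notation here is standard and is not taken from the paper itself:

```latex
% Coding theorem for prefix-free Kolmogorov complexity K:
% for every computable distribution D there is a constant c_D
% (depending only on D) such that, for all x with D(x) > 0,
K(x) \;\le\; \log_2 \frac{1}{D(x)} + c_D .
```

The quantity $\log_2(1/D(x))$ is the information content of $x$ under $D$; the open problem discussed above asks for an analogous bound with $K$ replaced by the randomized time-bounded measure $\text{rK}^{\text{poly}}$ and $D$ ranging over samplable distributions.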

FOCS 2023 · Conference Paper

Learning in Pessiland via Inductive Inference

  • Shuichi Hirahara
  • Mikito Nanashima

Pessiland is one of Impagliazzo's five possible worlds, in which NP is hard on average yet no one-way function exists. This world is considered the most pessimistic because it offers neither algorithmic nor cryptographic benefits. In this paper, we develop a unified framework for constructing strong learning algorithms under the non-existence of a one-way function, indicating a positive aspect of Pessiland. Using our framework, we improve the learning algorithm for adaptively changing distributions introduced by Naor and Rothblum (ICML'06). Whereas the previous learner assumes knowledge of the underlying distributions, our learner is universal, i.e., it assumes no knowledge of the distributions, and it has better sample complexity. We also employ our framework to construct a strong agnostic learner with optimal sample complexity, which improves the previous PAC learner of Blum, Furst, Kearns, and Lipton (CRYPTO'93). Our learning algorithms are worst-case algorithms that run in exponential time with respect to computational depth, and as a by-product, we present the first characterization of the existence of a one-way function by the worst-case hardness of some promise problem in AM. As a corollary of our results, we establish the robustness of average-case learning, that is, the equivalence among various average-case learning tasks, such as (strong and weak) agnostic learning, learning adaptively changing distributions with respect to arbitrary unknown distributions, and weak learning with membership queries with respect to the uniform distribution. Our framework is based on the theory of Solomonoff's inductive inference and the universal extrapolation algorithm of Impagliazzo and Levin (FOCS'90). Conceptually, the framework demonstrates that Pessiland is, in fact, a wonderland for machine learning, in which various learning tasks can be solved efficiently by the generic algorithm of universal extrapolation.

FOCS 2021 · Conference Paper

On Worst-Case Learning in Relativized Heuristica

  • Shuichi Hirahara
  • Mikito Nanashima

The PAC learning model involves two worst-case requirements: a learner must learn all functions in a class on all example distributions. However, basing the hardness of learning on NP-hardness has remained a key challenge for decades. In fact, recent progress in computational complexity suggests that an assumption weaker than the feasibility of worst-case algorithms for NP problems might be sufficient for worst-case learning. In this study, we investigate whether these worst-case requirements for learning can be satisfied on the basis of average-case assumptions alone, in order to understand the nature of learning. First, we construct a strong worst-case learner based on the assumption that DistNP ⊆ AvgP, i.e., in Heuristica. Our learner agnostically learns all polynomial-size circuits on all unknown P/poly-samplable distributions in polynomial time, where the complexity of learning depends on the complexity of sampling examples. Second, we study the limitation of relativizing constructions of learners based on average-case heuristic algorithms. Specifically, we construct a powerful oracle relative to which DistPH ⊆ AvgP, i.e., every problem in PH is easy on average, whereas UP ∩ coUP and PAC learning on almost-uniform distributions are hard even for 2^{n/ω(log n)}-time algorithms; this improves the oracle separation presented by Impagliazzo (CCC 2011). The core concept behind our improvements is a switching lemma over a large alphabet, which may be of independent interest. The lower bound on the time complexity is nearly optimal, because Hirahara (STOC 2021) showed that DistPH ⊆ AvgP implies that PH can be solved in time 2^{O(n/log n)} in any relativized world. The full version of this paper is available on ECCC [1].
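The near-optimality claim at the end of the abstract comes from comparing the two exponents; restating them side by side (notation as in the abstract, nothing added beyond the stated bounds):

```latex
% Oracle lower bound (this paper): hardness persists up to time
T_{\text{lower}}(n) = 2^{\,n/\omega(\log n)} ,
% while the unconditional upper bound (Hirahara, STOC 2021) gives,
% assuming DistPH \subseteq AvgP, an algorithm for PH running in time
T_{\text{upper}}(n) = 2^{\,O(n/\log n)} .
```

Since $n/\omega(\log n)$ is asymptotically smaller than $n/\log n$ by only a sub-logarithmic factor in the denominator, the oracle lower bound sits just below the ceiling that any relativizing argument could hope to reach.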