Arrow Research search

Author name cluster

Mihir Bellare

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

19 papers
1 author row

Possible papers

19

FOCS Conference 1997 Conference Paper

A Concrete Security Treatment of Symmetric Encryption

  • Mihir Bellare
  • Anand Desai
  • E. Jokipii
  • Phillip Rogaway

We study notions and schemes for symmetric (i.e., private-key) encryption in a concrete security framework. We give four different notions of security against chosen-plaintext attack and analyze the concrete complexity of reductions among them, providing both upper and lower bounds and obtaining tight relations. In this way we classify notions (even though they are polynomially reducible to each other) as stronger or weaker in terms of concrete security. Next we provide concrete security analyses of methods to encrypt using a block cipher, including the most popular encryption method, CBC. We establish tight bounds (meaning matching upper bounds and attacks) on the success of adversaries as a function of their resources.
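The CBC chaining rule analyzed in this paper can be sketched in a few lines. This is a minimal illustration of the mode itself, C_i = E_K(M_i XOR C_{i-1}) with C_0 = IV; the `toy_block_cipher` below is an insecure stand-in invented for the sketch, not a primitive from the paper, and real use requires a genuine block cipher such as AES.

```python
import os

BLOCK = 16  # block size in bytes

def toy_block_cipher(key, block):
    """Stand-in for a real block cipher E_K (NOT secure; illustration only).
    XOR with the key, then rotate the bytes: a keyed permutation on
    16-byte blocks, which is the shape CBC chaining expects."""
    x = bytes(a ^ b for a, b in zip(block, key))
    return x[1:] + x[:1]

def cbc_encrypt(key, iv, plaintext):
    """CBC mode: C_i = E_K(M_i XOR C_{i-1}), with C_0 = IV.
    Assumes plaintext length is a multiple of the block size."""
    assert len(plaintext) % BLOCK == 0
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        block = bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_block_cipher(key, block)
        out += prev
    return out

key, iv = os.urandom(BLOCK), os.urandom(BLOCK)
ct = cbc_encrypt(key, iv, b"sixteen byte msg" * 2)
```

Note how the chaining makes the two identical plaintext blocks encrypt to different ciphertext blocks; the paper's concrete-security bounds quantify exactly how far this mode falls short of ideal as a function of the adversary's resources.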

FOCS Conference 1997 Conference Paper

Does Parallel Repetition Lower the Error in Computationally Sound Protocols?

  • Mihir Bellare
  • Russell Impagliazzo
  • Moni Naor

Whether or not parallel repetition lowers the error has been a fundamental question in the theory of protocols, with applications in many different areas. It is well known that parallel repetition reduces the error at an exponential rate in interactive proofs and Arthur-Merlin games. It seems to have been taken for granted that the same is true in arguments, or other proofs where the soundness only holds with respect to computationally bounded parties. We show that this is not the case. Surprisingly, parallel repetition can actually fail in this setting. We present four-round protocols whose error does not decrease under parallel repetition. This holds for any (polynomial) number of repetitions. These protocols exploit non-malleable encryption and can be based on any trapdoor permutation. On the other hand we show that for three-round protocols the error does go down exponentially fast. The question of parallel error reduction is particularly important when the protocol is used in cryptographic settings like identification, and the error represents the probability that an intruder succeeds.

FOCS Conference 1996 Conference Paper

Pseudorandom Functions Revisited: The Cascade Construction and Its Concrete Security

  • Mihir Bellare
  • Ran Canetti
  • Hugo Krawczyk

Pseudorandom function families are a powerful cryptographic primitive, yielding, in particular, simple solutions for the main problems in private-key cryptography. Their existence based on general assumptions (namely the existence of one-way functions) has been established. The authors investigate new ways of designing pseudorandom function families. The goal is to find constructions that are both efficient and secure, and thus eventually to bring the benefits of pseudorandom functions to practice. The basic building blocks in the design are certain limited versions of pseudorandom function families, called finite-length-input pseudorandom function families, for which very efficient realizations exist in practical cryptography. Thus, rather than starting from one-way functions, they propose constructions of "full-fledged" pseudorandom function families from these limited ones. In particular they propose the cascade construction, and provide a concrete security analysis which relates the strength of the cascade to that of the underlying finite pseudorandom function family in a precise and quantitative way.
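The cascade construction described in the abstract iterates a fixed-input-length PRF over the message blocks, feeding each output back in as the next key. A minimal sketch, using HMAC-SHA256 from the standard library as an illustrative stand-in for the underlying finite PRF (an assumption of this sketch, not the paper's primitive):

```python
import hmac
import hashlib

def cascade(key, blocks):
    """Cascade construction sketch: K_i = F(K_{i-1}, M_i), output K_m.
    HMAC-SHA256 stands in for the finite-length-input PRF F."""
    state = key
    for block in blocks:
        state = hmac.new(state, block, hashlib.sha256).digest()
    return state

tag = cascade(b"secret key", [b"first block", b"second block"])
```

The point of the paper's analysis is that the security of `cascade` over many blocks degrades from that of the one-block PRF in a precisely quantified way, rather than merely "polynomially".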

FOCS Conference 1995 Conference Paper

Free Bits, PCPs and Non-Approximability - Towards Tight Results

  • Mihir Bellare
  • Oded Goldreich 0001
  • Madhu Sudan 0001

The first part of this paper presents new proof systems and improved non-approximability results. In particular we present a proof system for NP using logarithmic randomness and two amortized free bits, so that Max clique is hard within N^{1/3} and chromatic number within N^{1/5}. We also show hardness of 38/37 for Max-3-SAT, 27/26 for vertex cover, 82/81 for Max-cut, and 94/93 for Max-2-SAT. The second part of this paper presents a "reverse" of the FGLSS connection by showing that an NP-hardness result for the approximation of Max clique to within a factor of N^{1/(g+1)} would imply a probabilistic verifier for NP with logarithmic randomness and amortized free-bit complexity g. We also show that "existing techniques" won't yield proof systems of less than two bits in amortized free-bit complexity. Finally, we initiate a comprehensive study of PCP and FPCP parameters, proving several triviality results and providing several useful transformations.

FOCS Conference 1995 Conference Paper

Linearity Testing in Characteristic Two

  • Mihir Bellare
  • Don Coppersmith
  • Johan Håstad
  • Marcos A. Kiwi
  • Madhu Sudan 0001

Let Dist(f, g) = Pr_u[f(u) ≠ g(u)] denote the relative distance between functions f, g mapping from a group G to a group H, and let Dist(f) denote the minimum, over all linear functions (homomorphisms) g, of Dist(f, g). Given a function f: G → H we let Err(f) = Pr_{u,v}[f(u) + f(v) ≠ f(u+v)] denote the rejection probability of the BLR (Blum-Luby-Rubinfeld) linearity test. Linearity testing is the study of the relationship between Err(f) and Dist(f), and in particular the study of lower bounds on Err(f) in terms of Dist(f). The case we are interested in is when the underlying groups are G = GF(2)^n and H = GF(2). The corresponding test is used in the construction of efficient PCPs and thence in the derivation of hardness of approximation results, and, in this context, improved analyses translate into better non-approximability results. However, while several analyses of the relation of Err(f) to Dist(f) are known, none is tight. We present a description of the relationship between Err(f) and Dist(f) which is nearly complete in all its aspects, and entirely complete (i.e. tight) in some. In particular we present functions L, U: [0, 1] → [0, 1] such that for all x ∈ [0, 1] we have L(x) ≤ Err(f) ≤ U(x) whenever Dist(f) = x, with the upper bound being tight on the whole range, and the lower bound tight on a large part of the range and close on the rest. Part of our strengthening is obtained by showing a new connection between the linearity testing problem and Fourier analysis, a connection which may be of independent interest. Our results are used by M. Bellare et al. (1995) to present the best known hardness results for Max-3-SAT and other MaxSNP problems.
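The BLR test itself is simple to state in code: pick random u, v and check f(u) + f(v) = f(u+v), where addition in GF(2)^n is bitwise XOR. A minimal sketch that estimates Err(f) by sampling (the function `parity` is an example linear function invented for the sketch):

```python
import random

def blr_reject_prob(f, n, trials=5000, seed=0):
    """Estimate Err(f) = Pr_{u,v}[f(u) XOR f(v) != f(u XOR v)] for
    f: GF(2)^n -> GF(2), with n-bit inputs represented as integers."""
    rng = random.Random(seed)
    rejects = 0
    for _ in range(trials):
        u = rng.getrandbits(n)
        v = rng.getrandbits(n)
        if f(u) ^ f(v) != f(u ^ v):
            rejects += 1
    return rejects / trials

def parity(x):
    """A linear function over GF(2)^8: parity of a masked bit subset."""
    return bin(x & 0b1011).count("1") % 2

print(blr_reject_prob(parity, n=8))  # → 0.0, since parity is linear
```

A genuinely linear f is never rejected, so the estimate is exactly 0; for a function far from linear the estimate is bounded below, and the paper's contribution is a near-tight description of that lower bound as a function of Dist(f).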

FOCS Conference 1994 Conference Paper

Randomness-Efficient Oblivious Sampling

  • Mihir Bellare
  • John Rompel

We introduce a natural notion of obliviousness of a sampling procedure, and construct a randomness-efficient oblivious sampler. Our sampler uses O(l + log δ^{-1} · log l) coins to output m = poly(ε^{-1}, log δ^{-1}, log l) sample points x_1, ..., x_m ∈ {0, 1}^l such that Pr[|(1/m) Σ_{i=1}^m f(x_i) − E[f]| >

FOCS Conference 1991 Conference Paper

Languages that Are Easier than their Proofs

  • Richard Beigel
  • Mihir Bellare
  • Joan Feigenbaum
  • Shafi Goldwasser

Languages in NP are presented for which it is harder to prove membership interactively than it is to decide this membership. Similarly, languages where checking is harder than computing membership are presented. Under assumptions about triple-exponential time, incoherent sets in NP are constructed. Without any assumptions, incoherent sets are constructed in DSPACE(n^{log n}), yielding the first uncheckable and non-random-self-reducible sets in that space class.

FOCS Conference 1990 Conference Paper

Randomness in Interactive Proofs

  • Mihir Bellare
  • Oded Goldreich 0001
  • Shafi Goldwasser

The quantitative aspects of randomness in interactive proof systems are studied. The result is a randomness-efficient error-reduction technique: given an Arthur-Merlin proof system (error probability ≤ 1/3) in which Arthur sends l = l(n) random bits per round, a proof system that achieves error probability 2^{-k} at the cost of Arthur sending only 2l + O(k) random bits per round is constructed. The method maintains the number of rounds in the game. Underlying the transformation is a novel sampling method for approximating the average value of an arbitrary function f: