Arrow Research search

Author name cluster

Klim Efremenko

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

20 papers
1 author row

Possible papers


FOCS Conference 2025 Conference Paper

Constant Rate Codes for Adaptive Broadcasts Do Not Exist

  • Klim Efremenko
  • Gillat Kol
  • Dmitry Paramonov
  • Raghuvansh R. Saxena

Can the n-party broadcast channel, where any symbol sent by one party is received by all, be made resilient to noise with low overhead? Namely, is it possible to construct interactive error-correcting codes that convert any protocol designed for the noiseless broadcast channel into one that works over the noisy broadcast channel and is not much longer than the original protocol? [12, STOC 2018] showed that such interactive codes with constant multiplicative overhead are possible under the assumption that the noiseless protocol being simulated is non-adaptive, meaning that it is restricted to have a pre-determined order of turns. Their noise resilient simulating protocols, however, require adaptivity, where each party can decide whether or not to broadcast given all the information available to them, including their input and received transcript. The question of whether such a simulation is possible for general, potentially adaptive, noiseless protocols was left open. We resolve this question negatively, proving that any interactive code that converts adaptive noiseless broadcast protocols into adaptive broadcast protocols resilient to stochastic errors must incur a multiplicative overhead of Ω(log n/ log log n), which is nearly tight.

STOC Conference 2023 Conference Paper

The Rate of Interactive Codes Is Bounded Away from 1

  • Klim Efremenko
  • Gillat Kol
  • Dmitry Paramonov
  • Raghuvansh R. Saxena

Kol and Raz [STOC 2013] showed how to simulate any alternating two-party communication protocol designed to work over the noiseless channel by a protocol that works over a stochastic channel that corrupts each sent symbol independently with probability ε > 0, with only a 1 + O(√ε) blowup to the communication. In particular, this implies that the maximum rate of such interactive codes approaches 1 as ε goes to 0, as is also the case for the maximum rate of classical error correcting codes. Over the past decade, follow-up works have strengthened and generalized this result to other noisy channels, focusing on how fast the rate approaches 1 as ε goes to 0, but retaining the assumption that the noiseless protocol is alternating. In this paper we consider the general case, where the noiseless protocols can have arbitrary orders of speaking. In contrast to Kol-Raz and to the follow-up results in this model, we show that the maximum rate of interactive codes that encode general protocols is upper bounded by a universal constant strictly smaller than 1. To put it differently, we show that there is an inherent blowup in communication when protocols with arbitrary orders of speaking are faced with any constant fraction of errors ε > 0. We mention that our result assumes a large alphabet set and resolves the non-binary variant of a conjecture by Haeupler [FOCS 2014].

FOCS Conference 2022 Conference Paper

Binary Codes with Resilience Beyond 1/4 via Interaction

  • Klim Efremenko
  • Gillat Kol
  • Raghuvansh R. Saxena
  • Zhijun Zhang 0007

In the reliable transmission problem, a sender, Alice, wishes to transmit a bit-string x to a remote receiver, Bob, over a binary channel with adversarial noise. The solution to this problem is to encode x using an error correcting code. As it has long been known that the relative distance of binary codes is at most 1/2, reliable transmission is possible only if the channel corrupts (flips) at most a 1/4-fraction of the communicated bits. We revisit the reliable transmission problem in the two-way setting, where both Alice and Bob can send bits to each other. Our main result is the construction of two-way error correcting codes that are resilient to a constant fraction of corruptions strictly larger than 1/4. Moreover, our code has constant rate and requires Bob to only send one short message. We mention that our result resolves an open problem by Haeupler, Kamath, and Velingker [APPROX-RANDOM, 2015] and by Gupta, Kalai, and Zhang [STOC, 2022]. Curiously, our new two-way code requires a fresh perspective on classical error correcting codes: while classical codes have only one distance guarantee for all pairs of codewords (i.e., the minimum distance), we construct codes where the distance between a pair of codewords depends on the “compatibility” of the messages they encode. We also prove that such codes are necessary for our result.
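The classical one-way limit the abstract refers to can be made concrete with a minimal sketch (not the paper's two-way construction): minimum-distance decoding recovers the sent codeword whenever fewer than d/2 symbols are flipped, where d is the code's minimum distance; combined with the fact that positive-rate binary codes have relative distance at most 1/2, this yields the 1/4 resilience ceiling. The toy codebook below is illustrative only.

```python
# Illustrative sketch of one-way minimum-distance decoding. A code with
# minimum distance d corrects any pattern of fewer than d/2 bit flips;
# since binary codes have relative distance at most 1/2, one-way schemes
# cannot tolerate more than a 1/4 fraction of corruptions.

def hamming(a, b):
    """Hamming distance between two equal-length bit-strings."""
    return sum(x != y for x, y in zip(a, b))

def min_distance_decode(word, codebook):
    """Return the codeword closest to the received word."""
    return min(codebook, key=lambda c: hamming(word, c))

# Toy code: 4 codewords of length 8 with minimum distance 4,
# so any single flip (< d/2 = 2) is corrected.
codebook = ["00000000", "11110000", "00001111", "11111111"]

sent = "11110000"
received = "10110000"  # one bit flipped by the channel
assert min_distance_decode(received, codebook) == sent
```

The paper's point is that once Bob can also talk, this 1/4 barrier for binary alphabets can be beaten; the sketch only captures the classical one-way baseline.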

FOCS Conference 2021 Conference Paper

Statistically Near-Optimal Hypothesis Selection

  • Olivier Bousquet
  • Mark Braverman
  • Gillat Kol
  • Klim Efremenko
  • Shay Moran

Hypothesis Selection is a fundamental distribution learning problem where, given a comparator class $\mathcal{Q}=\{q_{1}, \ldots, q_{n}\}$ of distributions and sampling access to an unknown target distribution $p$, the goal is to output a distribution $q$ such that $\mathsf{TV}(p, q)$ is close to $\mathsf{opt}$, where $\mathsf{opt}=\min_{i}\{\mathsf{TV}(p, q_{i})\}$ and $\mathsf{TV}(\cdot, \cdot)$ denotes the total-variation distance. Despite the fact that this problem has been studied since the 19th century, its complexity in terms of basic resources, such as the number of samples and approximation guarantees, remains unsettled (this is discussed, e.g., in the charming book by Devroye and Lugosi '00). This is in stark contrast with other (younger) learning settings, such as PAC learning, for which these complexities are well understood. We derive an optimal 2-approximation learning strategy for the Hypothesis Selection problem, outputting $q$ such that $\mathsf{TV}(p, q)\leq 2\cdot\mathsf{opt}+\varepsilon$, with a (nearly) optimal sample complexity of $\tilde{O}(\log n/\varepsilon^{2})$. This is the first algorithm that simultaneously achieves the best approximation factor and sample complexity: previously, Bousquet, Kane, and Moran (COLT '19) gave a learner achieving the optimal 2-approximation, but with an exponentially worse sample complexity of $\tilde{O}(\sqrt{n}/\varepsilon^{2.5})$, and Yatracos (Annals of Statistics '85) gave a learner with optimal sample complexity of $O(\log n/\varepsilon^{2})$ but with a sub-optimal approximation factor of 3. We mention that many works in the Density Estimation (a.k.a. Distribution Learning) literature use Hypothesis Selection as a black-box subroutine. Our result therefore implies an improvement on the approximation factors obtained by these works, while keeping their sample complexity intact. For example, our result improves the approximation factor of the algorithm of Ashtiani, Ben-David, Harvey, Liaw, and Mehrabian (JACM '20) for agnostic learning of mixtures of Gaussians from 9 to 6, while maintaining its nearly-tight sample complexity.
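The objective $\mathsf{TV}(p, q)$ and the benchmark $\mathsf{opt}$ can be made concrete on a finite support. The sketch below is illustrative only: it assumes full access to $p$, in which case selection is trivial (output the $q_i$ minimizing TV); the paper's entire difficulty is achieving $2\cdot\mathsf{opt}+\varepsilon$ from samples of $p$ alone. The distributions used here are made-up toy data.

```python
# Illustrative sketch of the Hypothesis Selection objective. With full
# access to p (unlike the sample-access setting of the paper), the best
# output is simply the comparator q_i minimizing TV(p, q_i) = opt.

def tv(p, q):
    """Total-variation distance between distributions on a shared finite support."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

p = [0.5, 0.3, 0.2]            # target distribution (known here only for illustration)
Q = [[0.6, 0.2, 0.2],          # comparator class q_1, ..., q_n (toy data)
     [0.1, 0.8, 0.1],
     [0.3, 0.5, 0.2]]

dists = [tv(p, q) for q in Q]
opt = min(dists)
best = Q[dists.index(opt)]
assert abs(opt - 0.1) < 1e-9 and best == [0.6, 0.2, 0.2]
```

A sample-based learner only sees draws from $p$, so it must estimate these distances implicitly; the paper shows this can be done with a 2-approximation from $\tilde{O}(\log n/\varepsilon^{2})$ samples.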

FOCS Conference 2021 Conference Paper

Tight Bounds for General Computation in Noisy Broadcast Networks

  • Klim Efremenko
  • Gillat Kol
  • Dmitry Paramonov
  • Raghuvansh R. Saxena

Let Π be a protocol over the n-party broadcast channel, where in each round, a pre-specified party broadcasts a symbol to all other parties. We wish to design a scheme that takes such a protocol Π as input and outputs a noise resilient protocol Π′ that simulates Π over the noisy broadcast channel, where each received symbol is flipped with a fixed constant probability, independently. What is the minimum overhead in the number of rounds that is incurred by any such simulation scheme? A classical result by Gallager from the 1980s shows that non-interactive T-round protocols, where the bit communicated in every round is independent of the communication history, can be converted to noise resilient ones with only an $O(\log\log T)$ multiplicative overhead in the number of rounds. Can the same be proved for any protocol? Or are there protocols whose simulation requires an $\Omega(\log T)$ overhead (which always suffices)? We answer both of the above questions in the negative: we give a simulation scheme with an $\tilde{O}(\sqrt{\log T})$ overhead for every protocol and channel alphabet. We also prove an (almost) matching lower bound of $\Omega(\sqrt{\log T})$ on the overhead required to simulate the pointer chasing protocol with T = n and polynomial alphabet.

FOCS Conference 2020 Conference Paper

Binary Interactive Error Resilience Beyond 1/8 (or why (1/2)^3 > 1/8)

  • Klim Efremenko
  • Gillat Kol
  • Raghuvansh R. Saxena

Interactive error correcting codes are codes that encode a two-party communication protocol into an error-resilient protocol that succeeds even if a constant fraction of the communicated symbols are adversarially corrupted, at the cost of increasing the communication by a constant factor. What is the largest fraction of corruptions that such codes can protect against? If the error-resilient protocol is allowed to communicate large (constant-sized) symbols, Braverman and Rao (STOC, 2011) show that the maximum fraction of corruptions that can be tolerated is 1/4. They also give a binary interactive error correcting protocol that only communicates bits and is resilient to a 1/8 fraction of errors, but leave the optimality of this scheme as an open problem. We answer this question in the negative, breaking the 1/8 barrier. Specifically, we give a binary interactive error correcting scheme that is resilient to a 5/39 > 1/8 fraction of adversarial errors. Our scheme builds upon a novel construction of binary list-decodable interactive codes with small list size.

FOCS Conference 2019 Conference Paper

Radio Network Coding Requires Logarithmic Overhead

  • Klim Efremenko
  • Gillat Kol
  • Raghuvansh R. Saxena

We consider the celebrated radio network model for abstracting communication in wireless networks. In this model, in any round, each node in the network may broadcast a message to all its neighbors. However, a node is able to hear a message broadcast by a neighbor only if no collision occurred, meaning that it was the only neighbor broadcasting. While the (noiseless) radio network model has received a lot of attention over the last few decades, the effect of noise on radio networks is still not well understood. In this paper, we take a step forward and show that making radio network protocols resilient to noise may require a substantial performance overhead. Specifically, we construct a multi-hop network and a communication protocol over this network that works in T rounds when there is no noise. We prove that any scheme that simulates our protocol and is resilient to stochastic noise requires at least cT log(n) rounds, for some constant c. This stands in contrast to our previous result (STOC, 2018), showing that protocols over the single-hop (clique) network can be made noise resilient with only a constant overhead. Our result also settles a recent conjecture by Censor-Hillel, Haeupler, Hershkowitz, and Zuzic (2018). We complement the above result by giving a scheme to simulate any protocol with a fixed order of transmissions with only an O(log n) overhead.

FOCS Conference 2014 Conference Paper

List and Unique Coding for Interactive Communication in the Presence of Adversarial Noise

  • Mark Braverman
  • Klim Efremenko

In this paper we extend the notion of list-decoding to the setting of interactive communication and study its limits. In particular, we show that any protocol can be encoded, with a constant rate, into a list-decodable protocol which is resilient to a noise rate of up to 1/2 − ε, and that this is tight. Using our list-decodable construction, we study a more nuanced model of noise where the adversary can corrupt up to an α fraction of Alice's communication and up to a β fraction of Bob's communication. We use list-decoding in order to fully characterize the region R_U of pairs (α, β) for which unique decoding with a constant rate is possible. The region R_U turns out to be quite unusual in its shape. In particular, it is bounded by a piecewise-differentiable curve with infinitely many pieces. We show that outside this region, the rate must be exponentially small. This suggests that in some error regimes, list-decoding is necessary for optimal unique decoding. We also consider the setting where only one party of the communication must output the correct answer. We precisely characterize the region of all pairs (α, β) for which such one-sided unique decoding is possible in a way that Alice will output the correct answer.

STOC Conference 2012 Conference Paper

From irreducible representations to locally decodable codes

  • Klim Efremenko

A q-query Locally Decodable Code (LDC) is an error-correcting code that allows one to recover any particular symbol of the message by reading only q symbols of the codeword, even if the codeword is adversarially corrupted. In this paper we present a new approach to the construction of LDCs. We show that if there exists an irreducible representation (ρ, V) of a group G and q elements g_1, g_2, ..., g_q in G such that some linear combination of the matrices ρ(g_i) has rank one, then we can construct a q-query Locally Decodable Code C: V → F^G. We show the potential of this approach by constructing constant-query LDCs of sub-exponential length, matching the best known constructions.
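The q-query notion itself can be illustrated with the classical 2-query Hadamard LDC, a textbook example rather than the representation-theoretic construction of this paper (and exponentially long, unlike the sub-exponential codes discussed here): each message bit x_i is recovered from two codeword positions, C[r] and C[r ⊕ e_i], for a random r.

```python
import random

# Classical 2-query Hadamard LDC, shown only to make the "q-query LDC"
# notion concrete. The codeword lists <x, a> mod 2 for every a in {0,1}^m,
# so its length is 2^m (exponential, unlike the paper's constructions).

def hadamard_encode(x):
    """Encode a bit-list x into the Hadamard codeword of length 2^len(x)."""
    m = len(x)
    return [sum(x[j] & (a >> j & 1) for j in range(m)) % 2 for a in range(2 ** m)]

def local_decode(word, i, m, rng):
    """Recover x_i with 2 queries: C[r] xor C[r xor e_i], by linearity."""
    r = rng.randrange(2 ** m)
    return word[r] ^ word[r ^ (1 << i)]

x = [1, 0, 1, 1]
cw = hadamard_encode(x)
rng = random.Random(0)
assert all(local_decode(cw, i, len(x), rng) == x[i] for i in range(len(x)))
```

Because the two query positions are individually uniform, each query is unlikely to hit a corrupted entry, which is what makes the decoding local and noise-tolerant; this sketch runs on an uncorrupted codeword for determinism.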

FOCS Conference 2010 Conference Paper

Local List Decoding with a Constant Number of Queries

  • Avraham Ben-Aroya
  • Klim Efremenko
  • Amnon Ta-Shma

Efremenko showed locally-decodable codes of subexponential length that can handle close to 1/6 fraction of errors. In this paper we show that the same codes can be locally unique-decoded from error rate 1/2 - α for any α > 0 and locally list-decoded from error rate 1 - α for any α > 0, with only a constant number of queries and a constant alphabet size. This gives the first sub-exponential length codes that can be locally list-decoded with a constant number of queries.