Arrow Research

Author name cluster

Bogdan Chornomaz

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers (3)

NeurIPS 2025 Conference Paper

Agnostic Learning under Targeted Poisoning: Optimal Rates and the Role of Randomness

  • Bogdan Chornomaz
  • Yonatan Koren
  • Shay Moran
  • Tom Waknine

We study the problem of learning in the presence of an adversary that can corrupt an $\eta$ fraction of the training examples with the goal of causing failure on a specific test point. In the realizable setting, prior work established that the optimal error under such instance-targeted poisoning attacks scales as $\Theta(d\eta)$, where $d$ is the VC dimension of the hypothesis class [Hanneke, Karbasi, Mahmoody, Mehalel, and Moran (NeurIPS 2022)]. In this work, we resolve the corresponding question in the agnostic setting. We show that the optimal excess error is $\widetilde\Theta(\sqrt{d\eta})$, answering one of the main open problems left by Hanneke et al. To achieve this rate, it is necessary to use randomized learners: Hanneke et al. showed that deterministic learners can be forced to suffer error close to $1$ even under small amounts of poisoning. Perhaps surprisingly, our upper bound remains valid even when the learner’s random bits are fully visible to the adversary. In the other direction, our lower bound is stronger than standard PAC-style bounds: instead of tailoring a hard distribution separately for each sample size, we exhibit a single fixed distribution under which the adversary can enforce an excess error of $\Omega(\sqrt{d\eta})$ infinitely often.
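
To make the gap between the two regimes concrete, here is a back-of-the-envelope comparison; the values $d = 10$ and $\eta = 10^{-3}$ are illustrative assumptions, not numbers from the paper:

\[
\underbrace{\Theta(d\eta)}_{\text{realizable}} \;\approx\; 10 \cdot 10^{-3} = 10^{-2},
\qquad
\underbrace{\widetilde\Theta\big(\sqrt{d\eta}\big)}_{\text{agnostic}} \;\approx\; \sqrt{10 \cdot 10^{-3}} = 10^{-1}.
\]

Since $\sqrt{d\eta} > d\eta$ whenever $d\eta < 1$, the agnostic excess error dominates the realizable error for any small corruption budget, so the same amount of poisoning is asymptotically more damaging in the agnostic setting.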

STOC 2024 Conference Paper

Local Borsuk-Ulam, Stability, and Replicability

  • Zachary Chase
  • Bogdan Chornomaz
  • Shay Moran
  • Amir Yehudayoff

We use and adapt the Borsuk-Ulam Theorem from topology to derive limitations on list-replicable and globally stable learning algorithms. We further demonstrate the applicability of our methods in combinatorics and topology. We show that, besides trivial cases, both list-replicable and globally stable learning are impossible in the agnostic PAC setting. This is in contrast with the realizable case, where it is known that any class with a finite Littlestone dimension can be learned by such algorithms. In the realizable PAC setting, we sharpen previous impossibility results and broaden their scope. Specifically, we establish optimal bounds for list replicability and global stability numbers in finite classes. This provides an exponential improvement over previous works and implies an exponential separation from the Littlestone dimension. We further introduce lower bounds for weak learners, i.e., learners that are only marginally better than random guessing. Lower bounds from previous works apply only to stronger learners. To offer a broader and more comprehensive view of our topological approach, we prove a local variant of the Borsuk-Ulam theorem in topology and a result in combinatorics concerning Kneser colorings. In combinatorics, we prove that if $c$ is a coloring of all non-empty subsets of $[n]$ such that disjoint sets have different colors, then there is a chain of subsets that receives at least $1 + \lfloor n/2 \rfloor$ colors (this bound is sharp). In topology, we prove, e.g., that for any open antipodal-free cover of the $d$-dimensional sphere, there is a point $x$ that belongs to at least $t = \lceil (d+3)/2 \rceil$ sets.
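
The Kneser-coloring statement above is easy to check exhaustively for small $n$. The following brute-force sketch in Python (written for this page, not taken from the paper) enumerates one representative of every valid coloring of the non-empty subsets of $[n]$, where disjoint subsets must receive different colors, and asserts that some chain collects at least $1 + \lfloor n/2 \rfloor$ colors:

    from itertools import combinations, permutations

    n = 3  # small enough for exhaustive enumeration
    subsets = [frozenset(c) for r in range(1, n + 1)
               for c in combinations(range(1, n + 1), r)]

    def max_chain_colors(colors):
        # Enlarging a chain never loses colors, so the maximum over all
        # chains is attained on a maximal chain S_1 ⊂ S_2 ⊂ ... ⊂ S_n.
        best = 0
        for perm in permutations(range(1, n + 1)):
            chain = [frozenset(perm[:k]) for k in range(1, n + 1)]
            best = max(best, len({colors[subsets.index(s)] for s in chain}))
        return best

    def search(colors=(), used=0):
        # Restricted-growth strings: one coloring per partition of the
        # subsets into color classes, pruning any assignment that gives
        # two disjoint subsets the same color.
        i = len(colors)
        if i == len(subsets):
            assert max_chain_colors(colors) >= 1 + n // 2
            return
        for c in range(used + 1):
            if all(colors[j] != c or (subsets[j] & subsets[i]) for j in range(i)):
                search(colors + (c,), max(used, c + 1))

    search()
    print("every valid coloring has a chain with at least", 1 + n // 2, "colors")

For $n = 3$ the bound is $2$ and the search finishes instantly; the number of canonical colorings grows roughly like a Bell number in $2^n - 1$, so this naive enumeration is only feasible for very small $n$.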