Arrow Research

Author name cluster

Danila Sinopalnikov

This page lists possible papers associated with this exact author name in Arrow. It groups case-insensitive exact name matches and is not a full identity-disambiguation profile.

3 papers
2 author rows

Possible papers (3)

ICLR 2025 · Conference Paper

BOND: Aligning LLMs with Best-of-N Distillation

  • Pier Giuseppe Sessa
  • Robert Dadashi
  • Léonard Hussenot
  • Johan Ferret
  • Nino Vieillard
  • Alexandre Ramé
  • Bobak Shahriari
  • Sarah Perrin

Reinforcement learning from human feedback (RLHF) is a key driver of quality and safety in state-of-the-art large language models. Yet a surprisingly simple and strong inference-time strategy is Best-of-N sampling, which selects the best generation among N candidates. In this paper, we propose Best-of-N Distillation (BOND), a novel RLHF algorithm that seeks to emulate Best-of-N without its significant computational overhead at inference time. Specifically, BOND is a distribution matching algorithm that pushes the distribution of generations from the policy closer to the Best-of-N distribution. We use the Jeffreys divergence (a linear combination of forward and backward KL) to balance between mode-covering and mode-seeking behavior, and derive an iterative formulation that uses a moving anchor for efficiency. We demonstrate the effectiveness of our approach and several design choices through experiments on abstractive summarization and Gemma models.
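
For orientation only, here is a minimal sketch of the two ingredients the abstract names, Best-of-N sampling and the Jeffreys divergence. The generate and reward callables, the beta weighting, and the toy demo are illustrative assumptions, not the paper's implementation.

    import math
    import random

    def best_of_n(generate, reward, prompt, n=16):
        # Draw n candidate generations and keep the highest-reward one.
        candidates = [generate(prompt) for _ in range(n)]
        return max(candidates, key=reward)

    def jeffreys_divergence(p, q, beta=0.5, eps=1e-12):
        # Weighted combination of forward KL(p || q) and backward KL(q || p)
        # between two discrete distributions given as probability lists.
        kl_pq = sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
        kl_qp = sum(qi * math.log((qi + eps) / (pi + eps)) for pi, qi in zip(p, q))
        return (1.0 - beta) * kl_pq + beta * kl_qp

    # Toy demo: "generate" samples digits and "reward" prefers larger ones,
    # so Best-of-N concentrates on high-reward outputs as n grows.
    print(best_of_n(lambda _: random.randint(0, 9), lambda x: x, prompt=None, n=8))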

AAAI 2021 · Conference Paper

*-CFQ: Analyzing the Scalability of Machine Learning on a Compositional Task

  • Dmitry Tsarkov
  • Tibor Tihon
  • Nathan Scales
  • Nikola Momchev
  • Danila Sinopalnikov
  • Nathanael Schärli

We present *-CFQ (“star-CFQ”): a suite of large-scale datasets of varying scope based on the CFQ semantic parsing benchmark, designed for principled investigation of the scalability of machine learning systems in a realistic compositional task setting. Using this suite, we conduct a series of experiments investigating the ability of Transformers to benefit from larger training sets under conditions of fixed computational cost. We show that compositional generalization remains a challenge at all training sizes, and that increasing the scope of natural language leads to consistently higher error rates, which are only partially offset by increased training data. We further show that while additional training data from a related domain improves accuracy in data-starved situations, this improvement is limited and diminishes as the distance from the related domain to the target domain increases.

ICLR 2020 · Conference Paper

Measuring Compositional Generalization: A Comprehensive Method on Realistic Data

  • Daniel Keysers
  • Nathanael Schärli
  • Nathan Scales
  • Hylke Buisman
  • Daniel Furrer
  • Sergii Kashubin
  • Nikola Momchev
  • Danila Sinopalnikov

State-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. We introduce a novel method to systematically construct such benchmarks by maximizing compound divergence while guaranteeing a small atom divergence between train and test sets, and we quantitatively compare this method to other approaches for creating compositional generalization benchmarks. We present a large and realistic natural language question answering dataset that is constructed according to this method, and we use it to analyze the compositional generalization ability of three machine learning architectures. We find that they fail to generalize compositionally and that there is a surprisingly strong negative correlation between compound divergence and accuracy. We also demonstrate how our method can be used to create new compositionality benchmarks on top of the existing SCAN dataset, which confirms these findings.
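
For orientation only, a minimal sketch of the divergence measure behind the method, which the paper defines via the Chernoff coefficient (alpha = 0.5 for compound divergence, alpha = 0.1 for atom divergence). The toy distributions below are illustrative assumptions, not data from the paper.

    def chernoff_divergence(p, q, alpha):
        # 1 minus the Chernoff coefficient between two discrete distributions
        # given as aligned probability lists over the same atoms or compounds.
        return 1.0 - sum(pi ** alpha * qi ** (1.0 - alpha) for pi, qi in zip(p, q))

    # Benchmark construction maximizes compound divergence (alpha = 0.5) while
    # keeping atom divergence (alpha = 0.1) small between train and test.
    train = [0.7, 0.2, 0.1]  # toy frequency distribution over three compounds
    test = [0.1, 0.2, 0.7]
    print(chernoff_divergence(train, test, alpha=0.5))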