
STOC 2025

Provably Learning a Multi-head Attention Layer

Conference Paper · 9C · Algorithms and Complexity · Theoretical Computer Science

Abstract

The multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models. Given a sequence length $k$, attention matrices $\Sigma_1,\ldots,\Sigma_m \in \mathbb{R}^{d\times d}$, and projection matrices $W_1,\ldots,W_m \in \mathbb{R}^{d\times d}$, the corresponding multi-head attention layer $F:\mathbb{R}^{k\times d}\to\mathbb{R}^{k\times d}$ transforms length-$k$ sequences of $d$-dimensional tokens $X\in\mathbb{R}^{k\times d}$ via $F(X) \triangleq \sum_{i=1}^{m} \mathrm{softmax}(X\Sigma_i X^\top)\,X W_i$. In this work, we initiate the study of provably learning a multi-head attention layer from random examples and give the first nontrivial upper and lower bounds for this problem. Provided $\{W_i,\Sigma_i\}$ satisfy certain non-degeneracy conditions, we give a $(dk)^{O(m^3)}$-time algorithm that learns $F$ to small error given random labeled examples drawn uniformly from $\{\pm 1\}^{k\times d}$. We also prove computational lower bounds showing that in the worst case, exponential dependence on the number of heads $m$ is unavoidable. We chose to focus on Boolean $X$ to mimic the discrete nature of tokens in large language models, though our techniques naturally extend to standard continuous settings, e.g. Gaussian. Our algorithm, which is centered around using examples to sculpt a convex body containing the unknown parameters, is a significant departure from existing provable algorithms for learning feed-forward networks, which predominantly exploit fine-grained algebraic and rotation invariance properties of the Gaussian distribution. In contrast, our analysis is more flexible, as it primarily relies on various upper and lower tail bounds for the input distribution and “slices” thereof.
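The following is a minimal NumPy sketch, not from the paper, of the target function class defined above: $F(X) = \sum_i \mathrm{softmax}(X\Sigma_i X^\top) X W_i$ with the softmax applied row-wise. The random parameters and the Boolean input are purely illustrative of the learning setup (examples drawn uniformly from $\{\pm 1\}^{k\times d}$); they are not the non-degeneracy conditions or the learning algorithm itself.

```python
import numpy as np

def row_softmax(A):
    """Row-wise softmax, stabilized by subtracting each row's maximum."""
    A = A - A.max(axis=-1, keepdims=True)
    E = np.exp(A)
    return E / E.sum(axis=-1, keepdims=True)

def multi_head_attention(X, Sigmas, Ws):
    """F(X) = sum_i softmax(X Sigma_i X^T) X W_i.

    X:      (k, d) sequence of d-dimensional tokens
    Sigmas: list of m attention matrices, each (d, d)
    Ws:     list of m projection matrices, each (d, d)
    Returns the (k, d) output sequence.
    """
    k, d = X.shape
    out = np.zeros((k, d))
    for Sigma, W in zip(Sigmas, Ws):
        attn = row_softmax(X @ Sigma @ X.T)   # (k, k) attention weights
        out += attn @ X @ W                   # (k, d) contribution of this head
    return out

# Illustrative example: random heads and a Boolean {±1}^{k×d} input.
rng = np.random.default_rng(0)
k, d, m = 4, 8, 2
Sigmas = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(m)]
Ws = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(m)]
X = rng.choice([-1.0, 1.0], size=(k, d))
print(multi_head_attention(X, Sigmas, Ws).shape)  # (4, 8)
```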

Authors

Keywords

  • PAC learning
  • Supervised learning
  • attention
  • transformers

Context

Venue
ACM Symposium on Theory of Computing
Archive span
1969-2025
Indexed papers
4364
Paper id
156001237992279501