
CSL 2025

The Parameterized Complexity of Learning Monadic Second-Order Logic

Conference Paper (Accepted) · Logic in Computer Science · Theoretical Computer Science

Abstract

Within the model-theoretic framework for supervised learning introduced by Grohe and Turán (TOCS 2004), we study the parameterized complexity of learning concepts definable in monadic second-order logic (MSO). We show that the problem of learning an MSO-definable concept from a training sequence of labeled examples is fixed-parameter tractable on graphs of bounded clique-width, and that it is hard for the parameterized complexity class para-NP on general graphs. An important distinction turns out to be between 1-dimensional and higher-dimensional concepts, where the instances of a k-dimensional concept are k-tuples of vertices of a graph. For the higher-dimensional case, we give a learning algorithm that is fixed-parameter tractable in the size of the graph, but not in the size of the training sequence, and we give a hardness result showing that this is optimal. By comparison, in the 1-dimensional case, we obtain an algorithm that is fixed-parameter tractable in both the size of the graph and the size of the training sequence.
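To make the learning setup concrete (this is only a toy illustration of the framework, not the paper's algorithm): in the 1-dimensional case, instances are single vertices of a graph, a hypothesis assigns a Boolean label to each vertex, and a learner returns a hypothesis minimizing the number of misclassified training examples (empirical risk minimization, as in agnostic PAC learning). The graph, the hand-picked hypothesis class, and the training sequence below are all hypothetical stand-ins for MSO-definable concepts.

```python
# Toy illustration of the supervised-learning setup: vertices are
# instances, hypotheses label vertices True/False, and the learner
# picks the hypothesis with minimum error on the training sequence.

# Hypothetical example graph as an adjacency list.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

# A tiny hand-picked hypothesis class standing in for the (much
# richer) class of MSO-definable 1-dimensional concepts.
hypotheses = {
    "degree >= 2": lambda v: len(graph[v]) >= 2,
    "degree >= 3": lambda v: len(graph[v]) >= 3,
    "adjacent to 0": lambda v: 0 in graph[v],
}

# Training sequence of (vertex, label) pairs.
training = [(0, True), (1, True), (2, True), (3, False)]

def training_error(h):
    """Number of training examples the hypothesis misclassifies."""
    return sum(1 for v, y in training if h(v) != y)

# Empirical risk minimization over the finite hypothesis class.
best = min(hypotheses, key=lambda name: training_error(hypotheses[name]))
```

Here the class is finite and tiny, so brute force suffices; the paper's contribution concerns when this kind of minimization over MSO-definable concepts is fixed-parameter tractable.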

Authors

Keywords

  • monadic second-order definable concept learning
  • agnostic probably approximately correct learning
  • parameterized complexity
  • clique-width
  • fixed-parameter tractable
  • Boolean classification
  • supervised learning
  • monadic second-order logic

Context

Venue
Annual Conference on Computer Science Logic
Archive span
1988-2026
Indexed papers
1413
Paper id
466740914774958691