TMLR 2025
Decoding-based Regression
Abstract
Language models have recently been shown capable of performing regression, with numeric predictions represented as decoded strings. In this work, we provide theoretical grounding for this capability and investigate the utility of causal sequence decoding models as numeric regression heads given any feature representation. We find that, despite being trained in the usual way, via next-token prediction with cross-entropy loss, decoder-based heads are as performant as standard pointwise heads when benchmarked over standard regression tasks, while being flexible enough to capture smooth numeric distributions, such as in the task of density estimation.
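To make the idea in the abstract concrete, the sketch below (not the authors' implementation) illustrates one way a decoding-based regression head can work: a target value is tokenized into base-10 digits and a small causal decoder, conditioned on an input feature vector, is trained with ordinary next-token cross-entropy to emit those digits. All module names, the digit tokenization, and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a decoding-based regression head (assumptions, not the paper's code):
# a target y in [0, 1) is expanded into NUM_DIGITS base-10 digit tokens, and a causal
# decoder conditioned on features predicts each digit via cross-entropy.
import torch
import torch.nn as nn

VOCAB = 10          # digit tokens 0-9
NUM_DIGITS = 4      # fixed-length digit expansion of y
FEAT_DIM = 16       # dimensionality of the input features
HIDDEN = 32

def tokenize(y: torch.Tensor) -> torch.Tensor:
    """Map y in [0, 1) to its first NUM_DIGITS base-10 digits."""
    digits, frac = [], y.clone()
    for _ in range(NUM_DIGITS):
        frac = frac * 10
        d = frac.floor()
        digits.append(d.long())
        frac = frac - d
    return torch.stack(digits, dim=-1)            # (batch, NUM_DIGITS)

class DecoderRegressionHead(nn.Module):
    """Causal decoder over digit tokens, conditioned on a feature vector."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.cond = nn.Linear(FEAT_DIM, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, features, digit_tokens):
        # Teacher forcing: features set the initial hidden state; previously
        # emitted digits are shifted right and used to predict the next digit.
        h0 = self.cond(features).unsqueeze(0)                  # (1, B, HIDDEN)
        start = torch.zeros_like(digit_tokens[:, :1])          # dummy start token
        inputs = torch.cat([start, digit_tokens[:, :-1]], 1)   # shift right
        hidden, _ = self.rnn(self.embed(inputs), h0)
        return self.out(hidden)                                # (B, NUM_DIGITS, VOCAB)

# Standard next-token training step with cross-entropy loss on stand-in data.
model = DecoderRegressionHead()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(64, FEAT_DIM)     # stand-in feature representation
targets = tokenize(torch.rand(64))       # stand-in regression targets in [0, 1)

logits = model(features, targets)
loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
loss.backward()
optimizer.step()
```

Because the head outputs a full categorical distribution over digit sequences rather than a single point estimate, sampled decodes can also be used to approximate the predictive density, which is the flexibility the abstract refers to.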
Authors
Keywords
No keywords are indexed for this paper.
Context
- Venue: Transactions on Machine Learning Research
- Archive span: 2022-2026
- Indexed papers: 3849
- Paper id: 487054071656624713