
AAAI 2026

SABER: Switchable and Balanced Training for Efficient LLM Reasoning

Conference Paper · AAAI Technical Track on Natural Language Processing VI · Artificial Intelligence

Abstract

Large language models (LLMs) empowered by chain-of-thought reasoning have achieved impressive accuracy on complex tasks but suffer from excessive inference cost and latency when applied uniformly to all problems. We propose SABER (Switchable and Balanced Training for Efficient LLM Reasoning), a reinforcement learning framework that endows LLMs with user-controllable, token-budgeted reasoning. SABER first profiles each training example's base-model thinking-token usage and assigns it to one of several predefined budget tiers. During fine-tuning, the model is guided by system prompts and length-aware rewards to respect its assigned budget. In parallel, we incorporate no-think examples to ensure the model remains reliable even when explicit reasoning is turned off. SABER further supports four discrete inference modes (NoThink, FastThink, CoreThink, and DeepThink), enabling flexible trade-offs between latency and reasoning depth. Extensive evaluations on math reasoning (MATH, GSM8K), code generation (MBPP), and logical reasoning (LiveBench-Reasoning) demonstrate that SABER achieves high accuracy under tight budgets, degrades gracefully, and generalizes effectively across model scales and domains. In particular, SABER-FastThink cuts reasoning length by 65.4% and yields a 3.6% accuracy gain over the base model on the MATH benchmark.
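The abstract does not spell out the exact tier-assignment rule or reward formulation. The sketch below is a minimal, assumption-laden illustration of how a budget tier and a length-aware reward could be combined; the tier budgets, the linear overshoot penalty, and all function names are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of budget-tier assignment and a length-aware reward.
# All budgets, the penalty shape, and the 0.5 weight are illustrative
# assumptions, not the SABER paper's actual formulation.

BUDGET_TIERS = {          # max "thinking" tokens allowed per mode (assumed values)
    "NoThink": 0,
    "FastThink": 256,
    "CoreThink": 1024,
    "DeepThink": 4096,
}

def assign_tier(base_model_thinking_tokens: int) -> str:
    """Map a training example to the smallest tier whose budget covers the
    base model's observed thinking-token usage on that example."""
    for tier, budget in BUDGET_TIERS.items():
        if base_model_thinking_tokens <= budget:
            return tier
    return "DeepThink"

def length_aware_reward(correct: bool, thinking_tokens: int, tier: str) -> float:
    """Combine task correctness with a penalty for exceeding the tier budget."""
    budget = BUDGET_TIERS[tier]
    accuracy_reward = 1.0 if correct else 0.0
    # Tokens spent beyond the budget; under NoThink any visible reasoning counts.
    overshoot = thinking_tokens if budget == 0 else max(0, thinking_tokens - budget)
    length_penalty = min(1.0, overshoot / max(budget, 1))
    return accuracy_reward - 0.5 * length_penalty

# Example usage: an example whose base-model trace used 700 thinking tokens
tier = assign_tier(base_model_thinking_tokens=700)            # -> "CoreThink"
reward = length_aware_reward(correct=True, thinking_tokens=900, tier=tier)
```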

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue: AAAI Conference on Artificial Intelligence
Archive span: 1980–2026
Indexed papers: 28,718
Paper id: 258132226048884105