
ICML 2025

rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking

Conference Paper · Accept (oral) · Artificial Intelligence · Machine Learning

Abstract

We present rStar-Math to demonstrate that small language models (SLMs) can rival or even surpass the math reasoning capability of OpenAI o1, without distillation from superior models. rStar-Math achieves this by exercising “deep thinking” through Monte Carlo Tree Search (MCTS), where a math policy SLM performs test-time search guided by an SLM-based process reward model. rStar-Math introduces three innovations to tackle the challenges in training the two SLMs: (1) a novel code-augmented CoT data synthesis method, which performs extensive MCTS rollouts to generate step-by-step verified reasoning trajectories used to train the policy SLM; (2) a novel process reward model training method that avoids naïve step-level score annotation, yielding a more effective process preference model (PPM); (3) a self-evolution recipe in which the policy SLM and PPM are built from scratch and iteratively evolved to improve reasoning capabilities. Through 4 rounds of self-evolution with millions of synthesized solutions for 747k math problems, rStar-Math boosts SLMs’ math reasoning to state-of-the-art levels. On the MATH benchmark, it improves Qwen2.5-Math-7B from 58.8% to 90.0%, surpassing o1-preview by +4.5%. On the USA Math Olympiad (AIME), rStar-Math solves an average of 53.3% (8/15) of problems, ranking among the top 20% of the brightest high school math students. Code and data are available at https://github.com/microsoft/rStar.
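The core loop the abstract describes — a policy model proposes candidate next reasoning steps and a process preference model (PPM) scores partial trajectories to guide the search — can be sketched as follows. This is an illustrative toy only: the function names (`toy_policy`, `toy_ppm`, `greedy_step_search`) are hypothetical stand-ins, the scoring is fake, and rStar-Math performs full MCTS rollouts rather than the greedy selection shown here.

```python
def toy_policy(trajectory, n_candidates=3):
    """Stand-in for the policy SLM: propose candidate next reasoning steps.

    A real policy model would sample step continuations conditioned on the
    problem and the partial solution so far.
    """
    return [f"step{len(trajectory)}_{i}" for i in range(n_candidates)]


def toy_ppm(trajectory):
    """Stand-in for the process preference model: score a partial trajectory.

    A real PPM assigns step-level quality scores; this toy version simply
    prefers candidates whose suffix is '_0'.
    """
    return sum(1.0 if step.endswith("_0") else 0.0 for step in trajectory)


def greedy_step_search(max_steps=4):
    """Step-by-step search guided by the reward model.

    At each step: propose candidates with the policy, score each extended
    trajectory with the PPM, and keep the best. (rStar-Math explores many
    such trajectories via MCTS instead of committing greedily.)
    """
    trajectory = []
    for _ in range(max_steps):
        candidates = toy_policy(trajectory)
        best = max(candidates, key=lambda s: toy_ppm(trajectory + [s]))
        trajectory.append(best)
    return trajectory


print(greedy_step_search())
# → ['step0_0', 'step1_0', 'step2_0', 'step3_0']
```

The same propose-and-score pattern also underlies the paper's data synthesis: trajectories whose steps survive verification during MCTS rollouts become training data for the next round of policy and PPM training.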

Authors

Keywords

  • LLM
  • Reasoning
  • Self-evolution

Context

Venue
International Conference on Machine Learning
Archive span
1993-2025
Indexed papers
16471
Paper id
31712442480907847