AAAI 2026 Conference Paper
Beyond Single-Speed Reasoning: Coordinating Fast and Slow Dynamics for Efficient World Modeling
- Hongwei Wang
- Yangru Huang
- Guangyao Chen
- Xu Wang
- Yi Jin
Model-based reinforcement learning (MBRL) enables efficient decision-making by learning predictive world models of environment dynamics. Despite recent advances, existing models often struggle to reconcile accurate short-term transitions with coherent long-term planning, especially in partially observable or long-horizon settings. We argue that this limitation often stems from modeling all transitions at a single temporal resolution, which makes it challenging to simultaneously capture fine-grained local dynamics and abstract global structure. To this end, we propose SF-RSSM (Slow-Fast Recurrent State-Space Model), a novel method that decouples short-term and long-term dynamics via a dual-branch design. The fast branch captures short-horizon transitions using residual prediction, while the slow branch models long-range dependencies with a GRU-based recurrent pathway. A distillation mechanism enables cooperation across timescales, with the slow model providing soft targets that guide the fast model. Additionally, a curiosity module encourages exploration by promoting learning in regions where the fast and slow branches exhibit divergent dynamics. Experiments on the CARLA, DMControl, and Atari benchmarks show that SF-RSSM outperforms strong baselines in policy performance.
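To make the dual-branch idea concrete, the following is a minimal NumPy sketch of the components named in the abstract: a fast branch doing residual one-step prediction, a slow GRU-style branch carrying long-range context, and the divergence between the two branches serving both as a distillation loss (slow guides fast) and as a curiosity signal. All weight shapes, function names, and the untrained random parameters here are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE, HIDDEN = 4, 8  # illustrative state / hidden sizes

# Fast branch: residual prediction, s_{t+1} ~ s_t + f(s_t)
W_fast = rng.normal(scale=0.1, size=(STATE, STATE))

def fast_step(s):
    # Residual update: next state = current state + learned delta.
    return s + np.tanh(s @ W_fast)

# Slow branch: a minimal GRU cell whose hidden state h carries
# long-range context across many steps.
Wz = rng.normal(scale=0.1, size=(STATE + HIDDEN, HIDDEN))
Wr = rng.normal(scale=0.1, size=(STATE + HIDDEN, HIDDEN))
Wh = rng.normal(scale=0.1, size=(STATE + HIDDEN, HIDDEN))
W_out = rng.normal(scale=0.1, size=(HIDDEN, STATE))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def slow_step(s, h):
    x = np.concatenate([s, h])
    z = sigmoid(x @ Wz)                                   # update gate
    r = sigmoid(x @ Wr)                                   # reset gate
    h_tilde = np.tanh(np.concatenate([s, r * h]) @ Wh)    # candidate state
    h_new = (1.0 - z) * h + z * h_tilde
    return h_new @ W_out, h_new   # slow prediction, updated hidden state

def branch_step(s, h):
    fast_pred = fast_step(s)
    slow_pred, h_new = slow_step(s, h)
    # Distillation: squared gap between fast prediction and the slow
    # branch's soft target (in training the slow target would be
    # stop-gradient). The same divergence doubles as a curiosity bonus,
    # rewarding exploration where the two timescales disagree.
    divergence = float(np.mean((fast_pred - slow_pred) ** 2))
    return fast_pred, slow_pred, h_new, divergence
```

Usage: roll `branch_step` over a trajectory, minimize the divergence as a distillation loss for the fast branch, and add the same quantity to the agent's reward as the curiosity bonus.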