IS Journal 2026 (Journal Article)
A Dynamic Framework to Integrate Deep Reinforcement Learning with Hierarchical Symbolic Plans
- Xuelong Liu
- Nuo Chen
- Wenji Mao
- Daniel Zeng
The neuro-symbolic framework has become one of the mainstream paradigms in intelligent system design. For intelligent decision-making, Reinforcement Learning (RL) and automated planning are the representative neural and symbolic techniques, respectively, and the two can facilitate each other. Despite the rapid development and wide application of deep RL, its poor sample efficiency and slow convergence in sparse-reward environments have become major obstacles to its advancement. To address these issues, in this paper we propose a neuro-symbolic framework that integrates deep RL with hierarchical plans. Specifically, we develop a selective Monte-Carlo Tree Search algorithm in which hierarchical plans are dynamically constructed during the learning process. The constructed plans, in turn, provide high-level guidance that constrains RL to the subtasks leading to goal attainment, thus reducing useless and redundant exploration. Experiments on five challenging scenarios show that our framework achieves better sample efficiency and faster convergence than state-of-the-art approaches.
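To illustrate the kind of integration the abstract describes, here is a minimal sketch: a toy sparse-reward chain environment in which an ordered list of symbolic subgoal states (standing in for a dynamically constructed hierarchical plan) shapes the rewards of a tabular Q-learner. All names (`ChainEnv`, `plan_guided_q_learning`), the environment, and the shaping scheme are illustrative assumptions, not the authors' algorithm.

```python
import random

class ChainEnv:
    """Toy chain MDP with a sparse reward: states 0..n-1, reward 1.0
    only on reaching the final state."""
    def __init__(self, n=20):
        self.n = n
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action: 1 = move right, 0 = move left
        self.state = max(0, min(self.n - 1, self.state + (1 if action == 1 else -1)))
        done = self.state == self.n - 1
        return self.state, (1.0 if done else 0.0), done

def plan_guided_q_learning(env, plan, episodes=300, max_steps=500,
                           alpha=0.5, gamma=0.95, eps=0.1):
    """Tabular Q-learning in which an ordered list of subgoal states
    (the 'plan') adds a small shaping bonus each time the next pending
    subgoal is reached, steering exploration toward the goal."""
    q = [[0.0, 0.0] for _ in range(env.n)]
    for _ in range(episodes):
        s = env.reset()
        step_idx = 0  # index of the next unachieved subgoal in the plan
        for _ in range(max_steps):
            # epsilon-greedy action selection with random tie-breaking
            if random.random() < eps or q[s][0] == q[s][1]:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2, r, done = env.step(a)
            # plan guidance: bonus when the next pending subgoal is hit
            if step_idx < len(plan) and s2 == plan[step_idx]:
                r += 0.5
                step_idx += 1
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if done:
                break
    return q

if __name__ == "__main__":
    env = ChainEnv(n=20)
    q = plan_guided_q_learning(env, plan=[5, 10, 15])
    print("value of start state:", max(q[0]))
```

In the paper's actual framework, the plan would be constructed and revised by the selective Monte-Carlo Tree Search procedure rather than fixed in advance, and the learner would be a deep RL agent; the sketch only shows how plan-derived subgoals can focus exploration under sparse rewards.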