EAAI 2026 Journal Article
A global and local agent-based curriculum reinforcement learning approach for multi-end-effector robotic arm manipulation
- Yichen Wang
- Shuai Zheng
- Ze Yang
- Jingmin Guo
- Zitong Yang
- Jun Hong
Reinforcement learning is widely applied to robotic arm manipulation tasks. However, most of these tasks involve a single, simple end effector. For heavy hoisting tasks, which are usually performed by robotic arms with multiple end effectors and more degrees of freedom, single-agent reinforcement learning methods are relatively ineffective. In this paper, we propose a multi-agent reinforcement learning approach for hoisting tasks performed by a robotic arm with multiple end effectors. The method decomposes the robotic arm into a global agent and a local agent based on its degrees of freedom, with one agent controlling global, coarse movement and the other controlling local, fine movement. In this way, the spatial trajectories of the multiple end effectors can be accurately controlled. Moreover, a four-level curriculum learning strategy, with a separate reward function designed for each level, is introduced into the training process to improve training efficiency and effectiveness. We develop a simulation environment based on the Unity engine and perform several comparison experiments. The results demonstrate that the proposed approach outperforms conventional single-agent methods.
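The abstract's core ideas, a global/local agent decomposition over the arm's degrees of freedom and a four-level curriculum with per-level rewards, can be sketched in code. The paper's actual agent interfaces, joint split, curriculum thresholds, and reward shapes are not given in the abstract, so everything below (the `Agent` class, the tolerance values, the shaped reward) is an illustrative assumption, not the authors' implementation:

```python
import random

# Hypothetical four-level curriculum: each level tightens the goal tolerance,
# moving from rough positioning toward fine end-effector alignment.
CURRICULUM = [
    {"level": 1, "goal_tolerance": 0.50},
    {"level": 2, "goal_tolerance": 0.20},
    {"level": 3, "goal_tolerance": 0.10},
    {"level": 4, "goal_tolerance": 0.02},
]

def reward(distance, tolerance):
    """Level-specific shaped reward: dense distance penalty plus a success bonus."""
    return -distance + (10.0 if distance < tolerance else 0.0)

class Agent:
    """Placeholder policy controlling a subset of the arm's joints."""
    def __init__(self, n_joints):
        self.n_joints = n_joints

    def act(self):
        # Stand-in for a learned policy: small random joint displacements.
        return [random.uniform(-0.1, 0.1) for _ in range(self.n_joints)]

def train(episodes_per_level=5, steps_per_episode=50):
    global_agent = Agent(n_joints=3)  # coarse, whole-arm motion
    local_agent = Agent(n_joints=4)   # fine, multi-end-effector motion
    log = []
    for stage in CURRICULUM:
        for _ in range(episodes_per_level):
            distance = 1.0  # toy scalar proxy for end-effector-to-goal distance
            for _ in range(steps_per_episode):
                # Both agents act jointly on their own degrees of freedom.
                action = global_agent.act() + local_agent.act()
                distance = max(0.0, distance - 0.1 * sum(abs(a) for a in action))
            log.append((stage["level"], reward(distance, stage["goal_tolerance"])))
    return log

log = train()
```

The sketch only shows the control flow: two agents issuing actions on disjoint joint subsets, and the curriculum stepping through levels whose reward functions differ by tolerance. A real implementation would replace `Agent.act` with trained policies and the scalar `distance` with the Unity simulation state.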