Arrow Research search

Author name cluster

Heng You

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers

3

AAAI Conference 2024 Conference Paper

A Transfer Approach Using Graph Neural Networks in Deep Reinforcement Learning

  • Tianpei Yang
  • Heng You
  • Jianye Hao
  • Yan Zheng
  • Matthew E. Taylor

Transfer learning (TL) has shown great potential to improve Reinforcement Learning (RL) efficiency by leveraging prior knowledge in new tasks. However, much of the existing TL research focuses on transferring knowledge between tasks that share the same state-action spaces. Transferring from multiple source tasks with different state-action spaces is even more challenging, and solving it is essential for the generalization and practicality of TL in real-world scenarios. This paper proposes TURRET (Transfer Using gRaph neuRal nETworks), which exploits the generalization capabilities of Graph Neural Networks (GNNs) to enable efficient and effective multi-source policy transfer in the state-action mismatch setting. TURRET learns a semantic representation that captures the intrinsic properties of the agent through GNNs, yielding a unified state embedding space for all tasks. As a result, TURRET transfers more efficiently, generalizes strongly across different tasks, and can be easily combined with existing Deep RL algorithms. Experimental results show that TURRET significantly outperforms other TL methods on multiple continuous action control tasks, successfully transferring across robots with different state-action spaces.

AAMAS Conference 2023 Conference Paper

Transfer Learning based Agent for Automated Negotiation

  • Siqi Chen
  • Qisong Sun
  • Heng You
  • Tianpei Yang
  • Jianye Hao

Although great success has been achieved in automated negotiation, a major issue still stands out: learning a policy from scratch whenever an agent encounters an unknown opponent is inefficient. Transfer learning (TL) can alleviate this problem by utilizing the knowledge of previously learned policies to accelerate learning on the current task. This work presents a novel Transfer Learning-based Negotiating Agent (TLNAgent) framework that allows an autonomous agent to transfer knowledge from source policies to new tasks, boosting its performance. TLNAgent comprises three key components: the negotiation module, the adaptation module, and the transfer module. Specifically, the negotiation module is responsible for interacting with the other agent during negotiation. The adaptation module measures the helpfulness of each source policy based on a fusion of two selection mechanisms. The transfer module is based on lateral connections between source and target networks and accelerates the agent’s training by transferring knowledge from the selected source policy. Our comprehensive experiments clearly demonstrate that TL is effective in the context of automated negotiation, and that TLNAgent outperforms state-of-the-art negotiating agents in various domains.

UAI Conference 2022 Conference Paper

Cross-domain adaptive transfer reinforcement learning based on state-action correspondence

  • Heng You
  • Tianpei Yang
  • Yan Zheng 0002
  • Jianye Hao
  • Matthew E. Taylor

Despite impressive success in various domains, deep reinforcement learning (DRL) still suffers from sample inefficiency. Transfer learning (TL), which leverages prior knowledge from different but related tasks to accelerate learning on a target task, has emerged as a promising direction for improving RL efficiency. The majority of prior work considers TL across tasks with the same state-action spaces, while transferring across domains with different state-action spaces remains relatively unexplored. Furthermore, existing cross-domain transfer approaches enable transfer only from a single source policy, leaving open the important question of how best to transfer from multiple source policies. This paper proposes a novel framework called Cross-domain Adaptive Transfer (CAT) to accelerate DRL. CAT learns the state-action correspondence from each source task to the target task and adaptively transfers knowledge from multiple source task policies to the target policy. CAT can be easily combined with existing DRL algorithms, and experimental results show that it significantly accelerates learning and outperforms other cross-domain transfer methods on multiple continuous action control tasks.