
IROS 2021

Sample-efficient Reinforcement Learning Representation Learning with Curiosity Contrastive Forward Dynamics Model

Conference Paper · Accepted Paper · Artificial Intelligence · Robotics

Abstract

Developing a reinforcement learning (RL) agent capable of performing complex control tasks directly from high-dimensional observations such as raw pixels remains a challenge, as the sample efficiency and generalization of RL algorithms still need improvement. This paper presents a learning framework, the Curiosity Contrastive Forward Dynamics Model (CCFDM), for more sample-efficient RL directly from raw pixels. CCFDM incorporates a forward dynamics model (FDM) and performs contrastive learning to train its deep convolutional neural network-based image encoder (IE), extracting spatial and temporal information conducive to sample-efficient RL. In addition, during training, CCFDM provides intrinsic rewards based on the FDM prediction error, encouraging the curiosity of the RL agent and improving exploration. The diverse, less-repetitive observations provided by both this exploration strategy and the data augmentation used in contrastive learning improve not only sample efficiency but also generalization. Existing model-free RL methods such as Soft Actor-Critic, when built on top of CCFDM, outperform prior state-of-the-art pixel-based RL methods on the DeepMind Control Suite benchmark.
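The core curiosity mechanism described above can be sketched in a few lines: an image encoder maps observations to a latent space, a forward dynamics model predicts the next latent from the current latent and action, and the prediction error becomes an intrinsic reward added to the environment reward. The sketch below is a minimal NumPy illustration of that idea only; the encoder, FDM, shapes, and `scale` parameter here are hypothetical stand-ins, not the paper's actual CNN architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(obs, W):
    """Toy stand-in for the CNN image encoder (IE): flatten and project to a latent."""
    return np.tanh(obs.reshape(-1) @ W)

def fdm_predict(z, action, M):
    """Toy forward dynamics model (FDM): predict the next latent from (latent, action)."""
    return np.tanh(np.concatenate([z, action]) @ M)

def intrinsic_reward(z_next_pred, z_next, scale=1.0):
    """Curiosity bonus: FDM prediction error in latent space, as in the CCFDM idea."""
    return scale * float(np.mean((z_next_pred - z_next) ** 2))

# Hypothetical sizes: an 8x8 "image" observation, 4-dim latent, 2-dim action.
W = rng.normal(size=(64, 4))          # encoder weights
M = rng.normal(size=(6, 4))           # FDM weights (latent 4 + action 2 -> latent 4)
obs, next_obs = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
action = rng.normal(size=2)

z, z_next = encode(obs, W), encode(next_obs, W)
r_int = intrinsic_reward(fdm_predict(z, action, M), z_next)
# During training, r_int would be added to the extrinsic reward so that poorly
# predicted (novel) transitions attract the agent and improve exploration.
```

Observations the FDM predicts well yield a bonus near zero, while novel transitions yield a large bonus, which is what drives the exploration behavior the abstract refers to.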

Authors

Keywords

  • Training
  • Representation learning
  • Heuristic algorithms
  • Reinforcement learning
  • Predictive models
  • Frequency division multiplexing
  • Data mining
  • Forward Dynamics
  • Forward Dynamics Model
  • Data Augmentation
  • Temporal Information
  • Control Task
  • Forward Model
  • Sampling Efficiency
  • Self-supervised Learning
  • Reinforcement Learning Algorithm
  • Exploration Strategy
  • Intrinsic Rewards
  • Reinforcement Learning Agent
  • Model-free Reinforcement Learning
  • Raw Pixel
  • Image Encoder
  • Computer Vision
  • Batch Size
  • Transition Probabilities
  • Multilayer Perceptron
  • Contrastive Loss
  • Markov Decision Process
  • Model-based Reinforcement Learning
  • Handcrafted Features
  • Visual Observation
  • Extrinsic Rewards
  • Replay Buffer
  • Robotic Tasks
  • Current Observations
  • Large Batch Size

Context

Venue
IEEE/RSJ International Conference on Intelligent Robots and Systems
Archive span
1988-2025
Indexed papers
26578
Paper id
769141133498672456