
ICRA 2017

PLATO: Policy learning using adaptive trajectory optimization

Conference Paper · Accepted Paper · Artificial Intelligence · Robotics

Abstract

Policy search can in principle acquire complex strategies for control of robots and other autonomous systems. When the policy is trained to process raw sensory inputs, such as images and depth maps, it can also acquire a strategy that combines perception and control. However, effectively processing such complex inputs requires an expressive policy class, such as a large neural network. These high-dimensional policies are difficult to train, especially when learning to control safety-critical systems. We propose PLATO, a continuous, reset-free reinforcement learning algorithm that trains complex control policies with supervised learning, using model-predictive control (MPC) to generate the supervision, hence never in need of running a partially trained and potentially unsafe policy. PLATO uses an adaptive training method to modify the behavior of MPC to gradually match the learned policy in order to generate training samples at states that are likely to be visited by the learned policy. PLATO also maintains the MPC cost as an objective to avoid highly undesirable actions that would result from strictly following the learned policy before it has been fully trained. We prove that this type of adaptive MPC expert produces supervision that leads to good long-horizon performance of the resulting policy. We also empirically demonstrate that MPC can still avoid dangerous on-policy actions in unexpected situations during training. Our empirical results on a set of challenging simulated aerial vehicle tasks demonstrate that, compared to prior methods, PLATO learns faster, experiences substantially fewer catastrophic failures (crashes) during training, and often converges to a better policy.
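The training scheme described above — an adaptive MPC expert that labels each visited state with its own task-optimal action while executing an action biased toward the current learner — can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration: a 1-D linear system, a one-step "MPC" with a closed-form minimizer of the task cost plus a quadratic divergence from the learned policy (standing in for the KL regularization), and a linear policy fit by least squares. It is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D system: x' = x + a; task cost penalizes distance from 0.
def mpc_action(x, policy_theta=None, lam=0.0):
    """One-step 'MPC': minimize (x + a)^2 + lam * (a - policy(x))^2 over a."""
    if policy_theta is None or lam == 0.0:
        return -x                           # pure task-optimal action
    a_pi = policy_theta * x                 # learned linear policy's action
    return (-x + lam * a_pi) / (1.0 + lam)  # closed-form minimizer

theta = 0.0                                 # linear policy: a = theta * x
states, labels = [], []
for ep in range(20):
    x = rng.uniform(-1.0, 1.0)
    lam = ep / 19.0        # anneal MPC toward the learner over training
    for t in range(10):
        a_label = mpc_action(x)             # supervision: pure MPC label
        a_exec = mpc_action(x, theta, lam)  # executed: adaptive MPC action
        states.append(x)
        labels.append(a_label)
        x = x + a_exec
    # Supervised learning step: least-squares fit of a = theta * x.
    S, L = np.array(states), np.array(labels)
    theta = float(S @ L / (S @ S + 1e-8))
```

Because the labels always come from the safe MPC controller while the visited states increasingly match the learner's own distribution, the fitted policy converges to the task-optimal law `a = -x` without the partially trained policy ever choosing the executed action on its own.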

Authors

Keywords

  • Training
  • Neural networks
  • Supervised learning
  • Robots
  • Robustness
  • Trajectory optimization
  • Learning (artificial intelligence)
  • Policy Learning
  • Adaptive Optimization
  • Neural Network
  • Learning Algorithms
  • Model Predictive Control
  • Safety-critical
  • Raw Input
  • Catastrophic Failure
  • Prior Methods
  • Complex Policy
  • Policy Search
  • Time Step
  • Deep Neural Network
  • Training Time
  • Stationary Distribution
  • Coaching
  • Mean Time To Failure
  • Final Policy
  • Model Predictive Control Algorithm
  • Dataset Statistics
  • Non-stationary Environments
  • Equivalent Objective
  • Number Of Crashes
  • True State
  • Linear Velocity
  • Velocity Commands

Context

Venue
IEEE International Conference on Robotics and Automation
Archive span
1984-2025
Indexed papers
30179
Paper id
312850197784972998