
ICRA 2013

Learning objective functions for manipulation

Conference Paper · Artificial Intelligence · Robotics

Abstract

We present an approach to learning objective functions for robotic manipulation based on inverse reinforcement learning. Our path integral inverse reinforcement learning algorithm can deal with high-dimensional continuous state-action spaces, and only requires local optimality of demonstrated trajectories. We use L1 regularization in order to achieve feature selection, and propose an efficient algorithm to minimize the resulting convex objective function. We demonstrate our approach by applying it to two core problems in robotic manipulation. First, we learn a cost function for redundancy resolution in inverse kinematics. Second, we use our method to learn a cost function over trajectories, which is then used in optimization-based motion planning for grasping and manipulation tasks. Experimental results show that our method outperforms previous algorithms in high-dimensional settings.
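The abstract's key computational ingredient is minimizing a convex objective with an L1 penalty, which drives many feature weights to exactly zero and thereby performs feature selection. The paper's actual objective comes from path integral inverse reinforcement learning; as a hedged illustration only, the sketch below substitutes a simple least-squares loss and minimizes it with ISTA (iterative shrinkage-thresholding), a standard proximal-gradient method for L1-regularized convex problems. All names and data here are hypothetical, not from the paper.

```python
# Generic sketch of L1-regularized convex minimization via ISTA.
# NOT the paper's algorithm: a least-squares surrogate loss stands in
# to show how the L1 term zeroes out irrelevant feature weights.
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 (elementwise shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(Phi, y, lam, iters=500):
    # Minimize 0.5 * ||Phi w - y||^2 + lam * ||w||_1.
    # Step size = 1 / Lipschitz constant of the smooth part's gradient.
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    w = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = Phi.T @ (Phi @ w - y)          # gradient of the smooth loss
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy data: only the first two of ten features actually matter.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[:2] = [1.5, -2.0]
y = Phi @ w_true + 0.01 * rng.normal(size=100)

w = ista(Phi, y, lam=1.0)
print(np.round(w, 3))  # most weights are driven exactly to zero
```

In an IRL setting, `Phi` would hold feature values of demonstrated and sampled trajectories and `w` the learned cost weights; the sparsity induced by the L1 term is what selects the relevant cost features.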

Authors

Keywords

  • Trajectory
  • Cost function
  • Robots
  • Joints
  • Kinematics
  • Learning (artificial intelligence)
  • Objective Function
  • Learning Objective Function
  • Local Optimum
  • Path Planning
  • Manipulation Tasks
  • Robot Manipulator
  • Integration Of Learning
  • Inverse Kinematics
  • Inverse Reinforcement Learning
  • Optimization Problem
  • Optimization Method
  • Optimal Control
  • Global Optimization
  • Partial Differential Equations
  • Optimal Efficiency
  • Maximum Entropy
  • Joint Angles
  • Robotic Arm
  • End-effector
  • Cost Control
  • Trajectory Optimization
  • Policy Parameters
  • Optimal Control Problem
  • Joint Limits
  • Terminal Cost
  • Joint Acceleration
  • Joint Velocity
  • Destination Point

Context

Venue
IEEE International Conference on Robotics and Automation
Archive span
1984-2025
Indexed papers
30179
Paper id
307017327456748574