
IROS 2020

Learning User-Preferred Mappings for Intuitive Robot Control

Conference Paper Accepted Paper Artificial Intelligence · Robotics

Abstract

When humans control drones, cars, and robots, we often have some preconceived notion of how our inputs should make the system behave. Existing approaches to teleoperation typically take a one-size-fits-all approach, where the designers pre-define a mapping between human inputs and robot actions, and every user must adapt to this mapping over repeated interactions. Instead, we propose a personalized method for learning the human's preferred or preconceived mapping from a few robot queries. Given a robot controller, we identify an alignment model that transforms the human's inputs so that the controller's output matches their expectations. We make this approach data-efficient by recognizing that human mappings have strong priors: we expect the input space to be proportional, reversible, and consistent. Incorporating these priors ensures that the robot learns an intuitive mapping from few examples. We test our learning approach in robot manipulation tasks inspired by assistive settings, where each user has different personal preferences and physical capabilities for teleoperating the robot arm. Our simulated and experimental results suggest that learning the mapping between inputs and robot actions improves objective and subjective performance when compared to manually defined alignments or learned alignments without intuitive priors. The supplementary video showing these user studies can be found at: https://youtu.be/rKHka0_48-Q.
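The three priors the abstract names (proportionality, reversibility, and consistency) can each be written as an unsupervised loss term on the alignment model and added to the supervised L2 loss on the few queried examples. Below is a minimal sketch of that idea; the tiny MLP, the state conditioning, and all names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny MLP alignment model: maps a 2-D human input z
# (e.g., joystick deflection) and a 2-D robot state s to an action a.
W1 = rng.normal(scale=0.1, size=(4, 16))
b1 = rng.normal(scale=0.1, size=16)
W2 = rng.normal(scale=0.1, size=(16, 2))

def f(z, s):
    """Alignment model: transforms human input z, given robot state s."""
    x = np.concatenate([z, s], axis=-1)
    return np.tanh(x @ W1 + b1) @ W2

def prior_losses(z, s1, s2, alpha=0.5):
    a = f(z, s1)
    # Proportionality prior: scaling the input should scale the action.
    prop = np.mean((f(alpha * z, s1) - alpha * a) ** 2)
    # Reversibility prior: negating the input should reverse the action.
    rev = np.mean((f(-z, s1) + a) ** 2)
    # Consistency prior: the same input should produce the same action
    # across different robot states.
    cons = np.mean((f(z, s2) - a) ** 2)
    return prop, rev, cons

# Unlabeled inputs and state pairs suffice for the prior terms, which is
# what makes the approach data-efficient (semi-supervised).
z = rng.normal(size=(32, 2))
s1 = rng.normal(size=(32, 2))
s2 = rng.normal(size=(32, 2))
prop, rev, cons = prior_losses(z, s1, s2)
total = prop + rev + cons  # would be added to the supervised L2 loss
```

Because the prior terms need no human labels, they can be evaluated on arbitrary sampled inputs, so only a handful of robot queries are needed to pin down the mapping itself.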

Authors

Keywords

  • Impedance matching
  • Robot control
  • Transforms
  • Manipulators
  • Task analysis
  • Robots
  • Intelligent robots
  • Human Activities
  • User Study
  • Manipulation Tasks
  • Robotic Arm
  • Robot Manipulator
  • Human Input
  • Strong Prior
  • Indifference Curves
  • Control Input
  • Multilayer Perceptron
  • Robotic System
  • Complex Scenarios
  • System Input
  • Unlabeled Data
  • Markov Decision Process
  • Semi-supervised Learning
  • End-effector
  • Loss Term
  • Human Users
  • Robotic Assistance
  • Human Preferences
  • State s_t
  • Semi-supervised Model
  • Simulated Robot
  • End-effector Pose
  • End-effector Position
  • Robot State
  • L2 Loss
  • Stationary Distribution

Context

Venue
IEEE/RSJ International Conference on Intelligent Robots and Systems
Archive span
1988-2025
Indexed papers
26578
Paper id
152465207986276341