Arrow Research search

Author name cluster

Thomas Rühr

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
1 matching author name

Possible papers


ICRA 2019 · Conference Paper

Improving Data Efficiency of Self-supervised Learning for Robotic Grasping

  • Lars Berscheid
  • Thomas Rühr
  • Torsten Kröger

Given the task of learning robotic grasping solely from a depth camera input and gripper force feedback, we derive a learning algorithm from an applied point of view to significantly reduce the amount of required training data. Major improvements in time and data efficiency are achieved in two ways. First, we exploit the geometric consistency between the undistorted depth images and the task space: using a relatively small, fully-convolutional neural network, we predict grasp and gripper parameters with significant advantages in both training and inference performance. Second, motivated by the low random grasp success rate of around 3%, we explore the grasp space in a systematic manner. The final system was trained with 23,000 grasp attempts in around 60 h, improving on current solutions by an order of magnitude. For typical bin-picking scenarios, we measured a grasp success rate of (96.6 ± 1.0)%. Further experiments showed that the system is able to generalize and transfer knowledge to novel objects and environments.
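
As a quick sanity check on the figures quoted in the abstract, the training throughput and the evaluation sample size implied by the reported confidence interval can be estimated. This is a back-of-envelope sketch; `rate_per_hour` and `n_implied` are illustrative names, not quantities from the paper.

```python
import math

# Training throughput implied by the abstract's figures.
attempts = 23_000   # grasp attempts used for training
hours = 60          # approximate wall-clock training time
rate_per_hour = attempts / hours  # roughly 383 attempts per hour

# If the reported (96.6 ± 1.0)% were a binomial estimate with standard
# error se = sqrt(p * (1 - p) / n), the evaluation sample size would be:
p, se = 0.966, 0.010
n_implied = p * (1 - p) / se ** 2  # roughly 330 evaluation grasps
```

Whether the ±1.0% is a binomial standard error is an assumption here; the abstract does not state how the interval was computed.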

IROS 2013 · Conference Paper

Interactive environment exploration in clutter

  • Megha Gupta
  • Thomas Rühr
  • Michael Beetz
  • Gaurav S. Sukhatme

Robotic exploration of cluttered environments is a challenging problem. The number and variety of objects present not only make perception very difficult but also introduce many constraints on robot navigation and manipulation. In this paper, we investigate the idea of exploring a small, bounded environment (e.g., the shelf of a home refrigerator) by prehensile and non-prehensile manipulation of the objects it contains. The presence of multiple objects results in partial and occluded views of the scene. This inherent uncertainty in the scene's state forces the robot to adopt an observe-plan-act strategy and interleave planning with execution. Objects occupying the space and potentially occluding hidden objects are rearranged to reveal more of the unseen area. The environment is considered explored when the state (free or occupied) of every voxel in the volume is known. The presented algorithm can be easily adapted to real-world problems such as object search, taking inventory, and mapping. We evaluate our planner in simulation using metrics such as planning time, number of actions required, and length of the planning horizon. We then present an implementation on the PR2 robot and use it for object search in clutter.
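
The observe-plan-act loop described in the abstract can be sketched on a toy one-dimensional "shelf" viewed front-to-back, where each object occludes everything behind it. This is an illustrative simplification; `explore` and its cell encoding are hypothetical, not the paper's planner.

```python
def explore(shelf):
    """Explore a 1-D shelf of 'free'/'obj' cells, viewed front-to-back.

    Observe: cells are visible up to and including the first remaining
    object. Plan+act: relocate that occluder (one manipulation action),
    then re-observe. The shelf counts as explored when every cell's
    state is known. Returns the number of manipulation actions used.
    """
    shelf = list(shelf)
    known = [False] * len(shelf)
    actions = 0
    while not all(known):
        i = 0
        while i < len(shelf) and shelf[i] == "free":
            known[i] = True          # free cells up front are visible
            i += 1
        if i < len(shelf):
            known[i] = True          # the occluding object itself is seen
            shelf[i] = "free"        # act: move it out of the volume
            actions += 1
    return actions
```

In this toy model the action count equals the number of occluding objects; the paper's 3-D version must additionally choose *where* to relocate objects and plan over partial, uncertain views.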

ICRA 2012 · Conference Paper

A generalized framework for opening doors and drawers in kitchen environments

  • Thomas Rühr
  • Jürgen Sturm
  • Dejan Pangercic
  • Michael Beetz
  • Daniel Cremers

In this paper, we present a generalized framework for robustly operating previously unknown cabinets in kitchen environments. Our framework consists of four components: (1) a module for detecting both Lambertian and non-Lambertian (i.e., specular) handles, (2) a module for opening and closing novel cabinets using impedance control and for learning their kinematic models, (3) a module for storing and retrieving information about these objects in the map, and (4) a module for reliably operating cabinets whose kinematic model is known. The presented work is the result of a collaboration between three PR2 beta sites. We rigorously evaluated our approach on 29 cabinets in five real kitchens located at our institutions. These kitchens contained 13 drawers, 12 doors, 2 refrigerators, and 2 dishwashers. We evaluated the overall performance of detecting the handle of a novel cabinet, operating it, and storing its model in a semantic map, and found that our approach was successful in 51.9% of all 104 trials. With this work, we contribute a well-tested building block of open-source software for future robotic service applications.
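
One building block of such a framework, deciding whether a cabinet front is a drawer (prismatic joint) or a door (revolute joint) from the handle's observed trajectory, can be sketched as follows. This is a hypothetical stand-in for the paper's kinematic model learning; the function name and angle threshold are illustrative.

```python
import math

def classify_articulation(traj, angle_tol_deg=10.0):
    """Classify a planar handle trajectory (list of (x, y) points).

    A prismatic joint (drawer) keeps the motion direction constant; a
    revolute joint (door) rotates it. We compare the first and last
    motion directions and threshold the angle between them.
    """
    def direction(p, q):
        dx, dy = q[0] - p[0], q[1] - p[1]
        n = math.hypot(dx, dy)
        return (dx / n, dy / n)

    d_first = direction(traj[0], traj[1])
    d_last = direction(traj[-2], traj[-1])
    cos_ang = max(-1.0, min(1.0, d_first[0] * d_last[0] + d_first[1] * d_last[1]))
    angle = math.degrees(math.acos(cos_ang))
    return "prismatic" if angle < angle_tol_deg else "revolute"
```

A full system would fit joint axes and handle noisy 6-DoF observations; this two-endpoint test only illustrates the geometric distinction between the two joint types.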

IROS 2011 · Conference Paper

Autonomous semantic mapping for robots performing everyday manipulation tasks in kitchen environments

  • Nico Blodow
  • Lucian Cosmin Goron
  • Zoltán-Csaba Márton
  • Dejan Pangercic
  • Thomas Rühr
  • Moritz Tenorth
  • Michael Beetz

In this work we report on our efforts to equip service robots with the capability to acquire 3D semantic maps. The robot autonomously explores indoor environments by computing next-best-view poses, from which it assembles point clouds containing spatial and registered visual information. We apply various segmentation methods to generate initial hypotheses for furniture drawers and doors. The acquisition of the final semantic map makes use of the robot's proprioceptive capabilities and is carried out through the robot's interaction with the environment. We evaluated the proposed integrated approach in the real kitchen in our laboratory by measuring the quality of the generated map in terms of its applicability to the task at hand (e.g., resolving counter candidates with our knowledge processing system).
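
The next-best-view computation at the core of this exploration can be sketched as a greedy information-gain loop over candidate poses. Toy sets of visible voxel IDs stand in for ray-casting into a 3-D occupancy map; all names here are illustrative, not from the paper's implementation.

```python
def next_best_view(candidates, unknown):
    """candidates: {view_name: set of visible voxel IDs};
    unknown: set of still-unknown voxels.
    Returns the view that would reveal the most unknown voxels."""
    return max(candidates, key=lambda v: len(candidates[v] & unknown))

def plan_views(candidates, unknown):
    """Repeatedly take the next best view until no view reveals
    anything new; returns the visit order."""
    unknown = set(unknown)
    order = []
    while unknown:
        v = next_best_view(candidates, unknown)
        gain = candidates[v] & unknown
        if not gain:
            break  # remaining voxels are not visible from any pose
        order.append(v)
        unknown -= gain
    return order
```

Real next-best-view planners also weigh travel cost and reachability of each candidate pose; this sketch maximizes information gain alone.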