Arrow Research search

Author name cluster

Marius Fehr

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

9 papers
1 author row

Possible papers (9)

ICRA 2021 · Conference Paper

3D3L: Deep Learned 3D Keypoint Detection and Description for LiDARs

  • Dominic Streiff
  • Lukas Bernreiter
  • Florian Tschopp
  • Marius Fehr
  • Roland Siegwart

With the advent of powerful, light-weight 3D LiDARs, these sensors have become the heart of many navigation and SLAM algorithms on various autonomous systems. Pointcloud registration methods working with unstructured pointclouds such as ICP are often computationally expensive or require a good initial guess. Furthermore, 3D feature-based registration methods have never quite reached the robustness of 2D methods in visual SLAM. With the continuously increasing resolution of LiDAR range images, these 2D methods not only become applicable but should also exploit the illumination-independent modalities that come with them, such as depth and intensity. In visual SLAM, deep learned 2D features and descriptors perform exceptionally well compared to traditional methods. In this publication, we use a state-of-the-art 2D feature network as a basis for 3D3L, exploiting both intensity and depth of LiDAR range images to extract powerful 3D features. Our results show that these keypoints and descriptors extracted from LiDAR scan images outperform the state of the art on different benchmark metrics and allow for robust scan-to-scan alignment as well as global localization.

ICRA 2020 · Conference Paper

Hybrid Topological and 3D Dense Mapping through Autonomous Exploration for Large Indoor Environments

  • Clara Gómez
  • Marius Fehr
  • Alexander Millane
  • Alejandra C. Hernández
  • Juan I. Nieto 0001
  • Ramón Barber
  • Roland Siegwart

Robots require a detailed understanding of the 3D structure of the environment for autonomous navigation and path planning. A popular approach is to represent the environment using metric, dense 3D maps such as 3D occupancy grids. However, in large environments the computational power required by most state-of-the-art 3D dense mapping systems compromises precision and real-time capability. In this work, we propose a novel mapping method that is able to build and maintain 3D dense representations for large indoor environments using standard CPUs. Topological global representations and 3D dense submaps are maintained as a hybrid global map. Submaps are generated for every newly visited place. A place (room) is identified as an isolated part of the environment connected to other parts through transit areas (doors). This semantic partitioning of the environment allows for more efficient mapping and path planning. We also propose a method for autonomous exploration that directly builds the hybrid representation in real time. We validate the real-time performance of our hybrid system on simulated and real environments with respect to mapping and path planning. The improvements in execution time and memory requirements uphold the contribution of the proposed work.
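
The hybrid map the abstract describes — a topological graph of rooms, each carrying its own dense submap, linked by door (transit area) edges — can be caricatured as a small data structure. This is an illustrative sketch, not the paper's implementation; the class names, the dict-based occupancy submap, and the tuple door representation are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class RoomSubmap:
    """Dense 3D submap for one place (room); the voxel dict stands in
    for whatever dense representation the system actually uses."""
    name: str
    occupancy: dict = field(default_factory=dict)  # voxel index -> occupied?

@dataclass
class HybridMap:
    """Topological graph of rooms connected by transit areas (doors)."""
    rooms: dict = field(default_factory=dict)   # name -> RoomSubmap
    doors: list = field(default_factory=list)   # (room_a, room_b, door_pose)

    def add_room(self, name):
        # A new submap is created for every newly visited place.
        self.rooms[name] = RoomSubmap(name)
        return self.rooms[name]

    def connect(self, a, b, door_pose):
        self.doors.append((a, b, door_pose))

    def neighbours(self, name):
        # Topological adjacency: rooms reachable through one door.
        return [b if a == name else a
                for a, b, _ in self.doors if name in (a, b)]
```

Global path planning can then run on the room graph alone and only touch the dense submaps of the rooms actually traversed, which is where the claimed efficiency gain comes from.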

IROS 2018 · Conference Paper

History-Aware Autonomous Exploration in Confined Environments Using MAVs

  • Christian Witting
  • Marius Fehr
  • Rik Bähnemann
  • Helen Oleynikova
  • Roland Siegwart

Many scenarios require a robot to be able to explore its 3D environment online without human supervision. This is especially relevant for inspection tasks and search and rescue missions. To solve this high-dimensional path planning problem, sampling-based exploration algorithms have proven successful. However, these do not necessarily scale well to larger environments or spaces with narrow openings. This paper presents a 3D exploration planner based on the principles of Next-Best Views (NBVs). In this approach, a Micro-Aerial Vehicle (MAV) equipped with a limited field-of-view depth sensor randomly samples its configuration space to find promising future viewpoints. In order to obtain high sampling efficiency, our planner maintains and uses a history of visited places, and locally optimizes the robot's orientation with respect to unobserved space. We evaluate our method in several simulated scenarios and compare it against a state-of-the-art exploration algorithm. The experiments show substantial improvements in exploration time (2× faster), computation time, and path length, and advantages in handling difficult situations such as escaping dead-ends (up to 20× faster). Finally, we validate the online capability of our algorithm on a computationally constrained real-world MAV.
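
The history-aware next-best-view selection sketched in the abstract can be caricatured as scoring sampled viewpoints by information gain while discounting candidates near previously visited poses. The gain model, the linear penalty, and all names below are illustrative assumptions, not the paper's actual planner.

```python
import math

def pick_next_view(candidates, unobserved_gain, history, penalty_radius=2.0):
    """Toy NBV selection: return the sampled viewpoint with the highest
    gain, discounted when it lies close to a pose in the visit history.

    candidates: iterable of (x, y) viewpoints sampled from config space.
    unobserved_gain: callable estimating how much unknown space a
        viewpoint would reveal (a stand-in for the real gain function).
    history: list of previously visited (x, y) poses.
    """
    def score(view):
        gain = unobserved_gain(view)
        # Distance to the closest visited pose; far from history -> no penalty.
        nearest = min((math.dist(view, h) for h in history),
                      default=float("inf"))
        if nearest < penalty_radius:
            gain *= nearest / penalty_radius  # linearly discount revisits
        return gain

    return max(candidates, key=score)
```

The history term is what lets such a planner escape dead-ends: once a region is in the history, freshly sampled views there score poorly and the search is pushed toward genuinely new space.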

IROS 2018 · Conference Paper

Incremental Object Database: Building 3D Models from Multiple Partial Observations

  • Fadri Furrer
  • Tonci Novkovic
  • Marius Fehr
  • Abel Gawel
  • Margarita Grinvald
  • Torsten Sattler
  • Roland Siegwart
  • Juan I. Nieto 0001

Collecting 3D object datasets involves a large amount of manual work and is time-consuming. Getting complete models of objects either requires a 3D scanner that covers all the surfaces of an object, or one needs to rotate it to observe it completely. We present a system that incrementally builds a database of objects as a mobile agent traverses a scene. Our approach requires no prior knowledge of the shapes present in the scene. Object-like segments are extracted from a global segmentation map, which is built online using the input of segmented RGB-D images. These segments are stored in a database, matched among each other, and merged with other previously observed instances. This allows us to create and improve object models on the fly and to use these merged models to also reconstruct unobserved parts of the scene. The database contains each (potentially merged) object model only once, together with a set of poses where it was observed. We evaluate our pipeline on one public dataset and on a newly created Google Tango dataset containing four indoor scenes, with some of the objects appearing multiple times both within and across scenes.
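
The store-match-merge loop at the heart of the abstract can be sketched with a toy database keyed by segment descriptors. Cosine similarity, descriptor averaging, and the threshold are simplifying assumptions for illustration; the paper matches and merges full 3D segments, not flat feature vectors.

```python
import numpy as np

def insert_segment(db, descriptor, pose, match_thresh=0.8):
    """Toy incremental object database.

    Match a new segment descriptor against stored models by cosine
    similarity; on a match, merge into the existing entry and record
    the observation pose, otherwise create a new entry.
    db: list of [descriptor, [poses]] entries, mutated in place.
    """
    d = descriptor / np.linalg.norm(descriptor)
    for entry in db:
        ref = entry[0] / np.linalg.norm(entry[0])
        if float(d @ ref) > match_thresh:
            # Merge: blend descriptors, keep every pose the object was seen at.
            entry[0] = (entry[0] + descriptor) / 2.0
            entry[1].append(pose)
            return entry
    db.append([descriptor, [pose]])
    return db[-1]
```

Storing each merged model once with its list of observation poses mirrors the abstract's claim that the database can re-instantiate one model at every place it was seen, including partially observed ones.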

ICRA 2018 · Conference Paper

Topomap: Topological Mapping and Navigation Based on Visual SLAM Maps

  • Fabian Blöchliger
  • Marius Fehr
  • Marcin Dymczyk
  • Thomas Schneider 0007
  • Roland Siegwart

Visual robot navigation within large-scale, semi-structured environments deals with various challenges such as computationally intensive path planning algorithms or insufficient knowledge about traversable spaces. Moreover, many state-of-the-art navigation approaches only operate locally instead of gaining a more conceptual understanding of the planning objective. This limits the complexity of tasks a robot can accomplish and makes it harder to deal with the uncertainties that are present in the context of real-time robotics applications. In this work, we present Topomap, a framework which simplifies the navigation task by providing the robot with a map tailored for path planning. This novel approach transforms a sparse feature-based map from a visual Simultaneous Localization And Mapping (SLAM) system into a three-dimensional topological map. This is done in two steps. First, we extract occupancy information directly from the noisy sparse point cloud. Then, we create a set of convex free-space clusters, which are the vertices of the topological map. We show that this representation improves the efficiency of global planning, and we provide a complete derivation of our algorithm. Planning experiments on real-world datasets demonstrate that we achieve performance similar to RRT* with significantly lower computation times and storage requirements. Finally, we test our algorithm on a mobile robotic platform to prove its advantages.
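
Once the convex free-space clusters exist, the global planning step the abstract credits with its speedup reduces to shortest-path search over the cluster adjacency graph. A minimal sketch, assuming a plain Dijkstra search and an adjacency-list graph whose edge costs approximate transit cost between clusters (both assumptions, not details from the paper):

```python
import heapq

def plan_over_clusters(adj, start, goal):
    """Dijkstra over a topological map whose vertices are convex
    free-space clusters.

    adj: {vertex: [(neighbour, cost), ...]} adjacency list.
    Returns the cluster sequence from start to goal.
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, v = heapq.heappop(pq)
        if v == goal:
            break
        if d > dist.get(v, float("inf")):
            continue  # stale queue entry
        for n, c in adj.get(v, []):
            nd = d + c
            if nd < dist.get(n, float("inf")):
                dist[n] = nd
                prev[n] = v
                heapq.heappush(pq, (nd, n))
    # Walk the predecessor chain back from the goal.
    path, v = [goal], goal
    while v != start:
        v = prev[v]
        path.append(v)
    return path[::-1]
```

Because the graph has one vertex per cluster rather than one per sampled state, the search space is tiny compared to sampling-based planners like RRT*, which is consistent with the lower computation times the abstract reports.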

IROS 2018 · Conference Paper

Visual-Inertial Teach and Repeat Powered by Google Tango

  • Marius Fehr
  • Thomas Schneider 0007
  • Roland Siegwart

Many industrial facilities require periodic visual inspections. Often the points of interest are out of reach or in a potentially hazardous environment. Multi-copters are ideal platforms to automate this expensive and tedious task. This video presents a system that enables a human operator to teach a visual inspection task to an autonomous aerial vehicle by simply demonstrating the task using a tablet. The system employs the Google Tango visual-inertial mapping framework as the only source of pose estimates, thus enabling operation in GPS-denied environments. In a first step, the operator records the desired inspection path using the tablet. Inspection points are automatically inserted if the operator pauses, holding a viewpoint. The mapping framework then computes a feature-based localization map, which is shared with the robot. After take-off, the robot estimates its pose based on this map and plans a smooth trajectory through the waypoints defined by the operator. Furthermore, the system is able to track the global pose of other robots or the operator, localized in the same map, and follow them in real time while avoiding collisions. This is demonstrated in the second part of the video, where the robot follows the operator in real time through a hedge maze.

ICRA 2017 · Conference Paper

TSDF-based change detection for consistent long-term dense reconstruction and dynamic object discovery

  • Marius Fehr
  • Fadri Furrer
  • Ivan Dryanovski
  • Jürgen Sturm
  • Igor Gilitschenski
  • Roland Siegwart
  • Cesar Cadena 0001

Robots that are operating for extended periods of time need to be able to deal with changes in their environment and represent them adequately in their maps. In this paper, we present a novel 3D reconstruction algorithm based on an extended Truncated Signed Distance Function (TSDF) that enables continuous refinement of the static map while simultaneously obtaining 3D reconstructions of dynamic objects in the scene. This is a challenging problem because map updates happen incrementally and are often incomplete. Previous work typically performs change detection on point clouds, surfels or maps, which are not able to distinguish between unexplored and empty space. In contrast, our TSDF-based representation naturally contains this information and thus allows us to solve the scene differencing problem more robustly. We demonstrate the algorithm's performance as part of a system for unsupervised object discovery and class recognition. We evaluated our algorithm on challenging datasets that we recorded over several days with RGB-D enabled tablets. To stimulate further research in this area, all of our datasets are publicly available.
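
The advantage the abstract claims — that a TSDF can tell unexplored space apart from observed empty space — can be illustrated with a toy per-voxel comparison of two TSDF grids. The weight semantics (weight 0 means never observed) and the change threshold are illustrative assumptions, not the paper's actual scene-differencing algorithm.

```python
import numpy as np

def tsdf_change_mask(dist_a, w_a, dist_b, w_b, thresh=0.2):
    """Flag voxels whose signed distance changed between two TSDF grids.

    dist_*: per-voxel truncated signed distances of the two maps.
    w_*:    per-voxel integration weights; weight 0 = never observed.
    A point-cloud or surfel map has no equivalent of weight 0, so it
    cannot skip unexplored voxels the way this comparison does.
    """
    observed_in_both = (w_a > 0) & (w_b > 0)
    changed = np.abs(dist_a - dist_b) > thresh
    return observed_in_both & changed
```

Voxels flagged by such a mask would be candidates for the dynamic-object reconstructions the paper extracts, while unflagged observed voxels continue refining the static map.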

IROS 2017 · Conference Paper

Voxblox: Incremental 3D Euclidean Signed Distance Fields for on-board MAV planning

  • Helen Oleynikova
  • Zachary Taylor
  • Marius Fehr
  • Roland Siegwart
  • Juan I. Nieto 0001

Micro Aerial Vehicles (MAVs) that operate in unstructured, unexplored environments require fast and flexible local planning, which can replan when new parts of the map are explored. Trajectory optimization methods fulfill these needs, but require obstacle distance information, which can be given by Euclidean Signed Distance Fields (ESDFs). We propose a method to incrementally build ESDFs from Truncated Signed Distance Fields (TSDFs), a common implicit surface representation used in computer graphics and vision. TSDFs are fast to build, smooth out sensor noise over many observations, and are designed to produce surface meshes. We show that we can build TSDFs faster than Octomaps, and that it is more accurate to build ESDFs out of TSDFs than out of occupancy maps. Our complete system, called voxblox, is available as open source and runs in real time on a single CPU core. We validate our approach on board an MAV by using our system with a trajectory optimization local planner, entirely on-board and in real time.
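
The core idea — growing a full distance field outward from the narrow band where the TSDF is valid — can be sketched as a brushfire-style wavefront expansion on a 2D grid. This is a deliberately simplified illustration, not the voxblox algorithm: it uses 4-connected propagation (so the result is a quasi-Euclidean approximation), ignores the sign, and treats NaN as unobserved, all of which are assumptions.

```python
from collections import deque

import numpy as np

def esdf_from_tsdf(tsdf, trunc, voxel=1.0):
    """Brushfire sketch: propagate distances outward from the TSDF band.

    tsdf:  2D array of truncated signed distances (NaN = unobserved).
    trunc: truncation radius; voxels with |tsdf| < trunc seed the ESDF,
           since there the TSDF already equals the true distance.
    """
    esdf = np.full(tsdf.shape, np.inf)
    queue = deque()
    # Seed with voxels inside the truncation band.
    for idx in zip(*np.where(np.abs(tsdf) < trunc)):
        esdf[idx] = abs(tsdf[idx])
        queue.append(idx)
    # Wavefront expansion: each 4-neighbour is one voxel farther out.
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < tsdf.shape[0] and 0 <= nj < tsdf.shape[1]:
                cand = esdf[i, j] + voxel
                if cand < esdf[ni, nj]:
                    esdf[ni, nj] = cand
                    queue.append((ni, nj))
    return esdf
```

A trajectory-optimization planner can then query this field for obstacle distance (and, via finite differences, its gradient) at any voxel, which is the role the ESDF plays in the system described above.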

ICRA 2016 · Conference Paper

Reshaping our model of the world over time

  • Marius Fehr
  • Marcin Dymczyk
  • Simon Lynen
  • Roland Siegwart

An accurate estimate of the 3D structure of the environment is key to robotic applications such as autonomous inspection, obstacle avoidance and manipulation. Recent years have seen substantial algorithmic advances towards creating highly accurate models of small objects as well as large-scale architectural structures. Most commonly, a rich set of images covering a static scene is used to jointly estimate the pose of the cameras and the observed 3D structure. For many practical applications, however, the assumption of static scenes and sufficient coverage by images does not hold. In fact, for industrial inspection the change in the scene is of most interest, and the limited resources on mobile platforms do not allow for extensive data captures. In this paper, we investigate the potential of combining multiple independent captures of a place to selectively reconstruct a scene over time. We propose an incremental reconstruction algorithm which identifies and fuses novel data into a joint model of the scene. Being able to identify changing parts of the scene is particularly interesting for mobile applications where bandwidth, storage and processing power are limited. Through detailed experiments, we show the potential of our approach to use multiple mobile devices to reconstruct and update a model of the static part of the environment over time.