Arrow Research search

Author name cluster

Christopher Rasmussen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

13 papers
2 author rows

Possible papers


IROS Conference 2014 Conference Paper

Perception and control strategies for driving utility vehicles with a humanoid robot

  • Christopher Rasmussen
  • Kiwon Sohn
  • Qiaosong Wang
  • Paul Y. Oh

This paper describes the hardware and software components of a general-purpose humanoid robot system for autonomously driving several different types of utility vehicles. The robot recognizes which vehicle it is in, localizes itself with respect to the dashboard, and self-aligns in order to interface with the steering wheel and accelerator pedal. Low- and higher-level methods are presented for speed control, environment perception, and trajectory planning and following suitable for operation in planar areas with discrete obstacles as well as along road-like paths.

IROS Conference 2012 Conference Paper

Simplified markov random fields for efficient semantic labeling of 3D point clouds

  • Yan Lu
  • Christopher Rasmussen

In this paper, we focus on 3D point cloud classification by assigning a semantic label to each point in the scene. We propose simplified Markov networks to model the contextual relations between points, where the node potentials are computed from point-wise classification results using off-the-shelf classifiers such as Random Forests and Support Vector Machines, and the edge potentials are set by the physical distance between points. Our experimental results show that this approach yields results comparable to, if not better than, state-of-the-art methods, with improved speed. We also propose a novel robust neighborhood filtering method that excludes outliers in a point's neighborhood, both to reduce noise in local geometric statistics when extracting features and to reduce the number of false edges when constructing Markov networks. We show that applying robust neighborhood filtering improves results when classifying point clouds with more object categories.
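Not the paper's implementation, but a minimal sketch of the kind of pairwise energy such a simplified Markov network defines: node potentials from per-point classifier probabilities, and edge potentials that penalize label disagreement with a weight discounted by inter-point distance. The function name, the Gaussian distance weighting, and the Potts-style disagreement penalty are all assumptions for illustration.

```python
import math

def labeling_energy(probs, points, edges, labels, sigma=0.5):
    """Energy of a candidate labeling in a simplified pairwise Markov network.

    probs[i][c]  - classifier probability that point i has class c (node term)
    points[i]    - 3D coordinates of point i
    edges        - list of (i, j) index pairs connecting nearby points
    labels[i]    - candidate class for point i
    Lower energy = more plausible labeling.
    """
    # Node potentials: negative log-probability of each assigned class.
    node = sum(-math.log(probs[i][labels[i]]) for i in range(len(labels)))
    # Edge potentials: penalize disagreeing labels on nearby point pairs,
    # with the penalty falling off as the points get farther apart.
    pair = 0.0
    for i, j in edges:
        if labels[i] != labels[j]:
            d = math.dist(points[i], points[j])
            pair += math.exp(-d * d / (2 * sigma ** 2))
    return node + pair
```

Under this energy, a labeling that agrees with both the classifiers and the spatial smoothness prior scores lower than one that flips a confident point's label.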

IROS Conference 2011 Conference Paper

Integrating stereo structure for omnidirectional trail following

  • Christopher Rasmussen
  • Yan Lu
  • Mehmet Kemal Kocamaz

We describe a system which follows “trails” for autonomous outdoor robot navigation. Through a combination of appearance and structural cues derived from stereo omnidirectional color cameras, the algorithm is able to detect and track rough paths despite widely varying tread material, border vegetation, and illumination conditions. The approaching trail region is modeled as a circular arc segment of constant width. Using likelihood formulations which measure color, brightness, and/or height contrast between a hypothetical region and flanking areas, the tracker performs a robust randomized search for the most likely trail region and robot pose relative to it with no a priori appearance model. The addition of the structural information, which is derived from a semi-global dense stereo algorithm with ground-plane fitting, is shown to improve trail segmentation accuracy and provide an additional layer of safety beyond solely ladar-based obstacle avoidance. Our system's ability to follow a variety of trails is demonstrated through live runs as well as analysis of offline runs on several long sequences with diverse appearance and structural characteristics using ground-truth segmentations.
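As a rough illustration of the robust randomized search the abstract mentions, the sketch below samples arc hypotheses (curvature, width, lateral offset) and keeps the highest-scoring one under a caller-supplied likelihood. The parameter ranges, sample count, and scoring interface are illustrative assumptions, not the paper's.

```python
import random

def best_trail_hypothesis(score, n_samples=500, seed=0):
    """Randomized search over circular-arc trail hypotheses.

    score(curvature, width, offset) -> likelihood of the arc-shaped
    trail region those parameters describe (supplied by the caller,
    e.g. a color/brightness/height contrast measure).
    """
    rng = random.Random(seed)
    best, best_s = None, float("-inf")
    for _ in range(n_samples):
        h = (rng.uniform(-0.5, 0.5),   # arc curvature (1/m)
             rng.uniform(0.5, 3.0),    # trail width (m)
             rng.uniform(-1.0, 1.0))   # robot lateral offset (m)
        s = score(*h)
        if s > best_s:
            best, best_s = h, s
    return best, best_s
```

Because no a priori appearance model is assumed, everything trail-specific lives in the `score` function; the search itself only needs plausible parameter ranges.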

IROS Conference 2010 Conference Paper

Trail following with omnidirectional vision

  • Christopher Rasmussen
  • Yan Lu
  • Mehmet Kemal Kocamaz

We describe a system which follows “trails” for autonomous outdoor robot navigation. Through a combination of visual cues provided by stereo omnidirectional color cameras and ladar-based structural information, the algorithm is able to detect and track rough paths despite widely varying tread material, border vegetation, and illumination conditions. The approaching trail region is simply modeled as a circular arc of constant width. Using an adaptive measure of color and brightness contrast between a hypothetical region and flanking areas, the tracker performs a robust randomized search for the most likely trail region and robot pose relative to it with no a priori appearance model. Stereo visual odometry improves tracker dynamics on uneven terrain and permits local obstacle map maintenance. A motion planner is also described which takes the trail shape estimate and local map to plan smooth trajectories around in-trail and near-trail hazards. Our system's performance is analyzed on several long sequences with diverse appearance and structural characteristics using ground-truth segmentations.

IROS Conference 2009 Conference Paper

Appearance contrast for fast, robust trail-following

  • Christopher Rasmussen
  • Yan Lu
  • Mehmet Kemal Kocamaz

We describe a framework for finding and tracking “trails” for autonomous outdoor robot navigation. Through a combination of visual cues and ladar-derived structural information, the algorithm is able to follow paths which pass through multiple zones of terrain smoothness, border vegetation, tread material, and illumination conditions. Our shape-based visual trail tracker assumes that the approaching trail region is approximately triangular under perspective. It generates region hypotheses from a learned distribution of expected trail width and curvature variation, and scores them using a robust measure of color and brightness contrast with flanking regions. The structural component analogously rewards hypotheses which correspond to empty or low-density regions in a groundstrike-filtered ladar obstacle map. Our system's performance is analyzed on several long sequences with diverse appearance and structural characteristics. Ground-truth segmentations are used to quantify performance where available, and several alternative algorithms are compared on the same data.

IROS Conference 2008 Conference Paper

Shape-guided superpixel grouping for trail detection and tracking

  • Christopher Rasmussen
  • Donald Scott

We describe a framework for detecting and tracking continuous “trails” in images and image sequences for autonomous robot navigation. Continuous trails are extended regions along the ground such as roads, hiking paths, rivers, and pipelines which can be navigationally useful for ground-based or aerial robots. Our approach to single-image trail segmentation incorporates both bottom-up and top-down processes. First, good grouping hypotheses are efficiently generated by probabilistic clustering of superpixels based on color similarity. Second, hypotheses are robustly ranked with an objective function comprising shape, appearance, and deformation terms. The shape term measures how well a triangle, the approximate template for a trail viewed under perspective, can be fit to the grouping’s boundary. The appearance term reflects the visual contrast between the grouping and its surroundings using a between-class/within-class scatter measure. Finally, the deformation term measures the closeness of the fitted triangle to a learned distribution which captures expected size, location, and other degrees of shape variation. Although trail detection is accurate and reasonably fast on a variety of isolated images, we describe how introducing temporal filtering to both the bottom-up and top-down stages increases segmentation accuracy and per-frame speed over image sequences. Results are shown on varied sequences collected from flying and driving platforms, as well as images sampled from the Web.
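A between-class/within-class scatter measure of the kind the appearance term uses could be sketched as a Fisher-style ratio on scalar appearance values; the actual features and normalization in the paper may differ, and the function name here is an assumption.

```python
def scatter_contrast(inside, outside):
    """Between-class / within-class scatter between a candidate trail
    grouping and its surroundings.

    inside, outside - scalar appearance values (e.g. a color channel)
    sampled from the grouping and from the flanking regions.
    Higher ratio = the grouping stands out more from its surroundings.
    """
    mu_in = sum(inside) / len(inside)
    mu_out = sum(outside) / len(outside)
    mu = (sum(inside) + sum(outside)) / (len(inside) + len(outside))
    # Between-class scatter: how far each group's mean sits from the pooled mean.
    between = len(inside) * (mu_in - mu) ** 2 + len(outside) * (mu_out - mu) ** 2
    # Within-class scatter: how spread out each group is internally.
    within = (sum((v - mu_in) ** 2 for v in inside)
              + sum((v - mu_out) ** 2 for v in outside))
    return between / (within + 1e-9)
```

Two well-separated, internally uniform regions score high; intermixed values score near zero, which is what lets the ranking prefer groupings whose boundary coincides with a real appearance edge.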

ICRA Conference 2006 Conference Paper

A Hybrid Vision + Ladar Rural Road Follower

  • Christopher Rasmussen

We present a vision- and ladar-based approach to autonomous driving on rural and desert roads that has been tested extensively in a closed-loop system. The vision component uses Gabor wavelet filters for texture analysis to find ruts and tracks from which the road vanishing point can be inferred via Hough-style voting, yielding a direction estimate for steering control. The ladar component projects detected obstacles along the road direction onto the plane of the front of the vehicle and tracks the 1-D obstacle "gap" due to the road to yield a lateral offset estimate. Several image- and state-based tests to detect failure conditions such as off-road poses (i.e., there is no road to follow) and poor lighting due to sun glare or distracting shadows are also explained. The system's efficacy is demonstrated with full control of a vehicle over 10+ miles of difficult roads at up to 25 mph, as well as analysis of logged data in diverse situations.
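Hough-style voting for a vanishing point can be sketched as follows: each textured image location casts votes along its dominant orientation ray into a coarse accumulator grid, and the fullest cell approximates where the ruts and tracks converge. The grid resolution, ray-marching scheme, and all names below are illustrative assumptions, not the paper's implementation.

```python
import math

def vanishing_point(edgels, grid_w, grid_h, step=0.5, reach=200):
    """Hough-style vanishing-point voting.

    edgels - list of (x, y, theta): an image location with a dominant
    texture orientation theta in radians (as a Gabor filter bank might
    supply).  Each edgel votes once for every accumulator cell its
    orientation ray passes through; the cell with the most votes is
    returned as the vanishing-point estimate.
    """
    votes = [[0] * grid_w for _ in range(grid_h)]
    for x, y, theta in edgels:
        dx, dy = math.cos(theta), math.sin(theta)
        seen = set()  # dedupe so each edgel votes at most once per cell
        for t in range(1, reach):
            cx = round(x + dx * t * step)
            cy = round(y + dy * t * step)
            if 0 <= cx < grid_w and 0 <= cy < grid_h:
                seen.add((cx, cy))
        for cx, cy in seen:
            votes[cy][cx] += 1
    return max(((cx, cy) for cy in range(grid_h) for cx in range(grid_w)),
               key=lambda c: votes[c[1]][c[0]])
```

The appeal of the voting formulation is robustness: individual noisy orientation estimates only misplace single votes, while the true convergence point accumulates support from many rays.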

IROS Conference 2005 Conference Paper

Randomized view planning and occlusion removal for mosaicing building facades

  • Christopher Rasmussen
  • Thommen Korah
  • William Ulrich

We present two key parts of an ongoing robotic, vision-based architectural modeling project. The first component is a randomized approach to view planning for a single ground robot scanning a building perimeter to recover a series of texture map mosaics. This algorithm generates paths that simultaneously address coverage and quality (i.e., real-valued distance and foreshortening factors). The second part is a technique for "cleaning" the captured texture maps in the presence of occlusions caused by trees, signs, people, and other foreground objects. When such occlusions comprise a minority of views a background feature can be recovered via temporal median filtering, but when they are in the majority, appearance information from other visible portions of the facade provides a critical cue to correctly complete the mosaic. We describe a novel spatiotemporal timeline-based inpainting algorithm that identifies and corrects such areas.
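The temporal median filtering step mentioned in the abstract can be illustrated with a per-pixel median over registered views; this is a generic sketch of the technique, not the paper's code, and assumes the frames are already aligned.

```python
def median_composite(frames):
    """Per-pixel temporal median over registered views of a facade.

    frames - list of equally sized 2D grids (rows of pixel values).
    When foreground occluders (trees, signs, people) cover a pixel in
    only a minority of views, the median recovers the background value.
    """
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = sorted(f[y][x] for f in frames)
            out[y][x] = vals[len(vals) // 2]  # middle value survives outliers
    return out
```

This is also why the majority-occluded case needs the separate inpainting machinery: once occluders dominate the stack, the median returns the occluder, not the facade.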

ICRA Conference 2002 Conference Paper

Combining Laser Range, Color, and Texture Cues for Autonomous Road Following

  • Christopher Rasmussen

We describe results on combining depth information from a laser range-finder and color and texture image cues to segment ill-structured dirt, gravel, and asphalt roads as input to an autonomous road following system. A large number of registered laser and camera images were captured at frame-rate on a variety of rural roads, allowing laser features such as 3-D height and smoothness to be correlated with image features such as color histograms and Gabor filter responses. A small set of road models was generated by training separate neural networks on labeled feature vectors clustered by road "type." By first classifying the type of a novel road image, an appropriate second-stage classifier was selected to segment individual pixels, achieving a high degree of accuracy on arbitrary images from the dataset. Segmented images combined with laser range information and the vehicle's inertial navigation data were used to construct 3-D maps suitable for path planning.
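The two-stage structure the abstract describes (classify the road type first, then dispatch to a type-specific per-pixel segmenter) can be sketched as below. The classifiers are supplied by the caller; in the paper they are neural networks trained on clustered feature vectors, and all names here are assumptions.

```python
def segment_road(image_features, pixel_features, type_clf, pixel_clfs):
    """Two-stage road segmentation.

    type_clf(image_features) -> road "type" key (e.g. 'dirt', 'asphalt')
    pixel_clfs[type]         -> per-pixel classifier for that road type
    pixel_features           -> per-pixel feature vectors to segment
    Returns the inferred road type and per-pixel road/non-road labels.
    """
    # Stage 1: pick the road model that matches this image overall.
    road_type = type_clf(image_features)
    # Stage 2: segment pixels with the classifier specialized to that type.
    clf = pixel_clfs[road_type]
    return road_type, [clf(f) for f in pixel_features]
```

Specializing the second stage by road type lets each pixel classifier stay simple: a dirt-road model never has to explain asphalt pixels, and vice versa.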

ICRA Conference 2002 Conference Paper

Feature Detection and Tracking for Mobile Robots using a Combination of Ladar and Color Images

  • Tsai Hong
  • Tommy Chang
  • Christopher Rasmussen
  • Michael Shneier

In an outdoor, off-road mobile robotics environment, it is important to identify objects that can affect the vehicle's ability to traverse its planned path, and to determine their three-dimensional characteristics. In the paper, a combination of three elements is used to accomplish this task. An imaging ladar collects range images of the scene. A color camera, whose position relative to the ladar is known, is used to gather color images. Information extracted from these sensors is used to build a world model, a representation of the current state of the world. The world model is used actively in the sensing to predict what should be visible in each of the sensors during the next imaging cycle. The paper explains how the combined use of these three types of information leads to a robust understanding of the local environment surrounding the robotic vehicle for two important tasks: puddle/pond avoidance and road sign detection.

IROS Conference 1998 Conference Paper

Joint probabilistic techniques for tracking objects using multiple visual cues

  • Christopher Rasmussen
  • Gregory D. Hager

Robots relying on vision as a primary sensor frequently need to track common objects such as people, cars, and tools in order to successfully perform autonomous navigation or grasping tasks. These objects may comprise many visual parts and attributes, yet image-based tracking algorithms are often keyed to only one of a target's identifying characteristics. In this paper, we present a framework for sharing information among disparate state estimation processes operating on the same underlying visual object. Well-known techniques for joint probabilistic data association are adapted to yield increased robustness when multiple trackers attuned to different visual cues are deployed simultaneously. We also formulate a measure of tracker confidence, based on distinctiveness and occlusion probability, which permits the deactivation of trackers before erroneous state estimates adversely affect the ensemble. We will discuss experiments using color-region- and snake-based tracking in tandem that demonstrate the efficacy of this approach.
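The tracker-deactivation idea (drop a cue before its erroneous estimates corrupt the ensemble) can be illustrated with a toy confidence gate. The multiplicative combination of distinctiveness and occlusion probability and the threshold value are assumptions for illustration, not the paper's formula.

```python
def tracker_confidence(distinctiveness, occlusion_prob):
    """Toy confidence measure: high when the tracked cue is distinctive
    in the current image and unlikely to be occluded."""
    return distinctiveness * (1.0 - occlusion_prob)

def active_trackers(trackers, threshold=0.3):
    """Keep only trackers whose confidence clears the threshold, so a
    failing cue cannot drag down the joint state estimate.

    trackers - list of dicts with 'distinctiveness' and 'occlusion' keys.
    """
    return [t for t in trackers
            if tracker_confidence(t["distinctiveness"], t["occlusion"]) >= threshold]
```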

ICRA Conference 1997 Conference Paper

Image-based prediction of landmark features for mobile robot navigation

  • Gregory D. Hager
  • David J. Kriegman
  • Erliang Yeh
  • Christopher Rasmussen

We have been developing an architecture for vision-based navigation which relies on continuous feedback from visual "landmarks" to control robot motion. In this approach, landmarks are consistently located and acquired as they come into view. To make this process efficient and robust, it is important that the image locations of these features can be predicted from available image information. In this article, we discuss methods for direct image-based prediction of point and line features for a mobile system operating on a planar surface. Preliminary experimental results suggest that image-based prediction can be performed efficiently and with sufficient accuracy to ensure robust acquisition of navigational landmarks.

AAAI Conference 1996 Conference Paper

Robot Navigation Using Image Sequences

  • Christopher Rasmussen

We describe a framework for robot navigation that exploits the continuity of image sequences. Tracked visual features both guide the robot and provide predictive information about subsequent features to track. Our hypothesis is that image-based techniques will allow accurate motion without a precise geometric model of the world, while using predictive information will add speed and robustness. A basic component of our framework is called a scene, which is the set of image features stable over some segment of motion. When the scene changes, it is appended to a stored sequence. As the robot moves, correspondences and dissimilarities between current, remembered, and expected scenes provide cues to join and split scene sequences, forming a map-like directed graph. Visual servoing on features in successive scenes is used to traverse a path between robot and goal map locations. In our framework, a human guide serves as a scene recognition oracle during a map-learning phase; thereafter, assuming a known starting position, the robot can independently determine its location without general scene recognition ability. A prototype implementation of this framework uses color patches, sum-of-squared differences (SSD) subimages, or image projections of rectangles as features.