Arrow Research search

Author name cluster

Brian Coltin

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

21 papers
2 author rows

Possible papers

21

ICRA Conference 2025 Conference Paper

AstroLoc2: Fast Sequential Depth-Enhanced Localization for Free-Flying Robots

  • Ryan Soussan
  • Marina Moreira 0001
  • Brian Coltin
  • Trey Smith

We present AstroLoc2, a monocular and time-of-flight (ToF) visual-inertial graph-based localizer used by the Astrobee free-flying robots on the International Space Station (ISS). AstroLoc2 sequentially performs odometry and absolute localization in a single process to decouple map noise from velocity and IMU bias estimation and run efficiently on resource-constrained platforms. It improves monocular visual-inertial odometry robustness by adding ToF correspondence factors and uses adaptive map-matching to increase image registration reliability in dynamic environments while preserving fast matching in static ones. We evaluate the performance of AstroLoc2 on a public dataset of 10 ISS activities and show that it improves localization accuracy by 16% and success rates by 5.5% while maintaining a faster runtime than leading methods. AstroLoc2 has enabled the Astrobee robots to perform higher precision maneuvers in changing environments on the ISS. It can be configured for other limited-computation platforms and we release the source code to the public.

ICRA Conference 2024 Conference Paper

An Investigation of Multi-feature Extraction and Super-resolution with Fast Microphone Arrays

  • Eric T. Chang
  • Runsheng Wang
  • Peter Ballentine
  • Jingxi Xu 0002
  • Trey Smith
  • Brian Coltin
  • Ioannis Kymissis
  • Matei Ciocarlie

In this work, we use MEMS microphones as vibration sensors to simultaneously classify texture and estimate contact position and velocity. Vibration sensors are an important facet of both human and robotic tactile sensing, providing fast detection of contact and onset of slip. Microphones are an attractive option for implementing vibration sensing as they offer a fast response and can be sampled quickly, are affordable, and occupy a very small footprint. Our prototype sensor uses only a sparse array (8-9 mm spacing) of distributed MEMS microphones (<$1, 3.76×2.95×1.10 mm) embedded under an elastomer. We use transformer-based architectures for data analysis, taking advantage of the microphones’ high sampling rate to run our models on time-series data as opposed to individual snapshots. This approach allows us to obtain 77.3% average accuracy on 4-class texture classification (84.2% when excluding the slowest drag velocity), 1.8 mm mean error on contact localization, and 5.6 mm/s mean error on contact velocity. We show that the learned texture and localization models are robust to varying velocity and generalize to unseen velocities. We also report that our sensor provides fast contact detection, an important advantage of fast transducers. This investigation illustrates the capabilities one can achieve with a MEMS microphone array alone, leaving valuable sensor real estate available for integration with complementary tactile sensing modalities.

ICRA Conference 2022 Conference Paper

AstroLoc: An Efficient and Robust Localizer for a Free-flying Robot

  • Ryan Soussan
  • Varsha Kumar
  • Brian Coltin
  • Trey Smith

We present AstroLoc, an efficient and robust monocular visual-inertial graph-based localization system used by the Astrobee free-flying robots onboard the International Space Station (ISS). We provide a novel localization system that limits the traditionally higher computation times of graph-based localization systems and enables the resource-constrained Astrobee robots to benefit from their increased accuracy. We also introduce methods for handling cheirality issues for visual odometry and localization factors that further increase localization robustness. We evaluate the performance of AstroLoc on a dataset of ISS activities and show that it greatly improves pose, velocity, and IMU bias estimation accuracy while efficiently running in a limited computation environment. AstroLoc has improved the localization accuracy for the Astrobee robots on the ISS and has led to more successful and longer duration activities. While the AstroLoc system is tuned for the Astrobee robots, it can be configured for any resource-constrained platform. The source code for AstroLoc is released to the public.

ICRA Conference 2022 Conference Paper

Robust Semantic Mapping and Localization on a Free-Flying Robot in Microgravity

  • Ian D. Miller
  • Ryan Soussan
  • Brian Coltin
  • Trey Smith
  • Vijay Kumar 0001

We propose a system that uses semantic object detections to localize a microgravity free-flyer. Many applications require absolute localization in a known reference frame, such as the execution of waypoint trajectories defined by human operators. Classical geometric methods build a map of point features, which may become impossible to associate after lighting or environmental changes. By contrast, semantics remain invariant to such changes, up to the robustness of the detection algorithm and the motion of the semantic objects themselves. In this work, we describe our approaches for both offline semantic map generation and online localization against a semantic map, intended to run in real time on the robot. We additionally demonstrate how our semantic localizer outperforms image-feature matching in some cases, and show the robustness of the algorithm to environmental changes. Crucially, we show in our experiments that when semantics are used to supplement point features, localization is always improved. To our knowledge, these experiments demonstrate the first use of learned semantics for localization on a free-flying robot in microgravity.

IROS Conference 2021 Conference Paper

A Multi-Axis FBG-Based Tactile Sensor for Gripping in Space

  • Samuel Frishman
  • Julia Di
  • Zulekha Karachiwalla
  • Richard J. Black
  • Kian Moslehi
  • Trey Smith
  • Brian Coltin
  • Bijan Moslehi

Tactile sensing can improve end-effector control and grasp quality, especially for free-flying robots where target approach and alignment present particular challenges. However, many current tactile sensing technologies are not suitable for the harsh environment of space. We present a tactile sensor that measures normal and biaxial shear strains in the pads of a gripper using a single optical fiber with Bragg grating (FBG) sensors. Compared to conventional wired solutions, the encapsulated optical fibers are immune to electromagnetic interference, a critical property in space. Sampling is possible at over 1 kHz to detect dynamic events. We mount sensor pads on a custom two-fingered gripper with independent control of the distal and proximal phalanges, allowing for grip readjustment based on sensing data. Calibrated sensor data for forces match those from a commercial multiaxial load cell with an average of 96.2% RMS agreement across all taxels. We demonstrate the gripper on tasks motivated by the Astrobee free-flying robots on the International Space Station (ISS): gripping corners, detecting misaligned grasps, and improving load sharing over the contact areas in pinch grasps.

IROS Conference 2021 Conference Paper

Online Information-Aware Motion Planning with Inertial Parameter Learning for Robotic Free-Flyers

  • Monica Ekal
  • Keenan Albee
  • Brian Coltin
  • Rodrigo M. M. Ventura
  • Richard Linares
  • David W. Miller

Space free-flyers like the Astrobee robots currently operating aboard the International Space Station must operate with inherent system uncertainties. Parametric uncertainties like mass and moment of inertia are especially important to quantify in these safety-critical space systems and can change in scenarios such as on-orbit cargo movement, where unknown grappled payloads significantly change the system dynamics. Cautiously learning these uncertainties en route can potentially avoid time- and fuel-consuming pure system identification maneuvers. Recognizing this, this work proposes RATTLE, an online information-aware motion planning algorithm that explicitly weights parametric model learning, coupled with a real-time replanning capability that can take advantage of improved system models. The method consists of a two-tiered (global and local) planner, a low-level model predictive controller, and an online parameter estimator that produces estimates of the robot’s inertial properties for more informed control and replanning on the fly; all levels of planning and control feature models that can be updated online. Simulation results of RATTLE for the Astrobee free-flyer grappling an uncertain payload are presented alongside results of a hardware demonstration showcasing the ability to explicitly encourage parametric model learning while achieving otherwise useful motion.

AAAI Conference 2020 Short Paper

Search Tree Pruning for Progressive Neural Architecture Search (Student Abstract)

  • Deanna Flynn
  • P. Michael Furlong
  • Brian Coltin

Our neural architecture search algorithm progressively searches a tree of neural network architectures. Child nodes are created by inserting new layers, determined by a transition graph, into a parent network up to a maximum depth, and are pruned when their performance is worse than the parent’s. This increases efficiency but makes the algorithm greedy. Simpler networks are successfully found before more complex ones, achieving benchmark performance similar to other top-performing networks.
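The pruned progressive search the abstract describes can be sketched as a toy example. The transition graph and scoring function below are hypothetical stand-ins; a real system would train and evaluate each candidate network rather than use a closed-form score:

```python
# Toy sketch of pruned progressive tree search over architectures:
# children extend a parent by one layer allowed by a transition graph,
# and a child is discarded when its score does not beat its parent's.

TRANSITIONS = {            # hypothetical transition graph: layer -> next layers
    "start": ["conv"],
    "conv": ["conv", "pool"],
    "pool": ["conv", "dense"],
    "dense": [],
}

def score(arch):
    # Stand-in objective: reward conv layers, lightly penalize depth.
    return arch.count("conv") - 0.1 * len(arch)

def search(max_depth=4):
    best, best_score = [], float("-inf")
    stack = [([], "start", float("-inf"))]   # (arch, last layer, parent score)
    while stack:
        arch, last, parent_score = stack.pop()
        s = score(arch) if arch else float("-inf")
        if arch and s <= parent_score:
            continue                          # prune: no better than parent
        if s > best_score:
            best, best_score = arch, s
        if len(arch) < max_depth:
            for nxt in TRANSITIONS[last]:
                stack.append((arch + [nxt], nxt, s))
    return best, best_score

best, best_score = search()
```

The pruning makes the search greedy exactly as the abstract notes: a subtree is abandoned as soon as one extension fails to improve on its parent, even if a deeper descendant would have recovered.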

IROS Conference 2018 Conference Paper

HTC Vive: Analysis and Accuracy Improvement

  • Miguel Borges
  • Andrew Symington
  • Brian Coltin
  • Trey Smith
  • Rodrigo M. M. Ventura

The HTC Vive has been gaining attention as a cost-effective, off-the-shelf tracking system for collecting ground-truth pose data. We assess this system's pose estimation through a series of controlled experiments, showing its precision to be on the millimeter scale while its accuracy ranges from millimeters to meters. We also show that the Vive gives greater weight to inertial measurements in order to produce a smooth trajectory for virtual reality applications. Hence, the Vive's off-the-shelf algorithm is poorly suited for robotics applications such as measuring ground-truth poses, where accuracy and repeatability are key. We therefore introduce a new open-source tracking algorithm and calibration procedure for the Vive which address these problems, and show that our approach improves pose estimation repeatability and accuracy by up to two orders of magnitude.

ICRA Conference 2018 Conference Paper

Low-Drift Visual Odometry in Structured Environments by Decoupling Rotational and Translational Motion

  • Pyojin Kim
  • Brian Coltin
  • H. Jin Kim

We present a low-drift visual odometry algorithm that separately estimates rotational and translational motion from lines, planes, and points found in RGB-D images. Previous methods estimate drift-free rotational motion from structural regularities to reduce drift in the rotation estimate, which is the primary source of positioning inaccuracy in visual odometry. However, multiple orthogonal planes are required to be visible throughout the entire motion estimation process; otherwise, these VO approaches fail. We propose a new approach to estimate drift-free rotational motion jointly from both lines and planes by exploiting environmental regularities. We track the spatial regularities with an efficient SO(3)-manifold constrained mean shift algorithm. Once the drift-free rotation is found, we recover the translational motion from all tracked points with and without depth by minimizing the de-rotated reprojection error. We compare the proposed algorithm to other state-of-the-art visual odometry methods on a variety of RGB-D datasets (including especially challenging pure rotations) and demonstrate improved accuracy and lower drift error.

ICRA Conference 2017 Conference Paper

Robust visual localization in changing lighting conditions

  • Pyojin Kim
  • Brian Coltin
  • Oleg Alexandrov
  • H. Jin Kim

We present an illumination-robust visual localization algorithm for Astrobee, a free-flying robot designed to autonomously navigate on the International Space Station (ISS). Astrobee localizes with a monocular camera and a pre-built sparse map composed of natural visual features. Astrobee must perform tasks not only during the day, but also at night when the ISS lights are dimmed. However, the localization performance degrades when the observed lighting conditions differ from the conditions when the sparse map was built. We investigate and quantify the effect of lighting variations on visual feature-based localization systems, and discover that maps built in darker conditions can also be effective in bright conditions, but the reverse is not true. We extend Astrobee's localization algorithm to make it more robust to changing-light environments on the ISS by automatically recognizing the current illumination level, and selecting an appropriate map and camera exposure time. We extensively evaluate the proposed algorithm through experiments on Astrobee.
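The map- and exposure-selection idea can be sketched as follows. The thresholds, map names, and exposure values are invented for illustration and are not Astrobee's actual configuration:

```python
# Sketch of illumination-aware map selection: classify the current lighting
# from mean image brightness, then pick the map and camera exposure built
# for the closest condition. All names and numbers are hypothetical.

MAPS = {
    "dark":   {"map": "iss_map_night", "exposure_ms": 30},
    "normal": {"map": "iss_map_day",   "exposure_ms": 10},
    "bright": {"map": "iss_map_day",   "exposure_ms": 3},
}

def illumination_level(pixels, dark_below=60, bright_above=180):
    """Classify an 8-bit grayscale image by its mean intensity."""
    mean = sum(pixels) / len(pixels)
    if mean < dark_below:
        return "dark"
    if mean > bright_above:
        return "bright"
    return "normal"

def select_config(pixels):
    """Return the map and exposure matched to the observed lighting."""
    return MAPS[illumination_level(pixels)]

cfg = select_config([40] * 100)   # a dim image selects the night-built map
```

Note how the mapping encodes the abstract's finding: the map built in darker conditions serves as the fallback, since it also generalizes to brighter ones.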

IROS Conference 2016 Conference Paper

Localization from visual landmarks on a free-flying robot

  • Brian Coltin
  • Jesse Fusco
  • Zachary Moratto
  • Oleg Alexandrov
  • Robert Nakamura

We present the localization approach for Astrobee, a new free-flying robot designed to navigate autonomously on the International Space Station (ISS). Astrobee will accommodate a variety of payloads and enable guest scientists to run experiments in zero-g, as well as assist astronauts and ground controllers. Astrobee will replace the SPHERES robots which currently operate on the ISS, whose use of fixed ultrasonic beacons for localization limits them to work in a 2 meter cube. Astrobee localizes with monocular vision and an IMU, without any environmental modifications. Visual features detected on a pre-built map, optical flow information, and IMU readings are all integrated into an extended Kalman filter (EKF) to estimate the robot pose. We introduce several modifications to the filter to make it more robust to noise, and extensively evaluate the localization algorithm.
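A minimal sketch of the prediction/update cycle such a filter performs, reduced to one dimension: IMU-style acceleration drives the prediction, and a landmark-based position fix drives the update. This is illustrative only; the actual Astrobee EKF estimates full 6-DoF pose and IMU biases and fuses optical flow as well:

```python
# 1-D constant-velocity Kalman filter: IMU acceleration in the predict step,
# a visual-landmark position fix in the update step. Pure-Python sketch.

def predict(x, v, P, accel, dt, q=1e-3):
    """Propagate state (position x, velocity v) with a measured acceleration."""
    x = x + v * dt + 0.5 * accel * dt * dt
    v = v + accel * dt
    p00, p01, p10, p11 = P
    # P <- F P F^T + Q with F = [[1, dt], [0, 1]]
    return x, v, (p00 + dt * (p01 + p10) + dt * dt * p11 + q,
                  p01 + dt * p11,
                  p10 + dt * p11,
                  p11 + q)

def update(x, v, P, z, r=1e-2):
    """Correct the state with a position fix z (H = [1, 0])."""
    p00, p01, p10, p11 = P
    s = p00 + r                  # innovation covariance
    k0, k1 = p00 / s, p10 / s    # Kalman gain
    y = z - x                    # innovation
    x, v = x + k0 * y, v + k1 * y
    # P <- (I - K H) P
    return x, v, ((1 - k0) * p00, (1 - k0) * p01,
                  p10 - k1 * p00, p11 - k1 * p01)

x, v, P = 0.0, 0.0, (1.0, 0.0, 0.0, 1.0)   # start at rest, uncertain
for _ in range(50):
    x, v, P = predict(x, v, P, accel=0.0, dt=0.1)
    x, v, P = update(x, v, P, z=1.0)        # landmarks repeatedly say 1.0
```

With repeated consistent fixes, the estimate converges to the measured position, which is the behavior the filter relies on when visual features are available.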

IJCAI Conference 2015 Conference Paper

CoBots: Robust Symbiotic Autonomous Mobile Service Robots

  • Manuela Veloso
  • Joydeep Biswas
  • Brian Coltin
  • Stephanie Rosenthal

We research and develop autonomous mobile service robots as Collaborative Robots, i.e., CoBots. For the last three years, our four CoBots have autonomously navigated our multi-floor office buildings for more than 1,000 km, as the result of integrating multiple perceptual, cognitive, and actuation representations and algorithms. In this paper, we identify a few core aspects of our CoBots underlying their robust functionality. Reliable mobility in varying indoor environments comes from a novel episodic non-Markov localization. Service tasks requested by users are the input to a scheduler that can consider different types of constraints, including transfers among multiple robots. With symbiotic autonomy, the CoBots proactively seek external sources of help to fill in for their inevitable occasional limitations. We present sampled results from a deployment and conclude with a brief review of other features of our service robots.

ICRA Conference 2014 Conference Paper

Online pickup and delivery planning with transfers for mobile robots

  • Brian Coltin
  • Manuela Veloso

We have deployed a fleet of robots that pick up and deliver items requested by users in an office building. Users specify time windows in which the items should be picked up and delivered, and send in requests online. Our goal is to form a schedule which picks up and delivers the items as quickly as possible at the lowest cost. We introduce an auction-based scheduling algorithm which plans to transfer items between robots to make deliveries more efficiently. The algorithm can obey either hard or soft time constraints. We discuss how to replan in response to newly requested items, cancelled requests, delayed robots, and robot failures. We demonstrate the effectiveness of our approach through execution on robots, and examine the effect of transfers on large simulated problems.
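The auction step can be sketched as below: each robot bids its marginal cost of inserting the new pickup and delivery into its route, and the lowest bid wins. Routes here are 1-D positions for brevity; time windows, transfers, and replanning are omitted, and all names are illustrative:

```python
# Auction-based assignment sketch: the winning robot is the one whose route
# cost increases least when the new pickup/delivery pair is inserted.

def route_cost(route):
    """Total travel distance of visiting the stops in order."""
    return sum(abs(b - a) for a, b in zip(route, route[1:]))

def marginal_cost(route, pickup, delivery):
    """Cheapest cost increase from inserting pickup, then delivery.

    Insertion starts at index 1 so the robot's current position (route[0])
    stays first, and the delivery is always placed after the pickup.
    """
    base = route_cost(route)
    best = float("inf")
    for i in range(1, len(route) + 1):
        for j in range(i, len(route) + 1):
            trial = route[:i] + [pickup] + route[i:j] + [delivery] + route[j:]
            best = min(best, route_cost(trial) - base)
    return best

def auction(robots, pickup, delivery):
    """Assign the request to the robot with the lowest marginal-cost bid."""
    bids = {name: marginal_cost(route, pickup, delivery)
            for name, route in robots.items()}
    return min(bids, key=bids.get)

robots = {"cobot1": [0, 2], "cobot2": [10, 12]}
winner = auction(robots, pickup=9, delivery=11)
```

Here `cobot2` wins: the request sits almost on its existing route, so its detour bid is far smaller than `cobot1`'s.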

IROS Conference 2014 Conference Paper

Ridesharing with passenger transfers

  • Brian Coltin
  • Manuela Veloso

Recently, ridesharing mobile applications, which dynamically match passengers to drivers, have begun to gain popularity. These services have the potential to fill empty seats in cars, reduce emissions and enable more efficient transportation. Ridesharing services become even more practical as robotic cars become available to do all the driving. In this work, we propose rideshare services which transfer passengers between multiple drivers. By planning for transfers, we can increase the availability and range of the rideshare service, and also reduce the total vehicular miles travelled by the network. We propose three heuristic algorithms to schedule rideshare routes with transfers. Each gives a tradeoff in terms of effectiveness and computational cost. We demonstrate these tradeoffs, both in simulation and on data from taxi passengers in San Francisco. We demonstrate scenarios where transferring passengers can provide a significant advantage.

AAAI Conference 2014 Conference Paper

Scheduling for Transfers in Pickup and Delivery Problems with Very Large Neighborhood Search

  • Brian Coltin
  • Manuela Veloso

In pickup and delivery problems (PDPs), vehicles pick up and deliver a set of items under various constraints. We address the PDP with Transfers (PDP-T), in which vehicles plan to transfer items between one another to form more efficient schedules. We introduce the Very Large Neighborhood Search with Transfers (VLNS-T) algorithm to form schedules for the PDP-T. Our approach allows multiple transfers for items at arbitrary locations, and is not restricted to a set of predefined transfer points. We show that VLNS-T improves upon the best known PDP solutions for benchmark problems, and demonstrate its effectiveness on problems sampled from real world taxi data in New York City.
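The core neighborhood move of a very large neighborhood search can be sketched for a single vehicle with 1-D stops: remove k stops and greedily reinsert each at its cheapest position, keeping the result if the route improves. This illustrates the search move only; the paper's VLNS-T additionally handles multiple vehicles, time windows, and transfers:

```python
# Very-large-neighborhood move sketch: destroy (remove k stops) and
# repair (cheapest reinsertion), accepting only improving routes.
from itertools import combinations

def cost(route):
    """Total travel distance of visiting the stops in order."""
    return sum(abs(b - a) for a, b in zip(route, route[1:]))

def cheapest_insert(route, stop):
    """Insert stop at the position that minimizes total route cost."""
    trials = [route[:i] + [stop] + route[i:] for i in range(len(route) + 1)]
    return min(trials, key=cost)

def vlns_step(route, k=2):
    """Try every k-subset removal + greedy reinsertion; keep the best route."""
    best = route
    for removed in combinations(route, k):
        trial = [s for s in route if s not in removed]
        for stop in removed:
            trial = cheapest_insert(trial, stop)
        if cost(trial) < cost(best):
            best = trial
    return best

route = [0, 9, 1, 8, 2, 7]          # zig-zag route, cost 35
for _ in range(3):
    route = vlns_step(route)        # repeated moves untangle the route
```

On this toy instance the move recovers the optimal monotone ordering; in the paper's setting the repair step would also consider handing removed items to other vehicles via transfers.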

AAMAS Conference 2013 Conference Paper

Scheduling Mobile Exploration Tasks for Environment Learning

  • Max Korein
  • Brian Coltin
  • Manuela Veloso

Autonomous mobile service robots navigate in their environments in order to perform tasks requested by users. We envision service robots learning about their environment by scheduling exploration tasks in which they seek out new knowledge and using this knowledge to improve the services they offer. We present the Task Graph algorithm, which chooses times for user requests based on the robot’s knowledge so as to increase the chance of success, and schedules exploration tasks in between user requests by reducing the problem to a graph search.

AAMAS Conference 2013 Conference Paper

Towards Ridesharing with Passenger Transfers

  • Brian Coltin
  • Manuela Veloso

Ridesharing services have the potential to fill empty seats in cars, reduce emissions and enable more efficient transportation. We propose rideshare services which transfer passengers between multiple drivers. By planning for transfers, we increase the availability and range of the rideshare service, and reduce the total vehicular miles travelled by the network. We propose three heuristics to schedule rideshare routes with transfers. Each provides a tradeoff in effectiveness and computational cost. We demonstrate these tradeoffs and the advantage of transfers in simulation.

IROS Conference 2012 Conference Paper

CoBots: Collaborative robots servicing multi-floor buildings

  • Manuela Veloso
  • Joydeep Biswas
  • Brian Coltin
  • Stephanie Rosenthal
  • Thomas Kollar
  • Çetin Meriçli
  • Mehdi Samadi
  • Susana Brandão

In this video we briefly illustrate the progress and contributions made with our mobile, indoor, service robots CoBots (Collaborative Robots), since their creation in 2009. Many researchers, present authors included, aim for autonomous mobile robots that robustly perform service tasks for humans in our indoor environments. The efforts towards this goal have been numerous and successful, and we build upon them. However, there are clearly many research challenges remaining until we can experience intelligent mobile robots that are fully functional and capable in our human environments.

IROS Conference 2011 Conference Paper

Corrective gradient refinement for mobile robot localization

  • Joydeep Biswas
  • Brian Coltin
  • Manuela Veloso

Particle filters for mobile robot localization must balance computational requirements and accuracy of localization. Increasing the number of particles in a particle filter improves accuracy, but also increases the computational requirements. Hence, we investigate a different paradigm to better utilize particles than to increase their numbers. To this end, we introduce the Corrective Gradient Refinement (CGR) algorithm that uses the state space gradients of the observation model to improve accuracy while maintaining low computational requirements. We develop an observation model for mobile robot localization using point cloud sensors (LIDAR and depth cameras) with vector maps. This observation model is then used to analytically compute the state space gradients necessary for CGR. We show experimentally that the resulting complete localization algorithm is more accurate than the Sampling/Importance Resampling Monte Carlo Localization algorithm, while requiring fewer particles.
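The refinement idea can be illustrated in one dimension: instead of adding particles, nudge each particle uphill on the observation likelihood using its analytic gradient. A Gaussian observation model stands in for the paper's vector-map point-cloud model; everything below is a hypothetical sketch, not the CGR algorithm itself:

```python
# Gradient-refinement sketch: a few gradient-ascent steps on the observation
# log-likelihood concentrate a small particle set near the likely pose,
# instead of covering the same region with many more particles.
import random

def grad_log_likelihood(x, z, sigma=0.5):
    """Gradient of the log of a Gaussian observation model centered at z."""
    return -(x - z) / sigma ** 2

def refine(particles, z, step=0.05, iters=10):
    """Move each particle a few gradient steps toward the likelihood peak."""
    refined = []
    for x in particles:
        for _ in range(iters):
            x += step * grad_log_likelihood(x, z)
        refined.append(x)
    return refined

random.seed(0)
observation = 3.0                                   # likelihood peaks here
particles = [random.uniform(0.0, 6.0) for _ in range(100)]
refined = refine(particles, observation)
spread_before = max(particles) - min(particles)
spread_after = max(refined) - min(refined)
```

After refinement the particle spread shrinks by roughly an order of magnitude, which is why the same accuracy can be reached with far fewer particles.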

AAAI Conference 2011 Conference Paper

Multi-Observation Sensor Resetting Localization with Ambiguous Landmarks

  • Brian Coltin
  • Manuela Veloso

Successful approaches to the robot localization problem include Monte Carlo particle filters, which estimate non-parametric localization belief distributions. However, particle filters fare poorly at determining the robot’s position without a good initial hypothesis. This problem has been addressed for robots that sense visual landmarks with sensor resetting, by performing sensor-based resampling when the robot is lost. For robots that make sparse, ambiguous and noisy observations, standard sensor resetting places new location hypotheses across a wide region, in positions that may be inconsistent with previous observations. We propose Multi-Observation Sensor Resetting, where observations from multiple frames are merged to generate new hypotheses more effectively. We demonstrate experimentally in the robot soccer domain on the NAO humanoid robots that Multi-Observation Sensor Resetting converges more efficiently to the robot’s true position than standard sensor resetting, and is more robust to systematic vision errors.
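The merging idea can be sketched in one dimension with range-only, ambiguous landmark observations: each frame yields several candidate positions, and only candidates that remain consistent across frames (after compensating for odometry) survive. This is a hypothetical illustration of cross-frame consistency, not the paper's algorithm:

```python
# Multi-frame hypothesis merging sketch: intersect candidate positions from
# two ambiguous observations, shifted by the odometry between frames.

def candidates(landmarks, measured_dist):
    """Ambiguous range observation: the robot is measured_dist from *some* landmark."""
    out = []
    for lm in landmarks:
        out.extend([lm - measured_dist, lm + measured_dist])
    return out

def merge(cands_a, cands_b, odom, tol=0.1):
    """Keep frame-A candidates that reappear in frame B after moving odom."""
    return [a for a in cands_a
            if any(abs((a + odom) - b) <= tol for b in cands_b)]

landmarks = [0.0, 10.0]
frame_a = candidates(landmarks, 3.0)   # robot truly at 3.0
frame_b = candidates(landmarks, 4.0)   # after moving +1.0, truly at 4.0
merged = merge(frame_a, frame_b, odom=1.0)
```

A single frame yields four candidate positions; merging two frames cuts this to the two that are geometrically consistent, so resampled hypotheses land only where previous observations allow.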

IROS Conference 2010 Conference Paper

Mobile robot task allocation in hybrid wireless sensor networks

  • Brian Coltin
  • Manuela Veloso

Hybrid sensor networks consisting of both inexpensive static wireless sensors and highly capable mobile robots have the potential to monitor large environments at a low cost. To do so, an algorithm is needed to assign tasks to mobile robots which minimizes communication among the static sensors in order to extend the lifetime of the network. We present three algorithms to solve this task allocation problem: a centralized algorithm, an auction-based algorithm, and a novel distributed algorithm utilizing a spanning tree over the static sensors to assign tasks. We compare the assignment quality and communication costs of these algorithms experimentally. Our experiments show that at a small cost in assignment quality, the distributed tree-based algorithm significantly extends the lifetime of the static sensor network.