Arrow Research search

Author name cluster

Jonathan Kelly

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

27 papers
2 author rows

Possible papers

27

AAAI Conference 2026 Conference Paper

Generative Graphical Inverse Kinematics (Abstract Reprint)

  • Oliver Limoyo
  • Filip Marić
  • Matthew Giamou
  • Petra Alexson
  • Ivan Petrović
  • Jonathan Kelly

Quickly and reliably finding accurate inverse kinematics (IK) solutions remains a challenging problem for many robot manipulators. Existing numerical solvers are broadly applicable but typically only produce a single solution and rely on local search techniques to minimize nonconvex objective functions. More recent learning-based approaches that approximate the entire feasible set of solutions have shown promise as a means to generate multiple fast and accurate IK results in parallel. However, existing learning-based techniques have a significant drawback: each robot of interest requires a specialized model that must be trained from scratch. To address this key shortcoming, we propose a novel distance-geometric robot representation coupled with a graph structure that allows us to leverage the sample efficiency of Euclidean equivariant functions and the generalizability of graph neural networks (GNNs). Our approach is generative graphical inverse kinematics (GGIK), the first learned IK solver able to accurately and efficiently produce a large number of diverse solutions in parallel while also displaying the ability to generalize -- a single learned model can be used to produce IK solutions for a variety of different robots. When compared to several other learned IK methods, GGIK provides more accurate solutions with the same amount of data. GGIK can generalize reasonably well to robot manipulators unseen during training. Additionally, GGIK can learn a constrained distribution that encodes joint limits and scales efficiently to larger robots and a high number of sampled solutions. Finally, GGIK can be used to complement local IK solvers by providing reliable initializations for a local optimization process.

ICRA Conference 2025 Conference Paper

Automated Planning Domain Inference for Task and Motion Planning

  • Jinbang Huang
  • Allen Tao
  • Rozilyn Marco
  • Miroslav Bogdanovic
  • Jonathan Kelly
  • Florian Shkurti

Task and motion planning (TAMP) frameworks address long and complex planning problems by integrating high-level task planners with low-level motion planners. However, existing TAMP methods rely heavily on the manual design of planning domains that specify the preconditions and postconditions of all high-level actions. This paper proposes a method to automate planning domain inference from a handful of test-time trajectory demonstrations, reducing the reliance on human design. Our approach incorporates a deep learning-based estimator that predicts the appropriate components of a domain for a new task and a search algorithm that refines this prediction, reducing the size and ensuring the utility of the inferred domain. Our method can generate new domains from minimal test-time demonstrations, enabling robots to handle complex tasks more efficiently. We demonstrate that our approach outperforms behaviour cloning baselines, which directly imitate planner behaviour, in terms of planning performance and generalization across a variety of tasks. Additionally, our method reduces computational costs and data requirements at test time for inferring new planning domains.

ICRA Conference 2025 Conference Paper

Efficient Imitation Without Demonstrations via Value-Penalized Auxiliary Control from Examples

  • Trevor Ablett
  • Bryan Chan 0002
  • Jayce Haoran Wang
  • Jonathan Kelly

Common approaches to providing feedback in reinforcement learning are the use of hand-crafted rewards or full-trajectory expert demonstrations. Alternatively, one can use examples of completed tasks, but such an approach can be extremely sample inefficient. We introduce value-penalized auxiliary control from examples (VPACE), an algorithm that significantly improves exploration in example-based control by adding examples of simple auxiliary tasks and an above-success-level value penalty. Across both simulated and real robotic environments, we show that our approach substantially improves learning efficiency for challenging tasks, while maintaining bounded value estimates. Preliminary results also suggest that VPACE may learn more efficiently than the more common approaches of using full trajectories or true sparse rewards. Project site: https://papers.starslab.ca/vpace/.

IROS Conference 2024 Conference Paper

PhotoBot: Reference-Guided Interactive Photography via Natural Language

  • Oliver Limoyo
  • Jimmy Li 0001
  • Dmitriy Rivkin
  • Jonathan Kelly
  • Gregory Dudek

We introduce PhotoBot, a framework for fully automated photo acquisition based on an interplay between high-level human language guidance and a robot photographer. We propose to communicate photography suggestions to the user via reference images that are selected from a curated gallery. We leverage a visual language model (VLM) and an object detector to characterize the reference images via textual descriptions and then use a large language model (LLM) to retrieve relevant reference images based on a user’s language query through text-based reasoning. To establish correspondences between the reference image and the observed scene, we exploit pretrained features from a vision transformer capable of capturing semantic similarity across marked appearance variations. Using these features, we compute suggested pose adjustments for an RGB-D camera by solving a perspective-n-point (PnP) problem. We demonstrate our approach using a manipulator equipped with a wrist camera. Our user studies show that photos taken by PhotoBot are often more aesthetically pleasing than those taken by users themselves, as measured by human feedback. We also show that PhotoBot can generalize to other reference sources such as paintings.

IROS Conference 2024 Conference Paper

Working Backwards: Learning to Place by Picking

  • Oliver Limoyo
  • Abhisek Konar
  • Trevor Ablett
  • Jonathan Kelly
  • Francois Robert Hogan
  • Gregory Dudek

We present placing via picking (PvP), a method to autonomously collect real-world demonstrations for a family of placing tasks in which objects must be manipulated to specific, contact-constrained locations. With PvP, we approach the collection of robotic object placement demonstrations by reversing the grasping process and exploiting the inherent symmetry of the pick-and-place problem. Specifically, we obtain placing demonstrations from a set of grasp sequences of objects initially located at their target placement locations. Our system can collect hundreds of demonstrations in contact-constrained environments without human intervention using two modules: compliant control for grasping and tactile regrasping. We train a policy directly from visual observations through behavioural cloning, using the autonomously-collected demonstrations. By doing so, the policy can generalize to object placement scenarios outside of the training environment without privileged information (e.g., placing a plate picked up from a table). We validate our approach in home robot scenarios that include dishwasher loading and table setting. Our approach yields robotic placing policies that outperform policies trained with kinesthetic teaching, both in terms of success rate and data efficiency, while requiring no human supervision.

ICRA Conference 2023 Conference Paper

The Sum of Its Parts: Visual Part Segmentation for Inertial Parameter Identification of Manipulated Objects

  • Philippe Nadeau
  • Matthew Giamou
  • Jonathan Kelly

To operate safely and efficiently alongside human workers, collaborative robots (cobots) require the ability to quickly understand the dynamics of manipulated objects. However, traditional methods for estimating the full set of inertial parameters rely on motions that are necessarily fast and unsafe (to achieve a sufficient signal-to-noise ratio). In this work, we take an alternative approach: by combining visual and force-torque measurements, we develop an inertial parameter identification algorithm that requires slow or “stop-and-go” motions only, and hence is ideally tailored for use around humans. Our technique, called Homogeneous Part Segmentation (HPS), leverages the observation that man-made objects are often composed of distinct, homogeneous parts. We combine a surface-based point clustering method with a volumetric shape segmentation algorithm to quickly produce a part-level segmentation of a manipulated object; the segmented representation is then used by HPS to accurately estimate the object's inertial parameters. To benchmark our algorithm, we create and utilize a novel dataset consisting of realistic meshes, segmented point clouds, and inertial parameters for 20 common workshop tools. Finally, we demonstrate the real-world performance and accuracy of HPS by performing an intricate ‘hammer balancing act’ autonomously and online with a low-cost collaborative robotic arm. Our code and dataset are open source and freely available.

ICRA Conference 2022 Conference Paper

Fast Object Inertial Parameter Identification for Collaborative Robots

  • Philippe Nadeau
  • Matthew Giamou
  • Jonathan Kelly

Collaborative robots (cobots) are machines designed to work safely alongside people in human-centric environments. Providing cobots with the ability to quickly infer the inertial parameters of manipulated objects will improve their flexibility and enable greater usage in manufacturing and other areas. To ensure safety, cobots are subject to kinematic limits that result in low signal-to-noise ratios (SNR) for velocity, acceleration, and force-torque data. This renders existing inertial parameter identification algorithms prohibitively slow and inaccurate. Motivated by the desire for faster model acquisition, we investigate the use of an approximation of rigid body dynamics to improve the SNR. Additionally, we introduce a mass discretization method that can make use of shape information to quickly identify plausible inertial parameters for a manipulated object. We present extensive simulation studies and real-world experiments demonstrating that our approach complements existing inertial parameter identification methods by specifically targeting the typical cobot operating regime.

ICRA Conference 2022 Conference Paper

Learning to Detect Slip with Barometric Tactile Sensors and a Temporal Convolutional Neural Network

  • Abhinav Grover
  • Philippe Nadeau
  • Christopher Grebe
  • Jonathan Kelly

The ability to perceive object slip via tactile feedback enables humans to accomplish complex manipulation tasks including maintaining a stable grasp. Despite the utility of tactile information for many applications, tactile sensors have yet to be widely deployed in industrial robotics settings; part of the challenge lies in identifying slip and other events from the tactile data stream. In this paper, we present a learning-based method to detect slip using barometric tactile sensors. These sensors have many desirable properties including high durability and reliability, and are built from inexpensive, off-the-shelf components. We train a temporal convolutional neural network to detect slip, achieving high detection accuracies while displaying robustness to the speed and direction of the slip motion. Further, we test our detector on two manipulation tasks involving a variety of common objects and demonstrate successful generalization to real-world scenarios not seen during training. We argue that barometric tactile sensing technology, combined with data-driven learning, is suitable for many manipulation tasks such as slip compensation.

ICRA Conference 2021 Conference Paper

A Continuous-Time Approach for 3D Radar-to-Camera Extrinsic Calibration

  • Emmett Wise
  • Juraj Persic
  • Christopher Grebe
  • Ivan Petrovic
  • Jonathan Kelly

Reliable operation in inclement weather is essential to the deployment of safe autonomous vehicles (AVs). Robustness and reliability can be achieved by fusing data from the standard AV sensor suite (i.e., lidars, cameras) with weather robust sensors, such as millimetre-wavelength radar. Critically, accurate sensor data fusion requires knowledge of the rigid-body transform between sensor pairs, which can be determined through the process of extrinsic calibration. A number of extrinsic calibration algorithms have been designed for 2D (planar) radar sensors—however, recently-developed, low-cost 3D millimetre-wavelength radars are set to displace their 2D counterparts in many applications. In this paper, we present a continuous-time 3D radar-to-camera extrinsic calibration algorithm that utilizes radar velocity measurements and, unlike the majority of existing techniques, does not require specialized radar retroreflectors to be present in the environment. We derive the observability properties of our formulation and demonstrate the efficacy of our algorithm through synthetic and real-world experiments.

IROS Conference 2021 Conference Paper

Seeing All the Angles: Learning Multiview Manipulation Policies for Contact-Rich Tasks from Demonstrations

  • Trevor Ablett
  • Yifan Zhai
  • Jonathan Kelly

Learned visuomotor policies have shown considerable success as an alternative to traditional, hand-crafted frameworks for robotic manipulation. Surprisingly, an extension of these methods to the multiview domain is relatively unexplored. A successful multiview policy could be deployed on a mobile manipulation platform, allowing the robot to complete a task regardless of its view of the scene. In this work, we demonstrate that a multiview policy can be found through imitation learning by collecting data from a variety of viewpoints. We illustrate the general applicability of the method by learning to complete several challenging multi-stage and contact-rich tasks, from numerous viewpoints, both in a simulated environment and on a real mobile manipulation platform. Furthermore, we analyze our policies to determine the benefits of learning from multiview data compared to learning with data collected from a fixed perspective. We show that learning from multiview data results in little, if any, penalty to performance for a fixed-view task compared to learning with an equivalent amount of fixed-view data. Finally, we examine the visual features learned by the multiview and fixed-view policies. Our results indicate that multiview policies implicitly learn to identify spatially correlated features.

IROS Conference 2021 Conference Paper

Self-Supervised Scale Recovery for Monocular Depth and Egomotion Estimation

  • Brandon Wagstaff
  • Jonathan Kelly

The self-supervised loss formulation for jointly training depth and egomotion neural networks with monocular images is well studied and has demonstrated state-of-the-art accuracy. One of the main limitations of this approach, however, is that the depth and egomotion estimates are only determined up to an unknown scale. In this paper, we present a novel scale recovery loss that enforces consistency between a known camera height and the estimated camera height, generating metric (scaled) depth and egomotion predictions. We show that our proposed method is competitive with other scale recovery techniques that require more information. Further, we demonstrate that our method facilitates network retraining within new environments, whereas other scale-resolving approaches are incapable of doing so. Notably, our egomotion network is able to produce more accurate estimates than a similar method which recovers scale at test time only.

ICRA Conference 2020 Conference Paper

Inverse Kinematics for Serial Kinematic Chains via Sum of Squares Optimization

  • Filip Maric
  • Matthew Giamou
  • Soroush Khoubyarian
  • Ivan Petrovic
  • Jonathan Kelly

Inverse kinematics is a fundamental challenge for articulated robots: fast and accurate algorithms are needed for translating task-related workspace constraints and goals into feasible joint configurations. In general, inverse kinematics for serial kinematic chains is a difficult nonlinear problem, for which closed form solutions cannot easily be obtained. Therefore, computationally efficient numerical methods that can be adapted to a general class of manipulators are of great importance. In this paper, we use convex optimization techniques to solve the inverse kinematics problem with joint limit constraints for highly redundant serial kinematic chains with spherical joints in two and three dimensions. This is accomplished through a novel formulation of inverse kinematics as a nearest point problem, and with a fast sum of squares solver that exploits the sparsity of kinematic constraints for serial manipulators. Our method has the advantages of post-hoc certification of global optimality and a runtime that scales polynomially with the number of degrees of freedom. Additionally, we prove that our convex relaxation leads to a globally optimal solution when certain conditions are met, and demonstrate empirically that these conditions are common and represent many practical instances. Finally, we provide an open source implementation of our algorithm.

ICRA Conference 2020 Conference Paper

Self-Supervised Deep Pose Corrections for Robust Visual Odometry

  • Brandon Wagstaff
  • Valentin Peretroukhin
  • Jonathan Kelly

We present a self-supervised deep pose correction (DPC) network that applies pose corrections to a visual odometry estimator to improve its accuracy. Instead of regressing inter-frame pose changes directly, we build on prior work that uses data-driven learning to regress pose corrections that account for systematic errors due to violations of modelling assumptions. Our self-supervised formulation removes any requirement for six-degrees-of-freedom ground truth and, contrary to expectations, often improves overall navigation accuracy compared to a supervised approach. Through extensive experiments, we show that our self-supervised DPC network can significantly enhance the performance of classical monocular and stereo odometry estimators and substantially outperforms state-of-the-art learning-only approaches.

IROS Conference 2019 Conference Paper

Fast Manipulability Maximization Using Continuous-Time Trajectory Optimization

  • Filip Maric
  • Oliver Limoyo
  • Luka Petrovic
  • Trevor Ablett
  • Ivan Petrovic
  • Jonathan Kelly

A significant challenge in manipulation motion planning is to ensure agility in the face of unpredictable changes during task execution. This requires the identification and possible modification of suitable joint-space trajectories, since the joint velocities required to achieve a specific end-effector motion vary with manipulator configuration. For a given manipulator configuration, the joint space-to-task space velocity mapping is characterized by a quantity known as the manipulability index. In contrast to previous control-based approaches, we examine the maximization of manipulability during planning as a way of achieving adaptable and safe joint space-to-task space motion mappings in various scenarios. By representing the manipulator trajectory as a continuous-time Gaussian process (GP), we are able to leverage recent advances in trajectory optimization to maximize the manipulability index during trajectory generation. Moreover, the sparsity of our chosen representation reduces the typically large computational cost associated with maximizing manipulability when additional constraints exist. Results from simulation studies and experiments with a real manipulator demonstrate increases in manipulability, while maintaining smooth trajectories with more dexterous (and therefore more agile) arm configurations.

ICRA Conference 2019 Conference Paper

The Phoenix Drone: An Open-Source Dual-Rotor Tail-Sitter Platform for Research and Education

  • Yilun Wu
  • Xintong Du
  • Rikky R. P. R. Duivenvoorden
  • Jonathan Kelly

In this paper, we introduce the Phoenix drone: the first completely open-source tail-sitter micro aerial vehicle (MAV) platform. The vehicle has a highly versatile, dual-rotor design and is engineered to be low-cost and easily extensible/modifiable. Our open-source release includes all of the design documents, software resources, and simulation tools needed to build and fly a high-performance tail-sitter for research and educational purposes. The drone has been developed for precision flight with a high degree of control authority. Our design methodology included extensive testing and characterization of the aerodynamic properties of the vehicle. The platform incorporates many off-the-shelf components and 3D-printed parts, in order to keep the cost down. Nonetheless, the paper includes results from flight trials which demonstrate that the vehicle is capable of very stable hovering and accurate trajectory tracking. Our hope is that the open-source Phoenix reference design will be useful to both researchers and educators. In particular, the details in this paper and the available open-source materials should enable learners to gain an understanding of aerodynamics, flight control, state estimation, software design, and simulation, while experimenting with a unique aerial robot.

ICRA Conference 2018 Conference Paper

Self-Calibration of Mobile Manipulator Kinematic and Sensor Extrinsic Parameters Through Contact-Based Interaction

  • Oliver Limoyo
  • Trevor Ablett
  • Filip Maric
  • Luke Volpatti
  • Jonathan Kelly

We present a novel approach for mobile manipulator self-calibration using contact information. Our method, based on point cloud registration, is applied to estimate the extrinsic transform between a fixed vision sensor mounted on a mobile base and an end effector. Beyond sensor calibration, we demonstrate that the method can be extended to include manipulator kinematic model parameters, which involves a nonrigid registration process. Our procedure uses on-board sensing exclusively and does not rely on any external measurement devices, fiducial markers, or calibration rigs. Further, it is fully automatic in the general case. We experimentally validate the proposed method on a custom mobile manipulator platform, and demonstrate centimetre-level post-calibration accuracy in positioning of the end effector using visual guidance only. We also discuss the stability properties of the registration algorithm, in order to determine the conditions under which calibration is possible.

ICRA Conference 2017 Conference Paper

Reducing drift in visual odometry by inferring sun direction using a Bayesian Convolutional Neural Network

  • Valentin Peretroukhin
  • Lee E. Clement
  • Jonathan Kelly

We present a method to incorporate global orientation information from the sun into a visual odometry pipeline using only the existing image stream, where the sun is typically not visible. We leverage recent advances in Bayesian Convolutional Neural Networks to train and implement a sun detection model that infers a three-dimensional sun direction vector from a single RGB image. Crucially, our method also computes a principled uncertainty associated with each prediction, using a Monte Carlo dropout scheme. We incorporate this uncertainty into a sliding window stereo visual odometry pipeline where accurate uncertainty estimates are critical for optimal data fusion. Our Bayesian sun detection model achieves a median error of approximately 12 degrees on the KITTI odometry benchmark training set, and yields improvements of up to 42% in translational ARMSE and 32% in rotational ARMSE compared to standard VO. An open source implementation of our Bayesian CNN sun estimator (Sun-BCNN) using Caffe is available at https://github.com/utiasSTARS/sun-bcnn-vo.

ICRA Conference 2016 Conference Paper

PROBE-GK: Predictive robust estimation using generalized kernels

  • Valentin Peretroukhin
  • William Vega-Brown
  • Nicholas Roy
  • Jonathan Kelly

Many algorithms in computer vision and robotics make strong assumptions about uncertainty, and rely on the validity of these assumptions to produce accurate and consistent state estimates. In practice, dynamic environments may degrade sensor performance in predictable ways that cannot be captured with static uncertainty parameters. In this paper, we employ fast nonparametric Bayesian inference techniques to more accurately model sensor uncertainty. By setting a prior on observation uncertainty, we derive a predictive robust estimator, and show how our model can be learned from sample images, both with and without knowledge of the motion used to generate the data. We validate our approach through Monte Carlo simulations, and report significant improvements in localization accuracy relative to a fixed noise model in several settings, including on synthetic data, the KITTI dataset, and our own experimental platform.

IROS Conference 2015 Conference Paper

PROBE: Predictive robust estimation for visual-inertial navigation

  • Valentin Peretroukhin
  • Lee E. Clement
  • Matthew Giamou
  • Jonathan Kelly

Navigation in unknown, chaotic environments continues to present a significant challenge for the robotics community. Lighting changes, self-similar textures, motion blur, and moving objects are all considerable stumbling blocks for state-of-the-art vision-based navigation algorithms. In this paper we present a novel technique for improving localization accuracy within a visual-inertial navigation system (VINS). We make use of training data to learn a model for the quality of visual features with respect to localization error in a given environment. This model maps each visual observation from a predefined prediction space of visual-inertial predictors onto a scalar weight, which is then used to scale the observation covariance matrix. In this way, our model can adjust the influence of each observation according to its quality. We discuss our choice of predictors and report substantial reductions in localization error on 4 km of data from the KITTI dataset, as well as on experimental datasets consisting of 700 m of indoor and outdoor driving on a small ground rover equipped with a Skybotix VI-Sensor.

ICRA Conference 2013 Conference Paper

An investigation on the accuracy of Regional Ocean Models through field trials

  • Ryan N. Smith
  • Jonathan Kelly
  • Kimia Nazarzadeh
  • Gaurav S. Sukhatme

Recent efforts in mission planning for underwater vehicles have utilised predictive models to aid navigation and optimal path planning, and to drive opportunistic sampling. Although these models provide information at unprecedented resolutions and have proven to increase accuracy and effectiveness in multiple campaigns, most are deterministic in nature. Thus, predictions cannot be incorporated into probabilistic planning frameworks, nor do they provide any metric on the variance or confidence of the output variables. In this paper, we provide an initial investigation into determining the confidence of ocean model predictions based on the results of multiple field deployments of two autonomous underwater vehicles. For multiple missions of two autonomous gliders conducted over a two-month period in 2011, we compare actual vehicle executions to simulations of the same missions through the Regional Ocean Modeling System in an ocean region off the coast of southern California. This comparison provides a qualitative analysis of the current velocity predictions for areas within the selected deployment region. Ultimately, we present a spatial heat-map of the correlation between the ocean model predictions and the actual mission executions. Knowing where the model provides unreliable predictions can be incorporated into planners to increase the utility and application of the deterministic estimations.

ICRA Conference 2013 Conference Paper

CELLO: A fast algorithm for Covariance Estimation

  • William Vega-Brown
  • Abraham Bachrach
  • Adam Bry
  • Jonathan Kelly
  • Nicholas Roy

We present CELLO (Covariance Estimation and Learning through Likelihood Optimization), an algorithm for predicting the covariances of measurements based on any available informative features. This algorithm is intended to improve the accuracy and reliability of on-line state estimation by providing a principled way to extend the conventional fixed-covariance Gaussian measurement model. We show that in experiments, CELLO learns to predict measurement covariances that agree with empirical covariances obtained by manually annotating sensor regimes. We also show that using the learned covariances during filtering provides substantial quantitative improvement to the overall state estimate.

ICRA Conference 2013 Conference Paper

Learning task error models for manipulation

  • Peter Pastor
  • Mrinal Kalakrishnan
  • Jonathan Binney
  • Jonathan Kelly
  • Ludovic Righetti
  • Gaurav S. Sukhatme
  • Stefan Schaal

Precise kinematic forward models are important for robots to successfully perform dexterous grasping and manipulation tasks, especially when visual servoing is rendered infeasible due to occlusions. A lot of research has been conducted to estimate geometric and non-geometric parameters of kinematic chains to minimize reconstruction errors. However, kinematic chains can include non-linearities, e.g., due to cable stretch and motor-side encoders, that result in significantly different errors for different parts of the state space. Previous work either does not consider such non-linearities or proposes to estimate non-geometric parameters of carefully engineered models that are robot specific. We propose a data-driven approach that learns task error models that account for such unmodeled non-linearities. We argue that in the context of grasping and manipulation, it is sufficient to achieve high accuracy in the task relevant state space. We identify this relevant state space using previously executed joint configurations and learn error corrections for those. Therefore, our system is developed to generate subsequent executions that are similar to previous ones. The experiments show that our method successfully captures the non-linearities in the head kinematic chain (due to a counterbalancing spring) and the arm kinematic chains (due to cable stretch) of the considered experimental platform, see Fig. 1. The feasibility of the presented error learning approach has also been evaluated in independent DARPA ARM-S testing, contributing to the successful completion of 67 out of 72 grasping and manipulation tasks.

ICRA Conference 2012 Conference Paper

Towards improving mission execution for autonomous gliders with an ocean model and Kalman filter

  • Ryan N. Smith
  • Jonathan Kelly
  • Gaurav S. Sukhatme

Effective execution of a planned path by an underwater vehicle is important for proper analysis of the gathered science data, as well as to ensure the safety of the vehicle during the mission. Here, we propose the use of an unscented Kalman filter to aid in determining how the planned mission is executed. Given a set of waypoints that define a planned path and a discretization of the ocean currents from a regional ocean model, we present an approach to determine the time interval at which the glider should surface to maintain a prescribed tracking error, while also limiting its time on the ocean surface. We assume practical mission parameters provided from previous field trials for the problem set up, and provide the simulated results of the Kalman filter mission planning approach. The results are initially compared to data from prior field experiments in which an autonomous glider executed the same path without pre-planning. Then, the results are validated through field trials with multiple autonomous gliders implementing different surfacing intervals simultaneously while following the same path.

ICRA Conference 2011 Conference Paper

Simultaneous mapping and stereo extrinsic parameter calibration using GPS measurements

  • Jonathan Kelly
  • Larry H. Matthies
  • Gaurav S. Sukhatme

Stereo vision is useful for a variety of robotics tasks, such as navigation and obstacle avoidance. However, recovery of valid range data from stereo depends on accurate calibration of the extrinsic parameters of the stereo rig, i.e., the 6-DOF transform between the left and right cameras. Stereo self-calibration is possible, but, without additional information, the absolute scale of the stereo baseline cannot be determined. In this paper, we formulate stereo extrinsic parameter calibration as a batch maximum likelihood estimation problem, and use GPS measurements to establish the scale of both the scene and the stereo baseline. Our approach is similar to photogrammetric bundle adjustment, and closely related to many structure from motion algorithms. We present results from simulation experiments using a range of GPS accuracy levels; these accuracies are achievable by varying grades of commercially available receivers. We then validate the algorithm using stereo and GPS data acquired from a moving vehicle. Our results indicate that the approach is promising.
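Why GPS resolves the scale ambiguity can be seen in a stripped-down version of the problem. If a reconstruction is correct up to an unknown global scale, a handful of GPS positions fixes that scale in closed form via least squares. This is a didactic sketch under the assumption of pre-aligned frames, not the paper's full batch bundle-adjustment formulation:

```python
def gps_scale(points, gps):
    """Least-squares scale s minimising sum_i ||s * p_i - g_i||^2 for
    up-to-scale reconstructed positions p_i and GPS positions g_i,
    assuming both are expressed in the same, already-aligned frame:
        s = (sum_i p_i . g_i) / (sum_i p_i . p_i)
    """
    num = sum(sum(p * g for p, g in zip(pi, gi)) for pi, gi in zip(points, gps))
    den = sum(sum(p * p for p in pi) for pi in points)
    return num / den

# Toy reconstruction that is too small by a factor of 2.5
recon = [[0.4, 0.0], [0.8, 0.4], [1.2, 0.8]]
truth = [[1.0, 0.0], [2.0, 1.0], [3.0, 2.0]]
s = gps_scale(recon, truth)
```

In the full batch MLE of the paper, this scale is not solved separately; it emerges jointly with the camera poses, scene structure, and stereo baseline, weighted by the GPS measurement noise.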

IROS Conference 2006 Conference Paper

Combinatorial Optimization of Sensing for Rule-Based Planar Distributed Assembly

  • Jonathan Kelly
  • Hong Zhang 0013

We describe a model for planar distributed assembly, in which agents move randomly and independently on a two-dimensional grid, joining square blocks together to form a desired target structure. The agents have limited capabilities, restricted to local sensing and rule-based reactive control, and operate without centralized coordination. We define the spatiotemporal constraints necessary for the ordered assembly of a structure and give a procedure for encoding these constraints in a rule set, such that production of the desired structure is guaranteed. Our main contribution is a stochastic optimization algorithm that significantly reduces the number of environmental features an agent must recognize to build a structure. Experiments show that our optimization algorithm outperforms existing techniques.
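The flavour of such a stochastic optimizer can be sketched generically: repeatedly try discarding a feature from the sensing set and keep the smaller set whenever a validity check confirms the target structure is still guaranteed to assemble. The feature names and the `still_buildable` callback below are stand-in assumptions; the paper's actual algorithm and validity test are more involved.

```python
import random

def minimise_features(features, still_buildable, iters=200, seed=0):
    """Stochastically shrink a feature set while a user-supplied
    predicate reports that the target structure remains buildable."""
    rng = random.Random(seed)
    current = set(features)
    for _ in range(iters):
        if len(current) <= 1:
            break
        candidate = set(current)
        candidate.discard(rng.choice(sorted(candidate)))  # try dropping one feature
        if still_buildable(candidate):
            current = candidate  # smaller set still works; keep it
    return current

# Toy validity test: assembly succeeds as long as these two features remain
needed = {"corner", "edge"}
result = minimise_features(
    {"corner", "edge", "face", "gap", "row-end"},
    lambda s: needed <= s,
)
```

Each accepted removal shrinks the rule set an agent must evaluate, which is the quantity the paper's optimizer drives down.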

IROS Conference 2003 Conference Paper

Development of a transformable mobile robot composed of homogeneous gear-type units

  • Hiroki Tokashiki
  • Hisaya Amagai
  • Satoshi Endo
  • Koji Yamada
  • Jonathan Kelly

Recently, there has been significant research interest in homogeneous modular robots that can transform (i.e., reconfigure their overall shape). However, many of the proposed transformation mechanisms are too expensive and complex to be practical. The transformation process is also typically slow, and therefore these mechanisms are not suitable for situations where frequent, quick reconfiguration is required. To solve these problems, we have studied a transformable mobile robot composed of multiple homogeneous gear-type units. Each unit has only one actuator and cannot move independently. However, when engaged in a swarm configuration, units are able to move rapidly by rotating around one another. The most important problem encountered when developing our multi-module robot was determining how units should join together. We designed a passive attachment mechanism that employs a single, six-pole magnet carried by each unit. Motion principles for the swarm were confirmed in simulation, and based on these results we constructed a series of hardware prototypes. In our teleoperation experiments we verified that a powered unit can easily transfer from one stationary unit to another, and that the swarm can move quickly in any direction while transforming.

AIJ Journal 2002 Journal Article

Learning Bayesian networks from data: An information-theory based approach

  • Jie Cheng
  • Russell Greiner
  • Jonathan Kelly
  • David Bell
  • Weiru Liu

This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only polynomial numbers of conditional independence (CI) tests in typical cases. We provide precise conditions that specify when these algorithms are guaranteed to be correct, as well as empirical evidence (from real-world applications and simulation tests) demonstrating that these systems work efficiently and reliably in practice.
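At the heart of such information-theoretic structure learning is deciding, from data, whether two variables carry information about each other. A minimal sketch of that building block, using empirical mutual information with an arbitrary threshold of our choosing (the paper's actual tests are conditional and more carefully calibrated):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y), in bits, for two discrete samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

def looks_independent(xs, ys, threshold=0.05):
    """Treat X and Y as independent when their empirical mutual
    information is below a small threshold (illustrative value)."""
    return mutual_information(xs, ys) < threshold

# Perfectly dependent vs. uninformative toy samples
x = [0, 1, 0, 1, 0, 1, 0, 1]
dep = looks_independent(x, x)        # identical samples share full information
ind = looks_independent(x, [0] * 8)  # a constant shares no information
```

A structure learner runs many such tests, conditioning on candidate separating sets, to decide which edges to keep; bounding the number of tests to a polynomial is the efficiency claim of the abstract.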