Arrow Research search

Author name cluster

Sungjoon Choi

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

27 papers
2 author rows

Possible papers


IROS Conference 2025 Conference Paper

High DOF Tendon-Driven Soft Hand: A Modular System for Versatile and Dexterous Manipulation

  • Yeonwoo Jang
  • Hajun Lee
  • Junghyo Kim
  • Taerim Yoon
  • Yoonbyung Chai
  • Heejae Won
  • Sungjoon Choi
  • Jiyun Kim

The soft robotic hand exhibits a wide range of manipulation capabilities, which are attributed to the dexterity of its soft fingers and their coordinated movements. Therefore, designing a versatile soft hand requires careful consideration of both the characteristics of the individual fingers, such as degree of freedom (DOF), and their strategic arrangements to improve performance for specific target tasks. This work presents a modularized high DOF tendon-driven soft finger and a customized design of a soft robotic hand for diverse dexterous manipulation tasks. Furthermore, an all-in-one module is developed that integrates both the 4-way tendon-driven soft finger body and drive parts. Its high DOF enables multidirectional actuations with a wide actuation range, thereby expanding possible manipulation modes. The modularity of the system expands the design space for finger arrangements, which enables the diverse configuration of robotic hands and facilitates the customization of task-oriented platforms. To achieve sophisticated control of these complex configurations, we employ neural network-planned trajectories, enabling the precise execution of complicated tasks. The performance of a single finger is validated, including dexterity and payload, and several real-world manipulation tasks are demonstrated, including writing, grasping, rotating, and spreading, using motion primitives of diverse soft hands with distinctive finger arrangements. These demonstrations showcase the system’s versatility and precision in various tasks. We expect that our system will contribute to the expansion of possibilities in the field of soft robotic manipulation.

ICRA Conference 2025 Conference Paper

Learning-Based Dynamic Robot-to-Human Handover

  • Hyeonseong Kim
  • Chanwoo Kim
  • Matthew K. X. J. Pan
  • Kyungjae Lee 0001
  • Sungjoon Choi

This paper presents a novel learning-based approach to dynamic robot-to-human handover, addressing the challenges of delivering objects to a moving receiver. We hypothesize that dynamic handover, where the robot adjusts to the receiver's movements, results in more efficient and comfortable interaction compared to static handover, where the receiver is assumed to be stationary. To validate this, we developed a nonparametric method for generating continuous handover motion, conditioned on the receiver's movements, and trained the model using a dataset of 1,000 human-to-human handover demonstrations. We integrated preference learning for improved handover effectiveness and applied impedance control to ensure user safety and adaptiveness. The approach was evaluated in both simulation and real-world settings, with user studies demonstrating that dynamic handover significantly reduces handover time and improves user comfort compared to static methods. Videos and demonstrations of our approach are available at https://zerotohero7886.github.io/dyn-r2h-handover/.

IROS Conference 2025 Conference Paper

Robust and Expressive Humanoid Motion Retargeting via Optimization-Based Rig Unification

  • Taemoon Jeong
  • Taehyun Byun
  • Jihoon Kim
  • Keunjun Choi
  • Jaesung Oh
  • Sungpyo Lee
  • Omar Darwish
  • Joohyung Kim

Humanoid robots are increasingly being developed for seamless interaction with humans in diverse domains, yet generating expressive and physically feasible motions remains a core challenge. We propose a robust and automated pipeline for motion retargeting that enables the generation of natural, stable, and highly expressive motions for a wide variety of humanoid robots using different motion data sources, including noisy pose estimations. To ensure robustness, our approach unifies motions from different kinematic structures into a common canonical rig, systematically refines the motion trajectory to address infeasible poses, enforces foot-contact constraints, and enhances stability. The retargeted motion is then refined to closely follow the source motion while respecting each robot's physical limits. Through extensive experiments on 12 simulated robots and validation on three real robots, we show that our methodology reliably produces expressive upper-body movements with consistent foot contact. This work represents an important step towards automating robust and expressive motion generation for humanoid robots, enabling deployment in various real-world scenarios.

ICRA Conference 2024 Conference Paper

SPOTS: Stable Placement of Objects with Reasoning in Semi-Autonomous Teleoperation Systems

  • Joonhyung Lee
  • Sangbeom Park
  • Jeongeun Park 0002
  • Kyungjae Lee 0001
  • Sungjoon Choi

Pick-and-place is one of the fundamental tasks in robotics research. However, attention has mostly focused on the "pick" task, leaving the "place" task relatively unexplored. In this paper, we address the problem of placing objects in the context of a teleoperation framework. Particularly, we focus on two aspects of the place task: stability robustness and contextual reasonableness of object placements. Our proposed method combines simulation-driven physical stability verification via real-to-sim and the semantic reasoning capability of large language models. In other words, given place context information (e.g., user preferences, the object to place, and current scene information), our proposed method outputs a probability distribution over the possible placement candidates, considering the robustness and reasonableness of the place task. Our proposed method is extensively evaluated in two simulated environments and one real-world environment, and we show that our method can greatly increase the physical plausibility of the placement as well as its contextual soundness while considering user preferences. Code, video, and details are available at: https://joonhyunglee.github.io/spots/
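The scoring idea in this abstract, combining a simulation-based stability estimate with a semantic-reasonableness score into a distribution over placement candidates, can be sketched as follows. The multiplicative combination rule, function name, and temperature parameter below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def placement_distribution(stability, semantic, temperature=1.0):
    """Combine per-candidate stability probabilities (e.g. from a
    physics simulator) with semantic-reasonableness scores (e.g. from
    an LLM) into a normalized distribution over placement candidates.

    A multiplicative combination followed by normalization is one
    simple way to require a candidate to be both stable and sensible;
    this is an assumed combination rule for illustration.
    """
    stability = np.asarray(stability, dtype=float)
    semantic = np.asarray(semantic, dtype=float)
    logits = np.log(stability + 1e-12) + semantic / temperature
    w = np.exp(logits - logits.max())   # numerically stable softmax
    return w / w.sum()

# Three candidate placements: the second is both stable and
# semantically preferred, so it should dominate the distribution.
p = placement_distribution([0.9, 0.95, 0.1], [0.2, 2.0, 1.5])
```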

IROS Conference 2024 Conference Paper

Visual Preference Inference: An Image Sequence-Based Preference Reasoning in Tabletop Object Manipulation

  • Joonhyung Lee
  • Sangbeom Park
  • Yongin Kwon
  • Jemin Lee
  • Minwook Ahn
  • Sungjoon Choi

In robotic object manipulation, human preferences can often be influenced by the visual attributes of objects, such as color and shape. These properties play a crucial role in operating a robot to interact with objects and align with human intention. In this paper, we focus on the problem of inferring underlying human preferences from a sequence of raw visual observations in tabletop manipulation environments with a variety of object types, named Visual Preference Inference (VPI). To facilitate visual reasoning in the context of manipulation, we introduce the Chain-of-Visual-Residuals (CoVR) method. CoVR employs a prompting mechanism that describes the differences between consecutive images (i.e., visual residuals) and incorporates such texts with a sequence of images to infer the user's preference. This approach significantly enhances the ability to understand and adapt to dynamic changes in the visual environment during manipulation tasks. Our method outperforms baseline methods in terms of extracting human preferences from visual sequences in both simulation and real-world environments. Code and videos are available at: https://joonhyung-lee.github.io/vpi/

AAAI Conference 2023 Conference Paper

FLAME: Free-Form Language-Based Motion Synthesis & Editing

  • Jihoon Kim
  • Jiseob Kim
  • Sungjoon Choi

Text-based motion generation models are drawing a surge of interest for their potential for automating the motion-making process in the game, animation, or robot industries. In this paper, we propose a diffusion-based motion synthesis and editing model named FLAME. Inspired by the recent successes in diffusion models, we integrate diffusion-based generative models into the motion domain. FLAME can generate high-fidelity motions well aligned with the given text. Also, it can edit the parts of the motion, both frame-wise and joint-wise, without any fine-tuning. FLAME involves a new transformer-based architecture we devise to better handle motion data, which is found to be crucial to manage variable-length motions and well attend to free-form text. In experiments, we show that FLAME achieves state-of-the-art generation performances on three text-motion datasets: HumanML3D, BABEL, and KIT. We also demonstrate that FLAME’s editing capability can be extended to other tasks such as motion prediction or motion in-betweening, which have been previously covered by dedicated models.

NeurIPS Conference 2023 Conference Paper

Score-based Generative Modeling through Stochastic Evolution Equations in Hilbert Spaces

  • Sungbin Lim
  • EUN BI YOON
  • Taehyun Byun
  • Taewon Kang
  • Seungwoo Kim
  • Kyungjae Lee
  • Sungjoon Choi

Continuous-time score-based generative models consist of a pair of stochastic differential equations (SDEs): a forward SDE that smoothly transitions data into a noise space and a reverse SDE that incrementally eliminates noise from a Gaussian prior distribution to generate samples from the data distribution. The two are intrinsically connected by the time-reversal theory of diffusion processes. In this paper, we investigate the use of stochastic evolution equations in Hilbert spaces, which expand the applicability of SDEs in two aspects, the sample space and the evolution operator, enabling them to encompass recent variations of diffusion models, such as generating functional data or replacing drift coefficients with image transformations. To this end, we derive a generalized time-reversal formula to build a bridge between probabilistic diffusion models and stochastic evolution equations and propose a score-based generative model called the Hilbert Diffusion Model (HDM). Combined with a Fourier neural operator, we verify the superiority of HDM for sampling functions from functional datasets with a power of kernel two-sample test of 4.2 on Quadratic, 0.2 on Melbourne, and 3.6 on Gridwatch, outperforming existing diffusion models formulated in function spaces. Furthermore, the proposed method shows its strength in motion synthesis tasks by utilizing the Wiener process with values in Hilbert space. Finally, our empirical results on image datasets also validate a connection between HDM and diffusion models using heat dissipation, revealing the potential for exploring evolution operators and sample spaces.

ICRA Conference 2023 Conference Paper

Zero-shot Active Visual Search (ZAVIS): Intelligent Object Search for Robotic Assistants

  • Jeongeun Park 0002
  • Taerim Yoon
  • Jejoon Hong
  • Youngjae Yu
  • Matthew K. X. J. Pan
  • Sungjoon Choi

In this paper, we focus on the problem of efficiently locating a target object described with free-form text using a mobile robot equipped with vision sensors (e.g., an RGBD camera). Conventional active visual search predefines a set of objects to search for, rendering these techniques restrictive in practice. To provide added flexibility in active visual searching, we propose a system where a user can enter target commands using free-form text; we call this system Zero-shot Active Visual Search (ZAVIS). ZAVIS detects and plans to search for a target object inputted by a user through a semantic grid map represented by static landmarks (e.g., desk or bed). For efficient planning of object search patterns, ZAVIS considers commonsense knowledge-based co-occurrence and predictive uncertainty while deciding which landmarks to visit first. We validate the proposed method with respect to SR (success rate) and SPL (success weighted by path length) in both simulated and real-world environments. The proposed method outperforms previous methods in terms of SPL in simulated scenarios, and we further demonstrate ZAVIS with a Pioneer-3AT robot in real-world studies.

ICRA Conference 2022 Conference Paper

Semi-Autonomous Teleoperation via Learning Non-Prehensile Manipulation Skills

  • Sangbeom Park
  • Yoonbyung Chai
  • Sunghyun Park
  • Jeongeun Park 0002
  • Kyungjae Lee 0001
  • Sungjoon Choi

In this paper, we present a semi-autonomous teleoperation framework for a pick-and-place task using an RGB-D sensor. In particular, we assume that the target object is located in a cluttered environment where both prehensile grasping and non-prehensile manipulation are combined for efficient teleoperation. Trajectory-based reinforcement learning is utilized for learning non-prehensile manipulation to rearrange the objects and enable direct grasping. From the depth image of the cluttered environment and the location of the goal object, the learned policy can provide multiple options of non-prehensile manipulation to the human operator. We carefully design a reward function for the rearranging task, and the policy is trained in a simulated environment. The trained policy is then transferred to the real world and evaluated in a number of real-world experiments with varying numbers of objects, where we show that the proposed method outperforms manual keyboard control in terms of time to grasp.

IROS Conference 2022 Conference Paper

Towards Defensive Autonomous Driving: Collecting and Probing Driving Demonstrations of Mixed Qualities

  • Jeongwoo Oh
  • Gunmin Lee
  • Jeongeun Park 0002
  • Wooseok Oh
  • Jaeseok Heo
  • Hojun Chung
  • Do Hyung Kim 0003
  • Byungkyu Park

Designing or learning an autonomous driving policy is undoubtedly a challenging task as the policy has to maintain its safety in all corner cases. In order to secure safety in autonomous driving, the ability to detect hazardous situations, which can be seen as an out-of-distribution (OOD) detection problem, becomes crucial. However, conventional datasets often only contain expert driving demonstrations, although some non-expert or uncommon driving behavior data are needed to implement a safety-guaranteed autonomous driving platform. To this end, we present a dataset called the R3 Driving Dataset, composed of driving data with different qualities. The dataset categorizes abnormal driving behaviors into eight categories and 369 different detailed situations. The situations include dangerous lane changes and near-collision situations. To further illustrate how these abnormal driving behaviors can be detected, we utilize different uncertainty estimation and anomaly detection methods on the proposed dataset. From the results of the proposed experiment, it can be inferred that by using both uncertainty estimation and anomaly detection, most of the abnormal cases in the proposed dataset can be discriminated. https://rllab-snu.github.io/projects/R3-Driving-Dataset/doc.html

ICRA Conference 2021 Conference Paper

Self-Supervised Motion Retargeting with Safety Guarantee

  • Sungjoon Choi
  • Min Jae Song
  • Hyemin Ahn 0001
  • Joohyung Kim

In this paper, we present self-supervised shared latent embedding (S3LE), a data-driven motion retargeting method that enables the generation of natural motions in humanoid robots from motion capture data or RGB videos. While it requires paired data consisting of human poses and their corresponding robot configurations, it significantly alleviates the necessity of time-consuming data collection via novel paired-data generation processes. Our self-supervised learning procedure consists of two steps: automatically generating paired data to bootstrap the motion retargeting, and learning a projection-invariant mapping to handle the different expressivity of humans and humanoid robots. Furthermore, our method guarantees that the generated robot pose is collision-free and satisfies position limits by utilizing nonparametric regression in the shared latent space. We demonstrate that our method can generate expressive robotic motions from both the CMU motion capture database and YouTube videos.

IROS Conference 2020 Conference Paper

Realistic and Interactive Robot Gaze

  • Matthew K. X. J. Pan
  • Sungjoon Choi
  • James Kennedy
  • Kyna McIntosh
  • Daniel Campos Zamora
  • Günter Niemeyer
  • Joohyung Kim
  • Alexis Wieland

This paper describes the development of a system for lifelike gaze in human-robot interactions using a humanoid Audio-Animatronics® bust. Previous work examining mutual gaze between robots and humans has focused on technical implementation. We present a general architecture that seeks not only to create gaze interactions from a technological standpoint, but also through the lens of character animation where the fidelity and believability of motion is paramount; that is, we seek to create an interaction which demonstrates the illusion of life. A complete system is described that perceives persons in the environment, identifies persons-of-interest based on salient actions, selects an appropriate gaze behavior, and executes high fidelity motions to respond to the stimuli. We use mechanisms that mimic motor and attention behaviors analogous to those observed in biological systems including attention habituation, saccades, and differences in motion bandwidth for actuators. Additionally, a subsumption architecture allows layering of simple motor movements to create increasingly complex behaviors which are able to interactively and realistically react to salient stimuli in the environment through subsuming lower levels of behavior. The result of this system is an interactive human-robot experience capable of human-like gaze behaviors.

IROS Conference 2019 Conference Paper

Towards a Natural Motion Generator: a Pipeline to Control a Humanoid based on Motion Data

  • Sungjoon Choi
  • Joohyung Kim

Imitation of the upper body motions of human demonstrators or animation characters to human-shaped robots is studied in this paper. We present a pipeline for motion retargeting by transferring the joints of interest (JOI) of source motions to the target humanoid robot. To this end, we deploy an optimization-based motion retargeting method utilizing link length modifications of the source skeleton and a task (Cartesian) space fine-tuning of JOI motion descriptors. To evaluate the effectiveness of the proposed pipeline, we use two different 3-D motion datasets from three human demonstrators and an Ogre animation character, Bork, and successfully transfer the motions to four different humanoid robots: DARwIn-OP, COmpliant HuMANoid Platform (COMAN), THORMANG, and Atlas. Furthermore, COMAN and THORMANG are actually controlled to show that the proposed method can be deployed to physical robots.

ICRA Conference 2019 Conference Paper

Trajectory-based Probabilistic Policy Gradient for Learning Locomotion Behaviors

  • Sungjoon Choi
  • Joohyung Kim

In this paper, we propose a trajectory-based reinforcement learning method named deep latent policy gradient (DLPG) for learning locomotion skills. We define the policy function as a probability distribution over trajectories and train the policy using a deep latent variable model to achieve sample-efficient skill learning. We first evaluate the sample efficiency of DLPG compared to state-of-the-art reinforcement learning methods in simulated environments. Then, we apply the proposed method to a four-legged walking robot named Snapbot to learn three basic locomotion skills: turning left, going straight, and turning right. We demonstrate that, by properly designing two reward functions for curriculum learning, Snapbot successfully learns the desired locomotion skills with moderate sample complexity.

ICRA Conference 2018 Conference Paper

A Nonparametric Motion Flow Model for Human Robot Cooperation

  • Sungjoon Choi
  • Kyungjae Lee 0001
  • Hyungju Andy Park
  • Songhwai Oh

In this paper, we present a novel nonparametric motion flow model that effectively describes a motion trajectory of a human and its application to human robot cooperation. To this end, a motion flow similarity measure that considers both spatial and temporal properties of a trajectory is proposed by utilizing the mean and variance functions of a Gaussian process. We also present a human robot cooperation method using the proposed motion flow model. Given a set of interacting trajectories of two workers, the underlying reward function of cooperating behaviors is optimized by using the learned motion description as an input to the reward function, where a stochastic trajectory optimization method is used to control a robot. The presented human robot cooperation method is compared with the state-of-the-art algorithm, which utilizes a mixture of interaction primitives (MIP), in terms of the RMS error between generated and target trajectories. While the proposed method shows comparable performance with the MIP when the full observation of human demonstrations is given, it shows superior performance when partial trajectory information is given.

NeurIPS Conference 2018 Conference Paper

Maximum Causal Tsallis Entropy Imitation Learning

  • Kyungjae Lee
  • Sungjoon Choi
  • Songhwai Oh

In this paper, we propose a novel maximum causal Tsallis entropy (MCTE) framework for imitation learning which can efficiently learn a sparse multi-modal policy distribution from demonstrations. We provide the full mathematical analysis of the proposed framework. First, the optimal solution of an MCTE problem is shown to be a sparsemax distribution, whose supporting set can be adjusted. The proposed method has advantages over a softmax distribution in that it can exclude unnecessary actions by assigning zero probability. Second, we prove that an MCTE problem is equivalent to robust Bayes estimation in the sense of the Brier score. Third, we propose a maximum causal Tsallis entropy imitation learning (MCTEIL) algorithm with a sparse mixture density network (sparse MDN) by modeling mixture weights using a sparsemax distribution. In particular, we show that the causal Tsallis entropy of an MDN encourages exploration and efficient mixture utilization while the Boltzmann-Gibbs entropy is less effective. We validate the proposed method in two simulation studies, where MCTEIL outperforms existing imitation learning methods in terms of average returns and learning multi-modal policies.
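The sparsemax distribution this abstract relies on is a known construction (Martins and Astudillo, 2016): the Euclidean projection of a score vector onto the probability simplex. A minimal NumPy sketch of sparsemax itself, independent of the paper's full MCTEIL algorithm, might look like this.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of z onto the probability
    simplex. Unlike softmax, it can assign exactly zero probability
    to low-scoring entries, which is the property the MCTE framework
    exploits to exclude unnecessary actions."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]             # scores in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum     # entries kept in the support
    k_z = k[support][-1]                    # support size
    tau = (cumsum[support][-1] - 1) / k_z   # threshold
    return np.maximum(z - tau, 0.0)

# A dominant score pushes the other entries to exactly zero.
p = sparsemax([2.0, 1.0, -1.0])
```

Softmax on the same scores would keep all three entries strictly positive; sparsemax zeroes out everything below the threshold tau.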

ICRA Conference 2018 Conference Paper

Uncertainty-Aware Learning from Demonstration Using Mixture Density Networks with Sampling-Free Variance Modeling

  • Sungjoon Choi
  • Kyungjae Lee 0001
  • Sungbin Lim
  • Songhwai Oh

In this paper, we propose an uncertainty-aware learning from demonstration method by presenting a novel uncertainty estimation method utilizing a mixture density network appropriate for modeling complex and noisy human behaviors. The proposed uncertainty acquisition can be done with a single forward pass without Monte Carlo sampling and is suitable for real-time robotics applications. Then, we show that it can be decomposed into explained variance and unexplained variance, where the connections between aleatoric and epistemic uncertainties are addressed. The properties of the proposed uncertainty measure are analyzed through three different synthetic scenarios: absence of data, heavy measurement noise, and composition of functions. We show that each case can be distinguished using the proposed uncertainty measure and present an uncertainty-aware learning from demonstration method for autonomous driving using this property. The proposed uncertainty-aware learning from demonstration method outperforms other compared methods in terms of safety using a complex real-world driving dataset.
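The explained/unexplained variance decomposition described here follows the pattern of the law of total variance for a mixture. A minimal 1-D sketch is below; mapping the two terms to the abstract's "explained" and "unexplained" naming is an assumption for illustration, not a reproduction of the paper's exact derivation.

```python
import numpy as np

def mixture_variance_decomposition(pi, mu, sigma2):
    """Decompose the predictive variance of a 1-D mixture density
    network output via the law of total variance:

        Var[y] = sum_k pi_k * sigma2_k            (within-component)
               + sum_k pi_k * (mu_k - mu_bar)**2  (between-component)

    Both terms are computed from a single forward pass of the MDN,
    with no Monte Carlo sampling.
    """
    pi, mu, sigma2 = (np.asarray(a, dtype=float) for a in (pi, mu, sigma2))
    mu_bar = np.sum(pi * mu)                      # mixture mean
    unexplained = np.sum(pi * sigma2)             # average component noise
    explained = np.sum(pi * (mu - mu_bar) ** 2)   # disagreement of modes
    return explained, unexplained

# Two well-separated components with small noise: the variance is
# dominated by disagreement between modes, not measurement noise.
explained, unexplained = mixture_variance_decomposition(
    pi=[0.5, 0.5], mu=[-1.0, 1.0], sigma2=[0.01, 0.01])
```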

IROS Conference 2017 Conference Paper

Scalable robust learning from demonstration with leveraged deep neural networks

  • Sungjoon Choi
  • Kyungjae Lee 0001
  • Songhwai Oh

In this paper, we propose a novel algorithm for learning from demonstration, which can learn a policy function robustly from a large number of demonstrations with mixed qualities. While most of the existing approaches assume that demonstrations are collected from skillful experts, the proposed method alleviates such restrictions by estimating the proficiency level of each demonstration using the proposed leverage optimization. Furthermore, a novel leveraged cost function is proposed to represent a policy function using deep neural networks by reformulating the objective function of leveraged Gaussian process regression using the representer theorem. The proposed method is successfully applied to autonomous track driving tasks, where a large number of demonstrations with mixed qualities are given as training data without labels.

IROS Conference 2016 Conference Paper

Gaussian random paths for real-time motion planning

  • Sungjoon Choi
  • Kyungjae Lee 0001
  • Songhwai Oh

In this paper, we propose Gaussian random paths by defining a probability distribution over continuous paths interpolating a finite set of anchoring points using Gaussian process regression. By utilizing the generative property of Gaussian random paths, a Gaussian random path planner is developed to safely steer a robot to a goal position. The Gaussian random path planner can be used in a number of applications, including local path planning for a mobile robot and trajectory optimization for whole body motion planning. We have conducted an extensive set of simulations and experiments, showing that the proposed planner outperforms look-ahead planners which use a pre-defined subset of egocentric trajectories in terms of collision rates and trajectory lengths. Furthermore, we apply the proposed method to existing trajectory optimization methods as an initialization step and demonstrate that it can help produce more cost-efficient trajectories.
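The core construction, a distribution over smooth paths interpolating a finite set of anchoring points via Gaussian process regression, can be sketched as follows. The squared-exponential kernel, the hyperparameter values, and the 1-D setting are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def gaussian_random_paths(anchors_t, anchors_x, t_query, n_paths=5,
                          length_scale=0.3, noise=1e-6, seed=0):
    """Sample smooth 1-D paths from a GP posterior conditioned on a
    finite set of anchoring points (e.g. start, waypoint, goal),
    following the Gaussian-random-path idea."""
    def k(a, b):
        d = np.subtract.outer(a, b)
        return np.exp(-0.5 * (d / length_scale) ** 2)

    K = k(anchors_t, anchors_t) + noise * np.eye(len(anchors_t))
    Ks = k(t_query, anchors_t)
    Kss = k(t_query, t_query)
    alpha = np.linalg.solve(K, anchors_x)
    mean = Ks @ alpha                                # posterior mean path
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)        # posterior covariance
    rng = np.random.default_rng(seed)
    jitter = 1e-8 * np.eye(len(t_query))             # numerical stability
    return mean, rng.multivariate_normal(mean, cov + jitter, size=n_paths)

t_anchor = np.array([0.0, 0.5, 1.0])
x_anchor = np.array([0.0, 1.0, 0.0])   # start, waypoint, goal
t = np.linspace(0.0, 1.0, 21)
mean, paths = gaussian_random_paths(t_anchor, x_anchor, t)
```

Each sampled path passes (up to the small observation noise) through all anchoring points, which is what makes the construction useful for steering a robot through a goal position.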

IROS Conference 2016 Conference Paper

Inverse reinforcement learning with leveraged Gaussian processes

  • Kyungjae Lee 0001
  • Sungjoon Choi
  • Songhwai Oh

In this paper, we propose a novel inverse reinforcement learning algorithm with leveraged Gaussian processes that can learn from both positive and negative demonstrations. While most existing inverse reinforcement learning (IRL) methods suffer from the lack of information near low reward regions, the proposed method alleviates this issue by incorporating (negative) demonstrations of what not to do. To mathematically formulate negative demonstrations, we introduce a novel generative model which can generate both positive and negative demonstrations using a parameter, called proficiency. Moreover, since we represent a reward function using a leveraged Gaussian process which can model a nonlinear function, the proposed method can effectively estimate the structure of a nonlinear reward function.

ICRA Conference 2016 Conference Paper

Robust learning from demonstration using leveraged Gaussian processes and sparse-constrained optimization

  • Sungjoon Choi
  • Kyungjae Lee 0001
  • Songhwai Oh

In this paper, we propose a novel method for robust learning from demonstration using leveraged Gaussian process regression. While existing learning from demonstration (LfD) algorithms assume that demonstrations are given from skillful experts, the proposed method alleviates such assumption by allowing demonstrations from casual or novice users. To learn from demonstrations of mixed quality, we present a sparse-constrained leveraged optimization algorithm using proximal linearized minimization. The proposed sparse-constrained leveraged optimization algorithm is successfully applied to sensory field reconstruction and direct policy learning for planar navigation problems. In experiments, the proposed sparse-constrained method outperforms existing LfD methods.

IROS Conference 2016 Conference Paper

Robust modeling and prediction in dynamic environments using recurrent flow networks

  • Sungjoon Choi
  • Kyungjae Lee 0001
  • Songhwai Oh

To enable safe motion planning in a dynamic environment, it is vital to anticipate and predict object movements. In practice, however, an accurate object identification among multiple moving objects is extremely challenging, making it infeasible to accurately track and predict individual objects. Furthermore, even for a single object, its appearance can vary significantly due to external effects, such as occlusions, varying perspectives, or illumination changes. In this paper, we propose a novel recurrent network architecture called a recurrent flow network that can infer the velocity of each cell and the probability of future occupancy from a sequence of occupancy grids which we refer to as an occupancy flow. The parameters of the recurrent flow network are optimized using Bayesian optimization. The proposed method outperforms three baseline optical flow methods, Lucas-Kanade, Lucas-Kanade with Tikhonov regularization, and Horn-Schunck, and a Bayesian occupancy grid filter in terms of both prediction accuracy and robustness to noise.

ICRA Conference 2015 Conference Paper

Chance-constrained target tracking for mobile robots

  • Yoonseon Oh
  • Sungjoon Choi
  • Songhwai Oh

This paper presents a robust target tracking algorithm for a mobile sensor with a fan-shaped field of view and finite sensing range. The goal of the mobile robot is to track a moving target such that the probability of losing the target is minimized. We assume that the distribution of the next position of a moving target can be estimated using a motion prediction algorithm. If the next position of a moving target has the Gaussian distribution, the proposed algorithm can guarantee the tracking success probability. In addition, the proposed method minimizes the moving distance of the mobile robot based on a bound on the tracking success probability. While the problem considered in this paper is a non-convex optimization problem, we derive analytical solutions that can be easily computed in real time. The performance of the proposed method is evaluated extensively in simulation and validated in pedestrian following experiments using a Pioneer mobile robot with a Microsoft Kinect sensor.
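The quantity this abstract bounds, the probability that a Gaussian-distributed next target position lies inside the robot's fan-shaped, finite-range field of view, can be illustrated with a simple Monte Carlo estimate. The paper derives analytical solutions; everything below (geometry parameterization, names) is an illustrative sketch only.

```python
import numpy as np

def tracking_success_prob(mean, cov, robot_pos, heading, fov_half_angle,
                          sensing_range, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the probability that a Gaussian next
    target position falls inside a fan-shaped field of view
    (half-angle fov_half_angle, radius sensing_range) centered at the
    robot and aligned with its heading."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    rel = samples - np.asarray(robot_pos, dtype=float)
    dist = np.linalg.norm(rel, axis=1)
    ang = np.arctan2(rel[:, 1], rel[:, 0]) - heading
    ang = (ang + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
    inside = (dist <= sensing_range) & (np.abs(ang) <= fov_half_angle)
    return inside.mean()

# Target predicted 1 m ahead of the robot with small uncertainty:
# the tracking success probability should be close to 1.
p = tracking_success_prob(mean=[1.0, 0.0], cov=0.01 * np.eye(2),
                          robot_pos=[0.0, 0.0], heading=0.0,
                          fov_half_angle=np.pi / 4, sensing_range=2.0)
```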

ICRA Conference 2015 Conference Paper

Leveraged non-stationary Gaussian process regression for autonomous robot navigation

  • Sungjoon Choi
  • Eunwoo Kim
  • Kyungjae Lee 0001
  • Songhwai Oh

In this paper, we propose a novel regression method that can incorporate both positive and negative training data into a single regression framework. In detail, a leveraged kernel function for non-stationary Gaussian process regression is proposed. With this new kernel function, we can vary the correlation between two inputs in both positive and negative directions by adjusting leverage parameters. By using this property, the resulting leveraged non-stationary Gaussian process regression can anchor the regressor to the positive data while avoiding the negative data. We first prove the positive semi-definiteness of the leveraged kernel function using Bochner's theorem. Then, we apply the leveraged non-stationary Gaussian process regression to a real-time motion control problem. In this case, the positive data refer to what to do and the negative data indicate what not to do. The results show that the controller using both positive and negative data outperforms the controller using positive data only in terms of the collision rate given training sets of the same size.

ICRA Conference 2015 Conference Paper

Structured low-rank matrix approximation in Gaussian process regression for autonomous robot navigation

  • Eunwoo Kim
  • Sungjoon Choi
  • Songhwai Oh

This paper considers the problem of approximating a kernel matrix in an autoregressive Gaussian process regression (AR-GP) in the presence of measurement noises or natural errors for modeling complex motions of pedestrians in a crowded environment. While a number of methods have been proposed to robustly predict future motions of humans, it remains a difficult problem in the presence of measurement noise. This paper addresses this issue by proposing a structured low-rank matrix approximation method using nuclear-norm regularized l1-norm minimization in AR-GP for robust motion prediction of dynamic obstacles. The proposed method approximates a kernel matrix by finding an orthogonal basis using low-rank symmetric positive semi-definite matrix approximation, assuming that a kernel matrix can be well represented by a small number of dominating basis vectors. The proposed method is suitable for predicting the motion of a pedestrian, such that it can be used for safe autonomous robot navigation in a crowded environment. The proposed method is applied to well-known regression and motion prediction problems to demonstrate its robustness and excellent performance compared to existing approaches.
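The underlying idea, representing a kernel matrix with a small number of dominating basis vectors, can be sketched with a plain truncated-eigendecomposition approximation. The paper's nuclear-norm regularized l1 formulation adds robustness to noise on top of this baseline; the sketch below shows only the basic low-rank PSD structure.

```python
import numpy as np

def low_rank_psd_approx(K, rank):
    """Best rank-r positive semi-definite approximation of a symmetric
    kernel matrix via truncated eigendecomposition: keep the `rank`
    largest eigenvalues (clipped at zero to preserve PSD-ness) and
    their eigenvectors."""
    w, V = np.linalg.eigh(K)                 # ascending eigenvalues
    idx = np.argsort(w)[::-1][:rank]         # indices of top eigenvalues
    w_r = np.clip(w[idx], 0.0, None)         # drop negative eigenvalues
    return (V[:, idx] * w_r) @ V[:, idx].T   # V diag(w_r) V^T

# A kernel matrix that is nearly rank-2 plus a small diagonal term:
# the rank-2 approximation recovers it almost exactly.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2))
K = X @ X.T + 1e-3 * np.eye(30)
K2 = low_rank_psd_approx(K, rank=2)
```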

IROS Conference 2014 Conference Paper

A robust autoregressive Gaussian process motion model using l1-norm based low-rank kernel matrix approximation

  • Eunwoo Kim
  • Sungjoon Choi
  • Songhwai Oh

This paper considers the problem of modeling complex motions of pedestrians in a crowded environment. A number of methods have been proposed to predict the motion of a pedestrian or an object. However, it is still difficult to make a good prediction due to challenges, such as the complexity of pedestrian motions and outliers in a training set. This paper addresses these issues by proposing a robust autoregressive motion model based on Gaussian process regression using l1-norm based low-rank kernel matrix approximation, called PCGP-l1. The proposed method approximates a kernel matrix assuming that the kernel matrix can be well represented using a small number of dominating principal components, eliminating erroneous data. The proposed motion model is robust against outliers present in a training set and can reliably predict the motion of a pedestrian, such that it can be used by a robot for safe navigation in a crowded environment. The proposed method is applied to a number of regression and motion prediction problems to demonstrate its robustness and efficiency. The experimental results show that the proposed method considerably improves the motion prediction rate compared to other Gaussian process regression methods.

ICRA Conference 2014 Conference Paper

Real-time navigation in crowded dynamic environments using Gaussian process motion control

  • Sungjoon Choi
  • Eunwoo Kim
  • Songhwai Oh

In this paper, we propose a novel Gaussian process motion controller that can navigate through a crowded dynamic environment. The proposed motion controller predicts future trajectories of pedestrians using an autoregressive Gaussian process motion model (AR-GPMM) from the partially-observable egocentric view of a robot and controls the robot using an autoregressive Gaussian process motion controller (AR-GPMC) based on the predicted pedestrian trajectories. The performance of the proposed method is extensively evaluated in simulation and validated experimentally using a Pioneer 3DX mobile robot with a Microsoft Kinect sensor. In particular, the proposed method shows over a 68% improvement in collision rate compared to a reactive planner and the vector field histogram (VFH) method.