Arrow Research search

Author name cluster

Brian Clipp

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers (3)

ICRA 2025 Conference Paper

Gradient-Based Trajectory Optimization with Parallelized Differentiable Traffic Simulation

  • Sanghyun Son 0003
  • Laura Zheng
  • Brian Clipp
  • Connor Greenwell
  • Sujin Philip
  • Ming Lin 0003

We present a parallelized differentiable traffic simulator based on the Intelligent Driver Model (IDM), a car-following framework that incorporates driver behavior as key variables. The simulator efficiently models vehicle motion, generating trajectories that can be supervised to fit real-world data, and because it is differentiable, IDM parameters can be optimized with gradient-based methods. Simulating up to 2 million vehicles in real time, the system scales to large trajectory optimization problems. We show that the simulator can filter noise in input trajectories (trajectory filtering), reconstruct dense trajectories from sparse ones (trajectory reconstruction), and predict future trajectories (trajectory prediction), with all generated trajectories adhering to physical laws. We validate our simulator and algorithm on several datasets, including NGSIM and the Waymo Open Dataset. The code is publicly available at: https://github.com/SonSang/diffidm.
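As a rough illustration of the gradient-based optimization the abstract describes, the sketch below fits IDM parameters to an observed follower trajectory by backpropagating through an unrolled car-following simulation. This is not the paper's implementation (that is at the linked repository); the PyTorch setup, parameter values, and placeholder data are all assumptions.

```python
import torch

# Minimal sketch only -- the paper's code is at https://github.com/SonSang/diffidm.
# All names, constants, and data below are illustrative placeholders.

def idm_accel(v, v_lead, gap, v0, T, a_max, b, s0, delta=4.0):
    """Intelligent Driver Model acceleration; differentiable in all inputs."""
    dv = v - v_lead                                   # approach rate to leader
    s_star = s0 + v * T + v * dv / (2.0 * torch.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

def rollout(x0, v_init, lead_x, lead_v, params, dt=0.1):
    """Unroll a follower's positions behind a recorded leader trajectory."""
    v0, T, a_max, b, s0 = params
    x, v, xs = x0, v_init, []
    for t in range(lead_x.shape[0]):
        a = idm_accel(v, lead_v[t], lead_x[t] - x, v0, T, a_max, b, s0)
        v = torch.clamp(v + a * dt, min=0.0)          # semi-implicit Euler step
        x = x + v * dt
        xs.append(x)
    return torch.stack(xs)

# Placeholder "observed" leader/follower trajectories.
steps = 200
lead_v = torch.full((steps,), 15.0)
lead_x = torch.cumsum(lead_v * 0.1, dim=0) + 30.0
obs_x = torch.cumsum(torch.full((steps,), 14.0) * 0.1, dim=0)

# Learnable IDM parameters: desired speed v0, headway T, a_max, b, jam gap s0.
params = [torch.tensor(p, requires_grad=True) for p in (20.0, 1.5, 1.0, 2.0, 2.0)]
opt = torch.optim.Adam(params, lr=1e-2)

for step in range(300):
    opt.zero_grad()
    pred_x = rollout(torch.tensor(0.0), torch.tensor(14.0), lead_x, lead_v, params)
    loss = torch.mean((pred_x - obs_x) ** 2)          # fit simulated to observed
    loss.backward()                                   # gradients flow through the rollout
    opt.step()
```

The point of the sketch is that the rollout is an ordinary computation graph, so autograd carries gradients from the trajectory loss back to the behavioral parameters; the paper's contribution is doing this at scale with a parallelized simulator.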

IROS 2024 Conference Paper

Deep Stochastic Kinematic Models for Probabilistic Motion Forecasting in Traffic

  • Laura Zheng
  • Sanghyun Son 0003
  • Jing Liang 0006
  • Xijun Wang 0002
  • Brian Clipp
  • Ming Lin 0003

In trajectory forecasting for traffic, future trajectories can be computed by advancing the ego vehicle's state with predicted actions according to a kinematics model. Because predicted trajectories are unrolled via time integration of a kinematic model, they should not only be kinematically feasible but also carry uncertainty from one timestep to the next. While current work in probabilistic prediction does incorporate kinematic priors for mean trajectory prediction, variance is often left as a freely learnable parameter, despite uncertainty at one timestep being inextricably tied to uncertainty at the previous one. In this paper, we derive simple, differentiable analytical approximations relating the variance at one timestep to the variance at the next under the kinematic bicycle model. In our results, we find that encoding this relationship across timesteps works especially well in suboptimal settings, such as with small or noisy datasets. We observe up to a 50% performance boost in partial-dataset settings and up to an 8% boost in large-scale learning over previous kinematic prediction methods, applied to state-of-the-art trajectory forecasting architectures out of the box, with no fine-tuning.
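To make the variance-propagation idea concrete, here is a hedged sketch of a first-order (delta-method) update for a diagonal-Gaussian state under one Euler step of the kinematic bicycle model. The paper derives its own analytical approximations; the independence assumptions, wheelbase value, and every name below are illustrative, not the authors' formulation.

```python
import torch

def propagate_step(mu, var, a_mu, a_var, d_mu, d_var, L=2.7, dt=0.1):
    """One kinematic-bicycle step for state [x, y, theta, v].

    mu, var: per-dimension mean and variance (diagonal Gaussian, an
    assumption that drops cross-covariances). a_*: Gaussian acceleration
    action; d_*: Gaussian steering-angle action (0-dim tensors)."""
    x, y, th, v = mu
    vx, vy, vth, vv = var

    # Mean update: the standard kinematic bicycle model.
    mu_next = torch.stack([
        x + v * torch.cos(th) * dt,
        y + v * torch.sin(th) * dt,
        th + v / L * torch.tan(d_mu) * dt,
        v + a_mu * dt,
    ])

    # Variance update: linearize each state equation around the means, so
    # uncertainty at step t flows analytically into step t+1 instead of
    # being left as a freely learned parameter.
    var_next = torch.stack([
        vx + dt**2 * (torch.cos(th)**2 * vv + (v * torch.sin(th))**2 * vth),
        vy + dt**2 * (torch.sin(th)**2 * vv + (v * torch.cos(th))**2 * vth),
        vth + (dt / L)**2 * (torch.tan(d_mu)**2 * vv
                             + (v / torch.cos(d_mu)**2)**2 * d_var),
        vv + dt**2 * a_var,
    ])
    return mu_next, var_next

# Example: propagate a slightly uncertain state one step forward.
mu = torch.tensor([0.0, 0.0, 0.1, 10.0])
var = torch.tensor([0.01, 0.01, 1e-4, 0.25])
mu1, var1 = propagate_step(mu, var,
                           a_mu=torch.tensor(0.5), a_var=torch.tensor(0.04),
                           d_mu=torch.tensor(0.02), d_var=torch.tensor(1e-4))
```

Because every operation is differentiable, such an update can sit inside a forecasting network's decoder, with the loss backpropagating through both the mean and the propagated variance.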

IROS 2010 Conference Paper

Parallel, real-time visual SLAM

  • Brian Clipp
  • Jongwoo Lim
  • Jan-Michael Frahm
  • Marc Pollefeys

In this paper we present a novel system for real-time, six-degree-of-freedom visual simultaneous localization and mapping using a stereo camera as the only sensor. The system makes extensive use of parallelism, both on the graphics processor and through multiple CPU threads. Working together, these threads achieve real-time feature tracking, visual odometry, loop detection and global map correction using bundle adjustment. The resulting corrections are fed back into the visual odometry system to limit its drift over long sequences. We demonstrate our system on a series of videos from challenging indoor environments with moving occluders, visually homogeneous regions with few features, scene parts with large changes in lighting, and fast camera motion. The total system performs its task of global map building in real time, including loop detection and bundle adjustment, on typical office-building-scale scenes.
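The thread decomposition the abstract describes can be sketched, very loosely, as a producer/consumer pipeline: a tracking thread runs visual odometry and consumes pose corrections, while a mapping thread runs loop detection and bundle adjustment and produces them. The toy below (standard-library Python; every name and all the arithmetic stand-ins are invented) shows only this feedback structure, not the paper's GPU feature tracking or stereo geometry.

```python
import queue
import threading

keyframes = queue.Queue()            # tracking -> mapping
corrections = queue.Queue()          # mapping -> tracking (global pose fixes)

def tracking_thread(frames):
    pose = 0.0                       # stand-in for a 6-DoF camera pose
    for f in frames:
        pose += f                    # stand-in for stereo visual odometry
        try:                         # apply any correction from the back end
            pose += corrections.get_nowait()
        except queue.Empty:
            pass
        keyframes.put(pose)
    keyframes.put(None)              # sentinel: no more frames

def mapping_thread():
    window = []
    while (kf := keyframes.get()) is not None:
        window.append(kf)
        if len(window) >= 10:        # stand-in for loop detection + bundle adjustment
            drift = sum(window) / len(window) * 1e-3
            corrections.put(-drift)  # feed correction back to limit VO drift
            window.clear()

t1 = threading.Thread(target=tracking_thread, args=(range(100),))
t2 = threading.Thread(target=mapping_thread)
t1.start(); t2.start(); t1.join(); t2.join()
```

The design point the paper makes is that the expensive global steps run concurrently with tracking and only feed back small corrections, so the front end never blocks on bundle adjustment.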