Arrow Research search

Author name cluster

Pengcheng Han

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers
2 author rows

Possible papers

7

AAAI Conference 2026 Conference Paper

CoMA-SLAM: Collaborative Multi-Agent Gaussian SLAM with Geometric Consistency

  • Lin Chen
  • Yongxin Su
  • Jvboxi Wang
  • Pengcheng Han
  • Zhenyu Xia
  • Shuhui Bu
  • Kun Li
  • Boni Hu

Although Gaussian scene representation has achieved remarkable success in tracking and mapping, most existing methods are confined to single-agent systems. Current multi-agent solutions typically rely on centralized architectures, which struggle to account for communication bandwidth constraints. Furthermore, the inherent depth ambiguity of 3D Gaussian splatting poses notable challenges in maintaining geometric consistency. To address these challenges, we introduce CoMA-SLAM, the first distributed multi-agent Gaussian SLAM framework. By leveraging 2D Gaussian surfels and a robust initialization strategy, CoMA-SLAM enhances tracking accuracy and geometric consistency. It efficiently manages communication bandwidth while dynamically scaling with the number of agents. Through the integration of intra- and inter-agent loop closure, distributed keyframe optimization, and submap-centric updates, our framework ensures global consistency and robust alignment. Synthetic and real-world experiments demonstrate that CoMA-SLAM outperforms state-of-the-art methods in pose accuracy, rendering fidelity, and geometric consistency while maintaining competitive efficiency across distributed multi-agent systems. Notably, by avoiding data transmission to a centralized server, our method reduces communication bandwidth by 99.8% compared to centralized approaches.

IROS Conference 2025 Conference Paper

CODE: COllaborative Visual-UWB SLAM for Online Large-Scale Metric DEnse Mapping

  • Lin Chen 0042
  • Xuan Jia
  • Shuhui Bu
  • Guangming Wang 0001
  • Kun Li
  • Zhenyu Xia
  • Xiaohan Li
  • Pengcheng Han

This paper presents a novel collaborative online dense mapping system for multiple Unmanned Aerial Vehicles (UAVs). The system confers two primary benefits: it facilitates simultaneous UAV co-localization and real-time dense map reconstruction, and it recovers the metric scale even in GNSS-denied conditions. To achieve these advantages, Ultra-wideband (UWB) measurements, monocular Visual Odometry (VO), and co-visibility observations are jointly employed to recover both relative positions and global UAV poses, thereby ensuring optimality at both local and global scales. A two-stage optimization strategy reduces the optimization burden. Initially, relative Sim3 transformations among UAVs are swiftly estimated, with UWB measurements facilitating metric scale recovery in the absence of GNSS. Subsequently, a global pose optimization is performed to effectively mitigate cumulative drift. By integrating UWB, VO, and co-visibility data within this framework, both local geometric consistency and global pose accuracy are robustly maintained. Through comprehensive simulation and real-world testing, we demonstrate that our system not only improves UAV positioning accuracy in challenging scenarios but also facilitates high-quality, online integration of dense point clouds over large-scale areas. This research offers valuable contributions and practical techniques for precise, real-time map reconstruction using an autonomous UAV fleet, particularly in GNSS-denied environments.
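The abstract's metric-scale recovery step can be illustrated with a minimal sketch (not the paper's implementation): given scale-free inter-UAV distances from monocular VO and metric UWB range measurements, a single scale factor has a closed-form least-squares solution. All function and variable names below are hypothetical.

```python
import numpy as np

def recover_metric_scale(vo_positions_a, vo_positions_b, uwb_ranges):
    """Fit the metric scale of monocular VO trajectories from UWB ranges.

    Minimizes sum_t (s * d_vo(t) - d_uwb(t))^2 over the scalar s, where
    d_vo(t) is the (scale-free) inter-UAV distance reported by VO at time t
    and d_uwb(t) is the measured UWB range at the same time.
    """
    d_vo = np.linalg.norm(vo_positions_a - vo_positions_b, axis=1)
    # Closed-form 1-D least squares: s = <d_vo, d_uwb> / <d_vo, d_vo>
    return float(np.dot(d_vo, uwb_ranges) / np.dot(d_vo, d_vo))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(size=(50, 3))          # VO trajectory of UAV A (arbitrary scale)
    b = rng.normal(size=(50, 3))          # VO trajectory of UAV B (same scale)
    ranges = 2.5 * np.linalg.norm(a - b, axis=1)  # synthetic metric UWB ranges
    print(recover_metric_scale(a, b, ranges))     # close to the true scale 2.5
```

In practice the estimate would feed into the Sim3 alignment stage and be refined by the subsequent global pose optimization; this sketch only shows the scale term.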

IROS Conference 2020 Conference Paper

DenseFusion: Large-Scale Online Dense Pointcloud and DSM Mapping for UAVs

  • Lin Chen 0042
  • Yong Zhao
  • Shibiao Xu
  • Shuhui Bu
  • Pengcheng Han
  • Gang Wan

With the rapid development of unmanned aerial vehicles, the demand for fast and efficient map generation is increasing. To realize online mapping, we develop a real-time dense mapping framework named DenseFusion, which incrementally generates a dense geo-referenced 3D point cloud, digital orthophoto map (DOM), and digital surface model (DSM) from sequential aerial images with optional GPS information. The proposed method works in real-time on standard CPUs even when processing high-resolution images. Based on advanced monocular SLAM, our system first estimates appropriate camera poses and extracts effective keyframes, then constructs virtual stereo pairs from consecutive frames to generate pruned dense 3D point clouds; next, a novel real-time DSM fusion method is proposed which incrementally processes the dense point clouds. Finally, a high-efficiency visualization system adopts a dynamic level-of-detail (LoD) method to render the dense point cloud and DSM smoothly. The performance of the proposed method is evaluated through qualitative and quantitative experiments. The results indicate that, compared to traditional structure-from-motion based approaches, the presented framework is able to output both large-scale high-quality DOM and DSM in real-time with low computational cost.
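The incremental DSM fusion described above can be caricatured by a tiny height-grid sketch. This is a hypothetical simplification, not DenseFusion's method: each incoming dense point batch is rasterized into grid cells, keeping the highest observed elevation per cell (a common surface-model convention). All names are illustrative.

```python
import numpy as np

class IncrementalDSM:
    """Minimal incremental DSM: a 2D height grid updated per point batch."""

    def __init__(self, origin, resolution, shape):
        self.origin = np.asarray(origin, dtype=float)  # (x0, y0) of the grid
        self.res = float(resolution)                   # cell size in metres
        self.height = np.full(shape, -np.inf)          # per-cell elevation

    def fuse(self, points):
        """Fuse one batch. points: (N, 3) geo-referenced x, y, z (x, y >= origin)."""
        ij = ((points[:, :2] - self.origin) / self.res).astype(int)
        inside = ((ij >= 0) & (ij < np.array(self.height.shape))).all(axis=1)
        for (i, j), z in zip(ij[inside], points[inside, 2]):
            if z > self.height[i, j]:      # keep the highest elevation seen
                self.height[i, j] = z


if __name__ == "__main__":
    dsm = IncrementalDSM(origin=(0.0, 0.0), resolution=1.0, shape=(4, 4))
    dsm.fuse(np.array([[0.5, 0.5, 1.0], [0.5, 0.5, 3.0], [2.2, 1.1, 5.0]]))
    print(dsm.height[0, 0], dsm.height[2, 1])  # fused cell elevations
```

A real pipeline would additionally filter outliers, interpolate empty cells, and stream tiles to the LoD renderer; the sketch only shows the incremental-update core.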

IROS Conference 2019 Conference Paper

TerrainFusion: Real-time Digital Surface Model Reconstruction based on Monocular SLAM

  • Wei Wang
  • Yong Zhao
  • Pengcheng Han
  • Pengcheng Zhao
  • Shuhui Bu

This paper presents an algorithm which can generate a live digital surface model (DSM) during flight based on simultaneous localization and mapping (SLAM). We process each keyframe output by a monocular SLAM system to generate a local DSM, and fuse the local DSM into the global tiled DSM incrementally. During local DSM generation, a local digital elevation model (DEM) is estimated by projecting the filtered 2D Delaunay mesh to a 3D mesh, and a local orthomosaic is obtained by projecting triangle image patches onto a 2D mesh. During DSM fusion, both the local DEM and orthomosaic are split into tiles and fused into the global tiled DEM and orthomosaic, respectively, with a multiband algorithm. Both the efficient DSM generation and fusion algorithms contribute to achieving real-time reconstruction. Qualitative and quantitative experiments on a public aerial image dataset with different scenarios are performed to validate the effectiveness of the proposed method. Compared with traditional structure-from-motion (SfM) based approaches, the presented system is able to output both large-scale high-quality DEM and orthomosaic in real-time with low computational cost.
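The tiled split-and-fuse step above can be sketched as follows. This is a deliberately simplified stand-in: overlapping cells are fused by weighted averaging rather than the paper's multiband blending, and all names, the tile size, and the data layout are hypothetical.

```python
import numpy as np

TILE = 4  # tile edge length in cells (illustrative)

def fuse_local_dem(global_tiles, local_dem, offset):
    """Split a local DEM patch into tiles and fuse it into a global tiled DEM.

    global_tiles maps (tile_i, tile_j) -> (height_sum, weight) arrays;
    overlapping cells are averaged by accumulated weight (a simplified
    stand-in for multiband blending).
    """
    h, w = local_dem.shape
    oi, oj = offset  # cell offset of the patch in the global grid
    for i in range(h):
        for j in range(w):
            gi, gj = oi + i, oj + j
            key = (gi // TILE, gj // TILE)
            if key not in global_tiles:
                global_tiles[key] = (np.zeros((TILE, TILE)),
                                     np.zeros((TILE, TILE)))
            hsum, wsum = global_tiles[key]
            hsum[gi % TILE, gj % TILE] += local_dem[i, j]
            wsum[gi % TILE, gj % TILE] += 1.0

def tile_height(global_tiles, key):
    """Blended elevation for one tile; NaN where no observations exist."""
    hsum, wsum = global_tiles[key]
    with np.errstate(invalid="ignore"):
        return hsum / wsum


if __name__ == "__main__":
    tiles = {}
    fuse_local_dem(tiles, np.full((2, 2), 10.0), offset=(0, 0))
    fuse_local_dem(tiles, np.full((2, 2), 20.0), offset=(1, 1))
    print(tile_height(tiles, (0, 0)))  # overlap cell blended to 15.0
```

Keying the global map by tile index keeps each keyframe's update local, which is what makes incremental, real-time fusion plausible in the first place.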