Arrow Research search

Author name cluster

Shuhui Bu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

10 papers
2 author rows

Possible papers

10

AAAI 2026 · Conference Paper

CoMA-SLAM: Collaborative Multi-Agent Gaussian SLAM with Geometric Consistency

  • Lin Chen
  • Yongxin Su
  • Jvboxi Wang
  • Pengcheng Han
  • Zhenyu Xia
  • Shuhui Bu
  • Kun Li
  • Boni Hu

Although Gaussian scene representation has achieved remarkable success in tracking and mapping, most existing methods are confined to single-agent systems. Current multi-agent solutions typically rely on centralized architectures, which struggle to account for communication bandwidth constraints. Furthermore, the inherent depth ambiguity of 3D Gaussian splatting poses notable challenges in maintaining geometric consistency. To address these challenges, we introduce CoMA-SLAM, the first distributed multi-agent Gaussian SLAM framework. By leveraging 2D Gaussian surfels and a robust initialization strategy, CoMA-SLAM enhances tracking accuracy and geometric consistency. It efficiently manages communication bandwidth while dynamically scaling with the number of agents. Through the integration of intra- and inter-agent loop closure, distributed keyframe optimization, and submap-centric updates, our framework ensures global consistency and robust alignment. Synthetic and real-world experiments demonstrate that CoMA-SLAM outperforms state-of-the-art methods in pose accuracy, rendering fidelity, and geometric consistency while maintaining competitive efficiency across distributed multi-agent systems. Notably, by avoiding data transmission to a centralized server, our method reduces communication bandwidth by 99.8% compared to centralized approaches.

IROS 2025 · Conference Paper

CODE: COllaborative Visual-UWB SLAM for Online Large-Scale Metric DEnse Mapping

  • Lin Chen 0042
  • Xuan Jia
  • Shuhui Bu
  • Guangming Wang 0001
  • Kun Li
  • Zhenyu Xia
  • Xiaohan Li
  • Pengcheng Han

This paper presents a novel collaborative online dense mapping system for multiple Unmanned Aerial Vehicles (UAVs). The system confers two primary benefits: it facilitates simultaneous UAV co-localization and real-time dense map reconstruction, and it recovers the metric scale even in GNSS-denied conditions. To achieve these advantages, Ultra-wideband (UWB) measurements, monocular Visual Odometry (VO), and co-visibility observations are jointly employed to recover both relative positions and global UAV poses, thereby ensuring optimality at both local and global scales. A two-stage optimization strategy is proposed to reduce the optimization burden. Initially, relative Sim3 transformations among UAVs are swiftly estimated, with UWB measurements facilitating metric scale recovery in the absence of GNSS. Subsequently, a global pose optimization is performed to effectively mitigate cumulative drift. By integrating UWB, VO, and co-visibility data within this framework, both local geometric consistency and global pose accuracy are robustly maintained. Through comprehensive simulation and real-world testing, we demonstrate that our system not only improves UAV positioning accuracy in challenging scenarios but also facilitates high-quality, online integration of dense point clouds in large-scale areas. This research offers valuable contributions and practical techniques for precise, real-time map reconstruction using an autonomous UAV fleet, particularly in GNSS-denied environments.

ICRA 2024 · Conference Paper

AutoFusion: Autonomous Visual Geolocation and Online Dense Reconstruction for UAV Cluster

  • Yizhu Zhang
  • Shuhui Bu
  • Yifei Dong 0008
  • Yu Zhang 0197
  • Kun Li
  • Lin Chen 0042

Real-time dense reconstruction using Unmanned Aerial Vehicles (UAVs) is becoming increasingly popular in large-scale rescue and environmental monitoring tasks. However, due to the energy constraints of a single UAV, efficiency can be greatly improved through the collaboration of multiple UAVs. Nevertheless, when faced with unknown environments or the loss of the Global Navigation Satellite System (GNSS) signal, most multi-UAV SLAM systems cannot work, making it hard to construct a globally consistent map. In this paper, we propose a real-time dense reconstruction system called AutoFusion for multiple UAVs, which robustly supports scenarios with lost global positioning and weak co-visibility. We propose a Visual Geolocation and Matching Network (VGMN) that uses a graph convolutional neural network as a feature extractor and can acquire geographical location information solely from images. We also present a real-time dense reconstruction framework for multiple UAVs with autonomous visual geolocation. UAV agents send images and relative positions to the ground server, which processes the data using VGMN for multi-agent geolocation optimization, including initialization, pose graph optimization, and map fusion. Extensive experiments demonstrate that our system can efficiently and stably construct large-scale dense maps in real time with high accuracy and robustness.

IROS 2020 · Conference Paper

DenseFusion: Large-Scale Online Dense Pointcloud and DSM Mapping for UAVs

  • Lin Chen 0042
  • Yong Zhao
  • Shibiao Xu
  • Shuhui Bu
  • Pengcheng Han
  • Gang Wan

With the rapid development of unmanned aerial vehicles, the demand for generating maps efficiently and quickly is increasing. To realize online mapping, we develop a real-time dense mapping framework named DenseFusion, which incrementally generates a dense geo-referenced 3D point cloud, a digital orthophoto map (DOM), and a digital surface model (DSM) from sequential aerial images with optional GPS information. The proposed method works in real time on standard CPUs, even when processing high-resolution images. Based on an advanced monocular SLAM, our system first estimates camera poses and extracts effective keyframes, then constructs virtual stereo pairs from consecutive frames to generate pruned dense 3D point clouds; a novel real-time DSM fusion method is then proposed to incrementally process the dense point cloud. Finally, a high-efficiency visualization system adopting a dynamic level-of-detail (LoD) method is developed, which renders the dense point cloud and DSM smoothly. The performance of the proposed method is evaluated through qualitative and quantitative experiments. The results indicate that, compared to traditional structure-from-motion-based approaches, the presented framework is able to output both large-scale high-quality DOM and DSM in real time with low computational cost.

IROS 2019 · Conference Paper

TerrainFusion: Real-time Digital Surface Model Reconstruction based on Monocular SLAM

  • Wei Wang
  • Yong Zhao
  • Pengcheng Han
  • Pengcheng Zhao
  • Shuhui Bu

This paper presents an algorithm that can generate a live digital surface model (DSM) during flight based on simultaneous localization and mapping (SLAM). We process each keyframe output by a monocular SLAM system to generate a local DSM, and fuse the local DSM into the global tiled DSM incrementally. During local DSM generation, a local digital elevation model (DEM) is estimated by projecting the filtered 2D Delaunay mesh to a 3D mesh, and a local orthomosaic is obtained by projecting triangular image patches onto a 2D mesh. During DSM fusion, both the local DEM and the orthomosaic are split into tiles and fused into the global tiled DEM and orthomosaic, respectively, with a multiband algorithm. Both the efficient DSM generation and fusion algorithms contribute to achieving real-time reconstruction. Qualitative and quantitative experiments on a public aerial image dataset with different scenarios are performed to validate the effectiveness of the proposed method. Compared with traditional structure-from-motion (SfM) based approaches, the presented system is able to output both large-scale high-quality DEM and orthomosaic in real time with low computational cost.

IROS 2016 · Conference Paper

Map2DFusion: Real-time incremental UAV image mosaicing based on monocular SLAM

  • Shuhui Bu
  • Yong Zhao
  • Gang Wan
  • Zhenbao Liu

In this paper we present a real-time approach to stitching large-scale aerial images incrementally. A monocular SLAM system is used to estimate camera position and attitude, and meanwhile a 3D point cloud map is generated. When GPS information is available, the estimated trajectory is transformed to WGS84 coordinates after automatic time synchronization, so the output orthoimage retains global coordinates without ground control points. The final image is fused and visualized instantaneously with a proposed adaptive weighted multiband algorithm. To evaluate the effectiveness of the proposed method, we create a publicly available aerial image dataset with sequences from different environments. The experimental results demonstrate that our system achieves high efficiency and quality compared to state-of-the-art methods. In addition, we share the code on the project website with a detailed introduction and results.