Arrow Research search

Author name cluster

Hainan Cui

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers

3

AAAI Conference 2026 Conference Paper

Resilient UAV Swarm with Fast Connectivity Recovery and Extensive Coverage

  • Yabin Peng
  • Chenyu Zhou
  • Hainan Cui
  • Tong Duan
  • Haoyang Chen
  • Fan Zhang
  • Shaoxun Liu

To address partial node failures in unmanned aerial vehicle swarms, self-healing communication techniques are commonly employed to restore backbone connectivity while preserving area coverage. However, existing heuristic methods struggle to scale under large-scale failures and dynamic conditions, while learning-based approaches often suffer from spatial collapse, resulting in significant coverage loss. To overcome these limitations, we propose a resilient self-healing framework that enables rapid connectivity recovery and wide-area coverage through a divide-and-conquer strategy. First, we introduce a buffered dynamic virtual-force expansion mechanism that categorizes pairwise distances into repulsive, neutral, and attractive zones, allowing nodes to disperse appropriately while preserving communication links and maintaining safety buffers. Subsequently, we design a multipartite graph convolution module that reasons over subnetwork-level interactions and facilitates cross-subnetwork reconnection with global structural awareness. Finally, we develop an adaptive fusion strategy that combines the outputs of these two modules with time-aware weighting to generate the final motion decisions. Experimental results in both random and uniform deployment scenarios demonstrate that our approach outperforms state-of-the-art methods in connectivity restoration speed and communication coverage.
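The zone-based virtual-force idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the thresholds `d_rep` and `d_att`, the gain `k`, and the linear force profile are all assumptions chosen for clarity.

```python
import math

def virtual_force(dist, d_rep=20.0, d_att=80.0, k=1.0):
    """Classify a pairwise distance into repulsive, neutral, or
    attractive zones and return a signed 1-D force magnitude:
    negative pushes the nodes apart, positive pulls them together.
    Thresholds and the linear profile are illustrative only."""
    if dist < d_rep:           # too close: repulsive zone (safety buffer)
        return -k * (d_rep - dist)
    if dist > d_att:           # link near breaking: attractive zone
        return k * (dist - d_att)
    return 0.0                 # neutral buffer zone: no force

def resultant_force(pos, neighbours, **kw):
    """Sum zone-based pairwise forces from all neighbours into a
    2-D motion vector for one node."""
    fx = fy = 0.0
    for ox, oy in neighbours:
        dx, dy = ox - pos[0], oy - pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0:
            continue           # coincident node: direction undefined
        f = virtual_force(dist, **kw)
        fx += f * dx / dist
        fy += f * dy / dist
    return fx, fy
```

A node with one neighbour 100 m away (beyond the attractive threshold) is pulled toward it, while a neighbour inside 20 m pushes it away; distances in the buffer zone contribute nothing, which is what lets the swarm disperse without breaking links.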

AAAI Conference 2022 Conference Paper

MMA: Multi-Camera Based Global Motion Averaging

  • Hainan Cui
  • Shuhan Shen

In order to fully perceive the surrounding environment, many intelligent robots and self-driving cars are equipped with a multi-camera system. Structure-from-motion (SfM) is used with such systems to reconstruct the scene, but the fixed relative poses between the cameras in the multi-camera rig are usually not considered. This paper presents a tailor-made multi-camera-based motion averaging system in which the fixed relative poses are utilized to improve the accuracy and robustness of SfM. Our approach starts by dividing the images into reference and non-reference images, with the edges of the view-graph divided into four categories accordingly. Then, a multi-camera-based rotation averaging problem is formulated and solved in two stages, using an iterative re-weighted least squares (IRLS) scheme to handle outliers. Finally, a multi-camera-based translation averaging problem is formulated, and an L1-norm based optimization scheme is proposed to compute the relative translations of the multi-camera system and the reference camera positions simultaneously. Experiments demonstrate that our algorithm achieves superior accuracy and robustness on various datasets compared to state-of-the-art methods.
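The iterative re-weighted least squares (IRLS) idea used for the robust rotation-averaging stage can be illustrated on a scalar toy problem. This is a stand-in sketch, not the paper's solver: the Cauchy-style weight `1 / (1 + (r/sigma)^2)`, the scale `sigma`, and the iteration count are assumptions, and real rotation averaging operates on SO(3) rather than scalars.

```python
def irls_mean(measurements, sigma=1.0, iters=20):
    """Robust average via IRLS: start from the plain least-squares
    mean, then repeatedly re-weight each measurement by a Cauchy-like
    factor w = 1 / (1 + (r / sigma)^2), so residuals far from the
    current estimate (outliers) are progressively down-weighted."""
    x = sum(measurements) / len(measurements)   # initial LS estimate
    for _ in range(iters):
        w = [1.0 / (1.0 + ((m - x) / sigma) ** 2) for m in measurements]
        x = sum(wi * mi for wi, mi in zip(w, measurements)) / sum(w)
    return x
```

With inliers near 1.0 and one gross outlier at 10.0, the plain mean is 3.25, while the IRLS estimate settles close to the inlier consensus; the same down-weighting mechanism is what lets rotation averaging tolerate bad relative-pose edges in the view-graph.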

IROS Conference 2022 Conference Paper

Multi-Camera-LiDAR Auto-Calibration by Joint Structure-from-Motion

  • Diantao Tu
  • Baoyu Wang
  • Hainan Cui
  • Yuqian Liu
  • Shuhan Shen

Multiple sensors, especially cameras and LiDARs, are widely used in autonomous vehicles. To fuse data from different sensors accurately, precise calibration is required, including camera intrinsic parameters and the relative poses between the multiple cameras and LiDARs. However, most existing camera-LiDAR calibration methods require placing manually designed calibration objects at multiple locations multiple times, which is time-consuming, labor-intensive, and unsuitable for frequent use. To address this, in this paper we propose a novel calibration pipeline that automatically calibrates multiple cameras and multiple LiDARs within a Structure-from-Motion (SfM) process. In our pipeline, we first perform a global SfM on all images, aided by rough LiDAR data, to obtain the initial poses of all sensors. Then, feature points on lines and planes are extracted from both the SfM point cloud and the LiDAR scans. With these features, a global Bundle Adjustment is performed to jointly minimize point reprojection errors, point-to-line errors, and point-to-plane errors. During this minimization, the camera intrinsic parameters, the camera and LiDAR poses, and the SfM point cloud are refined jointly. The proposed method exploits the characteristics of natural scenes, does not require manually designed calibration objects, and incorporates all calibration parameters into a unified optimization framework. Experiments on autonomous vehicles with different sensor configurations demonstrate the effectiveness and robustness of the proposed method.
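The point-to-plane and point-to-line residuals that enter the joint Bundle Adjustment can be written down directly. The sketch below shows only the per-feature geometric residuals, under the assumption of a unit plane normal `n` (plane `n·x + d = 0`) and a unit line direction `u`; the full optimization that couples them with reprojection errors and sensor poses is not reproduced here.

```python
import math

def point_to_plane_error(p, n, d):
    """Signed distance from 3-D point p to the plane n . x + d = 0,
    assuming n is a unit normal. This is the per-point residual a
    joint BA would drive toward zero for points on LiDAR planes."""
    return sum(pi * ni for pi, ni in zip(p, n)) + d

def point_to_line_error(p, a, u):
    """Distance from point p to the line through a with unit
    direction u, computed as the norm of the component of (p - a)
    perpendicular to u."""
    v = [pi - ai for pi, ai in zip(p, a)]
    t = sum(vi * ui for vi, ui in zip(v, u))      # projection length onto u
    rej = [vi - t * ui for vi, ui in zip(v, u)]   # perpendicular remainder
    return math.sqrt(sum(r * r for r in rej))
```

In a least-squares framework these residuals would be squared, weighted against the reprojection terms, and minimized over the intrinsics, sensor poses, and SfM points simultaneously.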