
AAAI 2026

Temporal and Spatial Representation Learning for Multimodal Low-Beam 3D Object Detection

Conference Paper · AAAI Technical Track on Computer Vision IX

Abstract

To facilitate the large-scale deployment of autonomous driving in real-world scenarios, developing low-cost, high-performance 3D object detection systems has become a critical technical challenge. Although high-beam LiDARs provide denser point cloud data, their prohibitive hardware cost and high power consumption limit their practicality. In contrast, low-beam LiDARs offer affordability and energy efficiency, but often suffer from inadequate perception accuracy due to their sparser point clouds. This paper focuses on multimodal 3D object detection with low-beam LiDARs and proposes a novel approach that integrates temporal and spatial representation learning to enhance detection accuracy under sparser sensor conditions. Specifically, our approach comprises: (1) a Temporal Feature Prediction Learning (TFPL) module, which predicts the current Bird's-Eye-View (BEV) representation from a sequence of historical BEV features; (2) a Spatial Feature Observation Learning (SFOL) module, which aligns BEV features from high-beam and low-beam LiDARs, encouraging the low-beam features to approximate high-beam representations; and (3) an Uncertainty-Aware Fusion (UAF) strategy, which performs feature-wise weighting between the predicted and observed BEV features by leveraging channel-wise variances, effectively mitigating perturbations in the learned BEV representations. Extensive experiments on the KITTI and nuScenes 3D object detection datasets demonstrate that the proposed approach significantly improves detection performance under low-beam LiDAR configurations.
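The UAF strategy described above weights the predicted (TFPL) and observed (SFOL) BEV features by their channel-wise variances. The abstract does not give the exact formula, so the sketch below assumes a standard inverse-variance weighting scheme, where the per-channel spatial variance serves as an uncertainty proxy and lower-variance (more certain) features receive higher weight; the function name and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def uncertainty_aware_fusion(pred_bev, obs_bev, eps=1e-6):
    """Fuse predicted and observed BEV features (shape (C, H, W)) by
    inverse-variance weighting. This is a hypothetical sketch of the
    UAF idea, not the paper's actual formulation.
    """
    # Channel-wise variance over spatial dims as an uncertainty proxy.
    var_p = pred_bev.var(axis=(1, 2), keepdims=True)   # (C, 1, 1)
    var_o = obs_bev.var(axis=(1, 2), keepdims=True)    # (C, 1, 1)
    # Lower variance -> higher weight; eps avoids division by zero.
    w_p = 1.0 / (var_p + eps)
    w_o = 1.0 / (var_o + eps)
    # Normalized convex combination per channel.
    return (w_p * pred_bev + w_o * obs_bev) / (w_p + w_o)

rng = np.random.default_rng(0)
pred = rng.normal(size=(4, 8, 8))   # predicted BEV features (TFPL)
obs = rng.normal(size=(4, 8, 8))    # observed BEV features (SFOL)
fused = uncertainty_aware_fusion(pred, obs)
print(fused.shape)
```

Because the weights are normalized per channel, the fused feature is a convex combination of the two inputs, so a noisy (high-variance) branch cannot dominate the fusion.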


Context

Venue
AAAI Conference on Artificial Intelligence