
AAAI 2022

SVT-Net: Super Light-Weight Sparse Voxel Transformer for Large Scale Place Recognition

Conference Paper | AAAI Technical Track on Computer Vision I

Abstract

Simultaneous Localization and Mapping (SLAM) and autonomous driving have become increasingly important in recent years, and point cloud-based large-scale place recognition is a backbone of both. While many models have achieved acceptable performance by learning short-range local features, they typically neglect long-range contextual properties. Moreover, model size has become a serious obstacle to their wide deployment. To overcome these challenges, we propose a super light-weight network model termed SVT-Net. On top of the highly efficient 3D Sparse Convolution (SP-Conv), an Atom-based Sparse Voxel Transformer (ASVT) and a Cluster-based Sparse Voxel Transformer (CSVT) are proposed to learn short-range local features and long-range contextual features, respectively. Composed of ASVT and CSVT, SVT-Net achieves state-of-the-art performance in terms of both recognition accuracy and running speed with a super-light model size (0.9M parameters). To further boost efficiency, we also introduce two simplified versions, which likewise achieve state-of-the-art performance while reducing the model size to 0.8M and 0.4M parameters respectively.
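The paper's implementation is not reproduced here; as a rough illustration of the long-range context mixing that a voxel transformer performs on top of sparse convolutional features, below is a minimal NumPy sketch of self-attention over per-voxel feature vectors. All names, shapes, and the single-head formulation are illustrative assumptions, not the authors' actual ASVT/CSVT code.

```python
import numpy as np

def voxel_self_attention(q, k, v):
    """Single-head scaled dot-product attention over N voxel features.

    q, k, v: (N, d) arrays, one d-dim feature per occupied voxel.
    Returns an (N, d) array where each voxel's feature is a weighted
    mix of all voxels' features, i.e. long-range context aggregation.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (N, N) pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax: each row sums to 1
    return weights @ v                             # (N, d) context-mixed features

# Toy example: 8 occupied voxels with 16-dim features (hypothetical sizes).
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16))
out = voxel_self_attention(feats, feats, feats)
print(out.shape)  # (8, 16)
```

In a sparse-voxel setting the attention would operate only over the occupied voxels produced by SP-Conv, which is what keeps the cost and parameter count low relative to dense 3D attention.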

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
AAAI Conference on Artificial Intelligence
Archive span
1980-2026
Indexed papers
28718
Paper id
771271404096793917