Arrow Research search

Author name cluster

Shuyuan Lin

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

8 papers
1 author row

Possible papers

8

AAAI Conference 2026 · Conference Paper

MCI-Net: A Robust Multi-Domain Context Integration Network for Point Cloud Registration

  • Shuyuan Lin
  • Wenwu Peng
  • Junjie Huang
  • Qiang Qi
  • Miaohui Wang
  • Jian Weng

Robust and discriminative feature learning is critical for high-quality point cloud registration. However, existing deep learning–based methods typically rely on Euclidean neighborhood-based strategies for feature extraction, which struggle to effectively capture the implicit semantics and structural consistency in point clouds. To address these issues, we propose a multi-domain context integration network (MCI-Net) that improves feature representation and registration performance by aggregating contextual cues from diverse domains. Specifically, we propose a graph neighborhood aggregation module, which constructs a global graph to capture the overall structural relationships within point clouds. We then propose a progressive context interaction module to enhance feature discriminability by performing intra-domain feature decoupling and inter-domain context interaction. Finally, we design a dynamic inlier selection method that optimizes inlier weights using residual information from multiple iterations of pose estimation, thereby improving the accuracy and robustness of registration. Extensive experiments on indoor RGB-D and outdoor LiDAR datasets show that the proposed MCI-Net significantly outperforms existing state-of-the-art methods, achieving the highest registration recall of 96.4% on 3DMatch.
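
The dynamic inlier selection described above follows a classic pattern: estimate a pose, measure per-correspondence residuals, and down-weight correspondences with large residuals before re-estimating. A minimal NumPy sketch of that loop is shown below, using a weighted Kabsch/SVD solver and Gaussian reweighting with an adaptive residual scale; these are illustrative choices, not the exact MCI-Net formulation.

```python
import numpy as np

def weighted_kabsch(src, dst, w):
    """Closed-form rigid pose (R, t) from weighted 3D correspondences."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)
    mu_d = (w[:, None] * dst).sum(0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T            # sign correction guards against reflections
    t = mu_d - R @ mu_s
    return R, t

def dynamic_inlier_selection(src, dst, iters=5):
    """Iteratively re-estimate the pose while down-weighting
    correspondences whose residuals from the previous pose are large."""
    w = np.ones(len(src))
    for _ in range(iters):
        R, t = weighted_kabsch(src, dst, w)
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        scale = np.median(resid) + 1e-9       # adaptive residual scale
        w = np.exp(-(resid / scale) ** 2)     # soft inlier weights
    return R, t, w
```

Because the reweighting is soft rather than a hard threshold, this behaves like iteratively reweighted least squares: outliers are suppressed gradually instead of being discarded by a fixed cutoff.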

AAAI Conference 2026 · Conference Paper

MSTDiff: Multiscale-Aware Transformer Diffusion Network for Video Object Detection

  • Qiang Qi
  • Wenqi Shang
  • Xiao Wang
  • Yanjie Liang
  • Shuyuan Lin

Video object detection is a fundamental yet challenging task in computer vision. Recently, DETR-based methods have gained prominence in this domain owing to their powerful global modeling capabilities. However, these methods are still confronted with two key limitations: frame-agnostic initialization of object queries and scale-agnostic attention mechanisms, which hinder their capability to capture the appearance variations of dynamic objects and model the temporal consistency across frames. To alleviate these limitations, we propose a multiscale-aware transformer diffusion network (MSTDiff), a novel framework designed for the video object detection task, including two technical improvements over existing methods. First, we design a diffusion-driven adaptive query module, which models the object query distribution through a diffusion process conditioned on input frames, enabling an adaptive and content-aware initialization of object queries. Second, we develop a multiscale-aware transformer encoder module, which combines multi-head convolutional units with attention mechanisms to enhance multi-scale feature representations while preserving global dependence modeling. We conduct extensive experiments on the public ImageNet VID dataset, and the results demonstrate that our MSTDiff achieves 87.7% mAP with ResNet-101, outperforming most previous state-of-the-art video object detection methods.

AAAI Conference 2026 · Conference Paper

Perceive More with Less: LiDAR Point Cloud Compression at Just Recognizable Distortion for 3D Scene Understanding

  • Miaohui Wang
  • Runnan Huang
  • Taojun Liu
  • Shuyuan Lin
  • Ye Liu
  • Yun Song

Existing LiDAR point cloud (LPC) data coding methods primarily focus on balancing compression efficiency and reconstruction quality according to the human vision system (HVS). However, these methods rarely consider the requirements of downstream scene understanding tasks from the perspective of the machine vision system (MVS). To address this challenge, we explore the maximum degree of LPC compression that has negligible impact on perception accuracy, called LPC-based just recognizable compression distortion (lpcJRCD). Specifically, we introduce a novel point-wise quantization approach for constructing an MVS-based LiDAR dataset and present a new lpcJRCD-guided intelligent compression framework tailored for MVS applications. To enhance MVS-based LPC compression efficiency, we develop a dual-feature interaction (DFI) module that fuses point and voxel features. Additionally, we propose a mask-based loss function to ensure accurate point-wise quality level prediction. Experimental results demonstrate the effectiveness of our proposed model in reducing the average bit rate by up to 94.98% while preserving perception accuracy in autonomous vehicles.

AAAI Conference 2026 · Conference Paper

SC-Net: Robust Correspondence Learning via Spatial and Cross-Channel Context

  • Shuyuan Lin
  • Hailiang Liao
  • Qiang Qi
  • Junjie Huang
  • Taotao Lai
  • Jian Weng

Recent research has focused on using convolutional neural networks (CNNs) as the backbones in two-view correspondence learning, demonstrating significant superiority over methods based on multilayer perceptrons. However, CNN backbones that are not tailored to specific tasks may fail to aggregate global context effectively and may oversmooth dense motion fields in scenes with large disparity. To address these problems, we propose a novel network named SC-Net, which effectively integrates bilateral context from both spatial and channel perspectives. Specifically, we design an adaptive focused regularization module (AFR) to enhance the model's position-awareness and robustness against spurious motion samples, thereby facilitating the generation of a more accurate motion field. We then propose a bilateral field adjustment module (BFA) to refine the motion field by simultaneously modeling long-range relationships and facilitating interaction across spatial and channel dimensions. Finally, we recover the motion vectors from the refined field using a position-aware recovery module (PAR) that ensures consistency and precision. Extensive experiments demonstrate that SC-Net outperforms state-of-the-art methods in relative pose estimation and outlier removal tasks on the YFCC100M and SUN3D datasets.
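
To give a rough sense of what "bilateral context from both spatial and channel perspectives" can mean, the toy sketch below gates each feature channel by a global channel descriptor and then reweights correspondences by a spatial attention score. This is a deliberately simplified illustration of the general idea, not the paper's AFR, BFA, or PAR modules.

```python
import numpy as np

def channel_gate(feats):
    """Gate each channel by a sigmoid of its spatially pooled mean."""
    ctx = feats.mean(axis=0)                   # (C,) global channel context
    gate = 1.0 / (1.0 + np.exp(-ctx))          # per-channel gate in (0, 1)
    return feats * gate

def spatial_reweight(feats):
    """Reweight correspondences by a softmax over per-row scores."""
    score = feats.mean(axis=1)                 # (N,) per-correspondence score
    a = np.exp(score - score.max())
    a /= a.sum()
    return feats * (len(feats) * a)[:, None]   # average weight stays 1

def bilateral_context(feats):
    """Channel gating followed by spatial reweighting on (N, C) features."""
    return spatial_reweight(channel_gate(feats))
```

Chaining the two directions sequentially is only one possible design; the paper's BFA module instead models the interaction between the spatial and channel dimensions jointly.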

IJCAI Conference 2025 · Conference Paper

MGCA-Net: Multi-Graph Contextual Attention Network for Two-View Correspondence Learning

  • Shuyuan Lin
  • Mengtin Lo
  • Haosheng Chen
  • Yanjie Liang
  • Qiangqiang Wu

Two-view correspondence learning is a key task in computer vision, which aims to establish reliable matching relationships for applications such as camera pose estimation and 3D reconstruction. However, existing methods have limitations in local geometric modeling and cross-stage information optimization, which make it difficult to accurately capture the geometric constraints of matched pairs and thus reduce the robustness of the model. To address these challenges, we propose a Multi-Graph Contextual Attention Network (MGCA-Net), which consists of a Contextual Geometric Attention (CGA) module and a Cross-Stage Multi-Graph Consensus (CSMGC) module. Specifically, CGA dynamically integrates spatial position and feature information via an adaptive attention mechanism and enhances the capability to capture both local and global geometric relationships. Meanwhile, CSMGC establishes geometric consensus via a cross-stage sparse graph network, ensuring the consistency of geometric information across different stages. Experimental results on two representative datasets, YFCC100M and SUN3D, show that MGCA-Net significantly outperforms existing SOTA methods in the outlier rejection and camera pose estimation tasks. Source code is available at http://www.linshuyuan.com.

AAAI Conference 2019 · Conference Paper

Hypergraph Optimization for Multi-Structural Geometric Model Fitting

  • Shuyuan Lin
  • Guobao Xiao
  • Yan Yan
  • David Suter
  • Hanzi Wang

Recently, some hypergraph-based methods have been proposed to deal with the problem of model fitting in computer vision, mainly due to the superior capability of hypergraphs to represent the complex relationships between data points. However, a hypergraph becomes extremely complicated when the input data include a large number of data points (usually contaminated with noise and outliers), which significantly increases the computational burden. To overcome this problem, we propose a novel hypergraph-optimization-based model fitting (HOMF) method to construct a simple but effective hypergraph. Specifically, HOMF includes two main parts: an adaptive inlier estimation algorithm for vertex optimization and an iterative hyperedge optimization algorithm for hyperedge optimization. The proposed method is highly efficient, and it can obtain accurate model fitting results within a few iterations. Moreover, HOMF can directly apply spectral clustering to the optimized hypergraph to achieve good fitting performance. Extensive experimental results show that HOMF outperforms several state-of-the-art model fitting methods on both synthetic data and real images, especially in sampling efficiency and in handling data with severe outliers.
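
To make the hypergraph-to-clustering step concrete, here is a small sketch (a simplification for illustration, not HOMF itself) that projects a weighted hypergraph to a pairwise affinity via clique expansion and then runs normalized spectral clustering with a tiny k-means.

```python
import numpy as np

def hypergraph_affinity(H, w):
    """Clique-expand a hypergraph (incidence H: vertices x hyperedges,
    hyperedge weights w) into a pairwise affinity matrix."""
    A = H @ np.diag(w) @ H.T
    np.fill_diagonal(A, 0.0)   # no self-affinity
    return A

def spectral_cluster(A, k):
    """Normalized spectral embedding followed by a tiny k-means."""
    d = np.maximum(A.sum(axis=1), 1e-12)
    Dinv = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - Dinv @ A @ Dinv        # normalized Laplacian
    _, vecs = np.linalg.eigh(L)
    X = vecs[:, :k]                             # k smallest eigenvectors
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    centers = [X[0]]                            # farthest-point initialization
    for _ in range(1, k):
        d2 = ((X[:, None] - np.array(centers)) ** 2).sum(-1).min(1)
        centers.append(X[np.argmax(d2)])
    centers = np.array(centers)
    for _ in range(20):                         # Lloyd iterations
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):             # skip empty clusters
                centers[j] = X[labels == j].mean(0)
    return labels
```

Clique expansion is the simplest way to reduce a hypergraph to a graph; HOMF's contribution lies upstream of this step, in keeping the hypergraph itself small via vertex and hyperedge optimization.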