
Author name cluster

Liang Yu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
2 author rows

Possible papers (6)

AAAI 2026 Conference Paper

OscuFit: Learning to Fit Osculating Implicit Quadrics for Point Clouds

  • Rao Fu
  • Qian Li
  • Liang Yu
  • Jianmin Zheng

This paper addresses the challenge of estimating local surface differential properties, specifically surface normals and curvatures, from raw 3D point clouds. Traditional methods either rely on fitting pre-defined analytic surfaces, risking model bias, or directly regress normals and curvatures, overlooking their intrinsic geometric correlation. We propose a learning-based approach that locally fits osculating implicit quadrics to recover both normals and curvatures simultaneously. Drawing on classical differential geometry, we exploit the fact that every point on a C² surface admits an osculating quadric in Monge form that exactly reproduces local differential properties. However, the Monge frame itself depends on the very differential quantities being estimated. To bypass this circularity, we reformulate the Monge-form quadric as an implicit representation in a canonical local frame derived solely from point coordinates, enabling supervised learning without requiring Monge-frame alignment. This reformulation allows us to construct a ground-truth dataset of such local-frame quadrics and train a neural network to predict per-point weights and offsets for a robust weighted least-squares fitting process. The learned offsets account for the deviations of neighboring points from the idealized osculating surface. We further incorporate stable curvature formulations into the training loss alongside normal supervision to enhance estimation fidelity. Extensive experiments on diverse datasets demonstrate that our method outperforms prior approaches in normal and curvature estimation from raw point clouds.
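OscuFit's numerical core is a weighted least-squares surface fit in a local frame derived from point coordinates alone, with a network supplying per-point weights and offsets. As a rough classical stand-in, the sketch below fits a degree-2 height field (the osculating jet, whose Monge form is z = (κ₁x² + κ₂y²)/2) in a PCA frame and reads off the normal and principal curvatures. The weights are hand-supplied rather than learned, there are no learned offsets, and all names are illustrative, not from the paper.

```python
import numpy as np

def fit_osculating_jet(points, query, weights=None):
    """Weighted least-squares fit of a degree-2 height field around `query`,
    in a PCA frame built from the neighborhood coordinates alone.
    Classical stand-in for the paper's learned implicit-quadric fit."""
    Q = points - query
    if weights is None:
        weights = np.ones(len(Q))
    # PCA frame from coordinates only (no normals needed): the eigenvector
    # with the smallest eigenvalue approximates the surface normal direction.
    C = (Q * weights[:, None]).T @ Q / weights.sum()
    _, eigvec = np.linalg.eigh(C)        # eigenvalues ascending
    R = eigvec[:, [2, 1, 0]]             # columns: tangent1, tangent2, ~normal
    local = Q @ R                        # rotate into the local frame
    x, y, z = local[:, 0], local[:, 1], local[:, 2]

    # Height-field model z ≈ a0 + a1 x + a2 y + a3 x² + a4 xy + a5 y²
    A = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)
    w = np.sqrt(weights)
    coef, *_ = np.linalg.lstsq(A * w[:, None], z * w, rcond=None)
    a0, a1, a2, a3, a4, a5 = coef

    # Normal of z = h(x, y) at the origin: (-h_x, -h_y, 1), back in world frame.
    n_local = np.array([-a1, -a2, 1.0])
    n_local /= np.linalg.norm(n_local)
    normal = R @ n_local

    # Principal curvatures: eigenvalues of the Weingarten map W = I⁻¹ II.
    E, F, G = 1 + a1**2, a1 * a2, 1 + a2**2              # first fundamental form
    denom = np.sqrt(1 + a1**2 + a2**2)
    L, M, N = 2 * a3 / denom, a4 / denom, 2 * a5 / denom  # second fundamental form
    W = np.linalg.solve(np.array([[E, F], [F, G]]),
                        np.array([[L, M], [M, N]]))
    k1, k2 = np.linalg.eigvals(W)
    return normal, float(k1.real), float(k2.real)
```

In the paper's formulation, a network predicts the per-point weights and adds offsets that absorb each neighbor's deviation from the idealized osculating surface; the fit itself remains a plain weighted least squares.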

AAAI 2026 Conference Paper

SchellingFormer: Laplacian Matrix-guided Geometric Transformer for Robust Schelling Point Detection

  • Yihao Chen
  • Haobo Jiang
  • Liang Yu
  • Jianmin Zheng

Detecting Schelling Points—salient 3D mesh landmarks that serve as natural reference points for shape analysis—is a challenging problem in geometry processing. While existing CNN-based methods struggle with limited receptive fields and poor geometric context modeling, this paper proposes SchellingFormer, a novel Laplacian matrix-guided Geometric Transformer that effectively captures long-range dependencies and discriminative geometric features for robust Schelling point prediction. Our framework consists of two key components: (i) a hybrid geometric feature embedding module that integrates handcrafted descriptors (coordinates, Gaussian curvature, and curvature differences) to encode local geometry, and (ii) a Laplacian-driven vector attention mechanism in which spatial relationships encoded by the Laplacian matrix guide feature aggregation within the Transformer. This approach enables adaptive, geometry-aware message passing and contextual representation learning. Extensive experiments demonstrate that SchellingFormer outperforms state-of-the-art methods across multiple evaluation metrics. Our work bridges the gap between spectral mesh analysis and Transformer-based learning, offering a powerful tool for 3D shape understanding tasks such as shape matching and saliency detection.
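To make the Laplacian-guided attention idea concrete, here is a minimal sketch under stated assumptions: a combinatorial graph Laplacian L = D − A is built from mesh connectivity, and the adjacency it encodes is added as a bias to the attention logits, so mesh neighbors receive a boost during aggregation. This is a scalar dot-product stand-in for the paper's vector attention, and the additive coupling is an assumption, not the paper's exact mechanism.

```python
import numpy as np

def graph_laplacian(num_verts, faces):
    """Combinatorial graph Laplacian L = D - A from triangle faces.
    The abstract does not pin down the Laplacian construction; a
    cotangent Laplacian would be a natural alternative."""
    A = np.zeros((num_verts, num_verts))
    for f in faces:
        for i, j in ((0, 1), (1, 2), (2, 0)):
            A[f[i], f[j]] = A[f[j], f[i]] = 1.0
    return np.diag(A.sum(axis=1)) - A

def laplacian_guided_attention(X, L, Wq, Wk, Wv, alpha=1.0):
    """One self-attention layer over mesh vertices whose logits are
    biased by the adjacency recovered from L (A = D - L), so spatial
    structure steers feature aggregation."""
    A = np.diag(np.diag(L)) - L
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    logits = Q @ K.T / np.sqrt(K.shape[1]) + alpha * A
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ V
```

Here X is the (V, d) per-vertex feature matrix from the embedding module and the projection matrices are learned parameters; stacking a few such layers with a per-vertex scoring head would yield Schelling-point probabilities.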

ICML 2025 Conference Paper

Generative Point Cloud Registration

  • Haobo Jiang
  • Jin Xie 0001
  • Jian Yang 0003
  • Liang Yu
  • Jianmin Zheng

In this paper, we propose a novel 3D registration paradigm, Generative Point Cloud Registration, which bridges advanced 2D generative models with 3D matching tasks to enhance registration performance. Our key idea is to generate cross-view consistent image pairs that are well aligned with the source and target point clouds, enabling geometric-color feature fusion to facilitate robust matching. To ensure high-quality matching, the generated image pair should exhibit both 2D-3D geometric consistency and cross-view texture consistency. To achieve this, we introduce Match-ControlNet, a matching-specific, controllable 2D generative model. Specifically, it leverages the depth-conditioned generation capability of ControlNet to produce images that are geometrically aligned with depth maps derived from the point clouds, ensuring 2D-3D geometric consistency. Additionally, by incorporating a coupled conditional denoising scheme and coupled prompt guidance, Match-ControlNet further promotes cross-view feature interaction, guiding texture-consistent generation. Our generative 3D registration paradigm is general and can be seamlessly integrated into various registration methods to enhance their performance. Extensive experiments on the 3DMatch and ScanNet datasets verify the effectiveness of our approach.
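The geometric half of this pipeline, deriving a depth map from a point cloud so that a depth-conditioned generator (ControlNet-style) can synthesize a geometrically aligned image, comes down to a pinhole projection with a z-buffer. A minimal sketch, where the intrinsics K and extrinsics T are illustrative inputs rather than values from the paper:

```python
import numpy as np

def render_depth_map(points, K, T, hw=(256, 256)):
    """Z-buffer a point cloud into a depth map under a pinhole camera.

    Depth maps like this are the geometric conditioning signal a
    depth-conditioned generator would consume; K (3x3 intrinsics) and
    T (4x4 world-to-camera transform) are illustrative inputs."""
    H, W = hw
    # World -> camera coordinates.
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (pts_h @ T.T)[:, :3]
    front = cam[:, 2] > 1e-6                  # keep points in front of the camera
    cam = cam[front]
    # Camera -> pixel coordinates via the intrinsics.
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v, z = uv[:, 0].astype(int), uv[:, 1].astype(int), cam[:, 2]
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z = u[inside], v[inside], z[inside]
    depth = np.full((H, W), np.inf)
    # Z-buffer: write far-to-near so the nearest point per pixel wins.
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    depth[np.isinf(depth)] = 0.0              # empty pixels -> 0
    return depth
```

The generated RGB images would then be paired pixel-to-point with the clouds that produced their depth maps, allowing color features to be fused with geometric features before matching.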