Arrow Research

Author name cluster

Yu-Qi Yang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers · 1 author row

Possible papers (2)

IJCAI 2021 · Conference Paper

Spline Positional Encoding for Learning 3D Implicit Signed Distance Fields

  • Peng-Shuai Wang
  • Yang Liu
  • Yu-Qi Yang
  • Xin Tong

Multilayer perceptrons (MLPs) have been successfully used to represent 3D shapes implicitly and compactly, by mapping 3D coordinates to the corresponding signed distance values or occupancy values. In this paper, we propose a novel positional encoding scheme, called Spline Positional Encoding, to map the input coordinates to a high-dimensional space before passing them to MLPs, which helps recover 3D signed distance fields with fine-scale geometric details from unorganized 3D point clouds. We verified the superiority of our approach over other positional encoding schemes on the tasks of 3D shape reconstruction and 3D shape space learning from input point clouds. We also demonstrate and evaluate the efficacy of our approach when extended to image reconstruction.
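The abstract describes mapping input coordinates into a higher-dimensional space before feeding them to an MLP. As a hedged illustration of that general pattern only — a fixed Fourier-feature encoding stands in here for the paper's learnable spline basis, whose exact formulation is given in the paper:

```python
import numpy as np

def positional_encoding(coords, num_freqs=6):
    """Map 3D coordinates to a higher-dimensional feature space.

    NOTE: this is a generic Fourier-feature encoding used as a stand-in;
    Spline Positional Encoding replaces these fixed sinusoids with
    learnable spline functions (see the paper for the exact scheme).
    """
    coords = np.asarray(coords, dtype=np.float64)   # (N, 3)
    freqs = 2.0 ** np.arange(num_freqs)             # (F,) frequency bands
    angles = coords[:, :, None] * freqs             # (N, 3, F)
    # Concatenate sin/cos features and flatten per point
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(coords.shape[0], -1)         # (N, 3 * 2F)

pts = np.random.rand(4, 3)                          # 4 sample points
feat = positional_encoding(pts)
print(feat.shape)  # (4, 36)
```

An MLP would then be trained on `feat` (rather than raw coordinates) to regress signed distance values.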

AAAI 2021 · Conference Paper

Unsupervised 3D Learning for Shape Analysis via Multiresolution Instance Discrimination

  • Peng-Shuai Wang
  • Yu-Qi Yang
  • Qian-Fang Zou
  • Zhirong Wu
  • Yang Liu
  • Xin Tong

We propose an unsupervised method for learning a generic and efficient shape encoding network for different shape analysis tasks. Our key idea is to jointly encode and learn shape and point features from unlabeled 3D point clouds. For this purpose, we adapt HRNet to octree-based convolutional neural networks for jointly encoding shape and point features with fused multiresolution subnetworks, and design a simple-yet-efficient Multiresolution Instance Discrimination (MID) loss for jointly learning the shape and point features. Our network takes a 3D point cloud as input and outputs both shape and point features. After training, our network is concatenated with simple task-specific back-ends and fine-tuned for different shape analysis tasks. We evaluate the efficacy and generality of our method on a set of shape analysis tasks, including shape classification, semantic shape segmentation, and shape registration. With simple back-ends, our network demonstrates the best performance among all unsupervised methods and achieves performance competitive with supervised methods. For fine-grained shape segmentation on the PartNet dataset, our method even surpasses existing supervised methods by a large margin.
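As a hedged sketch of the instance-discrimination idea underlying the MID loss: each sample's feature is classified as its own instance against a bank of per-instance prototypes. The function name, prototype bank, and single-resolution setup below are illustrative assumptions; the paper's loss applies this jointly at the shape level and at multiple point resolutions.

```python
import numpy as np

def instance_discrimination_loss(features, memory_bank, instance_ids,
                                 temperature=0.07):
    """Cross-entropy that classifies each feature as its own instance.

    NOTE: minimal single-level sketch of instance discrimination; the
    paper's MID loss combines shape-level and multiresolution point-level
    terms, which are not reproduced here.
    """
    # L2-normalize so similarity reduces to a dot product
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    bank = memory_bank / np.linalg.norm(memory_bank, axis=1, keepdims=True)
    logits = feats @ bank.T / temperature            # (N, num_instances)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of each feature's own instance class
    return -log_probs[np.arange(len(feats)), instance_ids].mean()

rng = np.random.default_rng(0)
bank = rng.normal(size=(10, 32))          # 10 instances, 32-dim prototypes
feats = bank[[3, 7]] + 0.01 * rng.normal(size=(2, 32))
loss = instance_discrimination_loss(feats, bank, np.array([3, 7]))
print(loss)  # near zero: each feature matches its own prototype
```

Minimizing this loss pulls each point cloud's features toward its own prototype and away from all others, which is what lets the features transfer to downstream shape analysis tasks without labels.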