Arrow Research search

Author name cluster

Yuru Pei

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact name matches; it is not a full identity-disambiguation profile.

2 papers
1 author row

Possible papers


AAAI 2025 · Conference Paper

SCCS: Deep Neural Spectral Clustering for Self-Supervised Subcellular Structure Segmentation

  • Jimao Jiang
  • Diya Sun
  • Tianbing Wang
  • Yuru Pei

Subcellular structure segmentation is a fundamental task in biological imaging. Existing approaches combine self-supervised representation learning with classical k-means clustering to achieve unsupervised image segmentation, but are constrained by time-consuming test-time pixel-wise feature extraction and clustering synchronization. This study introduces SCCS, a lightweight graph neural network-based spectral clustering framework for end-to-end subcellular structure segmentation on superpixel graphs, greatly reducing the computational cost of test-time numerical spectral clustering and relieving inter-graph label inconsistency. Specifically, SCCS exploits a self-supervised masked autoencoder for representation learning and the construction of superpixel graphs (spG). Unlike per-graph scalar affinity-based spectral clustering, SCCS parameterizes the mapping from learned deep spG representations to coordinates in the spectral embedding space and to the clustering assignments. SCCS is optimized under unsupervised eigendecomposition and incremental clustering criteria, which synchronize intra- and inter-graph spectral clustering. The proposed approach is evaluated on a publicly available volumetric electron microscopy dataset. Experiments demonstrate the effectiveness and performance gains of SCCS over the state of the art in discovering a variety of subcellular structures.
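The classical pipeline that SCCS amortizes, per-graph numerical spectral clustering followed by k-means in the spectral embedding space, can be sketched as follows. This is a minimal NumPy illustration over generic node descriptors (one per node, e.g. per superpixel), not the paper's GNN parameterization; the function name and parameters are hypothetical:

```python
import numpy as np

def spectral_cluster(feats, k, sigma=1.0, iters=50):
    """Per-graph numerical spectral clustering: the test-time step SCCS
    replaces with a learned mapping. All names here are illustrative."""
    # Gaussian affinity between node descriptors.
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    dinv = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    L = np.eye(len(feats)) - (dinv[:, None] * W) * dinv[None, :]
    # Spectral embedding: eigenvectors of the k smallest eigenvalues.
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, :k]
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)
    # Plain k-means in the embedding space, farthest-point initialization.
    centers = [emb[0]]
    for _ in range(k - 1):
        dist = np.min([((emb - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(emb[dist.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((emb[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = emb[labels == c].mean(0)
    return labels
```

Note that the eigendecomposition runs anew for every test graph and scales cubically in the node count, which is the per-graph cost the abstract's learned spectral mapping is designed to avoid.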

AAAI 2020 · Conference Paper

Fully Convolutional Network for Consistent Voxel-Wise Correspondence

  • Yungeng Zhang
  • Yuru Pei
  • Yuke Guo
  • Gengyu Ma
  • Tianmin Xu
  • Hongbin Zha

In this paper, we propose a fully convolutional network-based dense map from voxels to an invertible pair of displacement vector fields with respect to a template grid, for consistent voxel-wise correspondence. We parameterize the volumetric mapping using a convolutional network and train it in an unsupervised way by leveraging the spatial transformer to minimize the gap between the warped volumetric image and the template grid. Instead of learning a unidirectional map, we learn nonlinear mapping functions for both forward and backward transformations. We introduce combinational inverse constraints for the volumetric one-to-one maps, where pairwise and triple constraints are utilized to learn cycle-consistent correspondence maps between volumes. Experiments on both synthetic and clinically captured volumetric cone-beam CT (CBCT) images show that the proposed framework is effective and competitive against state-of-the-art deformable registration techniques.
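The pairwise inverse constraint described above asks that composing the forward and backward maps return each point to itself. A toy one-dimensional sketch of that residual (a hypothetical stand-in for the paper's volumetric fields; the function name is illustrative):

```python
import numpy as np

def inverse_consistency_loss(u_fwd, u_bwd, grid):
    """Pairwise inverse-consistency residual for a pair of 1-D
    displacement fields sampled on `grid` (illustrative 1-D stand-in
    for the paper's volumetric displacement vector fields).

    Forward map: phi(x) = x + u_fwd(x); backward map: psi(y) = y + u_bwd(y).
    Cycle consistency asks psi(phi(x)) ~= x, i.e. the composed residual
    u_fwd(x) + u_bwd(phi(x)) ~= 0 at every grid point.
    """
    y = grid + u_fwd                        # phi(x) for each grid point
    u_bwd_at_y = np.interp(y, grid, u_bwd)  # sample backward field at phi(x)
    residual = u_fwd + u_bwd_at_y           # psi(phi(x)) - x
    return float(np.mean(residual ** 2))
```

A forward translation and its exact inverse (e.g. constant displacements +0.05 and -0.05) drive this loss to zero, while a mismatched pair leaves a positive residual; in training, minimizing such a term over both directions encourages the two learned fields to be one-to-one inverses of each other.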