Arrow Research

Author name cluster

Ziheng Chen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers
2 author rows

Possible papers (7)

AAAI Conference 2026 Conference Paper

Wasserstein-Aligned Hyperbolic Multi-View Clustering

  • Rui Wang
  • Yuting Jiang
  • Xiaoqing Luo
  • Xiao-Jun Wu
  • Nicu Sebe
  • Ziheng Chen

Multi-view clustering (MVC) aims to uncover the latent structure of multi-view data by learning view-common and view-specific information. Although recent studies have explored hyperbolic representations to better tackle the representation gap between different views, they focus primarily on instance-level alignment and neglect global semantic consistency, rendering them vulnerable to view-specific information (e.g., noise and cross-view discrepancies). To this end, this paper proposes a novel Wasserstein-Aligned Hyperbolic (WAH) framework for multi-view clustering. Specifically, our method exploits a view-specific hyperbolic encoder for each view to embed features into the Lorentz manifold for hierarchical semantic modeling. Thereafter, a global semantic loss based on the hyperbolic sliced-Wasserstein distance is introduced to align manifold distributions across views, followed by soft cluster assignments that encourage cross-view semantic consistency. Extensive experiments on multiple benchmarking datasets show that our method achieves SOTA clustering performance.
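For readers unfamiliar with the sliced-Wasserstein idea behind the global semantic loss, the following is a minimal sketch, not the authors' implementation: it maps Lorentz-model points to the tangent space at the origin and compares the resulting point clouds with an ordinary Euclidean sliced-Wasserstein distance. The function names and the tangent-space shortcut are assumptions; the paper defines the distance hyperbolically, directly on the manifold.

```python
import numpy as np

def lorentz_log0(x):
    """Log map at the Lorentz-model origin o = (1, 0, ..., 0).
    x: array of shape (n, d+1) with Lorentz norm <x, x>_L = -1 and x[:, 0] > 0.
    Returns the spatial part of the tangent vectors (their time component is 0)."""
    x0 = x[:, :1]                                      # time-like coordinate
    spatial = x[:, 1:]
    norm = np.linalg.norm(spatial, axis=1, keepdims=True)
    return np.arccosh(np.clip(x0, 1.0, None)) * spatial / np.clip(norm, 1e-12, None)

def sliced_wasserstein(a, b, n_projections=64, seed=0):
    """Monte-Carlo sliced Wasserstein-2 distance between equal-sized point clouds a, b."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_projections):
        theta = rng.standard_normal(a.shape[1])
        theta /= np.linalg.norm(theta)                 # random direction on the unit sphere
        pa, pb = np.sort(a @ theta), np.sort(b @ theta)
        total += np.mean((pa - pb) ** 2)               # closed-form 1-D W2^2 between sorted samples
    return np.sqrt(total / n_projections)
```

In this simplified reading, two views would be aligned by penalising sliced_wasserstein(lorentz_log0(view1), lorentz_log0(view2)) over the per-view encoder outputs.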

IJCAI Conference 2025 Conference Paper

A Correlation Manifold Self-Attention Network for EEG Decoding

  • Chen Hu
  • Rui Wang
  • Xiaoning Song
  • Tao Zhou
  • Xiao-Jun Wu
  • Nicu Sebe
  • Ziheng Chen

Riemannian neural networks, which generalize the deep learning paradigm to non-Euclidean geometries, have garnered widespread attention across diverse applications in artificial intelligence. Among these, representative attention models have been studied on various non-Euclidean spaces to geometrically capture the spatiotemporal dependencies inherent in time series data, e.g., electroencephalography (EEG). Recent studies have highlighted the full-rank correlation matrix as an advantageous alternative to the covariance matrix for data representation, owing to its invariance to the scale of variables. Motivated by these advancements, we propose the Correlation Attention Network (CorAtt) tailored for full-rank correlation matrices and implement it under the permutation-invariant and computationally efficient Off-Log and Log-Scaled geometries, respectively. Extensive evaluations on three benchmarking EEG datasets provide substantial evidence for the effectiveness of our introduced CorAtt. The code and supplementary material can be found at https://github.com/ChenHu-ML/CorAtt.
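The scale-invariance argument for preferring correlation over covariance matrices is easy to verify numerically. The snippet below is an illustration only (the helper name is made up, and it is not the CorAtt code): it builds a full-rank correlation matrix from multichannel data, checks that per-channel rescaling leaves it unchanged, and ends with the matrix logarithm that Log-based correlation geometries such as Off-Log and Log-Scaled typically build on.

```python
import numpy as np
from scipy.linalg import logm

def correlation_matrix(x):
    """Full-rank correlation matrix of multichannel data x with shape (channels, time)."""
    cov = np.cov(x)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)                  # rescale covariance to unit diagonal

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 512))                # e.g. 8 EEG channels, 512 samples
scales = rng.uniform(0.5, 5.0, size=8)           # arbitrary per-channel scaling
c1 = correlation_matrix(x)
c2 = correlation_matrix(scales[:, None] * x)
assert np.allclose(c1, c2)                       # correlation is invariant to variable scale
log_c = logm(c1)                                 # matrix log underlying Log-based geometries
```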

IROS Conference 2025 Conference Paper

Magnetic Microswarms with Controlled Locomotion in Liquid and Air Environments

  • Ziheng Chen
  • Jiangfan Yu
  • Na Liu 0004

Magnetic microswarms have attracted significant attention in medical robotics, owing to their potential for performing complex tasks in challenging environments. However, developing microswarms that can operate effectively in both liquid and air environments remains a substantial challenge. This study presents the design and characterization of hydrogel-based microswarms composed of magnetic hydrogel particles prepared from agarose hydrogel and NdFeB magnetic microparticles. These microswarms form stable monolayer structures actuated by rotating magnetic fields at high frequencies (10 Hz) in liquid environments, enabling synchronization with the external magnet and achieving translational motion. Actuated by an oscillating magnetic field, the swarms transition from a monolayer configuration to a three-dimensional (3D) structure in the air environment. Experimental results demonstrate that the 3D swarms are capable of navigating complex terrains and interacting with tissue surfaces in air environments. Finally, we demonstrate the potential of these 3D swarms for targeted delivery and adaptive filling of gastric perforations using an ex vivo gastric tissue model, showcasing their potential for biomedical applications.

NeurIPS Conference 2025 Conference Paper

Towards a General Attention Framework on Gyrovector Spaces for Matrix Manifolds

  • Rui Wang
  • Chen Hu
  • Xiaoning Song
  • Xiaojun Wu
  • Nicu Sebe
  • Ziheng Chen

Deep neural networks operating on non-Euclidean geometries have recently demonstrated impressive performance across various machine-learning applications. Several studies have extended the attention mechanism to different manifolds. However, most existing non-Euclidean attention models are tailored to specific geometries, limiting their applicability. On the other hand, recent studies show that several matrix manifolds, such as Symmetric Positive Definite (SPD), Symmetric Positive Semi-Definite (SPSD), and Grassmannian manifolds, admit gyrovector structures, which extend vector addition and scalar multiplication to manifolds. Leveraging these properties, we propose a Gyro Attention (GyroAtt) framework over general gyrovector spaces, applicable to various matrix geometries. Empirically, we instantiate GyroAtt on three gyro structures on the SPD manifold, three on the SPSD manifold, and one on the Grassmannian manifold. Extensive experiments on four electroencephalography (EEG) datasets demonstrate the effectiveness of our framework.
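To make the gyrovector vocabulary concrete, here is a minimal sketch of gyro-addition and the gyro scalar product on SPD matrices, assuming the construction commonly used under the affine-invariant metric (p ⊕ q = p^{1/2} q p^{1/2}, t ⊗ p = p^t). The function names are hypothetical, and this is only one of the several gyro structures the paper covers.

```python
import numpy as np
from scipy.linalg import sqrtm, fractional_matrix_power

def gyro_add_spd(p, q):
    """Gyro-addition on SPD matrices under the affine-invariant structure: p^{1/2} q p^{1/2}."""
    r = sqrtm(p).real
    return r @ q @ r

def gyro_scale_spd(t, p):
    """Gyro scalar product t ⊗ p = p^t, i.e. the matrix power along the geodesic through I."""
    return fractional_matrix_power(p, t).real

# Usage: combine two random SPD matrices; the result stays symmetric positive definite.
rng = np.random.default_rng(1)
a = rng.standard_normal((4, 4)); p = a @ a.T + 4 * np.eye(4)
b = rng.standard_normal((4, 4)); q = b @ b.T + 4 * np.eye(4)
s = gyro_add_spd(p, gyro_scale_spd(0.5, q))
assert np.all(np.linalg.eigvalsh((s + s.T) / 2) > 0)
```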

IJCAI Conference 2024 Conference Paper

A Grassmannian Manifold Self-Attention Network for Signal Classification

  • Rui Wang
  • Chen Hu
  • Ziheng Chen
  • Xiao-Jun Wu
  • Xiaoning Song

In the community of artificial intelligence, significant progress has been made in encoding sequential data using deep learning techniques. Nevertheless, how to effectively mine useful information from channel dimensions remains a major challenge, as these features have a submanifold structure. Linear subspace, the basic element of the Grassmannian manifold, has proven to be an effective manifold-valued feature descriptor in statistical representation. Besides, the Euclidean self-attention mechanism has shown great success in capturing long-range relationships of data. Inspired by these facts, we extend the self-attention mechanism to the Grassmannian manifold. Our framework can effectively characterize the spatiotemporal fluctuations of sequential data encoded in the Grassmannian manifold. Extensive experimental results on three benchmarking datasets (a drone recognition dataset and two EEG signal classification datasets) demonstrate the superiority of our method over the state-of-the-art. The code and supplementary material for this work can be found at https://github.com/ChenHu-ML/GDLNet.
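As a rough illustration of what "self-attention over linear subspaces" can look like, the sketch below represents each token by an orthonormal basis of a p-dimensional subspace, scores query-key pairs with the projection-metric distance, and turns the scores into softmax weights. This is a hypothetical scoring rule for illustration, not the paper's exact formulation; all function names are assumptions.

```python
import numpy as np

def subspace(x, p):
    """Orthonormal basis of the p-dimensional principal column space of x (thin SVD)."""
    u, _, _ = np.linalg.svd(x, full_matrices=False)
    return u[:, :p]

def projection_distance(u1, u2):
    """Projection-metric distance between subspaces with orthonormal bases u1 and u2."""
    return np.linalg.norm(u1 @ u1.T - u2 @ u2.T, ord='fro') / np.sqrt(2.0)

def grassmann_attention_weights(queries, keys, beta=1.0):
    """Softmax attention weights from pairwise subspace distances (illustrative only)."""
    scores = np.array([[-beta * projection_distance(q, k) ** 2 for k in keys]
                       for q in queries])
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Usage: 5 tokens, each a 3-dimensional subspace of R^16 built from raw channel features.
rng = np.random.default_rng(2)
tokens = [subspace(rng.standard_normal((16, 8)), p=3) for _ in range(5)]
w = grassmann_attention_weights(tokens, tokens)   # (5, 5) row-stochastic weight matrix
```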

NeurIPS Conference 2024 Conference Paper

RMLR: Extending Multinomial Logistic Regression into General Geometries

  • Ziheng Chen
  • Yue Song
  • Rui Wang
  • Xiao-Jun Wu
  • Nicu Sebe

Riemannian neural networks, which extend deep learning techniques to Riemannian spaces, have gained significant attention in machine learning. To better classify manifold-valued features, researchers have started extending Euclidean multinomial logistic regression (MLR) into Riemannian manifolds. However, existing approaches suffer from limited applicability due to their strong reliance on specific geometric properties. This paper proposes a framework for designing Riemannian MLR over general geometries, referred to as RMLR. Our framework only requires minimal geometric properties, thus exhibiting broad applicability and enabling its use with a wide range of geometries. Specifically, we showcase our framework on the Symmetric Positive Definite (SPD) manifold and the special orthogonal group, i.e., the set of rotation matrices. On the SPD manifold, we develop five families of SPD MLRs under five types of power-deformed metrics. On rotation matrices, we propose a Lie MLR based on the popular bi-invariant metric. Extensive experiments on different Riemannian backbone networks validate the effectiveness of our framework.
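One of the simplest concrete instances consistent with the RMLR idea is Euclidean MLR applied in the matrix-log domain of the SPD manifold (the Log-Euclidean case): the class-k logit compares logm(X) with a class prototype logm(P_k) through a symmetric tangent parameter A_k. The sketch below is only that simplified instance under assumed names and parameterisation, not the paper's general power-deformed families or its Lie MLR.

```python
import numpy as np
from scipy.linalg import logm

def le_mlr_logits(x_spd, prototypes, tangents):
    """Log-Euclidean MLR logits for one SPD input:
    logit_k = <A_k, logm(X) - logm(P_k)>_F, with SPD prototypes P_k and symmetric A_k."""
    lx = logm(x_spd).real
    return np.array([np.tensordot(a, lx - logm(p).real)     # Frobenius inner product
                     for p, a in zip(prototypes, tangents)])

def softmax(z):
    """Class probabilities from the logits."""
    e = np.exp(z - z.max())
    return e / e.sum()
```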

AAAI Conference 2023 Conference Paper

Riemannian Local Mechanism for SPD Neural Networks

  • Ziheng Chen
  • Tianyang Xu
  • Xiao-Jun Wu
  • Rui Wang
  • Zhiwu Huang
  • Josef Kittler

Symmetric Positive Definite (SPD) matrices have received wide attention for data representation in many scientific areas. Although there have been many attempts to develop effective deep architectures for processing data on the Riemannian manifold of SPD matrices, very few solutions explicitly mine the local geometrical information in deep SPD feature representations. Given the great success of local mechanisms in Euclidean methods, we argue that it is of utmost importance to ensure the preservation of local geometric information in SPD networks. We first analyse the convolution operator commonly used for capturing local information in Euclidean deep networks from the perspective of a higher level of abstraction afforded by category theory. Based on this analysis, we define the local information in the SPD manifold and design a multi-scale submanifold block for mining local geometry. Experiments involving multiple visual tasks validate the effectiveness of our approach.
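One simple way to picture "local" information on the SPD manifold is that every principal sub-block of an SPD matrix is itself SPD, so sliding windows over the matrix yield smaller SPD descriptors at several scales. The sketch below illustrates only that observation, with made-up helper names; it is not necessarily how the paper's multi-scale submanifold block is defined.

```python
import numpy as np

def local_spd_patches(x, window, stride=1):
    """Sliding principal sub-blocks of an SPD matrix x; each patch is again SPD."""
    n = x.shape[0]
    return [x[i:i + window, i:i + window] for i in range(0, n - window + 1, stride)]

# Multi-scale usage: collect local SPD patches at two window sizes.
rng = np.random.default_rng(3)
a = rng.standard_normal((8, 8))
spd = a @ a.T + 8 * np.eye(8)
patches = {w: local_spd_patches(spd, w, stride=2) for w in (3, 5)}
assert all(np.all(np.linalg.eigvalsh(p) > 0) for ps in patches.values() for p in ps)
```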