Arrow Research search

Author name cluster

Bowen Fan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers (3)

AAAI 2026 Conference Paper

PAGE: A Unified Approach for Federated Graph Unlearning

  • Yuming Ai
  • Xunkai Li
  • Jiaqi Chao
  • Bowen Fan
  • Zhengyu Wu
  • Yinlin Zhu
  • Rong-Hua Li
  • Guoren Wang

Federated graph learning (FGL) is a distributed framework for graph representation learning that prioritizes privacy preservation. The right to be forgotten embodies the ethical principle of prioritizing user autonomy over data usage. In the context of FGL, upholding this right requires removing specific entities and their associated knowledge within local subgraphs (meta unlearning) as well as completely erasing an entire client (client unlearning). We are the first to systematically define these two unlearning requests in federated graph unlearning. Several studies have attempted to address this challenge, but key limitations persist: incomplete unlearning support and residual knowledge permeation. To this end, we propose a Prototype-guided Adversarial Graph Eraser (PAGE) for universal federated graph unlearning, the first unified federated graph unlearning framework that extends to comprehensive unlearning requests. For meta unlearning, prototype gradients guide the initial local unlearning, while adversarial graphs eliminate residual knowledge across the affected clients. For client unlearning, PAGE relies exclusively on adversarial graph generation to purge a departed client's influence from the remaining participants. PAGE outperforms existing methods on 8 benchmark datasets, improving prediction accuracy by 5.08% (client unlearning) and 1.50% (meta unlearning), with gains of up to 11.84% on large-scale graphs. Furthermore, ablation studies confirm its efficacy as a plug-in for other meta unlearning methods, boosting prediction performance by up to 4.49% and unlearning performance by up to 7.22%.

NeurIPS 2025 Conference Paper

OpenGU: A Comprehensive Benchmark for Graph Unlearning

  • Bowen Fan
  • Yuming Ai
  • Xunkai Li
  • Zhilin Guo
  • Lei Zhu
  • Guang Zeng
  • Rong-Hua Li
  • Guoren Wang

Graph Machine Learning is essential for understanding and analyzing relational data. However, privacy-sensitive applications demand the ability to efficiently remove sensitive information from trained graph neural networks (GNNs), avoiding the unnecessary time and space overhead of retraining models from scratch. To address this issue, Graph Unlearning (GU) has emerged as a critical solution for supporting dynamic graph updates while ensuring privacy compliance. Unlike machine unlearning in computer vision and other fields, GU faces unique difficulties due to the non-Euclidean nature of graph data and the recursive message-passing mechanism of GNNs. Additionally, the diversity of downstream tasks and the complexity of unlearning requests further amplify these challenges. Despite the proliferation of diverse GU strategies, the absence of a benchmark providing fair comparisons for GU and the limited flexibility in combining downstream tasks with unlearning requests have yielded inconsistent evaluations, hindering the development of this domain. To fill this gap, we present OpenGU, the first GU benchmark, which integrates 16 SOTA GU algorithms and 37 multi-domain datasets, enabling various downstream tasks with 13 GNN backbones in response to flexible unlearning requests. Through extensive experimentation, we have drawn 10 crucial conclusions about existing GU methods, while also gaining valuable insights into their limitations, shedding light on potential avenues for future research. Our code is available at https://github.com/bwfan-bit/OpenGU.

ICLR 2023 Conference Paper

Unsupervised Manifold Alignment with Joint Multidimensional Scaling

  • Dexiong Chen
  • Bowen Fan
  • Carlos G. Oliver
  • Karsten M. Borgwardt

We introduce Joint Multidimensional Scaling, a novel approach for unsupervised manifold alignment, which maps datasets from two different domains, without any known correspondences between data instances across the datasets, to a common low-dimensional Euclidean space. Our approach integrates Multidimensional Scaling (MDS) and Wasserstein Procrustes analysis into a joint optimization problem to simultaneously generate isometric embeddings of data and learn correspondences between instances from two different datasets, while only requiring intra-dataset pairwise dissimilarities as input. This unique characteristic makes our approach applicable to datasets without access to the input features, such as solving the inexact graph matching problem. We propose an alternating optimization scheme to solve the problem that can fully benefit from the optimization techniques for MDS and Wasserstein Procrustes. We demonstrate the effectiveness of our approach in several applications, including joint visualization of two datasets, unsupervised heterogeneous domain adaptation, graph matching, and protein structure alignment. The implementation of our work is available at https://github.com/BorgwardtLab/JointMDS.
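The alternating scheme the abstract describes can be illustrated with a toy sketch. This is not the authors' implementation (see their repository for that): it uses classical MDS for the embedding step and a hard nearest-neighbour matching as a crude stand-in for the Wasserstein coupling, alternated with an orthogonal Procrustes rotation. The function names `classical_mds` and `align` are illustrative, not from the paper.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed an n x n dissimilarity matrix D into k dimensions (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def align(X, Y, n_iter=20):
    """Alternate a hard correspondence (nearest neighbour, standing in for the
    Wasserstein coupling) with an orthogonal Procrustes rotation of Y onto X."""
    Q = np.eye(X.shape[1])
    perm = np.arange(X.shape[0])
    for _ in range(n_iter):
        YQ = Y @ Q
        # matching step: assign each row of X to its nearest row of Y @ Q
        perm = np.argmin(((X[:, None, :] - YQ[None, :, :]) ** 2).sum(-1), axis=1)
        # Procrustes step: best orthogonal map for the current matching
        U, _, Vt = np.linalg.svd(Y[perm].T @ X)
        Q = U @ Vt
    return Q, perm
```

A usage sketch: embed two dissimilarity matrices with `classical_mds`, then call `align` on the two embeddings; `Q` is the learned orthogonal map and `perm` the inferred correspondence. The actual method couples these steps in one objective and uses a soft transport plan rather than a hard assignment, which is what makes it robust without known correspondences.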