Arrow Research search

Author name cluster

Junwei Cheng

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
2 author rows

Possible papers

4

AAAI Conference 2026 · Conference Paper

Generating In-Distribution Counterfactual Explanation for Graph Neural Networks

  • Linmao Chen
  • Chaobo He
  • Junwei Cheng
  • Chunying Li
  • Quanlong Guan

Graph Neural Networks (GNNs) have received increasing attention due to their ability to handle graph-structured data, yet their explainability remains a significant challenge. An effective solution is to provide GNN models with counterfactual explanations, which aim to answer: “How should the input instance be perturbed to change the model's prediction?” However, existing works mainly focus on generating explanations that effectively alter model predictions while neglecting whether the explanations remain aligned with the original data distribution, leading to the distribution shift problem. To address this problem, we propose a novel method called ICExplainer for generating explanations within the original distribution. Specifically, we introduce a graph diffusion-based generative model into counterfactual reasoning, treating it as an optimization objective for graph distribution learning. Drawing insights from variational inference, we estimate the true distribution of the input graphs to retain essential structural and semantic information. The inferred distribution is then utilized as prior knowledge to guide the reverse process, ensuring that the generated explanations are both counterfactual and distributionally coherent. Extensive experiments on both synthetic and real-world datasets demonstrate the superior performance of ICExplainer over existing methods.

AAAI Conference 2025 · Conference Paper

Community-Aware Variational Autoencoder for Continuous Dynamic Networks

  • Junwei Cheng
  • Chaobo He
  • Pengxing Feng
  • Weixiong Liu
  • Kunlin Han
  • Yong Tang

Variational autoencoders perform well in community detection on static networks, but they are difficult to extend directly to continuous dynamic networks. The main reason is that traditional methods rely mainly on adjacency structures to complete the inference and generation processes; continuous dynamic networks cannot be described by such structures, because the inherent timeliness and causality information of the network would be lost. To address this issue, we propose a novel variational autoencoder, CT-VAE, for community detection in continuous dynamic networks, along with its scalable variant, CT-CAVAE. By conceptualizing node interactions as event streams, adopting the Hawkes process to capture temporal dynamics and causality, and incorporating them into the inference process, CT-VAE effectively extends the traditional inference approach to continuous dynamic networks. Additionally, in the generation phase, CT-VAE combines pseudo-labeling and compact constraint strategies to facilitate the reconstruction of non-adjacent structures. The scalable variant, CT-CAVAE, achieves end-to-end community detection by incorporating a Gaussian mixture distribution. Extensive experimental results demonstrate that the proposed CT-VAE and CT-CAVAE achieve more favorable performance than state-of-the-art baselines.

UAI Conference 2025 · Conference Paper

DyGMAE: A Novel Dynamic Graph Masked Autoencoder for Link Prediction

  • Weixiong Liu
  • Junwei Cheng
  • Zhongyu Pan
  • Chaobo He
  • Quanlong Guan

Dynamic link prediction (DLP) is a crucial task in graph learning that aims to predict future links between nodes at subsequent time steps in dynamic graphs. Recently, graph masked autoencoders (GMAEs) have shown promising performance in self-supervised learning, but their application to DLP remains under-explored. Existing GMAEs struggle to capture temporal dependencies, and their random masking discards information crucial for DLP. Moreover, most existing DLP methods rely on local information, ignoring global information and failing to capture the complex features of real-world dynamic graphs. To address these issues, we propose DyGMAE, a novel dynamic GMAE method specifically designed for DLP. DyGMAE introduces a Multi-Scale Masking Strategy (MSMS), which generates multiple graph views by masking parts of the edges and attempting to reconstruct them. Additionally, a multi-scale masked-representation alignment module with a contrastive learning objective aligns the representations encoded from the unmasked edges across these views. Through this design, the different masked views provide diverse information to alleviate the drawbacks of random masking, and contrastive learning aligns the views to address the difficulty of exploiting local and global information simultaneously. Experiments on benchmark datasets show that DyGMAE achieves superior performance on the DLP task.