
AAAI 2026

Disentangled Generation-Based Prototypical Alignment for Few-Shot Unsupervised Domain Adaptation in Graph-Level Anomaly Detection

Conference Paper · AAAI Technical Track on Machine Learning VI

Abstract

Graph-Level Anomaly Detection (GLAD) seeks to identify anomalous graphs within graph datasets and has significant applications across diverse real-world fields. Because labeling is costly, most existing GLAD methods are trained in an unsupervised manner, resulting in sub-optimal performance compared to supervised methods. To fill this gap, we propose a Disentangled Generation-Based Prototypical Alignment (DGPA) method that extends graph-level anomaly detection to the Few-Shot Unsupervised Domain Adaptation (FUDA) setting: anomalous graphs are identified in a set of unlabeled graphs (the target domain) with the help of partially labeled graphs from a different but related domain (the source domain), fulfilling the practical requirement of transferring anomaly knowledge. DGPA achieves this through two modules. A Disentangled Sample Generation module addresses label scarcity by generating faithful samples with disentangled representation learning grounded in the Information Bottleneck principle. A Graph-based Prototypical Self-Supervision module alleviates domain shift by encoding and aligning semantic structures in a shared latent space across domains in a self-supervised manner. Extensive experiments on five benchmark datasets demonstrate the effectiveness of DGPA.
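The prototypical alignment idea described in the abstract can be sketched in a minimal form: compute per-class prototypes (mean embeddings) from labeled source graphs, pseudo-label unlabeled target graphs by nearest prototype, and penalize the distance between matched source and target prototypes. This is an illustrative sketch of the general technique, not the paper's method; the function names and the squared-distance loss are assumptions.

```python
import numpy as np

def class_prototypes(embeddings, labels, num_classes):
    # One prototype per class: the mean embedding of that class's graphs.
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def pseudo_label(embeddings, prototypes):
    # Assign each unlabeled target graph to its nearest prototype
    # (Euclidean distance), yielding pseudo-labels for the target domain.
    dists = np.linalg.norm(embeddings[:, None, :] - prototypes[None, :, :],
                           axis=-1)
    return dists.argmin(axis=1)

def prototype_alignment_loss(src_protos, tgt_protos):
    # Mean squared distance between matched source/target prototypes;
    # minimizing it pulls the two domains together in the shared space.
    return float(np.mean((src_protos - tgt_protos) ** 2))
```

In a training loop, the embeddings would come from a shared graph encoder, and this loss would be one self-supervised term among others; here it simply shows the prototype bookkeeping.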

Keywords

No keywords are indexed for this paper.

Context

Venue: AAAI Conference on Artificial Intelligence
Archive span: 1980-2026
Indexed papers: 28718
Paper id: 1023867239873615537