
Author name cluster

Yajie Wang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
1 author row

Possible papers (5)

AAAI Conference 2026 · Conference Paper

On the Misalignment Between Data Learnability and Forgettability in Machine Unlearning

  • Zijie Pan
  • Zuobin Ying
  • Yajie Wang
  • Wanlei Zhou

We report a structural mismatch between a data point's learnability (how quickly it improves the loss) and its forgettability (how much it anchors the final parameters), an aspect ignored by prior machine unlearning frameworks such as SISA, Fisher-Forget, and influence-based fine-tuning. To make this gap measurable, we introduce Unlearning Gradient Sensitivity (UGS), an influence score computable with a single Hutch++ sketch, and derive the Learnability–Forgettability Divergence (LFD), the Jensen–Shannon distance between the model's learning and forgetting distributions. We prove that UGS dispersion decays exponentially only under explicit regularization and that LFD converges to zero when its weight grows sub-linearly relative to the UGS term. Building on these findings, we introduce Dual-Aware Training (DAT), a lightweight regularization method that reduces variability in how easily data points can be forgotten and aligns learning and forgetting behaviors during training. On CIFAR-10, MNIST, and IMDB, DAT maintains the original model accuracy while cutting forgettability divergence in half and significantly lowering the cost of certified unlearning, showing that models can effectively be made forgettable from the start.
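
The LFD defined above is just a Jensen–Shannon distance between two per-sample score distributions, so it is cheap to compute once the scores exist. A minimal sketch, assuming per-sample learnability and forgettability scores are already available; the paper's UGS computation via a Hutch++ sketch is not reproduced here, and the scores below are hypothetical:

```python
# Minimal sketch: LFD as the Jensen-Shannon distance between the model's
# learning and forgetting distributions, per the abstract. The scores are
# hypothetical placeholders, not the paper's UGS values.
import numpy as np
from scipy.spatial.distance import jensenshannon

def lfd(learnability: np.ndarray, forgettability: np.ndarray) -> float:
    p = learnability / learnability.sum()      # "learning" distribution
    q = forgettability / forgettability.sum()  # "forgetting" distribution
    return float(jensenshannon(p, q, base=2))  # 0 = aligned, 1 = maximal divergence

# Quick-to-learn points need not be easy to forget; that mismatch is
# exactly what a nonzero LFD captures.
learn = np.array([0.9, 0.7, 0.4, 0.2, 0.1])
forget = np.array([0.1, 0.3, 0.5, 0.8, 0.9])
print(lfd(learn, forget))  # large value -> strong misalignment
```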

AAAI Conference 2025 · Conference Paper

Personalized Label Inference Attack in Federated Transfer Learning via Contrastive Meta Learning

  • Hanyu Zhao
  • Zijie Pan
  • Yajie Wang
  • Zuobin Ying
  • Lei Xu
  • Yu-an Tan

Federated Transfer Learning (FTL) is a popular approach to the problem of heterogeneous feature spaces and label distributions. Among the mainstream strategies for FTL, parameter decoupling, which balances the impact of a single global model and multiple personalized models under data heterogeneity, has attracted the attention of many researchers. However, few attacks have been proposed to evaluate the privacy risk of FTL. We find that the fine-tuned structures and the gradient update mechanisms of parameter decoupling are more likely to leak personalized information, which the server can exploit to infer private labels. Based on these findings, we propose a label inference attack that combines a meta classifier with contrastive learning in FTL. Our experiments show that the proposed attack can extract local personalized information from the differences before and after fine-tuning, improving attack accuracy in the absence of a downstream model. Our research reveals potential privacy risks in FTL and should motivate more research on private and secure FTL.
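
A minimal PyTorch sketch of the attack surface described above: a server-side meta classifier that reads the difference between a client's parameters before and after fine-tuning. The contrastive meta-learning training procedure is not reproduced; the dimensions and the MLP head are illustrative assumptions:

```python
# Sketch: infer a private label from a client's fine-tuning delta.
# The meta classifier here is untrained and purely illustrative.
import torch
import torch.nn as nn

class MetaLabelInferrer(nn.Module):
    def __init__(self, update_dim: int, num_labels: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(update_dim, 256), nn.ReLU(),
            nn.Linear(256, num_labels),
        )

    def forward(self, params_before: torch.Tensor, params_after: torch.Tensor):
        # Personalized information leaks through the fine-tuning delta.
        delta = params_after - params_before
        return self.head(delta)

# Hypothetical flattened parameter vectors for one client round.
before, after = torch.randn(1, 1024), torch.randn(1, 1024)
logits = MetaLabelInferrer(update_dim=1024, num_labels=10)(before, after)
print(logits.argmax(dim=-1))  # inferred label
```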

AAAI Conference 2024 · Conference Paper

Towards Transferable Adversarial Attacks with Centralized Perturbation

  • Shangbo Wu
  • Yu-an Tan
  • Yajie Wang
  • Ruinan Ma
  • Wencong Ma
  • Yuanzhang Li

Adversarial transferability enables black-box attacks on unknown victim deep neural networks (DNNs), rendering attacks viable in real-world scenarios. Current transferable attacks create adversarial perturbation over the entire image, resulting in excessive noise that overfits the source model. Concentrating perturbation in dominant, model-agnostic image regions is crucial to improving adversarial efficacy. However, limiting perturbation to local regions in the spatial domain proves inadequate for augmenting transferability. To this end, we propose a transferable adversarial attack with fine-grained perturbation optimization in the frequency domain, creating centralized perturbation. We devise a systematic pipeline that dynamically constrains perturbation optimization to dominant frequency coefficients. The constraint is optimized in parallel at each iteration, ensuring the directional alignment of perturbation optimization with model prediction. Our approach centralizes perturbation on sample-specific important frequency features, which are shared across DNNs, effectively mitigating source model overfitting. Experiments demonstrate that by dynamically centralizing perturbation on dominant frequency coefficients, crafted adversarial examples exhibit stronger transferability, allowing them to bypass various defenses.
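
A minimal sketch of the centralization step, assuming a 2-D DCT as the frequency transform: keep only the largest-magnitude coefficients of a perturbation and discard the rest. The paper's iterative, prediction-aligned coefficient selection pipeline is not reproduced; the transform choice and keep ratio are assumptions:

```python
# Sketch: constrain a perturbation to its dominant frequency coefficients.
import numpy as np
from scipy.fft import dctn, idctn

def centralize(perturbation: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Zero out all but the largest-magnitude DCT coefficients."""
    coeffs = dctn(perturbation, norm="ortho")
    k = max(1, int(keep_ratio * coeffs.size))
    threshold = np.sort(np.abs(coeffs).ravel())[-k]  # k-th largest magnitude
    mask = np.abs(coeffs) >= threshold               # dominant coefficients only
    return idctn(coeffs * mask, norm="ortho")

# Hypothetical 32x32 single-channel perturbation.
delta = np.random.randn(32, 32) * 0.03
delta_c = centralize(delta, keep_ratio=0.1)
print(delta_c.shape, np.linalg.norm(delta_c) / np.linalg.norm(delta))
```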

IJCAI Conference 2021 · Conference Paper

Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity

  • Yajie Wang
  • Shangbo Wu
  • Wenyi Jiang
  • Shengang Hao
  • Yu-an Tan
  • Quanxin Zhang

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples: malicious images with visually imperceptible perturbations. While carefully crafted perturbations restricted by tight Lp norm bounds are small, they remain easily perceivable by humans. Such perturbations also have limited success rates when attacking black-box models or models with defenses like noise reduction filters. To address these problems, we propose the Demiguise Attack, which crafts "unrestricted" perturbations guided by Perceptual Similarity. Specifically, we create powerful and photorealistic adversarial examples by manipulating semantic information based on Perceptual Similarity. Although the perturbations are of large magnitude, the adversarial examples we generate are friendly to the human visual system (HVS). We extend widely used attacks with our approach, markedly enhancing adversarial effectiveness while preserving imperceptibility. Extensive experiments show that the proposed method not only outperforms various state-of-the-art attacks in terms of fooling rate, transferability, and robustness against defenses, but can also effectively enhance existing attacks. In addition, we observe that our implementation can simulate illumination and contrast changes that occur in real-world scenarios, which helps expose the blind spots of DNNs.
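
A minimal sketch of the general idea, assuming the `lpips` package as a stand-in for the paper's Perceptual Similarity measure: maximize the classifier's loss while penalizing perceptual distance rather than an Lp norm. The model, inputs (RGB batches in [0, 1]), and hyperparameters are illustrative assumptions, not the Demiguise construction itself:

```python
# Sketch: craft a perturbation that fools `model` while staying perceptually
# close to the original image. LPIPS stands in for Perceptual Similarity;
# steps, lr, and lam are arbitrary illustrative values.
import torch
import lpips  # pip install lpips

def perceptual_attack(model, x, y, steps=50, lr=0.01, lam=10.0):
    percept = lpips.LPIPS(net="alex")               # learned perceptual metric
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (x + delta).clamp(0.0, 1.0)
        ce = torch.nn.functional.cross_entropy(model(adv), y)
        dist = percept(x, adv, normalize=True).mean()  # inputs in [0, 1]
        loss = -ce + lam * dist                     # fool the model, stay close
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).clamp(0.0, 1.0).detach()
```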