Arrow Research search

Author name cluster

Zuobin Ying

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
1 author row

Possible papers

4

AAAI Conference 2026 · Conference Paper

On the Misalignment Between Data Learnability and Forgettability in Machine Unlearning

  • Zijie Pan
  • Zuobin Ying
  • Yajie Wang
  • Wanlei Zhou

We report a structural mismatch between a data point's "learnability" (how quickly it improves the loss) and its "forgettability" (how much it anchors the final parameters), an aspect ignored by prior machine unlearning frameworks such as SISA, Fisher-Forget, and influence-based fine-tuning. To make this gap measurable, we introduce Unlearning Gradient Sensitivity (UGS), an influence score computable with a single Hutch++ sketch, and derive the Learnability–Forgettability Divergence (LFD), the Jensen–Shannon distance between the model's learning and forgetting distributions. We prove that UGS dispersion decays exponentially only under explicit regularization and that LFD converges to zero when its weight grows sub-linearly relative to the UGS term. Building on these findings, we introduce Dual-Aware Training (DAT), a lightweight regularization method that reduces variability in how easily data points can be forgotten and aligns learning and forgetting behaviors during training. On CIFAR-10, MNIST, and IMDB, DAT maintains the original model accuracy while halving the forgettability divergence and significantly lowering the cost of certified unlearning, showing that making models forgettable from the start is effective.
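The LFD in this abstract is defined as the Jensen–Shannon distance between two distributions. As a minimal sketch of that underlying quantity (the two input distributions here are illustrative stand-ins, not the paper's learning and forgetting distributions), the base-2 JS distance can be computed as:

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance between two discrete distributions.

    The JS divergence is the mean of the KL divergences of p and q
    from their midpoint m = (p + q) / 2; with base-2 logarithms its
    square root is a metric bounded in [0, 1].
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()  # normalize to valid probability vectors
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * (np.log2(a + eps) - np.log2(b + eps)))
    jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)
    return float(np.sqrt(max(jsd, 0.0)))

# Identical distributions have distance ~0; disjoint ones approach 1.
print(js_distance([0.5, 0.5], [0.5, 0.5]))  # ~0.0
print(js_distance([1.0, 0.0], [0.0, 1.0]))  # ~1.0
```

A divergence of zero would correspond to perfectly aligned learning and forgetting behavior, which is the regime DAT's regularization pushes toward.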

AAAI Conference 2025 · Conference Paper

Personalized Label Inference Attack in Federated Transfer Learning via Contrastive Meta Learning

  • Hanyu Zhao
  • Zijie Pan
  • Yajie Wang
  • Zuobin Ying
  • Lei Xu
  • Yu-an Tan

Federated Transfer Learning (FTL) is a popular approach to the problems of heterogeneous feature spaces and label distributions. Among the mainstream strategies for FTL, parameter decoupling, which balances the influence of a single global model against multiple personalized models under data heterogeneity, has attracted the attention of many researchers. However, few attacks have been proposed to evaluate the privacy risks of FTL. We find that the fine-tuned structures and gradient update mechanisms of parameter decoupling are more likely to leak personalized information, allowing the server to infer private labels. Based on these findings, we propose a label inference attack for FTL that combines a meta classifier with contrastive learning. Our experiments show that the proposed attack can extract local personalized information from the differences before and after fine-tuning, improving attack accuracy even in the absence of a downstream model. Our research reveals potential privacy risks in FTL and can motivate further work on private and secure FTL.
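The signal this attack exploits is the difference between parameters before and after local fine-tuning. A hypothetical sketch of that feature-extraction step, assuming plain NumPy weight arrays and summary statistics as meta-classifier inputs (all names and choices here are illustrative, not the authors' implementation):

```python
import numpy as np

def update_features(w_before, w_after):
    """Build a feature vector for a meta classifier from the parameter
    deltas induced by local fine-tuning: the flattened differences are
    summarized into a few simple statistics."""
    delta = np.concatenate([(a - b).ravel()
                            for b, a in zip(w_before, w_after)])
    return np.array([delta.mean(),
                     delta.std(),
                     np.abs(delta).max(),
                     np.linalg.norm(delta)])

# Toy example: a client fine-tunes from shared initial weights, and the
# server-side attacker featurizes the observed update.
rng = np.random.default_rng(0)
w0 = [rng.normal(size=(4, 3)), rng.normal(size=3)]          # initial weights
w1 = [w + 0.1 * rng.normal(size=w.shape) for w in w0]       # after fine-tuning
feats = update_features(w0, w1)
print(feats.shape)  # (4,)
```

In the paper's setting, features like these would be consumed by a meta classifier trained with a contrastive objective to separate clients by their private labels; the sketch only shows why fine-tuning deltas carry per-client information at all.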