IS Journal 2026 Journal Article
A Survey on Continuous Unlearning in Generative AI: Approaches and Tradeoffs
- Yang Zhao
- Hongyang Du
- Yijing Lin
- Keyi Xiang
- Dusit Niyato
- H. Vincent Poor
Generative artificial intelligence (GenAI) models have transformed content creation but raise concerns about privacy, security, and compliance with regulations such as the General Data Protection Regulation (GDPR). In response, unlearning techniques have emerged to selectively remove specific data from trained models while preserving model utility. This article reviews unlearning methods in centralized and decentralized settings. These strategies mitigate risks such as data leakage, membership inference, and bias amplification. By integrating unlearning with continuous or lifelong learning paradigms, GenAI models can adapt dynamically while honoring the “right to be forgotten.” Across existing unlearning methods, we examine key tradeoffs involving computational overhead, accuracy retention, generative quality, and the thoroughness of data deletion. Our review covers technical and ethical considerations as well as future directions, highlighting a balanced path toward responsible GenAI systems.