Arrow Research search

Author name cluster

Kaibo Wang

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact name matches; it is not a full identity-disambiguation profile.

6 papers
1 author row

Possible papers (6)

TIST · 2026 · Journal Article

Multi-Stage Robust Federated Learning: Addressing Label Noise under Data Heterogeneity and Imbalance

  • Kaibo Wang
  • Anqi Zhang
  • Tangyou Liu
  • Wenqian Zhang
  • Guanglin Zhang

Federated Learning (FL) enables collaborative model training while preserving data privacy, but the presence of noisy labels in local datasets remains a significant challenge, particularly under heterogeneous noise conditions and class imbalance. In this work, we introduce a novel Multi-Stage Robust Federated Learning (MRFL) framework to address these issues. In the warm-up noise detection stage, MRFL computes per-class average losses on each client and employs a Gaussian mixture model to accurately identify clients with substantial label noise. In the subsequent noise-robust training stage, a robust loss function and noise solver are designed to distinguish clean from noisy samples, while semi-supervised learning is used to recover valuable information from tail classes. Moreover, a robust weighted aggregation strategy is adopted to mitigate the adverse effects of noisy clients. Extensive experiments on CIFAR-10/100-LT and ICH datasets demonstrate that MRFL outperforms state-of-the-art methods in federated noisy label learning scenarios characterized by data heterogeneity and imbalance.
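The warm-up noise-detection stage described above can be illustrated with a toy sketch. This is not the authors' code: the client losses are hypothetical, and a minimal two-component 1-D Gaussian mixture is fit via EM to separate low-loss (clean) from high-loss (noisy) clients, mirroring the idea in the abstract.

```python
import numpy as np

def fit_gmm_1d(x, iters=50):
    """Minimal 2-component 1-D Gaussian mixture fit via EM."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -((x[:, None] - mu) ** 2) / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, resp

rng = np.random.default_rng(0)
# Hypothetical per-client average losses: 8 clean clients cluster low,
# 2 label-noisy clients cluster high.
losses = np.concatenate([rng.normal(0.3, 0.05, 8),
                         rng.normal(1.2, 0.10, 2)])
mu, resp = fit_gmm_1d(losses)
# Clients assigned to the higher-mean component are flagged as noisy.
noisy = np.where(resp.argmax(axis=1) == mu.argmax())[0]
print(sorted(noisy.tolist()))
```

In MRFL the statistic is a per-class average loss per client rather than a single scalar, but the clustering step follows the same pattern.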

AAAI · 2026 · Conference Paper

TweezeEdit: Consistent and Efficient Image Editing with Path Regularization

  • Jianda Mao
  • Kaibo Wang
  • Yang Xiang
  • Kani Chen

Recent progress in training-free image editing has enabled existing text-to-image diffusion models to be directly adapted into text-guided image editors without additional training. However, existing methods often over-align with target prompts while inadequately preserving source image semantics. These approaches generate target images explicitly or implicitly from the inversion noise of the source images, termed inversion anchors. We identify this strategy as suboptimal for semantic preservation and inefficient due to elongated editing paths. We propose TweezeEdit, a tuning- and inversion-free framework for consistent and efficient image editing. Our method addresses these limitations by regularizing the entire denoising path rather than relying solely on inversion anchors, ensuring source semantic retention and shortening editing paths. Guided by gradient-driven regularization, we efficiently inject target prompt semantics along a direct path using a consistency model. Extensive experiments demonstrate TweezeEdit's superior performance in semantic preservation and target alignment, outperforming existing methods. Remarkably, it requires only 12 steps (1.6 seconds per edit), underscoring its potential for real-time applications. The appendix is available in the extended version.

NeurIPS · 2025 · Conference Paper

Towards a Golden Classifier-Free Guidance Path via Foresight Fixed Point Iterations

  • Kaibo Wang
  • Jianda Mao
  • Tong Wu
  • Yang Xiang

Classifier-Free Guidance (CFG) is an essential component of text-to-image diffusion models, and understanding and advancing its operational mechanisms remains a central focus of research. Existing approaches stem from divergent theoretical interpretations, thereby limiting the design space and obscuring key design choices. To address this, we propose a unified perspective that reframes conditional guidance as fixed point iterations, seeking to identify a golden path where latents produce consistent outputs under both conditional and unconditional generation. We demonstrate that CFG and its variants constitute a special case of single-step short-interval iteration, which is theoretically proven to be inefficient. Building on this insight, we introduce Foresight Guidance (FSG), which prioritizes solving longer-interval subproblems in early diffusion stages with increased iterations. Extensive experiments across diverse datasets and model architectures validate the superiority of FSG over state-of-the-art methods in both image quality and computational efficiency. Our work offers novel perspectives for conditional guidance and unlocks the potential of adaptive design.

NeurIPS · 2024 · Conference Paper

DiffHammer: Rethinking the Robustness of Diffusion-Based Adversarial Purification

  • Kaibo Wang
  • Xiaowen Fu
  • Yuxuan Han
  • Yang Xiang

Diffusion-based purification has demonstrated impressive robustness as an adversarial defense. However, concerns exist about whether this robustness arises from insufficient evaluation. Our research shows that EOT-based attacks face a gradient dilemma due to global gradient averaging, resulting in ineffective evaluations. Additionally, single-shot evaluation underestimates the resubmission risk of stochastic defenses. To address these issues, we propose an effective and efficient attack named DiffHammer. This method bypasses the gradient dilemma through selective attacks on vulnerable purifications, incorporating $N$-evaluation into attack loops and using gradient grafting for comprehensive and efficient evaluation. Our experiments validate that DiffHammer achieves effective results within 10-30 iterations, outperforming other methods. This calls into question the reliability of diffusion-based purification once the gradient dilemma is mitigated and the resubmission risk is scrutinized.
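The gradient dilemma mentioned in the abstract can be made concrete with a small numerical sketch. This is not DiffHammer itself, and the per-run gradients are made up for illustration: EOT (Expectation over Transformation) averages loss gradients across stochastic purification runs, and when individual runs yield conflicting directions the average nearly cancels, leaving the attacker little usable signal, whereas attacking a vulnerable run selectively keeps the full magnitude.

```python
import numpy as np

# Hypothetical loss gradients from 6 stochastic purification runs:
# half push the perturbation one way along the first axis, half the other.
per_run_grads = np.array([[+1.0, 0.2], [+0.9, 0.1], [+1.1, 0.3],
                          [-1.0, 0.2], [-0.9, 0.1], [-1.1, 0.3]])

# Global EOT average: conflicting directions largely cancel out.
eot_grad = per_run_grads.mean(axis=0)

# Selective alternative: follow the single strongest run instead.
strongest = per_run_grads[np.abs(per_run_grads[:, 0]).argmax()]

print(np.linalg.norm(eot_grad))   # small: averaged signal mostly cancels
print(np.linalg.norm(strongest))  # large: one run retains full magnitude
```

DiffHammer's actual remedy combines selective attacks with gradient grafting and an $N$-evaluation loop; this sketch only shows why naive global averaging can be a poor evaluation signal.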