Arrow Research search

Author name cluster

Zhibin Wan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
1 author row

Possible papers (2)

IJCAI Conference 2025 Conference Paper

Free Lunch of Image-mask Alignment for Anomaly Image Generation and Segmentation

  • Xiangyue Li
  • Xiaoyang Wang
  • Zhibin Wan
  • Quan Zhang
  • Yupei Wu
  • Tao Deng
  • Mingjie Sun

This paper aims at generating anomalous images and their segmentation labels to address the lack of real-world anomaly samples and privacy issues. Departing from conventional approaches that use masks solely to guide the generation of anomaly images, we propose a dual-branch training strategy for the generative model. This strategy enables the simultaneous production of anomaly images and masks, with an alignment regularization loss that ensures the coherence between the generated images and their masks. During inference, only the image-generation branch is activated to produce synthetic samples for training the downstream segmentation model. Furthermore, we propose to integrate the well-trained generative model into the training of segmentation models, utilizing a generative feedback loss to refine the segmentation model's performance. Experiments show our method's IoU metrics exceed previous methods by 5.03%, 5.68% and 16.63% on Real-IAD (industrial), polyp (medical), and Floor Dirty (indoor) datasets. The code is publicly accessible at https://github.com/huan-yin/anomaly-alignment.
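The abstract does not give the exact form of the alignment regularization loss, but the idea — penalizing disagreement between a generated mask and the region where the generated image actually deviates from a normal reference — can be sketched minimally. The function name, the evidence-map construction, and the MSE penalty below are all illustrative assumptions, not the paper's method:

```python
import numpy as np

def alignment_regularization_loss(gen_image, ref_image, gen_mask, weight=1.0):
    """Hypothetical alignment term: encourage the generated mask to cover
    exactly the pixels where the generated anomaly image deviates from a
    normal reference image. (Illustrative sketch; the paper's actual loss
    is not specified in the abstract.)"""
    # Per-pixel deviation of the generated image from the normal reference,
    # averaged over channels -> shape (H, W)
    deviation = np.abs(gen_image - ref_image).mean(axis=-1)
    # Normalize into a soft "anomaly evidence" map in [0, 1]
    evidence = deviation / (deviation.max() + 1e-8)
    # Penalize mask/evidence disagreement with a mean-squared error
    return weight * float(np.mean((gen_mask - evidence) ** 2))
```

A mask that exactly matches the deviation pattern drives the term to zero, while an empty or misplaced mask is penalized — which is the coherence property the dual-branch training is described as enforcing.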

AAAI Conference 2021 Conference Paper

Multi-View Information-Bottleneck Representation Learning

  • Zhibin Wan
  • Changqing Zhang
  • Pengfei Zhu
  • Qinghua Hu

In real-world applications, clustering or classification can usually be improved by fusing information from different views. Therefore, unsupervised representation learning on multi-view data becomes a compelling topic in machine learning. In this paper, we propose a novel and flexible unsupervised multi-view representation learning model termed Collaborative Multi-View Information Bottleneck Networks (CMIB-Nets), which comprehensively explores the common latent structure and the view-specific intrinsic information, and discards the superfluous information in the data, significantly improving the generalization capability of the model. Specifically, our proposed model relies on the information bottleneck principle to integrate the shared representation among different views and the view-specific representation of each view, promoting a complete multi-view representation and flexibly balancing the complementarity and consistency among multiple views. We conduct extensive experiments (including clustering analysis, robustness experiments, and an ablation study) on real-world datasets, which empirically show promising generalization ability and robustness compared to state-of-the-art methods.