Arrow Research search

Author name cluster

Haiming Xu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

8 papers
1 author row

Possible papers (8)

AAAI Conference 2026 · Conference Paper

Adversarial Fair Incomplete Multi-View Clustering

  • Qianqian Wang
  • Haiming Xu
  • Wei Feng
  • Quanxue Gao

Fair incomplete multi-view clustering (FIMVC) faces a critical yet unresolved challenge: existing methods often fail to address the intertwined issues of data missingness and algorithmic bias simultaneously. In this paper, we propose a novel FIMVC method named Adversarial Fair Incomplete Multi-View Clustering (AFIMVC). The core of AFIMVC is a new adaptive adversarial disentanglement mechanism. This mechanism trains the feature encoder to produce representations that are invariant to sensitive attributes through adversarial learning, where the adversarial intensity is dynamically controlled by the model's real-time bias. Additionally, we develop a probabilistic cross-view contrastive learning strategy to achieve semantic consistency in the latent space. To handle missing data, AFIMVC employs a context-aware fusion strategy that leverages cross-sample attention to robustly synthesize a unified representation from incomplete views. Extensive experiments demonstrate that AFIMVC achieves a state-of-the-art balance between clustering accuracy and fairness, significantly outperforming existing methods.
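As a rough illustration of the "adversarial intensity controlled by real-time bias" idea in the abstract, here is a minimal sketch. The demographic-parity gap as the bias measure and the linear weighting schedule are assumptions for illustration, not details from the paper:

```python
import numpy as np

def demographic_parity_gap(clusters, sensitive):
    """Largest per-cluster difference in assignment rates between
    sensitive groups (0 = perfectly fair, 1 = maximally biased)."""
    clusters, sensitive = np.asarray(clusters), np.asarray(sensitive)
    gaps = []
    for c in np.unique(clusters):
        rates = [np.mean(clusters[sensitive == g] == c)
                 for g in np.unique(sensitive)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

def adaptive_adversarial_weight(clusters, sensitive, base=1.0, cap=10.0):
    """Scale the adversary's loss weight by the currently measured bias,
    so fairness pressure grows only when bias is actually observed."""
    gap = demographic_parity_gap(clusters, sensitive)
    return min(cap, base * gap)
```

A perfectly biased assignment yields the maximum weight, while a balanced one switches the adversary off entirely.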

AAAI Conference 2025 · Conference Paper

Deep Multi-modal Graph Clustering via Graph Transformer Network

  • Qianqian Wang
  • Haiming Xu
  • Zihao Zhang
  • Wei Feng
  • Quanxue Gao

Current deep multi-modal graph clustering methods primarily rely on Graph Neural Networks (GNNs) to fully exploit attribute features and graph structures, including message propagation and low-dimensional feature embedding. However, these methods lack further exploration of graph structural information, such as the relationships between nodes and shortest paths. Additionally, they may not sufficiently mine the complementary information among multi-modal graph data. To address these issues, we propose a novel Deep Multi-modal Graph Clustering via Graph Transformer Network method, called DMGC-GTN. This method thoroughly dissects and utilizes graph structural information, applying graph smoothing to node features and incorporating various forms of embeddings into the transformer architecture. This achieves a unified embedding of graph structure and multi-modal feature attributes, fully exploiting the complementary information within multi-modal graph data. Extensive experiments demonstrate the effectiveness of our algorithm.
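The "graph smoothing applied to node features" step mentioned in the abstract can be sketched with GCN-style normalized propagation. This is a generic illustration of the idea, assuming symmetric normalization with self-loops; the paper's exact smoothing operator may differ:

```python
import numpy as np

def smooth_features(adj, feats, k=2):
    """k rounds of smoothing: average each node's features with its
    neighbours via the symmetrically normalised adjacency (self-loops
    added), so connected nodes end up with similar representations."""
    a = adj + np.eye(adj.shape[0])            # A + I (self-loops)
    d = a.sum(axis=1)
    a_norm = a / np.sqrt(np.outer(d, d))      # D^{-1/2} (A + I) D^{-1/2}
    out = np.asarray(feats, dtype=float)
    for _ in range(k):
        out = a_norm @ out
    return out
```

On a two-node graph with one edge, a single smoothing step already averages the two nodes' features exactly.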

IJCAI Conference 2025 · Conference Paper

Efficient Multi-view Clustering via Reinforcement Contrastive Learning

  • Qianqian Wang
  • Haiming Xu
  • Zihao Zhang
  • Zhiqiang Tao
  • Quanxue Gao

Contrastive multi-view clustering has demonstrated remarkable potential in complex data analysis, yet existing approaches face two critical challenges: difficulty in constructing high-quality positive and negative pairs, and high computational overhead due to static optimization strategies. To address these challenges, we propose an innovative efficient Multi-View Clustering framework with Reinforcement Contrastive Learning (EMVCRCL). Our key innovation is a reinforcement contrastive learning paradigm for dynamic clustering optimization. First, we leverage multi-view contrastive learning to obtain latent features, which are then sent to the reinforcement learning module to refine low-quality features. Specifically, it selects high-confidence features to guide positive/negative pair construction for contrastive learning. For low-confidence features, it uses a prior balanced distribution to adjust their assignments. Extensive experimental results showcase the effectiveness and superiority of our proposed method on multiple benchmark datasets.
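The confidence-based split described in the abstract (high-confidence features anchor contrastive pairs; low-confidence ones are nudged toward a balanced prior) can be sketched as follows. The threshold, the 50/50 mixing with a uniform prior, and the function names are illustrative assumptions:

```python
import numpy as np

def split_by_confidence(soft_assign, tau=0.9):
    """Indices of samples whose max cluster probability clears tau
    (candidate contrastive anchors) vs. the remaining low-confidence ones."""
    conf = soft_assign.max(axis=1)
    return np.where(conf >= tau)[0], np.where(conf < tau)[0]

def rebalance_low_confidence(soft_assign, low_idx):
    """Mix low-confidence rows with a uniform (balanced) prior and
    renormalise, softening unreliable cluster assignments."""
    k = soft_assign.shape[1]
    out = soft_assign.astype(float).copy()
    out[low_idx] = 0.5 * out[low_idx] + 0.5 / k
    return out / out.sum(axis=1, keepdims=True)
```

High-confidence rows pass through unchanged; low-confidence rows move toward the uniform distribution while remaining valid probabilities.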

IJCAI Conference 2025 · Conference Paper

Fair Incomplete Multi-View Clustering via Distribution Alignment

  • Qianqian Wang
  • Haiming Xu
  • Meiling Liu
  • Wei Feng
  • Xiangdong Zhang

Incomplete multi-view clustering (IMVC) extracts consistent and complementary information from multi-source/modality data with missing views, aiming to partition the data into different clusters. It can effectively address the problem of unsupervised multi-source data analysis in complex environments and has gained considerable attention. However, the fairness of IMVC remains underexplored, particularly when data contains sensitive features (e.g., gender, marital status, and age). To tackle this problem, this work presents a novel Fair Incomplete Multi-View Clustering (FIMVC) method. The proposed FIMVC introduces fairness constraints to ensure clustering results are independent of sensitive features. Additionally, it learns consensus representations to enhance clustering performance by maximizing mutual information and aligning the distributions of different views. Experimental results on three datasets containing sensitive features demonstrate that our method improves the fairness of clustering results while outperforming state-of-the-art IMVC methods in clustering performance.
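One common way to operationalize the "aligning the distributions of different views" objective is a maximum mean discrepancy (MMD) penalty; a linear-kernel version reduces to the distance between mean embeddings. This is a generic sketch of the idea, not necessarily the alignment loss the paper uses:

```python
import numpy as np

def mmd_linear(x, y):
    """Squared linear-kernel MMD between two views' embeddings: the
    squared distance between their mean embeddings. Driving this toward
    zero aligns the two views' distributions to first order."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum((x.mean(axis=0) - y.mean(axis=0)) ** 2))
```

Identical view distributions give zero; a constant shift between views gives a positive penalty proportional to the squared shift.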

IJCAI Conference 2024 · Conference Paper

Reconstruction Weighting Principal Component Analysis with Fusion Contrastive Learning

  • Qianqian Wang
  • Meiling Liu
  • Wei Feng
  • Mengping Jiang
  • Haiming Xu
  • Quanxue Gao

Principal component analysis (PCA) is a popular unsupervised dimensionality reduction method for extracting the principal components of data. However, existing PCA has two problems: (1) traditional PCA methods treat each sample equally and ignore sample differences, and (2) they fail to extract the discriminative features required by recognition tasks. To overcome these problems, we incorporate contrastive learning to develop a novel weighted PCA algorithm. Specifically, our method weights the reconstruction error of individual samples to reduce the influence of outliers. Besides, it integrates contrastive learning into PCA to increase inter-class distances and reduce intra-class distances, which helps improve PCA's discriminative capability. We further develop an unsupervised strategy to select positive and negative samples, eliminating pseudo-negative samples guided by clustering labels. In particular, it employs confidence levels to distinguish positive from negative samples. Consequently, our method achieves higher recognition accuracy on benchmark datasets.
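The reconstruction-error weighting idea (down-weighting samples that PCA reconstructs poorly, so outliers matter less) can be sketched as an alternating procedure. The inverse-error weighting rule and iteration count are assumptions for illustration; this omits the paper's contrastive component entirely:

```python
import numpy as np

def weighted_pca(X, n_components=1, iters=5, eps=1e-6):
    """Alternate between PCA on weighted data and re-weighting each
    sample inversely to its reconstruction error, so outliers with
    large residuals contribute less to the principal directions."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    w = np.ones(n)
    for _ in range(iters):
        mu = np.average(X, axis=0, weights=w)
        Xc = X - mu
        cov = (Xc * w[:, None]).T @ Xc / w.sum()   # weighted covariance
        _, vecs = np.linalg.eigh(cov)              # ascending eigenvalues
        W = vecs[:, -n_components:]                # top components
        err = np.linalg.norm(Xc - Xc @ W @ W.T, axis=1)
        w = 1.0 / (err + eps)                      # big error -> small weight
        w = w * n / w.sum()                        # keep average weight at 1
    return W, w
```

On data lying near a line plus one off-line outlier, the outlier ends up with a clearly smaller weight than the inliers.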

AAAI Conference 2024 · Conference Paper

Revisiting Open-Set Panoptic Segmentation

  • Yufei Yin
  • Hao Chen
  • Wengang Zhou
  • Jiajun Deng
  • Haiming Xu
  • Houqiang Li

In this paper, we focus on the open-set panoptic segmentation (OPS) task to circumvent the data explosion problem. Different from the closed-set setting, OPS aims to detect both known and unknown categories, where the latter are not annotated during training. Unlike existing work that only selects a few common categories as unknown ones, we move toward the real-world scenario by considering a wide range of tail categories (~1k). To this end, we first build a new dataset with a long-tail distribution for the OPS task. Based on this dataset, we additionally add a new class type for unknown classes and re-define the training annotations to make the OPS definition more complete and reasonable. Moreover, we analyze the influence of several significant factors in the OPS task and explore the upper bound of performance on unknown classes under different settings. Furthermore, based on these analyses, we design an effective two-phase framework for the OPS task, comprising thing-agnostic map generation and unknown segment mining. We further adopt semi-supervised learning to improve OPS performance. Experimental results on different datasets validate the effectiveness of our method.

NeurIPS Conference 2022 · Conference Paper

Semi-supervised Semantic Segmentation with Prototype-based Consistency Regularization

  • Haiming Xu
  • Lingqiao Liu
  • Qiuchen Bian
  • Zhen Yang

Semi-supervised semantic segmentation requires the model to effectively propagate label information from limited annotated images to unlabeled ones. A challenge for such a per-pixel prediction task is the large intra-class variation, i.e., regions belonging to the same class may exhibit very different appearances even within the same picture. This diversity makes label propagation from pixel to pixel difficult. To address this problem, we propose a novel approach that regularizes the distribution of within-class features to ease the difficulty of label propagation. Specifically, our approach encourages consistency between the prediction from a linear predictor and the output from a prototype-based predictor, which implicitly encourages features from the same pseudo-class to be close to at least one within-class prototype while staying far from the prototypes of other classes. By further incorporating CutMix operations and a carefully designed prototype maintenance strategy, we create a semi-supervised semantic segmentation algorithm that demonstrates superior performance over state-of-the-art methods in extensive experimental evaluation on both the Pascal VOC and Cityscapes benchmarks.
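The consistency between a linear predictor and a prototype-based predictor can be sketched as follows: the prototype branch scores each feature by cosine similarity to per-class prototypes, and a cross-entropy term is small when the two predictors agree. The temperature value and function names are illustrative assumptions:

```python
import numpy as np

def prototype_probs(feats, prototypes, temp=0.1):
    """Class distribution from cosine similarity to per-class prototypes
    (softmax over similarity / temperature)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = f @ p.T / temp
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(linear_probs, proto_probs, eps=1e-8):
    """Cross-entropy from the linear predictor's distribution to the
    prototype-based one; minimised when the two predictors agree."""
    return float(-np.mean(np.sum(linear_probs * np.log(proto_probs + eps),
                                 axis=1)))
```

A feature aligned with the class-0 prototype yields a near-one-hot prototype prediction, so a matching linear prediction incurs far less loss than a contradicting one.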