Arrow Research search

Author name cluster

Zongqian Wu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers
2 author rows

Possible papers (7)

AAAI Conference 2025 Conference Paper

Noisy Node Classification by Bi-level Optimization Based Multi-Teacher Distillation

  • Yujing Liu
  • Zongqian Wu
  • Zhengyu Lu
  • Ci Nie
  • Guoqiu Wen
  • Yonghua Zhu
  • Xiaofeng Zhu

Previous graph neural networks (GNNs) usually assume that graph data comes with clean labels for representation learning, but this assumption rarely holds in real applications. In this paper, we propose a new multi-teacher distillation method based on bi-level optimization (namely BO-NNC) to conduct noisy node classification on graph data. Specifically, we first employ multiple self-supervised learning methods to train diverse teacher models, and then aggregate their predictions through a teacher weight matrix. Furthermore, we design a new bi-level optimization strategy to dynamically adjust the teacher weight matrix based on the training progress of the student model. Finally, we design a label improvement module to improve label quality. Extensive experimental results on real datasets show that our method achieves the best results compared to state-of-the-art methods.
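The abstract's core aggregation step — combining multiple teachers' predictions through learned weights — can be sketched in a few lines. This is a minimal illustration with a per-teacher weight vector rather than the paper's full weight matrix, and it omits the bi-level update that adjusts the weights; all names are hypothetical.

```python
import numpy as np

def aggregate_teachers(teacher_probs, teacher_scores):
    """Combine per-teacher class probabilities with learned teacher scores.

    teacher_probs: shape (T, N, C) -- T teachers, N nodes, C classes.
    teacher_scores: shape (T,) -- unnormalized per-teacher weights.
    """
    # Softmax-normalize the scores so the combined output stays a distribution.
    w = np.exp(teacher_scores - teacher_scores.max())
    w = w / w.sum()
    # Weighted sum over the teacher axis.
    return np.einsum("t,tnc->nc", w, teacher_probs)

# Toy example: two teachers, three nodes, two classes.
probs = np.array([
    [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]],
    [[0.6, 0.4], [0.4, 0.6], [0.7, 0.3]],
])
target = aggregate_teachers(probs, np.array([0.0, 0.0]))
```

With equal scores the result is a plain average of the two teachers; the bi-level strategy in the paper would instead tune the scores against the student's validation behavior.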

ICML Conference 2025 Conference Paper

Rethinking Chain-of-Thought from the Perspective of Self-Training

  • Zongqian Wu
  • Baoduo Xu
  • Ruochen Cui
  • Mengmeng Zhan
  • Xiaofeng Zhu 0001
  • Lei Feng 0006

Chain-of-thought (CoT) reasoning has emerged as an effective approach for activating latent capabilities in LLMs. Interestingly, we observe that CoT reasoning and self-training share a core objective: iteratively leveraging model-generated information to progressively reduce prediction uncertainty. Building on this insight, we propose a novel CoT framework to improve reasoning performance. Our framework integrates two key components: (i) a task-specific prompt module that optimizes the initial reasoning process, and (ii) an adaptive reasoning iteration module that dynamically refines the reasoning process and addresses the limitations of previous CoT approaches, i.e., over-reasoning and high similarity between consecutive reasoning iterations. Extensive experiments show that the proposed method achieves significant advantages in both performance and computational efficiency. Our code is available at: https://github.com/zongqianwu/ST-COT.
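The "reduce prediction uncertainty iteratively, but stop before over-reasoning" idea can be sketched as a loop that refines a class distribution and halts once the entropy reduction stalls. This is an illustrative stand-in, not the paper's actual module: `step_fn` here is a toy sharpening function, and the stopping rule is a simple entropy-drop threshold.

```python
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def iterate_until_confident(step_fn, p0, max_iters=8, min_drop=0.01):
    """Run refinement rounds while prediction entropy keeps dropping.

    step_fn stands in for one reasoning round; the loop stops early when
    the entropy reduction falls below min_drop, curbing over-reasoning and
    near-duplicate consecutive iterations.
    """
    p, h = p0, entropy(p0)
    for i in range(max_iters):
        p_next = step_fn(p)
        h_next = entropy(p_next)
        if h - h_next < min_drop:
            return p, i  # no meaningful uncertainty reduction -> stop
        p, h = p_next, h_next
    return p, max_iters

# Toy "reasoning" round: sharpen the distribution toward its argmax.
sharpen = lambda p: (p ** 2) / (p ** 2).sum()
p_final, n_steps = iterate_until_confident(sharpen, np.array([0.4, 0.35, 0.25]))
```

The loop terminates before `max_iters` because each sharpening round yields a smaller entropy drop once the distribution is nearly one-hot.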

AAAI Conference 2024 Conference Paper

Self-Training Based Few-Shot Node Classification by Knowledge Distillation

  • Zongqian Wu
  • Yujie Mo
  • Peng Zhou
  • Shangbo Yuan
  • Xiaofeng Zhu

Self-training based few-shot node classification (FSNC) methods have shown excellent performance in real applications, but they cannot make full use of the information in the base set and are easily affected by the quality of pseudo-labels. To address these issues, this paper proposes a new self-training FSNC method that combines representation distillation and pseudo-label distillation. Specifically, the representation distillation includes two knowledge distillation methods (i.e., local representation distillation and global representation distillation) to transfer the information in the base set to the novel set. The pseudo-label distillation is designed to conduct knowledge distillation on the pseudo-labels to improve their quality. Experimental results show that our method achieves superior performance compared with state-of-the-art methods. Our code and a comprehensive theoretical version are available at https://github.com/zongqianwu/KD-FSNC.
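Both distillation components in the abstract build on the standard soft-label distillation objective. As background, here is a minimal sketch of that generic temperature-scaled KD loss; it is not the authors' exact local/global or pseudo-label loss, and the variable names are hypothetical.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KL divergence KL(teacher || student), averaged
    over samples (the classic Hinton-style distillation objective)."""
    p = softmax(teacher_logits, T)               # soft teacher targets
    log_q = np.log(softmax(student_logits, T))   # student log-probabilities
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return float((T ** 2) * (p * (np.log(p) - log_q)).sum(axis=-1).mean())

t = np.array([[2.0, 0.5, -1.0]])      # teacher logits
s_good = np.array([[2.0, 0.5, -1.0]]) # student matching the teacher
s_bad = np.array([[-1.0, 0.5, 2.0]])  # student disagreeing with the teacher
```

A perfectly matching student incurs zero loss; disagreement is penalized, which is what lets distillation on pseudo-labels smooth out noisy targets.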

IJCAI Conference 2024 Conference Paper

Towards Dynamic-Prompting Collaboration for Source-Free Domain Adaptation

  • Mengmeng Zhan
  • Zongqian Wu
  • Rongyao Hu
  • Ping Hu
  • Heng Tao Shen
  • Xiaofeng Zhu

In domain adaptation, challenges such as data privacy constraints can impede access to source data, catalyzing the development of source-free domain adaptation (SFDA) methods. However, current approaches heavily rely on models trained on source data, posing the risk of overfitting and suboptimal generalization. This paper introduces a dynamic prompt learning paradigm that harnesses the power of large-scale vision-language models to enhance the semantic transfer of source models. Specifically, our approach fosters robust and adaptive collaboration between the source-trained model and the vision-language model, facilitating the reliable extraction of domain-specific information from unlabeled target data, while consolidating domain-invariant knowledge. Without the need for accessing source data, our method amalgamates the strengths inherent in both traditional SFDA approaches and vision-language models, formulating a collaborative framework for addressing SFDA challenges. Extensive experiments conducted on three benchmark datasets showcase the superiority of our framework over previous SOTA methods.

AAAI Conference 2023 Conference Paper

Multiplex Graph Representation Learning via Common and Private Information Mining

  • Yujie Mo
  • Zongqian Wu
  • Yuhuan Chen
  • Xiaoshuang Shi
  • Heng Tao Shen
  • Xiaofeng Zhu

Self-supervised multiplex graph representation learning (SMGRL) has attracted increasing interest, but previous SMGRL methods still suffer from the following issues: (i) they focus only on the common information (ignoring the private information in graph structures), thus losing essential characteristics related to downstream tasks, and (ii) they ignore the redundant information in the node representations of each graph. To solve these issues, this paper proposes a new SMGRL method that jointly mines the common information and the private information in the multiplex graph while minimizing the redundant information within node representations. Specifically, the proposed method investigates decorrelation losses to extract the common information and minimize the redundant information, while investigating reconstruction losses to maintain the private information. Comprehensive experimental results on four public benchmark datasets verify the superiority of the proposed method.
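A decorrelation loss of the kind the abstract mentions typically penalizes off-diagonal entries of a feature correlation matrix so that representation dimensions carry non-redundant information. The sketch below is a generic Barlow Twins-style redundancy-reduction term, offered as an assumption about the flavor of loss involved rather than the authors' exact formulation.

```python
import numpy as np

def decorrelation_loss(Z):
    """Sum of squared off-diagonal entries of the (d x d) correlation
    matrix of Z, pushing feature dimensions toward decorrelation."""
    Zn = (Z - Z.mean(axis=0)) / (Z.std(axis=0) + 1e-8)  # standardize dims
    C = (Zn.T @ Zn) / Z.shape[0]                        # correlation matrix
    off_diag = C - np.diag(np.diag(C))
    return float((off_diag ** 2).sum())

rng = np.random.default_rng(0)
Z_indep = rng.standard_normal((1000, 4))        # nearly uncorrelated dims
Z_redund = np.hstack([Z_indep[:, :1]] * 4)      # fully redundant dims
```

Redundant representations (identical columns) yield a large penalty, while independent dimensions score near zero, which is the behavior the minimization exploits.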

IJCAI Conference 2023 Conference Paper

Totally Dynamic Hypergraph Neural Networks

  • Peng Zhou
  • Zongqian Wu
  • Xiangxiang Zeng
  • Guoqiu Wen
  • Junbo Ma
  • Xiaofeng Zhu

Recent dynamic hypergraph neural networks (DHGNNs) are designed to adaptively optimize the hypergraph structure to avoid dependence on the initial hypergraph structure, thus capturing more hidden information for representation learning. However, most existing DHGNNs cannot adjust the hyperedge number and thus fail to fully explore the underlying hypergraph structure. This paper proposes a new method, namely, totally dynamic hypergraph neural network (TDHNN), to adjust the hyperedge number for optimizing the hypergraph structure. Specifically, the proposed method first captures the hyperedge feature distribution to obtain dynamic hyperedge features rather than fixed ones, by sampling from the learned distribution. The hypergraph is then constructed based on the attention coefficients of both sampled hyperedges and nodes. The node features are dynamically updated by designing a simple hypergraph convolution algorithm. Experimental results on real datasets demonstrate the effectiveness of the proposed method, compared to SOTA methods. The source code can be accessed via https://github.com/HHW-zhou/TDHNN.
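The two-step construction in the abstract — sample hyperedge features from a learned distribution, then form the hypergraph from node-hyperedge attention — can be sketched as follows. The Gaussian reparameterization and scaled dot-product attention here are plausible stand-ins for the paper's learned distribution and attention coefficients, not its verified implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def build_soft_incidence(node_feats, mu, log_sigma, rng):
    """Sample hyperedge features from a learned Gaussian, then form a soft
    node-hyperedge incidence matrix from attention coefficients."""
    # One sample per hyperedge (reparameterization: mu + sigma * eps).
    eps = rng.standard_normal(mu.shape)
    edge_feats = mu + np.exp(log_sigma) * eps
    # Attention scores: scaled dot product between nodes and hyperedges.
    scores = node_feats @ edge_feats.T / np.sqrt(node_feats.shape[1])
    return softmax(scores, axis=1)   # row n: node n's soft memberships

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))      # 5 nodes, 8-dim features
H = build_soft_incidence(X, rng.standard_normal((3, 8)),
                         np.full((3, 8), -1.0), rng)
```

Because the hyperedge features are sampled rather than fixed, the number and content of hyperedges can change across training steps, which is the flexibility the abstract emphasizes.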

IJCAI Conference 2022 Conference Paper

Information Augmentation for Few-shot Node Classification

  • Zongqian Wu
  • Peng Zhou
  • Guoqiu Wen
  • Yingying Wan
  • Junbo Ma
  • Debo Cheng
  • Xiaofeng Zhu

Although meta-learning and metric learning have been widely applied to few-shot node classification (FSNC), some limitations still need to be addressed, such as the expensive time cost of meta-training and the difficulty of exploring the complex structure inherent in graph data. To address these issues, this paper proposes a new data augmentation method to conduct FSNC on graph data, comprising parameter initialization and parameter fine-tuning. Specifically, parameter initialization only conducts a multi-classification task on the base classes, resulting in good generalization ability and low time cost. Parameter fine-tuning designs two data augmentation methods (i.e., support augmentation and shot augmentation) on the novel classes to generate sufficient node features so that any traditional supervised classifier can be used to classify the query set. As a result, the proposed method is the first data augmentation work for FSNC. Experimental results show the effectiveness and efficiency of our proposed method, compared to state-of-the-art methods, in terms of different classification tasks.
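The shot-augmentation idea — generating enough support features per novel class that an ordinary supervised classifier can be fitted — can be illustrated with a simple feature-jittering sketch. Gaussian perturbation is an assumed stand-in for the paper's augmentation scheme; the function name and parameters are hypothetical.

```python
import numpy as np

def shot_augmentation(support_feats, n_aug, noise_scale=0.1, seed=0):
    """Expand a k-shot support set by jittering each support feature with
    small Gaussian noise, yielding extra (pseudo) training examples for a
    standard supervised classifier."""
    rng = np.random.default_rng(seed)
    k, d = support_feats.shape
    noise = rng.standard_normal((n_aug, k, d)) * noise_scale
    augmented = (support_feats[None, :, :] + noise).reshape(-1, d)
    # Keep the originals alongside the perturbed copies.
    return np.vstack([support_feats, augmented])

support = np.ones((3, 4))                 # 3-shot, 4-dim toy features
aug = shot_augmentation(support, n_aug=5) # 3 originals + 5 copies per shot
```

From 3 shots and 5 noisy copies each, the classifier sees 18 examples instead of 3, which is the "sufficient node features" effect the abstract describes.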