Arrow Research search

Author name cluster

Shiping Wang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

13 papers
1 author row

Possible papers (13)

AAAI Conference 2026 Conference Paper

Cooperative Graph Transformer with Structural Consensus for Multi-View Learning

  • Zhiyuan Lai
  • Jiacheng Li
  • Jiayuan Wang
  • Shiping Wang

Multi-view learning aims to effectively integrate data from different sources by exploring the consistency and complementarity across views. Current multi-view methods based on Graph Convolutional Networks (GCNs) primarily focus on local information, making it difficult to capture global dependencies. Furthermore, multi-view data typically lack explicit structural representations, the topologies constructed via node similarity in existing approaches are prone to noise, and simple fusion strategies are often inadequate for suppressing this noise and uncovering meaningful structural information. To tackle these issues, this paper proposes CoGFormer, a cooperative graph transformer with structural consensus learning. CoGFormer maps multi-view data into a unified space and jointly models local and global consensus: a denoising structural consensus graph convolutional network refines the consensus graph to enhance local consistency and robustness, while a structure-guided attention mechanism explicitly injects high-order cross-view structural biases to capture global consistency and improve semantic coherence. Experiments on multiple benchmarks demonstrate that CoGFormer outperforms existing state-of-the-art methods, validating its effectiveness.
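
The abstract gives no formulas, so the following is only a minimal sketch of the general idea behind structure-guided attention: a consensus graph added as a bias to the attention logits. All names (structure_guided_attention, bias_scale) are illustrative assumptions, not CoGFormer's actual interface.

```python
# Minimal sketch (not the authors' code): self-attention whose logits are
# shifted by a structural bias, illustrating how a consensus graph could
# inject cross-view structure into a transformer layer.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def structure_guided_attention(X, A, d_k=16, bias_scale=1.0, seed=0):
    """X: (n, d) node features in the unified space; A: (n, n) consensus graph."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Wq, Wk, Wv = (rng.normal(scale=d ** -0.5, size=(d, d_k)) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    logits = Q @ K.T / np.sqrt(d_k) + bias_scale * A   # structural bias added to attention logits
    return softmax(logits, axis=-1) @ V

# toy usage
X = np.random.default_rng(1).normal(size=(5, 8))
A = (np.random.default_rng(2).random((5, 5)) > 0.5).astype(float)
print(structure_guided_attention(X, A).shape)   # (5, 16)
```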

AAAI Conference 2026 Conference Paper

DIN: Dual Impulse Network for Multi-view Representation Learning

  • Yilin Wu
  • Weihong Lin
  • Renjie Lin
  • Zihan Fang
  • Shide Du
  • Shiping Wang

Multi-view representation learning, which utilizes multiple channels to improve perceptual accuracy, is recognized for its effectiveness in the analysis of multi-view data. However, deploying these methods in real-world scenarios presents two primary challenges. 1) Lack of Variegation: Multi-view representation techniques commonly observe data along a single axis, i.e., the attribute axis; 2) Insufficient Relationship: Most multi-view models lack mechanisms for exploring potential relationships between the attribute axis and the channel axis. To mitigate these obstacles, we design a Dual Impulse Network framework for multi-view representation learning (DIN) to train a feature representation. In this framework, a strategy that observes along the channel axis and the attribute axis simultaneously is introduced, and two different representations are generated by two analogous impulse networks, which are capable of extracting information corresponding to the different axes. Furthermore, we incorporate an integration network that analyzes the potential relationship between the attribute axis and the channel axis to generate two attention matrices. The final two feature representations derived from these attention matrices are aggregated to amplify the expression of internal information. Comprehensive experimental results support the efficacy and superiority of the proposed framework, demonstrating improvements in classification performance compared to state-of-the-art methods.
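
As a rough illustration of dual-axis processing, the sketch below summarizes a (channel x attribute) tensor along each axis, derives two attention maps from their interaction, and aggregates the results. It is an assumption-laden simplification of the described impulse and integration networks, not the paper's architecture.

```python
# Minimal sketch (assumptions, not DIN's implementation): process a view tensor
# along the channel axis and the attribute axis separately, derive two attention
# maps from the cross-axis interaction, and aggregate the attended results.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_axis_fusion(X):
    """X: (n, C, A) samples with C channels and A attributes."""
    h_attr = X.mean(axis=1)                             # (n, A) attribute-axis summary
    h_chan = X.mean(axis=2)                             # (n, C) channel-axis summary
    inter = np.einsum('nc,na->nca', h_chan, h_attr)     # cross-axis interaction
    att_chan = softmax(inter.mean(axis=2), axis=-1)     # (n, C) attention over channels
    att_attr = softmax(inter.mean(axis=1), axis=-1)     # (n, A) attention over attributes
    z_attr = (att_chan[:, :, None] * X).sum(axis=1)     # (n, A) attended attribute view
    z_chan = (att_attr[:, None, :] * X).sum(axis=2)     # (n, C) attended channel view
    return np.concatenate([z_attr, z_chan], axis=1)     # aggregated representation

X = np.random.default_rng(0).normal(size=(4, 3, 6))
print(dual_axis_fusion(X).shape)   # (4, 9)
```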

AAAI Conference 2026 Conference Paper

From Static to Active: Knowledge-Aware Node State Selection in Multi-view Graph Learning

  • Weiran Liao
  • Jielong Lu
  • Yuhong Chen
  • Shide Du
  • Hongrong Chen
  • Shiping Wang

Multimedia technologies leverage multi-source data to alleviate real-world data incompleteness, providing a versatile platform for multi-view learning. Among existing research, graph-based multi-view learning has achieved notable success. However, prior studies typically pursue comprehensive collaboration across all views and nodes for consistency and complementarity, ignoring the negative contribution of nodes from low-quality views. To overcome the above limitation, we explore node behavior selection in multi-view dynamic modeling and propose a knowledge-aware multi-view state space model. Specifically, nodes autonomously select either activation sequences or static sequences according to their current knowledge. In the former, we design a mask-based attention mechanism to capture the dynamics of node behaviors. In the latter, we construct a history pool and simulate synaptic signals to regulate the behavioral distribution of nodes. Moreover, the proposed model provides a directional inter-view diffusion equation that selectively propagates information to alleviate interference from low-quality nodes across views. Extensive experiments demonstrate that the proposed model outperforms baselines on multiple benchmarks and achieves significant performance improvement.

TMLR Journal 2025 Journal Article

GMAgent: A Graph-oriented Multi-agent Collaboration Framework for Text-attributed Graph Analysis

  • Hang Lv
  • Pengxiang Zhan
  • Yanchao Tan
  • Zixuan Guo
  • Shiping Wang
  • Carl Yang

Text-Attributed Graphs (TAGs) are crucial for modeling interconnected data in numerous real-world applications. Graph Neural Networks (GNNs) excel at efficiently capturing global structural information across TAGs, while Large Language Models (LLMs) offer strong capabilities in local semantic understanding. Despite the recent development of integrating GNNs and LLMs for TAG analysis, these approaches often fail to fully exploit their complementary strengths by relying primarily on a single architecture. Furthermore, LLM-based multi-agent collaboration systems have shown promising potential across diverse fields. However, their integration with GNNs for graph analytical tasks remains underexplored. To this end, we introduce GMAgent, a novel graph-oriented multi-agent collaboration framework that effectively and flexibly coordinates interactions between diverse GNN-based and LLM-based graph agents, facilitating comprehensive TAG analysis. First, we deploy multiple GNNs as graph agents to perform conflict evaluation, identifying conflict scenarios for further multi-agent collaboration. Then, we repurpose LLMs as graph agents via graph-driven instruction tuning and adopt a role-play expert recruiting strategy, thereby generating LLM graph experts' initial analyses for conflict scenarios. Finally, we conduct a graph-oriented multi-agent collaboration to effectively and efficiently guide collaborative self-reflection among graph experts and the final answer selection. Extensive experimental results on five datasets demonstrate significant improvements, showcasing the potential of our GMAgent in improving the effectiveness, interoperability, and flexibility of comprehensive TAG analysis.
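
A minimal sketch of the conflict-evaluation step described above, under the assumption that a "conflict scenario" is a node on which independent GNN agents disagree; the function name and voting rule are illustrative, not GMAgent's implementation.

```python
# Minimal sketch (assumptions): flag "conflict scenarios" where independent GNN
# agents disagree on a node's label, so that only those nodes are escalated to
# LLM-based multi-agent collaboration.
import numpy as np

def find_conflicts(agent_logits):
    """agent_logits: list of (n, classes) score matrices, one per GNN agent."""
    votes = np.stack([np.argmax(l, axis=1) for l in agent_logits])   # (agents, n)
    unanimous = (votes == votes[0]).all(axis=0)
    consensus = votes[0]                        # valid wherever agents agree
    conflict_ids = np.flatnonzero(~unanimous)   # nodes to hand to LLM graph experts
    return consensus, conflict_ids

rng = np.random.default_rng(0)
logits = [rng.normal(size=(10, 4)) for _ in range(3)]
labels, conflicts = find_conflicts(logits)
print(conflicts)   # indices the LLM experts would re-analyse
```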

IJCAI Conference 2025 Conference Paper

HiTuner: Hierarchical Semantic Fusion Model Fine-Tuning on Text-Attributed Graphs

  • Zihan Fang
  • Zhiling Cai
  • Yuxuan Zheng
  • Shide Du
  • Yanchao Tan
  • Shiping Wang

Text-Attributed Graphs (TAGs) are vital for modeling entity relationships across various domains. Graph Neural Networks have become a cornerstone for processing graph structures, while the integration of text attributes remains a prominent research topic. The development of Large Language Models (LLMs) provides new opportunities for advancing textual encoding in TAGs. However, LLMs face challenges in specialized domains due to their limited task-specific knowledge, and fine-tuning them for specific tasks demands significant resources. To cope with the above challenges, we propose HiTuner, a novel framework that leverages fine-tuned Pre-trained Language Models (PLMs) with domain expertise as tuners to enhance the hierarchical LLM contextualized representations for modeling TAGs. Specifically, we first strategically select hierarchical hidden states of the LLM to form a set of diverse and complementary descriptions as input for the sparse projection operator. Concurrently, a hybrid representation learning scheme is developed to amalgamate the broad linguistic comprehension of LLMs with task-specific insights of the fine-tuned PLMs. Finally, HiTuner employs a confidence network to adaptively fuse the semantically-augmented representations. Empirical results across benchmark datasets spanning various domains validate the effectiveness of the proposed framework. Our code is available at: https://github.com/ZihanFang11/HiTuner
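
The confidence-network fusion can be illustrated with a tiny gate that weighs the LLM and fine-tuned-PLM embeddings per node; the sketch below is an assumed simplification, not HiTuner's released code.

```python
# Minimal sketch (assumptions): a small "confidence network" that produces a
# per-node gate from the LLM and fine-tuned-PLM embeddings and fuses them.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def confidence_fuse(h_llm, h_plm, W, b):
    """h_llm, h_plm: (n, d) embeddings; W: (2d, 1) and b: scalar gate parameters."""
    gate = sigmoid(np.concatenate([h_llm, h_plm], axis=1) @ W + b)   # (n, 1) confidence
    return gate * h_llm + (1.0 - gate) * h_plm                       # adaptive fusion

rng = np.random.default_rng(0)
h_llm, h_plm = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
W, b = rng.normal(size=(16, 1)), 0.0
print(confidence_fuse(h_llm, h_plm, W, b).shape)   # (5, 8)
```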

AAAI Conference 2025 Conference Paper

Multi-View Incremental Learning with Structured Hebbian Plasticity for Enhanced Fusion Efficiency

  • Yuhong Chen
  • Ailin Song
  • Huifeng Yin
  • Shuai Zhong
  • Fuhai Chen
  • Qi Xu
  • Shiping Wang
  • Mingkun Xu

The rapid evolution of multimedia technology has revolutionized human perception, paving the way for multi-view learning. However, traditional multi-view learning approaches are tailored for scenarios with fixed data views, falling short of emulating the intricate cognitive procedures of the human brain processing signals sequentially. Our cerebral architecture seamlessly integrates sequential data through intricate feed-forward and feedback mechanisms. In stark contrast, traditional methods struggle to generalize effectively when confronted with data spanning diverse domains, highlighting the need for innovative strategies that can mimic the brain's adaptability and dynamic integration capabilities. In this paper, we propose a bio-neurologically inspired multi-view incremental framework named MVIL, aimed at emulating the brain's fine-grained fusion of sequentially arriving views. MVIL comprises two fundamental modules: structured Hebbian plasticity and synaptic partition learning. The structured Hebbian plasticity reshapes the structure of weights to express the high correlation between view representations, facilitating a fine-grained fusion of view representations. Moreover, synaptic partition learning is efficient in alleviating drastic changes in weights and in retaining old knowledge by inhibiting a subset of synapses. These modules bionically play a central role in reinforcing crucial associations between newly acquired information and existing knowledge repositories, thereby enhancing the network's capacity for generalization. Experimental results on six benchmark datasets show MVIL's effectiveness over state-of-the-art methods.
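
For orientation, the sketch below shows a textbook Hebbian update (weights strengthened in proportion to pre/post co-activation); the paper's structured variant that reshapes the weight structure is more involved, so treat this purely as an assumed baseline illustration.

```python
# Minimal sketch (assumption: a plain textbook Hebbian rule, not the paper's
# structured variant): strengthen weights in proportion to the correlation
# between pre- and post-synaptic activations when a new view arrives.
import numpy as np

def hebbian_update(W, pre, post, lr=0.01):
    """W: (d_out, d_in); pre: (n, d_in); post: (n, d_out) activations."""
    dW = post.T @ pre / len(pre)   # average outer product of co-activations
    return W + lr * dW

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 6))
pre = rng.normal(size=(32, 6))
post = np.tanh(pre @ W.T)          # current layer response to the new view
W = hebbian_update(W, pre, post)
print(W.shape)                     # (4, 6)
```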

AAAI Conference 2025 Conference Paper

OpenViewer: Openness-Aware Multi-View Learning

  • Shide Du
  • Zihan Fang
  • Yanchao Tan
  • Changwei Wang
  • Shiping Wang
  • Wenzhong Guo

Multi-view learning methods leverage multiple data sources to enhance perception by mining correlations across views, typically relying on predefined categories. However, deploying these models in real-world scenarios presents two primary openness challenges. 1) Lack of Interpretability: The integration mechanisms of multi-view data in existing black-box models remain poorly explained; 2) Insufficient Generalization: Most models are not adapted to multi-view scenarios involving unknown categories. To address these challenges, we propose OpenViewer, an openness-aware multi-view learning framework with theoretical support. This framework begins with a Pseudo-Unknown Sample Generation Mechanism to efficiently simulate open multi-view environments and adapt in advance to potential unknown samples. Subsequently, we introduce an Expression-Enhanced Deep Unfolding Network to intuitively promote interpretability by systematically constructing functional prior-mapping modules and effectively providing a more transparent integration mechanism for multi-view data. Additionally, we establish a Perception-Augmented Open-Set Training Regime to significantly enhance generalization by precisely boosting confidences for known categories and carefully suppressing inappropriate confidences for unknown ones. Experimental results demonstrate that OpenViewer effectively addresses openness challenges while ensuring recognition performance for both known and unknown samples.
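
As a hedged illustration of openness-aware prediction, the sketch below accepts the most confident known class and rejects a sample as unknown when its maximum confidence falls below a threshold; the thresholding rule is an assumption, not OpenViewer's training regime.

```python
# Minimal sketch (assumptions): standard open-set rejection by confidence, used
# here only to illustrate distinguishing known from unknown categories.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def open_set_predict(logits, threshold=0.7, unknown_label=-1):
    """logits: (n, known_classes). Returns labels with -1 marking unknowns."""
    probs = softmax(logits, axis=-1)
    labels = probs.argmax(axis=1)
    labels[probs.max(axis=1) < threshold] = unknown_label   # reject low-confidence samples
    return labels

rng = np.random.default_rng(0)
print(open_set_predict(rng.normal(size=(6, 4))))
```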

AAAI Conference 2024 Conference Paper

BCLNet: Bilateral Consensus Learning for Two-View Correspondence Pruning

  • Xiangyang Miao
  • Guobao Xiao
  • Shiping Wang
  • Jun Yu

Correspondence pruning aims to establish reliable correspondences between two related images and recover relative camera motion. Existing approaches often employ a progressive strategy to handle the local and global contexts, with a prominent emphasis on transitioning from local to global, resulting in the neglect of interactions between different contexts. To tackle this issue, we propose a parallel context learning strategy that involves acquiring bilateral consensus for the two-view correspondence pruning task. In our approach, we design a distinctive self-attention block to capture global context and process it in parallel with the established local context learning module, which enables us to simultaneously capture both local and global consensuses. By combining these local and global consensuses, we derive the required bilateral consensus. We also design a recalibration block, reducing the influence of erroneous consensus information and enhancing the robustness of the model. The culmination of our efforts is the Bilateral Consensus Learning Network (BCLNet), which efficiently estimates camera pose and identifies inliers (true correspondences). Extensive experimental results demonstrate that our network not only surpasses state-of-the-art methods on benchmark datasets but also showcases robust generalization abilities across various feature extraction techniques. Notably, BCLNet obtains significant gains over the second-best method on the unknown outdoor dataset and markedly accelerates model training.
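
To make "bilateral consensus" concrete, the sketch below computes a local consensus from k-nearest-neighbour averaging and a global consensus from plain self-attention, then combines them in parallel; it is an assumed simplification rather than BCLNet's actual blocks.

```python
# Minimal sketch (assumptions): parallel local (kNN averaging) and global
# (self-attention) consensus over per-correspondence features, then combined.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bilateral_consensus(F, k=3):
    """F: (n, d) per-correspondence features."""
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]                # k nearest neighbours (excl. self)
    local = F[knn].mean(axis=1)                             # local consensus
    att = softmax(F @ F.T / np.sqrt(F.shape[1]), axis=-1)
    global_ = att @ F                                       # global consensus
    return 0.5 * (local + global_)                          # bilateral combination

F = np.random.default_rng(0).normal(size=(8, 5))
print(bilateral_consensus(F).shape)   # (8, 5)
```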

IJCAI Conference 2024 Conference Paper

Enhancing Dual-Target Cross-Domain Recommendation with Federated Privacy-Preserving Learning

  • Zhenghong Lin
  • Wei Huang
  • Hengyu Zhang
  • Jiayu Xu
  • Weiming Liu
  • Xinting Liao
  • Fan Wang
  • Shiping Wang

Recently, dual-target cross-domain recommendation (DTCDR) has been proposed to alleviate the data sparsity problem by sharing common knowledge across domains simultaneously. However, existing methods often assume that personal data containing abundant identifiable information can be directly accessed, which results in a controversial privacy leakage problem for DTCDR. To this end, we introduce the P2DTR framework, a novel approach to DTCDR that protects private user information. Specifically, we first design a novel inter-client knowledge extraction mechanism, which exploits the private set intersection algorithm and prototype-based federated learning to enable collaborative modeling among multiple users and a server. Furthermore, to improve the recommendation performance based on the extracted common knowledge across domains, we propose an intra-client enhanced recommendation module, consisting of a constrained dominant set (CDS) propagation mechanism and a dual-recommendation module. Extensive experiments on real-world datasets validate that our proposed P2DTR framework achieves superior utility under a privacy-preserving guarantee on both domains.
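
A minimal sketch of the prototype-based federated learning referenced above: each client shares only per-class mean embeddings and the server averages them, so raw interaction data never leaves the client. The helper names are hypothetical and the private set intersection step is omitted.

```python
# Minimal sketch (assumptions, not P2DTR): clients share class prototypes
# instead of raw data; the server aggregates them into global prototypes.
import numpy as np

def client_prototypes(embeddings, labels, num_classes):
    """embeddings: (n, d) local embeddings; labels: (n,) class ids."""
    return np.stack([embeddings[labels == c].mean(axis=0) for c in range(num_classes)])

def server_aggregate(prototype_list):
    """Average the clients' prototypes into (classes, d) global prototypes."""
    return np.mean(np.stack(prototype_list), axis=0)

rng = np.random.default_rng(0)
clients = []
for _ in range(3):                          # three clients; private data stays local
    emb = rng.normal(size=(20, 8))
    lab = np.arange(20) % 4                 # toy labels covering 4 classes
    clients.append(client_prototypes(emb, lab, num_classes=4))
print(server_aggregate(clients).shape)      # (4, 8)
```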

AAAI Conference 2024 Conference Paper

Graph Context Transformation Learning for Progressive Correspondence Pruning

  • Junwen Guo
  • Guobao Xiao
  • Shiping Wang
  • Jun Yu

Most existing correspondence pruning methods concentrate only on gathering as much context information as possible while neglecting effective ways to utilize such information. In order to tackle this dilemma, in this paper we propose the Graph Context Transformation Network (GCT-Net), which enhances context information to conduct consensus guidance for progressive correspondence pruning. Specifically, we design the Graph Context Enhance Transformer, which first generates the graph network and then transforms it into multi-branch graph contexts. Moreover, it employs self-attention and cross-attention to magnify the characteristics of each graph context, emphasizing the unique as well as the shared essential information. To further apply the recalibrated graph contexts to the global domain, we propose the Graph Context Guidance Transformer. This module adopts a confidence-based sampling strategy to temporarily screen high-confidence vertices for guiding accurate classification by searching for global consensus between the screened vertices and the remaining ones. Extensive experimental results on outlier removal and relative pose estimation clearly demonstrate the superior performance of GCT-Net compared to state-of-the-art methods across outdoor and indoor datasets.

AAAI Conference 2023 Conference Paper

Beyond Graph Convolutional Network: An Interpretable Regularizer-Centered Optimization Framework

  • Shiping Wang
  • Zhihao Wu
  • Yuhong Chen
  • Yong Chen

Graph convolutional networks (GCNs) have been attracting widespread attention due to their encouraging performance and powerful generalization. However, few works provide a general view to interpret various GCNs and to guide GCNs' designs. In this paper, by revisiting the original GCN, we induce an interpretable regularizer-centered optimization framework, in which, by building appropriate regularizers, we can interpret most GCNs, such as APPNP, JKNet, DAGNN, and GNN-LF/HF. Further, under the proposed framework, we devise a dual-regularizer graph convolutional network (dubbed tsGCN) to capture topological and semantic structures from graph data. Since the derived learning rule for tsGCN contains the inverse of a large matrix and is thus time-consuming, we leverage the Woodbury matrix identity and low-rank approximation tricks to successfully decrease the high computational complexity of computing infinite-order graph convolutions. Extensive experiments on eight public datasets demonstrate that tsGCN achieves superior performance against quite a few state-of-the-art competitors w.r.t. classification tasks.
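
The Woodbury matrix identity mentioned above is standard and can be checked numerically: (A + UCV)^{-1} = A^{-1} - A^{-1}U(C^{-1} + V A^{-1} U)^{-1} V A^{-1}, which replaces the inverse of a large low-rank-updated matrix with a small r x r solve. The sketch below only verifies the identity on random data; it is not tsGCN's learning rule.

```python
# Numeric check of the Woodbury matrix identity on a diagonal base matrix plus
# a random rank-r update (assumed toy data, not the paper's matrices).
import numpy as np

rng = np.random.default_rng(0)
n, r = 60, 3
A = np.diag(rng.uniform(1.0, 2.0, size=n))        # cheap-to-invert base matrix
U, V = rng.normal(size=(n, r)), rng.normal(size=(r, n))
C = np.eye(r)

A_inv = np.diag(1.0 / np.diag(A))
small = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)   # only an r x r inverse
woodbury = A_inv - A_inv @ U @ small @ V @ A_inv

direct = np.linalg.inv(A + U @ C @ V)                     # full n x n inverse
print(np.allclose(woodbury, direct))                      # expected: True
```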

AAAI Conference 2023 Conference Paper

Dual Low-Rank Graph Autoencoder for Semantic and Topological Networks

  • Zhaoliang Chen
  • Zhihao Wu
  • Shiping Wang
  • Wenzhong Guo

Due to its powerful capability to gather information from neighborhood nodes, the Graph Convolutional Network (GCN) has become a widely explored hotspot in recent years. As a well-established extension, the Graph AutoEncoder (GAE) succeeds in mining underlying node representations by evaluating the quality of adjacency matrix reconstruction from learned features. However, limited work on GAE has been devoted to leveraging both semantic and topological graphs, and existing methods only indirectly extract the relationships between graphs via weights shared by features. To better capture the connections between nodes from these two types of graphs, this paper proposes a graph neural network dubbed Dual Low-Rank Graph AutoEncoder (DLR-GAE), which takes both semantic and topological homophily into consideration. Differing from prior works that share common weights between GCNs, the presented DLR-GAE conducts sustained exploration of low-rank information between two distinct graphs, and reconstructs adjacency matrices from learned latent factors and embeddings. In order to obtain valid adjacency matrices that meet certain conditions, we design surrogates and projections to restrict the learned factor matrix. We compare the proposed model with state-of-the-art methods on several datasets, which demonstrates the superior accuracy of DLR-GAE in semi-supervised classification.
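
As a rough sketch of reconstructing an adjacency matrix from learned latent factors, the code below scores node pairs through a low-rank bilinear form, squashes the scores with a sigmoid, and compares them against an observed graph; the factorisation and loss are assumptions, not DLR-GAE's surrogates and projections.

```python
# Minimal sketch (assumptions): adjacency reconstruction from node embeddings
# and an explicitly low-rank factor, followed by a simple reconstruction loss.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruct_adjacency(Z, M):
    """Z: (n, d) node embeddings; M: (d, d) low-rank factor."""
    A_hat = sigmoid(Z @ M @ Z.T)          # edge scores in (0, 1)
    A_hat = 0.5 * (A_hat + A_hat.T)       # symmetrise
    np.fill_diagonal(A_hat, 0.0)          # no self-loops
    return A_hat

rng = np.random.default_rng(0)
Z = rng.normal(size=(6, 4))
B = rng.normal(size=(4, 2))
M = B @ B.T                               # explicit rank-2 factorisation
recon = reconstruct_adjacency(Z, M)
loss = np.mean((recon - np.eye(6)) ** 2)  # e.g. squared error against an observed graph
print(recon.shape, round(float(loss), 4))
```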

JBHI Journal 2023 Journal Article

Graph Neural Networks With Multiple Prior Knowledge for Multi-Omics Data Analysis

  • Shunxin Xiao
  • Huibin Lin
  • Conghao Wang
  • Shiping Wang
  • Jagath C. Rajapakse

With the development of biotechnology, a large amount of multi-omics data have been collected for precision medicine. There exist multiple sources of graph-based prior biological knowledge about omics data, such as gene-gene interaction networks. Recently, there has been an increasing interest in introducing graph neural networks (GNNs) into multi-omics learning. However, existing methods have not fully exploited these graphical priors since none have been able to integrate knowledge from multiple sources simultaneously. To solve this problem, we propose a multi-omics data analysis framework that incorporates multiple sources of prior knowledge into a graph neural network (MPK-GNN). To the best of our knowledge, this is the first attempt to introduce multiple prior graphs into multi-omics data analysis. Specifically, the proposed method contains four parts: (1) a feature-level learning module to aggregate information from prior graphs; (2) a projection module to maximize the agreement among prior networks by optimizing a contrastive loss; (3) a sample-level module to learn a global representation from input multi-omics features; (4) a task-specific module to flexibly extend MPK-GNN for various downstream multi-omics analysis tasks. Finally, we verify the effectiveness of the proposed multi-omics learning algorithm on the cancer molecular subtype classification task. Experimental results show that MPK-GNN outperforms other state-of-the-art algorithms, including multi-view learning methods and multi-omics integrative approaches.
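
The contrastive agreement objective in part (2) can be illustrated with an InfoNCE-style loss that treats the same sample's embeddings from two prior graphs as a positive pair; the sketch below is an assumed stand-in, not MPK-GNN's exact loss.

```python
# Minimal sketch (assumptions): InfoNCE-style agreement loss between sample
# embeddings derived from two different prior graphs.
import numpy as np

def contrastive_agreement(Z1, Z2, temperature=0.5):
    """Z1, Z2: (n, d) sample embeddings aggregated from two prior graphs."""
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = Z1 @ Z2.T / temperature                              # cosine similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                         # pull matching pairs together

rng = np.random.default_rng(0)
Z1, Z2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
print(round(float(contrastive_agreement(Z1, Z2)), 4))
```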