Arrow Research search

Author name cluster

Xingwang Zhao

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers


NeurIPS 2025 · Conference Paper

CMoB: Modality Valuation via Causal Effect for Balanced Multimodal Learning

  • Jun Wang
  • Fuyuan Cao
  • Zhixin Xue
  • Xingwang Zhao
  • Jiye Liang

Existing early- and late-fusion frameworks in multimodal learning face the fundamental challenge of modality imbalance, wherein disparities in representational capacity induce inter-modal competition during training. Current methods rely primarily on modality-level contribution assessments to measure gaps in representational capability and enhance poorly learned modalities, overlooking how modality contributions vary dynamically across individual samples. To address this, we propose a Causal-aware Modality valuation approach for Balanced multimodal learning (CMoB). We define a benefit function based on Shannon entropy to evaluate how the importance of samples changes across different stages of multimodal training. Inspired by human cognitive science, we quantify modality contributions from a causal perspective, capturing fine-grained changes in contribution degrees within samples. During iterative training, we develop targeted modality-enhancement strategies that dynamically select and optimize modalities based on real-time evaluation of their contribution variations across training samples. Our method enhances the discriminative ability of key modalities and the learning capacity of weak modalities while achieving fine-grained balance in multimodal learning. Extensive experiments on benchmark multimodal datasets and frameworks demonstrate the superiority of CMoB for balanced multimodal learning.
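The abstract's entropy-based benefit function and causal contribution measure can be illustrated with a minimal sketch. This is not the paper's implementation; the function names and the toy logits are hypothetical, and the "causal effect" of a modality is read here, under that assumption, as the interventional drop in predictive uncertainty when the modality is ablated from the fused prediction:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector (nats)."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def modality_contribution(logits_full, logits_without_m):
    """Hypothetical interventional contribution of modality m for one
    sample: how much predictive uncertainty rises when m is ablated."""
    return entropy(softmax(logits_without_m)) - entropy(softmax(logits_full))

# Toy example: a confident fused prediction vs. one with modality m removed.
full = np.array([4.0, 0.5, 0.2])     # fused logits (all modalities)
ablated = np.array([1.0, 0.8, 0.9])  # logits with modality m ablated
c = modality_contribution(full, ablated)  # positive: m sharpens the prediction
```

A per-sample score like `c`, tracked across training stages, is one way to realize the "fine-grained changes in modality contribution degrees within samples" the abstract describes.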

AAAI 2025 · Conference Paper

Counterfactual Task-augmented Meta-learning for Cold-start Sequential Recommendation

  • Zhiqiang Wang
  • Jiayi Pan
  • Xingwang Zhao
  • Jianqing Liang
  • Chenjiao Feng
  • Kaixuan Yao

Cold-start sequential recommendation, where user interaction histories are sparse or minimal, remains a significant challenge in recommender systems. Current meta-learning-based approaches rely heavily on the interaction histories of regular users to construct meta-tasks, aiming to acquire prior knowledge for cold-start adaptation. However, these methods often fail to account for preference discrepancies between regular and cold-start users, leading to biased preference modeling and suboptimal recommendations. To address this issue, we propose a novel counterfactual task-augmented meta-learning method for cold-start sequential recommendation. Our approach intervenes in user interaction histories to create counterfactual sequences that simulate potential but unrealized user behaviors, establishing counterfactual tasks within a meta-learning framework. Additionally, we aggregate meta-path neighbors to uncover latent relationships between items, enabling more detailed and accurate modeling of user preferences. Moreover, by integrating real and counterfactual task losses, we jointly optimize the model through a combination of global and local updates, enhancing its adaptability to cold-start scenarios. Extensive experiments demonstrate that our method significantly outperforms existing state-of-the-art techniques, achieving superior results on cold-start sequential recommendation tasks.
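The counterfactual intervention the abstract describes can be sketched in a few lines. This is a guess at the mechanism, not the paper's code: item IDs, the neighbor map, and the one-swap intervention are all hypothetical, with meta-path neighbors standing in as the pool of "potential but unrealized" substitutes:

```python
import random

def counterfactual_sequences(history, neighbors, k=2, seed=0):
    """Generate k counterfactual histories by intervening on a single
    interaction: swap one item for a (hypothetical) meta-path neighbor."""
    rng = random.Random(seed)
    out = []
    for _ in range(k):
        cf = list(history)
        pos = rng.randrange(len(cf))          # pick an interaction to intervene on
        alts = neighbors.get(cf[pos], [])
        if alts:
            cf[pos] = rng.choice(alts)        # the unrealized behavior
        out.append(cf)
    return out

# Toy interaction history and a made-up meta-path neighbor map.
history = ["i1", "i2", "i3"]
neighbors = {"i1": ["i7"], "i2": ["i5", "i6"], "i3": ["i9"]}
cf_tasks = counterfactual_sequences(history, neighbors)
```

Each counterfactual sequence would then seed its own meta-task, whose loss is combined with the real task's loss during the global/local meta-updates mentioned above.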

IJCAI 2025 · Conference Paper

Uncertainty-guided Graph Contrastive Learning from a Unified Perspective

  • Zhiqiang Li
  • Jie Wang
  • Jianqing Liang
  • Junbiao Cui
  • Xingwang Zhao
  • Jiye Liang

The success of current graph contrastive learning methods largely relies on the choice of data augmentation and contrastive objectives. However, most existing methods optimize these two components independently, neglecting their potential interplay, which leads to suboptimal quality of the learned embeddings. To address this issue, we propose Uncertainty-guided Graph Contrastive Learning (UGCL) from a unified perspective. The core of our method is sample uncertainty, a metric that quantifies the degree of class ambiguity within individual samples. On this basis, we design a novel multi-scale data augmentation strategy and a weighted graph contrastive loss function, both of which significantly enhance the quality of the embeddings. Theoretically, we demonstrate that UGCL coordinates the overall optimization objectives through uncertainty; empirically, we show that it improves performance on tasks such as node classification, node clustering, and link prediction, verifying the effectiveness of our method.
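One plausible reading of the abstract's two ingredients, sketched under stated assumptions: `sample_uncertainty` is normalized Shannon entropy of a node's predicted class distribution (1 = fully ambiguous), and `weighted_info_nce` is a standard two-view InfoNCE loss with per-sample weights. The weighting scheme, names, and the choice to down-weight ambiguous nodes are illustrative, not UGCL's actual formulation:

```python
import numpy as np

def sample_uncertainty(probs, eps=1e-12):
    """Class-ambiguity score per node: normalized Shannon entropy of
    its predicted class distribution (1 = uniform, ~0 = confident)."""
    p = np.clip(probs, eps, 1.0)
    h = -(p * np.log(p)).sum(axis=1)
    return h / np.log(p.shape[1])

def weighted_info_nce(z1, z2, weights, tau=0.5):
    """InfoNCE between two augmented views, with per-sample weights
    (here: down-weighting ambiguous nodes, an assumed design choice)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    sim = sim - sim.max(axis=1, keepdims=True)          # numerical stability
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    per_sample = -np.diag(logp)                          # positive pair on the diagonal
    return float((weights * per_sample).sum() / weights.sum())

# Toy run: two nodes, two views; confident nodes get larger weights.
probs = np.array([[0.9, 0.1], [0.6, 0.4]])
w = 1.0 - sample_uncertainty(probs)
z1 = np.array([[1.0, 0.1], [0.1, 1.0]])
z2 = np.array([[0.9, 0.2], [0.2, 0.9]])
loss = weighted_info_nce(z1, z2, w)
```

The same uncertainty score could equally steer the multi-scale augmentation side, which is how a single quantity can "coordinate" both components as the abstract claims.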