Arrow Research search

Author name cluster

Yixuan Du

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers


AAAI Conference 2026 Conference Paper

DuoKD: Dual Knowledge Distillation from Large Language Models for Robust Graph Neural Networks

  • Cuiying Huo
  • Xiaotong Huang
  • Dongxiao He
  • Yixuan Du
  • Wenhuan Lu
  • Di Jin

Graph neural networks (GNNs) have become a dominant modeling paradigm for graph-structured data, and the emergence of large language models (LLMs) has spurred growing interest in integrating external semantic knowledge into GNNs. Current LLM-based GNNs focus on extracting semantically similar information from LLMs to enhance representation learning. However, they generally overlook key signals that are semantically dissimilar yet exhibit stronger inter-class discriminative ability. In particular, when the original graph data contains noise or semantic ambiguity, a purely similarity-based semantic augmentation strategy not only fails to provide effective enhancement, but may also amplify misleading signals that the LLM generates in response to low-quality inputs or its own hallucinations, further degrading the discriminative power and robustness of GNNs. To this end, we propose a dual positive-negative knowledge extraction strategy based on LLMs and integrate it with a knowledge distillation mechanism that dynamically transfers multi-dimensional enhancement signals to GNNs, thereby achieving fine-grained and robust graph representation learning. Specifically, we design personalized prompts that guide LLMs to generate semantically similar positive signals and semantically dissimilar negative signals, which help the model capture intra-class consistency and inter-class distinction. We then generate structural and semantic reasoning as supplementary knowledge to support the rationality and guidance of the supervision signals. To identify high-confidence transferred knowledge, we introduce a language-based evaluation mechanism that filters out low-confidence or hallucinated outputs. Finally, under a unified distillation framework, our method uses both positive and negative knowledge to guide GNN training, achieving adaptive and robust representation learning. Extensive experiments on benchmark datasets verify the superior performance of our approach across various tasks.
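The dual positive-negative distillation idea in the abstract can be pictured with a toy objective. This is a minimal numpy sketch, not the paper's actual loss: `dual_kd_loss`, the margin form, and the `conf` weight are all illustrative assumptions standing in for the pull-toward-positive / push-from-negative structure and the language-based confidence filtering described above.

```python
import numpy as np

def dual_kd_loss(student, positive, negative, conf=1.0, margin=1.0):
    """Toy dual-distillation objective (illustrative only): pull the
    student embedding toward the LLM's positive (semantically similar)
    signal, push it away from the negative (semantically dissimilar)
    signal via a margin term, and scale by a confidence weight such as
    one produced by the evaluation mechanism."""
    pull = np.sum((student - positive) ** 2)                     # intra-class consistency
    push = max(0.0, margin - np.sum((student - negative) ** 2))  # inter-class distinction
    return conf * (pull + push)

s = np.zeros(2)
loss_aligned = dual_kd_loss(s, positive=np.array([0.0, 0.0]),
                            negative=np.array([2.0, 0.0]))       # already well separated
loss_confused = dual_kd_loss(s, positive=np.array([1.0, 1.0]),
                             negative=np.array([0.1, 0.0]))      # near the negative signal
```

A student embedding that already matches the positive signal and sits far from the negative one incurs no loss, while one sitting near the negative signal is penalized on both terms.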

NeurIPS Conference 2024 Conference Paper

Deep Graph Neural Networks via Posteriori-Sampling-based Node-Adaptive Residual Module

  • Jingbo Zhou
  • Yixuan Du
  • Ruqiong Zhang
  • Jun Xia
  • Zhizhi Yu
  • Zelin Zang
  • Di Jin
  • Carl Yang

Graph Neural Networks (GNNs), a type of neural network that learns from graph-structured data through neighborhood information aggregation, have shown superior performance in various downstream tasks. However, as the number of layers increases, node representations become indistinguishable, a phenomenon known as over-smoothing. To address this issue, many residual methods have emerged. In this paper, we focus on the over-smoothing issue and the related residual methods. First, we revisit over-smoothing from the perspective of overlapping neighborhood subgraphs and, based on this view, explain how residual methods alleviate over-smoothing by integrating neighborhood subgraphs of multiple orders, thereby avoiding the indistinguishability of a single high-order neighborhood subgraph. We then reveal the drawbacks of previous residual methods, such as the lack of node adaptability and the severe loss of high-order neighborhood subgraph information, and propose a Posterior-Sampling-based, Node-Adaptive Residual module (PSNR). We theoretically demonstrate that PSNR alleviates the drawbacks of previous residual methods. Furthermore, extensive experiments verify the superiority of the PSNR module in both fully observed node classification and missing-feature scenarios. Our code is available at https://github.com/jingbo02/PSNR-GNN.
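The node-adaptive residual idea can be sketched as a per-node mixing step. This is an illustrative numpy stand-in, not the PSNR module itself: in the paper the mixing coefficient is sampled from a learned posterior per node, whereas here `alpha` is just a fixed hypothetical vector.

```python
import numpy as np

def node_adaptive_residual(h_prev, h_agg, alpha):
    """Illustrative node-adaptive residual step: node i keeps a fraction
    alpha[i] of its previous representation and takes the remainder from
    the new neighborhood aggregation, so each node can resist smoothing
    to a different degree."""
    alpha = alpha[:, None]                      # broadcast over the feature dimension
    return alpha * h_prev + (1.0 - alpha) * h_agg

h_prev = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
h_agg  = np.array([[0.5, 0.5],
                   [0.5, 0.5]])                 # over-smoothed: both rows identical
alpha  = np.array([0.8, 0.2])                   # hypothetical per-node coefficients
h_next = node_adaptive_residual(h_prev, h_agg, alpha)
```

A fixed global coefficient (the non-adaptive case) would shrink all nodes toward the same aggregated vector at the same rate; per-node coefficients keep the two rows distinguishable, which is the intuition behind node adaptability above.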

NeurIPS Conference 2024 Conference Paper

DiffPhyCon: A Generative Approach to Control Complex Physical Systems

  • Long Wei
  • Peiyan Hu
  • Ruiqi Feng
  • Haodong Feng
  • Yixuan Du
  • Tao Zhang
  • Rui Wang
  • Yue Wang

Controlling the evolution of complex physical systems is a fundamental task across science and engineering. Classical techniques suffer from limited applicability or huge computational costs. Recent deep learning and reinforcement learning-based approaches, on the other hand, often struggle to optimize long-term control sequences under the constraints of system dynamics. In this work, we introduce Diffusion Physical systems Control (DiffPhyCon), a new class of methods for the physical systems control problem. DiffPhyCon excels by simultaneously minimizing both the learned generative energy function and the predefined control objectives across the entire trajectory and control sequence, so it can explore globally and plan near-optimal control sequences. Moreover, we enhance DiffPhyCon with prior reweighting, enabling the discovery of control sequences that deviate significantly from the training distribution. We test our method on three tasks: 1D Burgers' equation, 2D jellyfish movement control, and 2D high-dimensional smoke control; our generated jellyfish dataset is released as a benchmark for complex physical system control research. Our method outperforms widely applied classical approaches and state-of-the-art deep learning and reinforcement learning methods. Notably, DiffPhyCon unveils an intriguing fast-close-slow-open pattern in the jellyfish, aligning with established findings in fluid dynamics. The project website, jellyfish dataset, and code can be found at https://github.com/AI4Science-WestlakeU/diffphycon.
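The "simultaneously minimize the generative energy and the control objective over the whole sequence" idea can be sketched as a joint gradient descent. This is a hedged stand-in, not DiffPhyCon: the real method applies this guidance inside a diffusion sampler over trajectories, while here `energy_grad`, `objective_grad`, and the quadratic toy terms are all hypothetical.

```python
import numpy as np

def plan_controls(energy_grad, objective_grad, u0, lam=1.0, lr=0.1, steps=100):
    """Illustrative joint-optimization loop: descend a learned generative
    energy (keeping the control sequence plausible under the dynamics)
    together with a control objective, over the entire sequence at once
    rather than step by step."""
    u = u0.copy()
    for _ in range(steps):
        u -= lr * (energy_grad(u) + lam * objective_grad(u))
    return u

# Toy quadratic stand-ins for the two terms (hypothetical):
energy_grad = lambda u: 2.0 * (u - 1.0)        # plausible controls sit near 1
objective_grad = lambda u: 2.0 * (u - 3.0)     # the objective prefers controls near 3
u = plan_controls(energy_grad, objective_grad, u0=np.zeros(4))
```

With equal weighting the iterate settles at the compromise between the two terms (here 2.0 for every control step), illustrating why optimizing both terms jointly differs from optimizing the objective alone.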