Arrow Research search

Author name cluster

Tianlong Gu

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact matches on the name and is not a full identity-disambiguation profile.

5 papers
1 author row

Possible papers

AAAI Conference 2026 Conference Paper

FairGSE: Fairness-Aware Graph Neural Network Without High False Positive Rates

  • Zhenqiang Ye
  • Jinjie Lu
  • Tianlong Gu
  • Fengrui Hao
  • Xuemin Wang

Graph neural networks (GNNs) have emerged as the mainstream paradigm for graph representation learning due to their effective message aggregation. However, this advantage also amplifies biases inherent in graph topology, raising fairness concerns. Existing fairness-aware GNNs achieve satisfactory performance on fairness metrics such as Statistical Parity and Equal Opportunity while maintaining acceptable accuracy trade-offs. Unfortunately, we observe that this pursuit of fairness metrics neglects the GNN's ability to predict negative labels, leaving its predictions with extremely high False Positive Rates (FPRs) and causing harmful effects in high-risk scenarios. To this end, we advocate that classification performance should be carefully calibrated while improving fairness, rather than simply constraining accuracy loss. Furthermore, we propose Fair GNN via Structural Entropy (FairGSE), a novel framework that maximizes two-dimensional structural entropy (2D-SE) to improve fairness without neglecting false positives. Experiments on several real-world datasets show that FairGSE reduces FPR by 39% compared with state-of-the-art fairness-aware GNNs while achieving comparable fairness improvement.
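The quantities this abstract contrasts are all standard: FPR measures how often true negatives are predicted positive, while Statistical Parity and Equal Opportunity compare prediction rates across a sensitive group. A minimal sketch of how they are computed over binary labels and a binary sensitive attribute (function names are mine, not from the paper):

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): fraction of true negatives predicted positive."""
    neg = y_true == 0
    return np.mean(y_pred[neg] == 1)

def statistical_parity_diff(y_pred, group):
    """|P(y_hat=1 | g=0) - P(y_hat=1 | g=1)| for a binary sensitive attribute."""
    return abs(np.mean(y_pred[group == 0]) - np.mean(y_pred[group == 1]))

def equal_opportunity_diff(y_true, y_pred, group):
    """Gap in true positive rates between the two sensitive groups."""
    tpr = lambda g: np.mean(y_pred[(y_true == 1) & (group == g)] == 1)
    return abs(tpr(0) - tpr(1))
```

A classifier can score well on both fairness gaps while its FPR stays high, which is exactly the failure mode the abstract points out.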

TAAS Journal 2025 Journal Article

Credible Negotiation for Multi-agent Reinforcement Learning in Long-term Coordination

  • Tianlong Gu
  • Taihang Zhi
  • Xuguang Bao
  • Liang Chang

Coordinating multiple agents is one of the critical problems in Multi-agent Reinforcement Learning (MARL). Traditional MARL methods focus on finding, for all agents, a stochastically acceptable solution called a Nash Equilibrium (NE) in Markov Games where multiple equilibria exist. However, learning a fair equilibrium is crucial for the sustainability and stability of collaboration in long-term coordination games, especially when leadership competition exists. In this article, we propose the bi-level reinforcement learning method N-Bi-AC, whose solution is a Pareto improvement over the traditional NE, to choose a fair equilibrium. Our method has two parts: first, we propose a Negotiator to determine the leader in each stage game; second, we update the agents' Q-values in the game with a bi-level actor-critic learning method based on the Joint Mixed Strategy Equilibrium Q-learning algorithm (JMSE Q-learning). A convergence proof is given, and the learning algorithm is compared with state-of-the-art algorithms. We find that the proposed N-Bi-AC method successfully converges to a fair NE, which guarantees fairness among agents in different matrix-game environments.
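The problem of choosing a fair equilibrium when a game admits several can be illustrated on a plain matrix game. The sketch below is standard game theory, not the paper's N-Bi-AC algorithm: it enumerates pure-strategy NE of a two-player bimatrix game and, as one simple fairness criterion, picks the equilibrium with the smallest payoff gap between the players.

```python
import numpy as np

def pure_nash_equilibria(A, B):
    """All pure-strategy NE of a bimatrix game: cells (i, j) where neither
    player can gain by deviating unilaterally. A is the row player's payoff
    matrix, B the column player's."""
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
    return eqs

def fairest_equilibrium(A, B):
    """Among multiple NE, choose the one minimizing the payoff gap |A - B|."""
    return min(pure_nash_equilibria(A, B), key=lambda e: abs(A[e] - B[e]))
```

In a game like A = [[3,0],[0,1]], B = [[1,0],[0,1]], both (0,0) and (1,1) are equilibria, but (1,1) gives both players equal payoff and is the fair choice under this criterion.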

AAAI Conference 2025 Conference Paper

Learning from Mistakes: Self-correct Adversarial Training for Chinese Unnatural Text Correction

  • Xuan Feng
  • Tianlong Gu
  • Xiaoli Liu
  • Liang Chang

Unnatural text correction aims to automatically detect and correct spelling errors or adversarial perturbation errors in sentences. Existing methods typically rely on fine-tuning or adversarial training to correct errors and have achieved significant success. However, these methods exhibit poor generalization due to the difference in data distribution between training data and real-world scenarios, known as the exposure bias problem. In this paper, we propose a self-correct adversarial training framework for learning from mistakes (LIMIT), a task- and model-independent framework for correcting unnatural errors or mistakes. Specifically, we fully utilize errors generated by the model that are actively exposed during the inference phase, i.e., predictions that are inconsistent with the target. This training method not only simulates potential errors in real application scenarios but also mitigates the exposure bias of the traditional training process. Meanwhile, we design a novel decoding intervention strategy to maintain semantic consistency. Extensive experiments on Chinese unnatural text correction datasets show that our method corrects multiple forms of errors and outperforms state-of-the-art text correction methods. In addition, results on Chinese and English datasets validate that LIMIT can serve as a plug-and-play defense module and extends to new models and datasets without further training.
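The learning-from-mistakes idea the abstract describes can be sketched generically: run the model on its own training inputs, and keep each prediction that disagrees with the target as a new (erroneous text, correct text) training pair for the next round. All names below are hypothetical illustrations, not LIMIT's actual API:

```python
def collect_mistakes(model, pairs):
    """One round of self-exposed error mining: apply the model to each source
    sentence and keep (model_output, target) pairs wherever the output
    disagrees with the target, so later training corrects the model's own
    mistakes rather than only synthetic ones."""
    mistakes = []
    for src, tgt in pairs:
        pred = model(src)
        if pred != tgt:
            mistakes.append((pred, tgt))
    return mistakes
```

The new pairs reflect the model's real error distribution at inference time, which is the mechanism the abstract credits for mitigating exposure bias.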

AAAI Conference 2025 Conference Paper

Modeling Inter-Intra Heterogeneity for Graph Federated Learning

  • Wentao Yu
  • Shuo Chen
  • Yongxin Tong
  • Tianlong Gu
  • Chen Gong

Heterogeneity is a fundamental and challenging issue in federated learning, especially for graph data, due to the complex relationships among graph nodes. To deal with this heterogeneity, many existing methods perform weighted federation based on similarities calculated between pairwise clients (i.e., subgraphs). However, the inter-subgraph similarities estimated from the outputs of local models are less reliable, because those final outputs may not comprehensively represent the real distribution of the subgraph data. In addition, these methods ignore the critical intra-heterogeneity that usually exists within each subgraph itself. To address these issues, we propose a novel Federated learning method that integrally models the Inter-Intra Heterogeneity (FedIIH). For the inter-subgraph relationship, we propose a novel hierarchical variational model to infer the whole distribution of subgraph data in a multi-level form, so that we can accurately characterize inter-subgraph similarities from a global perspective. For the intra-heterogeneity, we disentangle each subgraph into multiple latent factors and partition the model parameters into multiple parts, where each part corresponds to a single latent factor. FedIIH not only properly computes the distribution similarities between subgraphs but also learns disentangled representations that are robust to irrelevant factors within subgraphs, thereby considering inter- and intra-heterogeneity simultaneously. Extensive experiments on six homophilic and five heterophilic graph datasets, in both non-overlapping and overlapping settings, demonstrate the effectiveness of our method compared with eight state-of-the-art methods. Specifically, FedIIH outperforms the second-best method by a large margin of 5.79% on average across all heterophilic datasets.
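The "weighted federation" baseline the abstract builds on can be sketched as a similarity-weighted parameter average: each client's next model is a mixture of all clients' parameters, weighted by pairwise similarity. The similarity matrix is taken as given here, whereas FedIIH's contribution is precisely a more reliable way to estimate it; names and shapes below are illustrative.

```python
import numpy as np

def weighted_federation(client_params, sims):
    """Similarity-weighted aggregation for federated clients.
    client_params: (n_clients, n_params) array of flattened local models.
    sims: (n_clients, n_clients) nonnegative pairwise similarity matrix.
    Returns one aggregated parameter vector per client (row i mixes all
    clients' parameters with client i's row-normalized similarities)."""
    W = sims / sims.sum(axis=1, keepdims=True)  # normalize rows to sum to 1
    return W @ client_params
```

With uniform similarities this reduces to plain FedAvg; skewed similarities pull each client's model toward its most similar peers, which is why unreliable similarity estimates degrade the federation.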