EAAI 2026 Journal Article
An interpretable causal invariant graph neural network for unseen domain gear fault diagnosis
- Zhenpeng Lao
- Gang Chen
- Yiyue Zhang
- Penghong Lu
- Zhenzhen Jin
In recent years, causal learning has opened promising avenues for revealing the internal causal relationships of equipment and for improving the explainability of intelligent diagnostic models. However, existing methods still struggle to eliminate spurious causal correlations in high-dimensional data and offer insufficient explainability, leading to unstable and unreliable diagnostic performance in unseen domains. To address these problems, an interpretable fault diagnosis method based on a causal invariant graph neural network (CIGNN) is proposed to enhance the model's accuracy and interpretability for gears in unseen domains. Firstly, a structural causal model is constructed from a cross-domain perspective and combined with a GNN to clarify the internal causal mechanism of faults. Then, a causal disentanglement refining module is proposed to separate the effective causal components from the high-dimensional, complex GNN representations. Furthermore, a domain causal feature consistency method is proposed to guide CIGNN in learning consistent causal feature embeddings across multi-source domains. Finally, a causal intervention risk minimization strategy is introduced to enable CIGNN to mine deep latent features and block the interference of backdoor paths, enhancing diagnostic stability. Experimental results show that the proposed CIGNN model performs robustly in unseen-domain diagnosis tasks and provides interpretable explanations for decision-making in engineering applications.