EAAI Journal 2026 Journal Article
A deep learning-based imaging classification framework for interstitial lung disease
- Hongyi Wang
- Anqi Liu
- Xiaoyan Yang
- Yifei Ni
- Jianping Wang
- Jie Du
- Yuhui Qiang
- Bingbing Xie
Author name cluster
Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.
EAAI Journal 2026 Journal Article
JBHI Journal 2025 Journal Article
Traditional Chinese medicine (TCM) herb recommendations aim to personalize herb combinations for specific symptom profiles in accordance with TCM compatibility principles, thereby ensuring optimal therapeutic efficacy. However, existing approaches often fail to account for the varying intensity of complex multi-relational interactions among herbs, therapeutic effects, and symptoms. This limitation hampers effective hybrid feature fusion across multi-source herbal knowledge and constrains overall recommendation performance. To address these challenges, we propose Multi-Relational Hierarchical Attention with Hybrid Knowledge Fusion (MRHAF), a novel framework designed to improve both predictive accuracy and interpretability by modeling latent relationships among therapeutic effects, herbs, and symptoms, as well as the material basis underlying herbal efficacy. MRHAF consists of three core components: (1) a global Herb-Efficacy-Symptom Knowledge Graph (HESKG), which applies multi-head attention to capture global semantic information; (2) a Herb-Symptom Interaction Graph (HSIG), which leverages self-attention to model direct therapeutic associations; and (3) a molecular-level Herb-Attribute-Component Knowledge Graph (HACKG), which integrates explicit attributes and implicit biochemical information to establish the material basis of efficacy. Additionally, we integrate global semantic features and local interaction features through a dual-branch attention architecture. Extensive experiments on two benchmark datasets demonstrate that MRHAF outperforms state-of-the-art baselines, achieving improvements of 9.75% and 22.3% in Precision@10, respectively. Clinical evaluations confirm that MRHAF effectively captures TCM formulation principles and delivers reliable recommendation outcomes, while network pharmacology analyses further validate the rationality of the recommended herbs.
Overall, this study provides a new perspective on herbal compatibility and offers valuable guidance for clinical decision-making in TCM.
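As a rough illustration of the attention-based fusion idea described in this abstract, the sketch below pools features with scaled dot-product attention and then fuses a global-branch vector with a local-branch vector. This is a minimal toy, not MRHAF's actual implementation; the function names, the fixed `gate` parameter, and the plain-list vectors are all assumptions for illustration (in practice the gate and projections would be learned).

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(query, keys, values):
    """Scaled dot-product attention: pool the value vectors weighted by
    the similarity of each key to the query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

def dual_branch_fusion(global_feat, local_feat, gate=0.5):
    """Convex combination of a global (knowledge-graph) branch output and
    a local (interaction-graph) branch output; `gate` is hypothetical and
    would be learned in a real model."""
    return [gate * g + (1 - gate) * l for g, l in zip(global_feat, local_feat)]
```

With `gate=0.5` the fusion is a simple average of the two branch outputs; a learned gate lets the model weight global semantics against local interactions per prediction.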
TIST Journal 2021 Journal Article
Data-driven models are becoming essential parts of modern mechanical systems, where they are commonly used to capture the behavior of diverse equipment under varying environmental conditions. Despite their excellent adaptivity to highly dynamic and aging equipment, these models are usually hungry for massive labels, mostly contributed by human engineers at high cost. Fortunately, domain adaptation improves model generalization by utilizing labeled source data together with unlabeled target data. However, mainstream domain adaptation methods cannot achieve ideal performance on time series data, since they assume that the conditional distributions are equal across domains. This assumption works well for static data but is inapplicable to time series, where even a first-order Markov assumption implies dependence between any two consecutive time steps. In this article, we instead assume that the causal mechanism is invariant and present our Causal Mechanism Transfer Network (CMTN) for time series domain adaptation. By capturing the causal mechanisms of time series data, CMTN allows data-driven models to exploit existing data and labels from similar systems, so that the resulting model on a new system is highly reliable even with limited data. We report empirical results and lessons learned from two real-world case studies, on chiller plant energy optimization and boiler fault detection, in which CMTN outperforms the existing state-of-the-art methods.
AAAI Conference 2021 Conference Paper
Domain adaptation on time series data is an important but challenging task. Most existing works in this area learn a domain-invariant representation of the data with the help of restrictions such as MMD. However, extracting such a domain-invariant representation is non-trivial for time series data because of the complex dependence among timestamps: in fully dependent time series, a small change in time lags or offsets can make domain-invariant extraction difficult. Fortunately, the stability of causality inspired us to explore the domain-invariant structure of the data. To reduce the difficulty of causal structure discovery, we relax it to a sparse associative structure and propose a novel sparse associative structure alignment model for domain adaptation. First, we generate a segment set to remove the obstacle of offsets. Second, intra-variable and inter-variable sparse attention mechanisms are devised to extract the associative structure of time-series data while accounting for time lags. Finally, associative structure alignment is used to guide the transfer of knowledge from the source domain to the target one. Experimental studies not only verify the good performance of our method on three real-world datasets but also provide insightful discoveries about the transferred knowledge.
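The two ideas in this abstract, sparsifying attention into an associative structure and aligning that structure across domains, can be sketched in a few lines. This is a simplified illustration under my own assumptions (top-k sparsification and an L1 alignment penalty), not the paper's actual model; both function names are hypothetical.

```python
import math

def sparse_attention(scores, k=2):
    """Keep only the top-k attention scores per row and renormalize with
    softmax, yielding a sparse associative structure (a relaxation of a
    causal graph among variables)."""
    out = []
    for row in scores:
        top = sorted(range(len(row)), key=lambda j: row[j], reverse=True)[:k]
        exps = {j: math.exp(row[j]) for j in top}
        z = sum(exps.values())
        out.append([exps.get(j, 0.0) / z for j in range(len(row))])
    return out

def structure_alignment_loss(src, tgt):
    """L1 discrepancy between source- and target-domain attention
    structures, usable as a regularizer that encourages both domains to
    share one associative structure."""
    return sum(abs(a - b) for rs, rt in zip(src, tgt) for a, b in zip(rs, rt))
```

Because only k entries per row survive, each variable is associated with a small set of others, and the alignment loss reaches zero exactly when both domains induce the same structure.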
YNIMG Journal 2021 Journal Article
IJCAI Conference 2018 Conference Paper
Machine translation is going through a radical revolution, driven by the explosive development of deep learning techniques based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). In this paper, we consider a special case of machine translation: converting natural language into Structured Query Language (SQL) for data retrieval over relational databases. Although generic CNNs and RNNs can learn the grammar structure of SQL when trained with sufficient samples, the accuracy and training efficiency of the model can be dramatically improved when the translation model is deeply integrated with the grammar rules of SQL. We present a new encoder-decoder framework with a suite of new approaches, including new semantic features fed into the encoder, grammar-aware states injected into the decoder's memory, and recursive state management for sub-queries. These techniques help the neural network focus on understanding the semantics of operations in natural language and save it the effort of learning SQL grammar. An empirical evaluation on real-world databases and queries shows that our approach outperforms the state-of-the-art solution by a significant margin.
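One common way to integrate grammar rules into a neural decoder, which may give intuition for the grammar-aware states mentioned in this abstract, is to mask the decoder's candidate tokens at each step by what the grammar allows. The sketch below is a toy along those lines; the grammar fragment and all names are illustrative assumptions, not the paper's actual rule set or architecture.

```python
# Toy SQL grammar fragment: maps a token to the set of tokens the
# grammar permits immediately after it. Real systems use a full grammar.
GRAMMAR = {
    "START":    {"SELECT"},
    "SELECT":   {"col"},
    "col":      {"FROM", ",", "WHERE", "END"},
    ",":        {"col"},
    "FROM":     {"table"},
    "table":    {"WHERE", "END"},
    "WHERE":    {"col_pred"},
    "col_pred": {"END"},
}

def grammar_mask(prev_token, candidate_scores):
    """Drop candidates the grammar forbids after `prev_token`, so the
    network never emits syntactically invalid SQL."""
    allowed = GRAMMAR.get(prev_token, set())
    return {tok: s for tok, s in candidate_scores.items() if tok in allowed}

def greedy_decode(scores_per_step):
    """Greedy decoding under the grammar mask; in practice the per-step
    scores would come from the trained decoder network."""
    prev, out = "START", []
    for scores in scores_per_step:
        masked = grammar_mask(prev, scores)
        prev = max(masked, key=masked.get)
        out.append(prev)
    return out
```

The benefit claimed by such integration is that the model's capacity goes into mapping natural-language semantics to operations rather than into memorizing SQL syntax.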