EAAI, 2026, Journal Article
A dual-stream regional feature learning and adaptive fusion method for electroencephalogram-based emotion recognition
- Yong Yang
- Wenhao Wang
- Kaibo Shi
- Yuanlun Xie
- Nan Zhou
- Shiping Wen
- Ming Zhu
- Badong Chen
The electroencephalogram (EEG) has become a research hotspot in emotion recognition due to its high temporal resolution and its ability to directly reflect brain activity. However, few existing EEG-based emotion recognition methods integrate brain-region information into the algorithm, and those that do rarely extract the deep features of each region in full. Brain science has shown that different brain regions serve different functions and are highly correlated with the generation of emotions. In this paper, based on the division of brain regions, a dual-branch regional feature learning and adaptive fusion neural network (DRFNet) is proposed to extract the features of different brain regions and adaptively fuse them, thereby achieving accurate EEG emotion recognition. Specifically, DRFNet mainly consists of regional feature extraction modules (DB-CTFEM) and a feature fusion module (RFM). The DB-CTFEM extracts local and global regional features through a dual-branch structure comprising a convolutional neural network (CNN) and a Transformer, respectively, and then uses cross-attention to fuse the two streams into enhanced regional features. Considering the differences among brain regions, the RFM uses an attention mechanism to fuse regional features and adaptively reconstruct a global brain representation. In addition, a region loss function based on the importance of regional features is proposed to dynamically adjust the contribution weights of different brain regions, guiding the model to pay more attention to key regions. Subject-dependent experiments on the SJTU Emotion EEG Datasets (SEED, SEED-IV, SEED-V, and SEED-VII) verify the effectiveness and robustness of the proposed method.
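The two fusion steps the abstract describes, cross-attention between the CNN (local) and Transformer (global) streams, followed by attention-weighted aggregation of region features, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, dimensions, and random stand-in weight matrices are illustrative assumptions, and real DRFNet weights would be learned end to end.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, kv_feats, d_k=16, seed=0):
    """Single-head cross-attention: the query stream attends to the
    key/value stream. Projections are random stand-ins for learned weights."""
    rng = np.random.default_rng(seed)
    d = query_feats.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) / np.sqrt(d) for _ in range(3))
    Q, K, V = query_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # rows sum to 1
    return attn @ V

def fuse_regions(region_feats, seed=1):
    """Attention-style region fusion: score each region, normalize the
    scores with softmax, and return the weighted sum as a reconstructed
    global brain feature (the role the RFM plays in the paper)."""
    rng = np.random.default_rng(seed)
    d = region_feats.shape[-1]
    w = rng.standard_normal(d) / np.sqrt(d)  # stand-in scoring vector
    weights = softmax(region_feats @ w)      # one weight per region
    return weights, weights @ region_feats

rng = np.random.default_rng(42)
# toy features for one brain region: 4 electrode tokens, 32 dims each
local_feats = rng.standard_normal((4, 32))   # CNN branch (local features)
global_feats = rng.standard_normal((4, 32))  # Transformer branch (global)

# local stream queries the global stream; pool and concatenate the
# streams to form one enhanced region vector
enhanced = cross_attention(local_feats, global_feats)
region_feat = np.concatenate([local_feats.mean(0), enhanced.mean(0)])

# stack 5 such region vectors (one per assumed brain region) and fuse
regions = rng.standard_normal((5, region_feat.size))
weights, brain_feat = fuse_regions(regions)
```

The learned region weights play the same role the proposed region loss exploits: regions that receive larger weights contribute more to the reconstructed global feature, so a loss term on those weights can steer the model toward emotion-relevant regions.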