Abstract:
Emotion recognition leveraging electroencephalogram (EEG) signals has emerged as a pivotal area in affective computing. However, existing approaches often overlook the interaction between EEG and other modalities. This study introduces the Bimodal Emotion Recognition Network (BERN), a framework designed to improve emotion recognition accuracy by integrating EEG and eye movement features. BERN employs a deep learning architecture tailored to the two modalities, using 3D convolution for EEG feature extraction and a refined residual connection structure for eye movement feature extraction. The model then applies a cross-modal attention mechanism and feature fusion to integrate EEG and eye movement information, substantially improving emotional state recognition. Experimental results on the SEED-IV emotion dataset show that the fusion model achieves an average accuracy of 83.33% across four classes, surpassing several strong unimodal and multimodal baselines. These findings underscore the effectiveness of combining EEG and eye movement features, which enriches the information available for emotion recognition and markedly improves the model's overall accuracy.
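To make the described pipeline concrete, the following is a minimal sketch of a BERN-style bimodal network, not the authors' implementation. The input shapes (5-band differential-entropy features on an 8x9 electrode grid, a 31-dimensional eye movement vector, both common choices for SEED-IV), the layer widths, and the single-head-query attention configuration are all assumptions for illustration; only the overall structure (3D-convolutional EEG branch, residual eye movement branch, cross-modal attention, feature fusion, four-class output) follows the abstract.

```python
# Hypothetical BERN-style sketch (not the paper's code). Shapes and sizes are assumptions.
import torch
import torch.nn as nn


class EEGBranch(nn.Module):
    """3D-convolutional EEG feature extractor over (bands, grid_h, grid_w) volumes."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> (B, 32, 1, 1, 1)
        )
        self.proj = nn.Linear(32, out_dim)

    def forward(self, x):  # x: (B, 1, bands, H, W), e.g. 5 bands on an 8x9 grid (assumed)
        return self.proj(self.conv(x).flatten(1))  # (B, out_dim)


class ResidualBlock(nn.Module):
    """Fully connected residual block for eye movement features."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return torch.relu(x + self.fc(x))  # residual (skip) connection


class BERNSketch(nn.Module):
    def __init__(self, eye_dim=31, feat_dim=128, n_classes=4):
        super().__init__()
        self.eeg = EEGBranch(feat_dim)
        self.eye_in = nn.Linear(eye_dim, feat_dim)
        self.eye_res = nn.Sequential(ResidualBlock(feat_dim), ResidualBlock(feat_dim))
        # Cross-modal attention: EEG features query the eye movement features.
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, eeg, eye):
        f_eeg = self.eeg(eeg).unsqueeze(1)                    # (B, 1, D)
        f_eye = self.eye_res(self.eye_in(eye)).unsqueeze(1)   # (B, 1, D)
        attended, _ = self.cross_attn(f_eeg, f_eye, f_eye)    # query=EEG, key/value=eye
        fused = torch.cat([attended.squeeze(1), f_eye.squeeze(1)], dim=-1)
        return self.classifier(fused)                         # logits over 4 SEED-IV classes


model = BERNSketch()
logits = model(torch.randn(2, 1, 5, 8, 9), torch.randn(2, 31))
print(logits.shape)  # torch.Size([2, 4])
```

Concatenating the attended EEG representation with the eye movement features is one plausible reading of the abstract's "feature fusion"; the paper may fuse differently (e.g., bidirectional attention or weighted summation).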
Published in: IEEE Transactions on Cognitive and Developmental Systems (Early Access)