Differential Entropy Feature for EEG-based Emotion Classification

Ruo-Nan Duan, Jia-Yi Zhu and Bao-Liang Lu, 2013

Abstract: EEG-based emotion recognition has been studied for a long time. In this paper, a new and effective EEG feature named differential entropy (DE) is proposed to represent the characteristics associated with emotional states. DE and its combinations on symmetrical electrodes (differential asymmetry, DASM, and rational asymmetry, RASM) are compared with the traditional frequency-domain feature, energy spectrum (ES). The average classification accuracies using the DE, DASM, RASM, and ES features on EEG data collected in our experiment are 84.22%, 80.96%, 83.28%, and 76.56%, respectively. These results indicate that DE is better suited for emotion recognition than the traditional ES feature. It is also confirmed that EEG signals in the gamma frequency band relate to emotional states more closely than those in the other frequency bands. A feature smoothing method, the linear dynamical system (LDS), and a feature selection algorithm, minimal-redundancy-maximal-relevance (mRMR), further improve the accuracy and efficiency of EEG-based emotion classifiers.
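
For readers unfamiliar with the feature, the sketch below (not the authors' code; the channel pairing, frequency bands, and sampling rate are illustrative assumptions) shows how DE can be computed from band-pass-filtered EEG under a Gaussian assumption, and how DASM and RASM combine DE values on symmetric electrode pairs.

```python
# Minimal sketch of DE / DASM / RASM feature extraction (illustrative, not the authors' code).
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, low, high, order=4):
    """Band-pass filter one EEG channel (x: 1-D array, fs: sampling rate in Hz)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def differential_entropy(x):
    """DE of a signal assumed Gaussian: 0.5 * log(2 * pi * e * variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def de_features(eeg, fs,
                bands={"delta": (1, 3), "theta": (4, 7), "alpha": (8, 13),
                       "beta": (14, 30), "gamma": (31, 50)}):
    """eeg: (n_channels, n_samples). Returns dict: band name -> (n_channels,) DE values."""
    return {name: np.array([differential_entropy(bandpass(ch, fs, lo, hi)) for ch in eeg])
            for name, (lo, hi) in bands.items()}

def dasm_rasm(de_band, left_idx, right_idx):
    """DASM (difference) and RASM (ratio) on symmetric electrode pairs (pairing is illustrative)."""
    left, right = de_band[left_idx], de_band[right_idx]
    return left - right, left / right

# Example usage with random data standing in for a 62-channel, 5-second segment at 200 Hz:
# de = de_features(np.random.randn(62, 1000), fs=200)
```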


Investigating Critical Frequency Bands and Channels for EEG-based Emotion Recognition with Deep Neural Networks

Wei-Long Zheng and Bao-Liang Lu, 2015

Abstract: To investigate critical frequency bands and channels, this paper introduces deep belief networks (DBNs) to construct EEG-based emotion recognition models for three emotions: positive, neutral, and negative. We develop an EEG dataset acquired from 15 subjects. Each subject performs the experiment twice, at an interval of a few days. DBNs are trained with differential entropy features extracted from multichannel EEG data. We examine the weights of the trained DBNs and investigate the critical frequency bands and channels. Four different profiles of 4, 6, 9, and 12 channels are selected. The recognition accuracies of these four profiles are relatively stable, with the best accuracy of 86.65%, which is even better than that of the original 62 channels. The critical frequency bands and channels determined from the weights of the trained DBNs are consistent with existing observations. In addition, our experimental results show that neural signatures associated with different emotions do exist and that they share commonality across sessions and individuals. We also compare the performance of deep models with shallow models. The average accuracies of DBN, SVM, LR, and KNN are 86.08%, 83.99%, 82.70%, and 72.60%, respectively.
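
As a point of reference, the following sketch reproduces only the shallow baselines named in the abstract (SVM, LR, KNN) with scikit-learn; the DE feature matrix, labels, and dimensions are placeholder assumptions, not the authors' data or pipeline.

```python
# Minimal sketch of the shallow baselines (SVM, LR, KNN) on DE features.
# X and y below are random placeholders standing in for real DE features and emotion labels.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X = np.random.randn(300, 310)      # placeholder: 62 channels x 5 bands of DE features
y = np.random.randint(0, 3, 300)   # placeholder labels: 0=negative, 1=neutral, 2=positive

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0)),
    "LR":  make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```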


A Multimodal Approach to Estimating Vigilance Using EEG and Forehead EOG

Wei-Long Zheng and Bao-Liang Lu, 2017

Abstract: Objective. Covert aspects of ongoing user mental states provide key context information for user-aware human-computer interactions. In this paper, we focus on the problem of estimating the vigilance of users using EEG and EOG signals. Approach. The PERCLOS index, used as the vigilance annotation, is obtained from eye-tracking glasses. To improve the feasibility and wearability of vigilance estimation devices for real-world applications, we adopt a novel electrode placement for forehead EOG and extract various eye movement features, which contain the principal information of traditional EOG. We explore the effects of EEG from different brain areas and combine EEG and forehead EOG to leverage their complementary characteristics for vigilance estimation. Considering that the vigilance of users is a dynamically changing process, because the intrinsic mental states of users involve temporal evolution, we introduce continuous conditional neural field and continuous conditional random field models to capture dynamic temporal dependency. Main results. We propose a multimodal approach to estimating vigilance by combining EEG and forehead EOG and incorporating the temporal dependency of vigilance into model training. The experimental results demonstrate that modality fusion can improve the performance compared with a single modality, that EOG and EEG contain complementary information for vigilance estimation, and that the temporal dependency-based models can enhance the performance of vigilance estimation. From the experimental results, we observe that theta and alpha frequency activities increase, while gamma frequency activities decrease, in drowsy states in contrast to awake states. Significance. The forehead setup allows for the simultaneous collection of EEG and EOG and achieves comparable performance using only four shared electrodes in comparison with the temporal and posterior sites.
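
The sketch below illustrates only the overall shape of such a pipeline, with a support vector regressor and moving-average smoothing standing in for the continuous conditional neural field / random field models used in the paper; all feature dimensions and the PERCLOS sequence are placeholders.

```python
# Minimal sketch: fused EEG + forehead-EOG features regressed onto PERCLOS,
# with moving-average smoothing as a crude stand-in for temporal dependency modelling.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def moving_average(x, k=5):
    """Smooth a prediction sequence over k neighbouring windows."""
    return np.convolve(x, np.ones(k) / k, mode="same")

n = 800                                       # placeholder number of time windows
eeg_feat = np.random.randn(n, 125)            # placeholder EEG DE features
eog_feat = np.random.randn(n, 36)             # placeholder forehead-EOG eye-movement features
perclos = np.clip(np.random.rand(n), 0, 1)    # placeholder vigilance annotation in [0, 1]

X = np.hstack([eeg_feat, eog_feat])           # feature-level fusion of the two modalities
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
model.fit(X[: n // 2], perclos[: n // 2])               # first half as training data
pred = moving_average(model.predict(X[n // 2:]))        # smooth the test predictions
print("RMSE:", np.sqrt(np.mean((pred - perclos[n // 2:]) ** 2)))
```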


EmotionMeter: A Multimodal Framework for Recognizing Human Emotions

Wei-Long Zheng, Wei Liu, Yifei Lu, Bao-Liang Lu, and Andrzej Cichocki, 2019

Abstract: In this paper, we present a multimodal emotion recognition framework called EmotionMeter that combines brain waves and eye movements. To increase the feasibility and wearability of EmotionMeter in real-world applications, we design a six-electrode placement above the ears to collect electroencephalography (EEG) signals. We combine EEG and eye movements to integrate the internal cognitive states and external subconscious behaviors of users and improve the recognition accuracy of EmotionMeter. The experimental results demonstrate that modality fusion with multimodal deep neural networks can significantly enhance the performance compared with a single modality, and the best mean accuracy of 85.11% is achieved for four emotions (happy, sad, fear, and neutral). We explore the complementary characteristics of EEG and eye movements in terms of their representational capacities and find that EEG has the advantage in classifying the happy emotion, whereas eye movements outperform EEG in recognizing the fear emotion. To investigate the stability of EmotionMeter over time, each subject performs the experiment three times on different days. EmotionMeter obtains a mean recognition accuracy of 72.39% across sessions with the six-electrode EEG and eye movement features. These experimental results demonstrate the effectiveness of EmotionMeter both within and between sessions.
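
The following sketch shows one simple form of feature-level fusion of EEG and eye-movement features for the four emotion classes; it is not the EmotionMeter implementation, and a plain multilayer perceptron stands in for the multimodal deep neural networks used in the paper. Feature dimensions and labels are illustrative.

```python
# Minimal sketch: concatenate EEG and eye-movement features and classify four emotions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

n = 600
eeg = np.random.randn(n, 30)    # placeholder: 6 electrodes x 5 bands of DE features
eye = np.random.randn(n, 33)    # placeholder eye-movement features from the eye tracker
y = np.random.randint(0, 4, n)  # 0=happy, 1=sad, 2=fear, 3=neutral (ordering arbitrary)

X = np.hstack([eeg, eye])       # feature-level fusion of the two modalities
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```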


Comparing Recognition Performance and Robustness of Multimodal Deep Learning Models for Multimodal Emotion Recognition

Wei Liu, Jie-Lin Qiu, Wei-Long Zheng and Bao-Liang Lu, 2021

Abstract: Multimodal signals are powerful for emotion recognition since they can represent emotions comprehensively. In this paper, we compare the recognition performance and robustness of two multimodal emotion recognition models: deep canonical correlation analysis (DCCA) and the bimodal deep autoencoder (BDAE). The contributions of this paper are threefold: 1) we propose two methods for extending the original DCCA model to multimodal fusion: weighted-sum fusion and attention-based fusion; 2) we systematically compare the performance of DCCA, BDAE, and traditional approaches on five multimodal datasets; and 3) we investigate the robustness of DCCA, BDAE, and traditional approaches on the SEED-V and DREAMER datasets under two conditions: adding noise to multimodal features and replacing EEG features with noise. Our experimental results demonstrate that DCCA achieves state-of-the-art recognition results on all five datasets: 94.6% on the SEED dataset, 87.5% on the SEED-IV dataset, 84.3% and 85.6% on the DEAP dataset, 85.3% on the SEED-V dataset, and 89.0%, 90.6%, and 90.7% on the DREAMER dataset. Meanwhile, DCCA shows greater robustness when various amounts of noise are added to the SEED-V and DREAMER datasets. By visualizing the features before and after the DCCA transformation on the SEED-V dataset, we find that the transformed features are more homogeneous and discriminative across emotions.
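
To make the fusion idea concrete, the sketch below uses linear CCA from scikit-learn as a stand-in for DCCA and applies the weighted-sum fusion described in the abstract; the fusion weight, feature shapes, and labels are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch: project EEG and eye-movement features into a shared space with
# linear CCA (standing in for DCCA), fuse them with a weighted sum, then classify.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

n = 400
eeg = np.random.randn(n, 310)   # placeholder EEG DE features
eye = np.random.randn(n, 33)    # placeholder eye-movement features
y = np.random.randint(0, 5, n)  # placeholder labels (e.g. five emotion classes)

cca = CCA(n_components=20)
eeg_c, eye_c = cca.fit_transform(eeg, eye)   # project both views into a shared space

alpha = 0.5                                  # weighted-sum fusion of the two views
fused = alpha * eeg_c + (1 - alpha) * eye_c

X_tr, X_te, y_tr, y_te = train_test_split(fused, y, test_size=0.3, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```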


Identifying similarities and differences in emotion recognition with EEG and eye movements among Chinese, German, and French People

Wei Liu, Wei-Long Zheng, Ziyi Li, Si-Yuan Wu, Lu Gan and Bao-Liang Lu, 2022

Abstract: Objective. Cultures have essential influences on emotions. However, most studies on cultural influences on emotions are in the areas of psychology and neuroscience, while existing affective models are mostly built with data from a single culture. In this paper, we identify the similarities and differences among Chinese, German, and French individuals in emotion recognition with electroencephalogram (EEG) and eye movements from an affective computing perspective. Approach. Three experimental settings are designed: intra-culture subject-dependent, intra-culture subject-independent, and cross-culture subject-independent. EEG and eye movements are acquired simultaneously from Chinese, German, and French subjects while they watch positive, neutral, and negative movie clips. The affective models for Chinese, German, and French subjects are constructed using machine learning algorithms. A systematic analysis is performed from four aspects: affective model performance, neural patterns, complementary information from different modalities, and cross-cultural emotion recognition. Main results. From the emotion recognition accuracies, we find that EEG and eye movements can adapt to the cultural diversities of Chinese, German, and French subjects and that a cultural in-group advantage phenomenon does exist in emotion recognition with EEG. From the EEG topomaps, we find that the γ and β bands exhibit decreasing activities for Chinese subjects, while for German and French subjects the θ and α bands exhibit increasing activities. From the confusion matrices and attentional weights, we find that EEG and eye movements have complementary characteristics. From a cross-cultural emotion recognition perspective, we observe that German and French subjects share more similarities in topographical patterns and attentional weight distributions than Chinese subjects, and that the data from Chinese subjects serve well as test data but are not suitable as training data for the other two cultures. Significance. Our experimental results provide concrete evidence of the in-group advantage phenomenon, of cultural influences on emotion recognition, and of the different neural patterns among Chinese, German, and French individuals.
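
A minimal sketch of the cross-culture subject-independent setting described in the abstract is given below: a classifier trained on data from one culture is evaluated on the others. All features, labels, group sizes, and the choice of classifier are placeholders rather than the actual SEED data or the authors' models.

```python
# Minimal sketch of a cross-culture evaluation loop (illustrative placeholders only).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def make_group(n=200, dim=343):
    """Placeholder EEG + eye-movement features and 3-class emotion labels for one culture."""
    return np.random.randn(n, dim), np.random.randint(0, 3, n)

cultures = {c: make_group() for c in ["Chinese", "German", "French"]}

for train_culture, (X_tr, y_tr) in cultures.items():
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear")).fit(X_tr, y_tr)
    for test_culture, (X_te, y_te) in cultures.items():
        if test_culture == train_culture:
            continue
        print(f"train on {train_culture}, test on {test_culture}: "
              f"accuracy {clf.score(X_te, y_te):.3f}")
```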


References

If you find the dataset helpful for your study, please cite the following references in your publications.

1. Ruo-Nan Duan, Jia-Yi Zhu, and Bao-Liang Lu, Differential Entropy Feature for EEG-based Emotion Classification, Proceedings of the 6th International IEEE EMBS Conference on Neural Engineering (NER), 2013: 81-84. [link] [BibTex]

2. Wei-Long Zheng and Bao-Liang Lu, Investigating Critical Frequency Bands and Channels for EEG-based Emotion Recognition with Deep Neural Networks, IEEE Transactions on Autonomous Mental Development (IEEE TAMD), 7(3): 162-175, 2015. [link] [BibTex]

3. Wei-Long Zheng and Bao-Liang Lu, A multimodal approach to estimating vigilance using EEG and forehead EOG. Journal of Neural Engineering, 14(2): 026017, 2017. [link] [BibTex]

4. Wei-Long Zheng, Wei Liu, Yifei Lu, Bao-Liang Lu, and Andrzej Cichocki, EmotionMeter: A Multimodal Framework for Recognizing Human Emotions, IEEE Transactions on Cybernetics, 49(3): 1110-1122, 2019. DOI: 10.1109/TCYB.2018.2797176. [link] [BibTex]

5. Wei Liu, Jie-Lin Qiu, Wei-Long Zheng and Bao-Liang Lu, Comparing Recognition Performance and Robustness of Multimodal Deep Learning Models for Multimodal Emotion Recognition, IEEE Transactions on Cognitive and Developmental Systems, 2021. [link] [BibTex]

6. Wei Liu, Wei-Long Zheng, Ziyi Li, Si-Yuan Wu, Lu Gan and Bao-Liang Lu, Identifying similarities and differences in emotion recognition with EEG and eye movements among Chinese, German, and French People, Journal of Neural Engineering, 19(2): 026012, 2022. [link] [BibTex]