Dataset Summary

The SEED dataset consists of two parts:

1. The stimuli. Fifteen Chinese film clips (positive, neutral, and negative emotions) were chosen from a pool of materials in a preliminary study. The duration of each film clip is about 4 minutes.

2. The EEG data. EEG signals and facial videos of 15 subjects were recorded while they were watching the emotional film clips. In order to investigate neural signatures and stable patterns across sessions and individuals, each subject performed the experiment in three sessions. There are 45 experiments in this dataset in total.

Stimuli

The stimuli are the emotional film clips selected for use in the experiments. The selection criteria for the film clips were as follows: (a) the whole experiment should not be too long, so that subjects do not become fatigued; (b) the videos should be understandable without explanation; and (c) the videos should elicit a single desired target emotion. In the end, 15 Chinese film clips (positive, neutral, and negative emotions) that received the highest match across participants were chosen from the pool of materials. The duration of each film clip is about 4 minutes. Each film clip was carefully edited to elicit a coherent emotion and to maximize emotional meaning. The details of the film clips used in the experiments are listed below:

There are 15 trials in each experiment. There is a 15-second hint before each clip and a 10-second feedback period after each clip. The order of presentation is arranged so that two film clips targeting the same emotion are not shown consecutively. During the feedback period, participants were asked to report their emotional reactions to each film clip by completing a questionnaire immediately after watching it. The detailed protocol is shown below:
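As a minimal sketch of this timeline (assuming a nominal 4-minute clip length, which in practice varies from clip to clip), the following Python snippet prints the approximate schedule of one session:

```python
# A rough timeline of one session: 15-second hint, ~4-minute clip,
# 10-second feedback, repeated for 15 trials. The 4-minute clip length
# is a nominal assumption; actual clip lengths vary.
HINT_S, CLIP_S, FEEDBACK_S = 15, 4 * 60, 10

t = 0
for trial in range(1, 16):
    start = t
    t += HINT_S + CLIP_S + FEEDBACK_S
    print(f"trial {trial:2d}: starts at {start / 60:.1f} min, ends at {t / 60:.1f} min")
print(f"approximate session length: {t / 60:.1f} minutes")
```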

Subject_list

This file contains the name list of the subjects in the experiments. The EEG signals of each subject were saved as separate files named with the subject's name and the date, so this file helps to identify each data file when analyzing the dataset. Fifteen subjects (7 males and 8 females; mean age: 23.27 years, SD: 2.37) participated in the experiments.
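As an illustration, a short Python sketch for enumerating the per-experiment files is given below; the `<subject>_<date>.mat` naming pattern and the directory name are assumptions for this example, not guaranteed by the dataset:

```python
# Enumerate per-experiment files; the "<subject>_<date>.mat" naming pattern
# and the directory name below are illustrative assumptions only.
from pathlib import Path

data_dir = Path("Preprocessed_EEG")  # placeholder directory name
for path in sorted(data_dir.glob("*.mat")):
    subject, _, date = path.stem.partition("_")
    print(subject, date, path)
```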

The detailed list of the subjects can be downloaded from the DOWNLOAD page.

Data_preprocessed

These files contain a downsampled, preprocessed, and segmented version of the EEG data in MATLAB (.mat) format. The data were downsampled to 200 Hz, and a 0-75 Hz bandpass filter was applied. We extracted the EEG segments corresponding to the duration of each film clip. There are 45 .mat (MATLAB) files in total, one per experiment. Each subject performed the experiment three times, with an interval of about one week. Each file contains 16 arrays: 15 arrays hold the segmented, preprocessed EEG data of the 15 trials in one experiment (eeg_1~eeg_15, channel×data), and an array named labels contains the corresponding emotional labels (-1 for negative, 0 for neutral, and +1 for positive).
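For example, one experiment file can be loaded with SciPy roughly as follows; the array names (`eeg_1` ... `eeg_15` and `labels`) follow the description above, and the file name is a placeholder:

```python
# A minimal loading sketch based on the description above; the exact array
# names inside the .mat files should be verified (inspect mat.keys()).
import scipy.io as sio

mat = sio.loadmat("path/to/one_experiment.mat")  # placeholder file name

labels = mat["labels"].flatten()  # -1 negative, 0 neutral, +1 positive
for i in range(1, 16):
    trial = mat[f"eeg_{i}"]  # shape: (channels, samples), 62 EEG channels
    print(f"trial {i:2d}: shape={trial.shape}, label={labels[i - 1]}")
```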

The detailed order of the channels can be downloaded here. The layout of the 62-channel EEG cap according to the international 10-20 system is shown below:

[Figure: layout of the 62-channel EEG cap according to the international 10-20 system]

Extracted Features

These files contain the extracted differential entropy (DE) features of the EEG signals, first proposed in [1]. These data are well suited to those who want to quickly test a classification method without processing the raw EEG data. The file format is the same as that of Data_preprocessed. We also computed differential asymmetry (DASM) and rational asymmetry (RASM) features as the differences and ratios between the DE features of 27 pairs of hemispherically asymmetric electrodes. All features were further smoothed with conventional moving-average and linear dynamic systems (LDS) approaches. For more details about feature extraction and feature smoothing, please refer to [1] and [2].
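As a hedged sketch of how such features can be computed (the band edges, window length, and filter order below are illustrative assumptions, not the exact settings behind the distributed files; see [1] and [2] for the authoritative procedure):

```python
# Differential entropy (DE) of an approximately Gaussian band-limited signal:
#   DE = 0.5 * ln(2 * pi * e * variance)   (see [1]).
# Band edges, window length, and filter order are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200  # sampling rate of the preprocessed data (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}  # assumed band edges (Hz)

def de_features(eeg, fs=FS, win_s=1.0):
    """eeg: (channels, samples) -> DE array of shape (bands, channels, windows)."""
    win = int(win_s * fs)
    n_win = eeg.shape[1] // win
    de = np.empty((len(BANDS), eeg.shape[0], n_win))
    for b, (lo, hi) in enumerate(BANDS.values()):
        bcoef, acoef = butter(4, [lo, hi], btype="band", fs=fs)
        filtered = filtfilt(bcoef, acoef, eeg, axis=1)
        for w in range(n_win):
            seg = filtered[:, w * win:(w + 1) * win]
            de[b, :, w] = 0.5 * np.log(2 * np.pi * np.e * seg.var(axis=1))
    return de

def moving_average(feat, k=5):
    """Conventional moving-average smoothing along the window (last) axis."""
    kernel = np.ones(k) / k
    return np.apply_along_axis(lambda x: np.convolve(x, kernel, mode="same"),
                               -1, feat)

# DASM and RASM are differences and ratios of DE over 27 hypothetical
# left/right electrode index pairs (indices depend on the channel-order file):
#   dasm = de[:, left_idx, :] - de[:, right_idx, :]
#   rasm = de[:, left_idx, :] / de[:, right_idx, :]
```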

References

1. Ruo-Nan Duan, Jia-Yi Zhu and Bao-Liang Lu, Differential Entropy Feature for EEG-based Emotion Classification, Proc. of the 6th International IEEE EMBS Conference on Neural Engineering (NER), 2013: 81-84.

2. Wei-Long Zheng and Bao-Liang Lu, Investigating Critical Frequency Bands and Channels for EEG-based Emotion Recognition with Deep Neural Networks, IEEE Transactions on Autonomous Mental Development, 7(3): 162-175, 2015.