SEED Dataset
A collection of datasets for various research purposes using EEG signals
Experiment Setup
The emotion induction is based on video clips. Twelve clips were selected for each emotion except neutrality, and eight clips for the neutral emotion, resulting in a total of 80 video clips. Each clip lasted two to five minutes, and the total duration of all the clips was approximately 14,097.86 seconds.
Subjects
Twenty subjects (10 males and 10 females) aged 19 to 26 years (mean: 22.5; SD: 1.80) participated in the experiments, and recorded data are available for all of them. All participants, recruited at Shanghai Jiao Tong University, were right-handed and self-reported normal or corrected-to-normal vision and normal hearing. The participants were screened with the Eysenck Personality Questionnaire (EPQ), a widely used questionnaire developed by Eysenck to assess an individual's personality traits.
Feature Extraction
EEG Features
To mitigate the impact of noise, we first visually inspect the EEG signals and interpolate any bad channels using the MNE-Python toolbox. We then apply a bandpass filter with cutoff frequencies of 0.1 Hz and 70 Hz to remove low- and high-frequency noise, and a 50 Hz notch filter to suppress powerline interference. To reduce the computational complexity of our method, we downsample the raw EEG signals from the original sampling rate of 1000 Hz to 200 Hz. Afterward, we extract differential entropy (DE) features within each segment in five frequency bands: 1) delta: 1~4 Hz; 2) theta: 4~8 Hz; 3) alpha: 8~14 Hz; 4) beta: 14~31 Hz; and 5) gamma: 31~50 Hz.
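This pipeline can be reproduced with MNE-Python. Below is a minimal sketch, not the exact script used for the released features; the file name, the bad-channel list, and the one-second segment length are illustrative assumptions. It relies on the fact that, for a band-filtered signal that is approximately Gaussian, the DE of a segment reduces to 0.5·log(2πeσ²), where σ² is the segment variance.

```python
# A minimal preprocessing/DE sketch with MNE-Python.
# Assumptions (not from the dataset description): the example file name,
# the bad-channel list, and the 1-second segment length.
import numpy as np
import mne

raw = mne.io.read_raw_cnt('EEG_raw/1_20221001_1.cnt', preload=True)
raw.set_montage(mne.channels.read_custom_montage('src/channel_62_pos.locs'))
raw.info['bads'] = ['FP1']            # hypothetical bad channel found by visual inspection
raw.interpolate_bads()                # interpolation needs the montage set above

raw.filter(l_freq=0.1, h_freq=70.0)   # bandpass 0.1-70 Hz
raw.notch_filter(freqs=50.0)          # suppress 50 Hz powerline interference
raw.resample(200)                     # downsample 1000 Hz -> 200 Hz

BANDS = {'delta': (1, 4), 'theta': (4, 8), 'alpha': (8, 14),
         'beta': (14, 31), 'gamma': (31, 50)}
sfreq = int(raw.info['sfreq'])        # 200 after resampling
data = raw.get_data(picks='eeg')      # shape: (n_channels, n_samples)

def de_features(data, sfreq, seg_sec=1):
    """DE per band, channel, and segment: 0.5 * log(2*pi*e*var)."""
    seg = seg_sec * sfreq
    n_seg = data.shape[1] // seg
    feats = np.empty((len(BANDS), data.shape[0], n_seg))
    for b, (lo, hi) in enumerate(BANDS.values()):
        band = mne.filter.filter_data(data, sfreq, l_freq=lo, h_freq=hi,
                                      verbose=False)
        for s in range(n_seg):
            var = band[:, s * seg:(s + 1) * seg].var(axis=1)
            feats[b, :, s] = 0.5 * np.log(2 * np.pi * np.e * var)
    return feats

de = de_features(data, sfreq)
```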
Eye Movement Features
For the eye movement information collected with the Tobii Pro Fusion eye tracker, we extracted a variety of features commonly used in the literature, covering pupil diameter, fixation, saccade, and blink parameters. A detailed list of eye movement features is shown below.
Dataset Summary
- Folder EEG_features: This folder contains the DE features of the 20 participants. Feature files are named in the "subjectID.mat" format; for example, "1.mat" contains the DE features of the first subject. The keys in each .mat file are named in the "de_LDS_videoID" format. For example, the key "de_LDS_1" holds the DE features from the first video trial, smoothed by the linear dynamical system (LDS) algorithm, while the key "de_80" holds the unsmoothed DE features from the 80th video trial. A loading sketch is given after this list.
- Folder EEG_raw: This folder contains the raw EEG data (.cnt files) collected with the Neuroscan device. Raw data files are named in the "subjectID_date_sessionID.cnt" format. For example, "1_20221001_1.cnt" is the raw data of the first subject's first session. **Notice: session numbers are assigned based on the stimulus materials watched, not on recording time.**
- Folder EYE_features: This folder contains extracted eye movement features.
- Folder EYE_raw: This folder contains .tsv files exported from the eye-tracking device.
- Folder src: This folder contains two files. load_cnt_file_with_mne.py is example code for loading a .cnt file, preprocessing it, and clipping the EEG signals according to the triggers. channel_62_pos.locs is the montage file for the 62-channel EEG cap.
- Folder save_info: This folder contains two kinds of files. The first kind is named in the "subjectID_date_sessionID_trigger_info.csv" format and records the start and end times of the stimulus movie clips: trigger 1 indicates the start of a trial and trigger 2 indicates its end (see the sketch after this list). The second kind is named in the "subjectID_date_sessionID_save_info.csv" format and records the subject's feedback for each movie clip as a score from 0 to 1, indicating how successfully the targeted emotion was elicited by the video.
- File emotion_label_and_stimuli_order.xlsx: This file contains the emotion labels and stimuli orders.
- File subject info.xlsx: This file contains meta-information of the subjects.
- File Channel Order.xlsx: This file contains the channel order for DE features.
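As referenced in the EEG_features and save_info entries above, the feature files and trigger CSVs can be read with standard Python tooling. This is a minimal sketch using the example file names from the descriptions above; the printed shapes depend on the data.

```python
# A minimal sketch for reading the released files; file names are the
# examples given in the folder descriptions above.
import scipy.io as sio
import pandas as pd

# DE features of the first subject; keys follow "de_LDS_videoID" / "de_videoID".
mat = sio.loadmat('EEG_features/1.mat')
de_smoothed = mat['de_LDS_1']   # trial 1, smoothed by the LDS algorithm
de_unsmoothed = mat['de_80']    # trial 80, without smoothing
print(de_smoothed.shape, de_unsmoothed.shape)

# Trigger info: trigger 1 marks a trial start, trigger 2 a trial end.
triggers = pd.read_csv('save_info/1_20221001_1_trigger_info.csv')
print(triggers.head())
```

In practice, one would iterate over all 80 trial keys per subject and align them with the labels in emotion_label_and_stimuli_order.xlsx.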
Download
Download SEED-VII
Reference
If you find the dataset helpful for your study, please cite the following reference in your publications.
Wei-Bang Jiang, Xuan-Hao Liu, Wei-Long Zheng, and Bao-Liang Lu, "SEED-VII: A Multimodal Dataset of Six Basic Emotions with Continuous Labels for Emotion Recognition," IEEE Transactions on Affective Computing, 2024. [link] [BibTex]