Experiment Setup

Seventy-two film clips were carefully chosen through a preliminary study; each clip tends to induce a happy, sad, fearful, or neutral emotion.
A total of 15 subjects participated in the experiment. Each participant performed three sessions on different days, and each session contained 24 trials. In each trial, the participant watched one of the film clips while his or her EEG signals and eye movements were recorded with the 62-channel ESI NeuroScan System and SMI eye-tracking glasses. A schematic diagram of one trial is shown below.
[Figure: schematic diagram of one trial]
The experiment scene and the corresponding EEG electrode placement are shown in the following figures.
[Figure: experiment scene]
[Figure: EEG electrode placement]

Feature Extraction

Each session is sliced into 4-second non-overlapping segments. Each segment is regarded as one data sample during model training.
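A minimal sketch of this slicing step, assuming the signals of one trial are available as a NumPy array of shape (channels, samples); the sampling rate is not stated here, so it is passed in as a parameter:

```python
import numpy as np

def slice_into_segments(eeg, fs, seg_seconds=4):
    """Slice one (n_channels, n_samples) trial into non-overlapping segments.

    fs is the sampling rate in Hz (pass the dataset's actual rate).
    Returns an array of shape (n_segments, n_channels, seg_len);
    any incomplete tail segment is dropped.
    """
    seg_len = seg_seconds * fs
    n_segments = eeg.shape[1] // seg_len
    trimmed = eeg[:, :n_segments * seg_len]
    return trimmed.reshape(eeg.shape[0], n_segments, seg_len).swapaxes(0, 1)
```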

EEG Features

For the EEG signals, we extract power spectral density (PSD) and differential entropy (DE) features within each segment in five frequency bands: 1) delta: 1–4 Hz; 2) theta: 4–8 Hz; 3) alpha: 8–14 Hz; 4) beta: 14–31 Hz; and 5) gamma: 31–50 Hz. For a random variable $X$ with probability density function $f(x)$, PSD and DE are computed as

$$\mathrm{PSD} = \mathbb{E}\left[X^{2}\right], \qquad h(X) = -\int f(x)\,\log f(x)\,dx.$$
We assume the EEG signals obey a Gaussian distribution, $X \sim N(\mu, \sigma^{2})$. The calculation of the DE features then simplifies to

$$h(X) = \frac{1}{2}\log\left(2\pi e \sigma^{2}\right).$$
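Under this assumption, a DE feature reduces to the logarithm of the band-limited variance, so it can be computed directly from band-passed signals. Below is a hedged sketch of the per-segment computation; the fourth-order Butterworth band-pass filter is an illustrative choice, not necessarily the authors' exact implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy(segment, fs):
    """DE per channel and band for one (n_channels, n_samples) segment.

    Under X ~ N(mu, sigma^2), DE = 0.5 * log(2*pi*e*sigma^2), so only
    the variance of the band-passed signal is needed.
    Assumes fs > 100 Hz so the 50 Hz gamma edge is below Nyquist.
    """
    de = np.empty((segment.shape[0], len(BANDS)))
    for j, (low, high) in enumerate(BANDS.values()):
        b, a = butter(4, [low, high], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, segment, axis=1)
        de[:, j] = 0.5 * np.log(2 * np.pi * np.e * filtered.var(axis=1))
    return de  # shape: (n_channels, 5)
```

Since the band-limited variance is exactly the band power, DE differs from the logarithm of PSD only by an additive constant, which is why the two features are so closely related.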

Eye Movement Features

For the eye movement information collected with the SMI eye-tracking glasses, we extracted various features based on the detailed parameters reported in the literature, such as pupil diameter, fixation, saccade, and blink. A detailed list of the eye movement features is shown below.
[Table: detailed list of eye movement features]

Dataset Summary

You can find four folders and two files in the dataset folder.
  1. The "eeg_raw_data" folder contains the raw EEG signals of the 15 participants. The inner 3 folders named '1', '2' and '3' correspond to the 3 sessions. For each ".mat" file in the folders, it stores a structure with fields named "cz_eeg1", "cz_eeg2", ..., "cz_eeg24", which correspond to the EEG signals recorded during the 24 trials. Below show the architecture of one of the files.
    Responsive image
  2. The "eeg_feature_smooth" folder has the same structure as eeg_raw_data's. Each ".mat" file stores a structure with fields named "{X}_{Y}{Z}". The "X" indicates the type of feature, can be "psd" or "de". The "Y" indicates the type of smoothing method, can be "movingAve" or "LDS". Linear dynamic system (LDS) and moving average are two different approaches to filter out noise and artifacts that are unrelated to the EEG features. The "Z" indicates the trial number. Each field is in shape of channel_number*sample_number*frequency_bands, that is 62*W*5, where W indicates the number of time windows in that trial (different trials have different W because the film clips are not in same length). Below shows the architecture of one of the files.
    Responsive image
  3. The "eye_raw_data" folder contains raw data of eye movement information recorded with the eye tracking glasses. There are 5 files for each session, which are in the following form:

    • {SubjectName}_{Date}_blink.mat
    • {SubjectName}_{Date}_event.mat
    • {SubjectName}_{Date}_fixation.mat
    • {SubjectName}_{Date}_pupil.mat
    • {SubjectName}_{Date}_saccade.mat
    The description of each file is as follows.

    • {SubjectName}_{Date}_blink.mat
      The structure of one of the files is shown below.
      [Figure: structure of a "_blink.mat" file]
      There are 24 matrices in the file, corresponding to the 24 movie clips. For example, in the first movie clip, 89 is the number of blinks, and each entry of the matrix is the duration [ms] of one blink; that is, the subject blinked 89 times and every blink duration was recorded.
    • {SubjectName}_{Date}_event.mat
      The structure of one of the files is shown below.
      [Figure: structure of an "_event.mat" file]
      There are 24 matrices in the file, corresponding to the 24 movie clips. The 28 in each matrix stands for the 28 kinds of events, which are listed in the following table.
      [Table: the 28 event types]
    • {SubjectName}_{Date}_fixation.mat
      The structure of one of the files is shown below.
      [Figure: structure of a "_fixation.mat" file]
      There are 24 matrices in the file, corresponding to the 24 movie clips. For example, in the first movie clip, 439 is the number of fixations, and each entry of the matrix is a fixation duration [ms].
    • {SubjectName}_{Date}_pupil.mat
      The structure of one of the files is shown below.
      [Figure: structure of a "_pupil.mat" file]
      There are 24 matrices in the file, corresponding to the 24 movie clips. For example, in the first film clip, 439 stands for the number of pupil recordings and 4 stands for the 4 features, namely 'Average Pupil Size [px] X', 'Average Pupil Size [px] Y', 'Dispersion X', and 'Dispersion Y', respectively.
    • {SubjectName}_{Date}_saccade.mat
      The structure of one of the files is shown below.
      [Figure: structure of a "_saccade.mat" file]
      There are 24 matrices in the file, corresponding to the 24 movie clips. For example, in the first film clip, 444 stands for the number of saccades and 2 stands for the 2 features, 'Saccade Duration [ms]' and 'Amplitude [°]', respectively.
  4. The "eye_feature_smooth" folder contains features extracted from the files in the eye_raw_data folder. The naming of the files follows the "{SubjectName}_{date}.mat" formation. The structure of each file is shown in the following figure.
    Responsive image
    The left part shows 24 fields, each of them for one session. The right part shows the data matrix in one of the fields. Each row corresponds to one type of feature, and each column corresponds to one data sample. The relationship between the row number and the feature type is
    • 1-12: Pupil diameter (X and Y)
    • 13-16: Dispersion (X and Y)
    • 17-18: Fixation duration (ms)
    • 19-22: Saccade
    • 23-31: Event statistics
  5. The "Channel Order.xlsx" file lists the channel names in the EEG placement figure in the order of the channels in the EEG raw data provided in the "eeg_raw_data" folder.
  6. The "ReadMe.txt" file demonstrates the label of each trial in each session and some other additional information.

Download

Download SEED-IV

Reference

If you find the dataset helpful for your research, please cite the following paper in your publications.

Wei-Long Zheng, Wei Liu, Yifei Lu, Bao-Liang Lu, and Andrzej Cichocki. EmotionMeter: A Multimodal Framework for Recognizing Human Emotions. IEEE Transactions on Cybernetics, 2018. [link] [BibTeX]