
The SJTU Emotion EEG Dataset (SEED) is a collection of EEG datasets provided by the BCMI laboratory, which is led by Prof. Bao-Liang Lu. The name is inherited from the first version of the dataset, but the collection now includes not only emotion datasets but also a vigilance dataset. As of October 2021, the cumulative numbers of applications and research institutions using SEED have reached more than 2,600 and 770, respectively. If you are interested in the datasets, take a look at the download page.

NEWS: The SEED-V dataset has been released! For a detailed description of the data files, please see the corresponding DESCRIPTION page. Download access can be obtained by sending a request to the administrator.

NEWS: Details of the stimulation materials used in SEED and SEED-IV have been released, including the name of each clip, the target emotion of each clip, a link to the website hosting the full video from which each clip was extracted, and the corresponding start and end time points within the online video. Download access can be obtained by sending a request to the administrator.



SEED

The SEED dataset contains subjects' EEG signals recorded while they watched film clips. The film clips were carefully selected to induce three types of emotion: positive, negative, and neutral. Click here for details about the dataset.
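For orientation, here is a minimal sketch of loading one subject's recording in Python, assuming the data are distributed as MATLAB .mat files; the file name and array layout below are hypothetical placeholders, and the actual structure is documented on the dataset's description page.

```python
# Minimal sketch: loading one subject's SEED recording from a .mat file.
# The file name and variable layout are hypothetical, for illustration only.
import scipy.io as sio

mat = sio.loadmat("1_20131027.mat")  # hypothetical file name

# Skip MATLAB metadata keys ("__header__", "__version__", "__globals__");
# the remaining keys are assumed to hold one EEG array per film clip.
trials = {k: v for k, v in mat.items() if not k.startswith("__")}
for name, eeg in trials.items():
    # eeg is assumed to be a (channels, samples) array
    print(name, eeg.shape)
```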



SEED-IV

SEED-IV is an evolution of the original SEED dataset. The number of emotion categories increases to four: happy, sad, fear, and neutral. In SEED-IV, we provide not only EEG signals but also eye movement features recorded by SMI eye-tracking glasses, making it a well-formed multimodal dataset for emotion recognition. Click here for details about the dataset.



SEED-VIG

The SEED-VIG dataset targets the vigilance estimation problem. We built a virtual driving system in which a large screen is placed in front of a real car; subjects play a driving game in the car as if they were driving in a real-world environment. The SEED-VIG dataset was collected while the subjects drove in this system. Vigilance levels are labeled with the PERCLOS indicator derived from the SMI eye-tracking glasses. Click here for details about the dataset.
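For readers unfamiliar with the metric, PERCLOS is commonly defined as the proportion of time within a window during which the eyes are mostly closed. The sketch below illustrates that classical definition; the exact computation used for SEED-VIG is derived from the eye tracker's blink and eye-closure events and is given on the dataset's description page, so the threshold and signal here are illustrative assumptions.

```python
import numpy as np

def perclos(eye_closure: np.ndarray, threshold: float = 0.8) -> float:
    """Classical PERCLOS: fraction of samples in a window where eyelid
    closure exceeds the threshold. `eye_closure` holds per-sample closure
    ratios in [0, 1] (hypothetical signal, e.g. from an eye tracker)."""
    return float(np.mean(eye_closure > threshold))

# Example: a 10-sample window where the eyes are mostly open
window = np.array([0.1, 0.2, 0.9, 0.95, 0.3, 0.1, 0.85, 0.2, 0.1, 0.05])
print(perclos(window))  # -> 0.3 (3 of 10 samples above the threshold)
```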



SEED-V

SEED-V is an evolution of the original SEED dataset. The number of emotion categories increases to five: happy, sad, fear, disgust, and neutral. In SEED-V, we provide not only EEG signals but also eye movement features recorded by SMI eye-tracking glasses, making it a well-formed multimodal dataset for emotion recognition. Click here for details about the dataset.
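As one illustration of how the two modalities can be combined, the sketch below shows simple feature-level fusion: per-trial EEG features and eye movement features are concatenated before being passed to any standard classifier. The feature dimensions and the random data are purely illustrative assumptions, not the dataset's actual layout.

```python
import numpy as np

# Hypothetical per-trial features (shapes chosen for illustration only):
eeg_feat = np.random.rand(45, 310)  # 45 trials x 310 EEG features
eye_feat = np.random.rand(45, 33)   # 45 trials x 33 eye movement features

# Feature-level fusion: concatenate the two modalities per trial, then
# feed the fused vectors to a classifier of your choice.
fused = np.concatenate([eeg_feat, eye_feat], axis=1)
print(fused.shape)  # -> (45, 343)
```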



Acknowledgement

This work was supported in part by grants from the National Key Research and Development Program of China (Grant No. 2017YFB1002501), the National Natural Science Foundation of China (Grant Nos. 61272248 and 61673266), the National Basic Research Program of China (Grant No. 2013CB329401), the Science and Technology Commission of Shanghai Municipality (Grant No. 13511500200), the Open Funding Project of the National Key Laboratory of Human Factors Engineering (Grant No. HF2012-K-01), the Fundamental Research Funds for the Central Universities, and the European Union Seventh Framework Programme (Grant No. 247619).