
The SJTU Emotion EEG Dataset (SEED) is a collection of EEG datasets provided by the BCMI laboratory, which is led by Prof. Bao-Liang Lu and Prof. Wei-Long Zheng. The name is inherited from the first version of the dataset, but we now provide not only emotion datasets but also a vigilance dataset. As of December 2023, the cumulative numbers of applications and of research institutions using SEED have exceeded 5800 and 1000, respectively. The SEED series is open to the academic community. If you are interested in the datasets, take a look at the download page.

NEWS: The SEED-FRA and SEED-GER datasets have been released! For a detailed description of the data files, please see the corresponding description pages. Download access can be obtained by sending a request to the administrator.

NEWS: We have released a new version of SEED that adds eye movement data for the existing 12 subjects. For a detailed description of the data files, please see the corresponding description page. Download access can be obtained by sending a request to the administrator.



SEED

The SEED dataset contains EEG and eye movement data from 12 subjects, and EEG data from another 3 subjects. The data were collected while the subjects watched film clips carefully selected to induce three types of emotion: positive, negative, and neutral. Click here for details about the dataset.
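As a rough illustration of how such recordings might be read in Python, here is a minimal loading sketch. It assumes the recordings are distributed as MATLAB .mat files, one per session; the file name and the per-trial variable layout below are hypothetical, so consult the dataset's description page for the actual format.

```python
# A minimal, illustrative loading sketch for a hypothetical SEED session file.
from scipy.io import loadmat

mat = loadmat("1_20131027.mat")  # hypothetical session file name

# Assume one array per film-clip trial; skip MATLAB metadata keys.
trials = {k: v for k, v in mat.items() if not k.startswith("__")}
for name, eeg in trials.items():
    # Assumed shape: (n_channels, n_samples) at the recording sampling rate.
    print(f"{name}: channels={eeg.shape[0]}, samples={eeg.shape[1]}")
```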



SEED-IV

SEED-IV is an evolution of the original SEED dataset. The number of emotion categories increases to four: happy, sad, fear, and neutral. In addition to EEG signals, SEED-IV provides eye movement features recorded with SMI eye-tracking glasses, making it a well-formed multimodal dataset for emotion recognition. Click here for details about the dataset.



SEED-VIG

The SEED-VIG dataset targets the vigilance estimation problem. We built a virtual driving system in which a large screen is placed in front of a real car; subjects play a driving game from inside the car, as if they were driving in a real-world environment. The SEED-VIG data were collected while the subjects drove in this system. Vigilance levels are labeled with the PERCLOS indicator derived from SMI eye-tracking glasses. Click here for details about the dataset.
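PERCLOS is commonly defined as the percentage of time, within a sliding window, during which the eyes are (nearly) closed. The sketch below computes this standard definition from a per-sample eye-closure signal; the 0.8 closure threshold (the common P80 convention), the sampling rate, and the window length are illustrative assumptions, not necessarily the values used in SEED-VIG.

```python
import numpy as np

def perclos(eye_closure, closed_threshold=0.8, fs=25, window_sec=8.0):
    """Fraction of time the eyes are (nearly) closed in each sliding window.

    eye_closure      -- 1-D array, per-sample eyelid-closure fraction in [0, 1]
    closed_threshold -- closure fraction above which a sample counts as 'closed'
                        (0.8 follows the common P80 convention; illustrative)
    fs               -- eye-tracker sampling rate in Hz (illustrative)
    window_sec       -- sliding-window length in seconds (illustrative)
    """
    closed = (np.asarray(eye_closure) >= closed_threshold).astype(float)
    win = int(fs * window_sec)
    # Moving average of the binary 'closed' signal = PERCLOS per window.
    kernel = np.ones(win) / win
    return np.convolve(closed, kernel, mode="valid")

# Example: 60 s of synthetic closure data at 25 Hz.
rng = np.random.default_rng(0)
signal = rng.uniform(0.0, 1.0, size=60 * 25)
print(perclos(signal)[:5])  # values near 0.2 for uniform noise
```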



SEED-V

SEED-V is an evolution of the original SEED dataset. The number of emotion categories increases to five: happy, sad, fear, disgust, and neutral. In addition to EEG signals, SEED-V provides eye movement features recorded with SMI eye-tracking glasses, making it a well-formed multimodal dataset for emotion recognition. Click here for details about the dataset.
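To make the multimodal aspect concrete, the sketch below shows one common way such a dataset is used: feature-level (early) fusion, where per-trial EEG features are concatenated with eye movement features before classification. The feature dimensions and the synthetic data are placeholders, not the dataset's actual layout.

```python
# Feature-level fusion sketch with hypothetical dimensions and synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials = 200
eeg_feats = rng.normal(size=(n_trials, 310))  # e.g. 62 channels x 5 bands (assumed)
eye_feats = rng.normal(size=(n_trials, 33))   # e.g. pupil/fixation/blink stats (assumed)
labels = rng.integers(0, 5, size=n_trials)    # five emotion classes, as in SEED-V

X = np.hstack([eeg_feats, eye_feats])         # feature-level (early) fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```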



SEED-FRA

The SEED-FRA dataset contains EEG and eye movement data from 8 French subjects, labeled with positive, negative, and neutral emotions. Click here for details about the dataset.



SEED-GER

The SEED-GER dataset contains EEG and eye movement data from 8 German subjects, labeled with positive, negative, and neutral emotions. Click here for details about the dataset.



Acknowledgement

This work was supported in part by grants from the National Key Research and Development Program of China (Grant No. 2017YFB1002501), the National Natural Science Foundation of China (Grant No. 61272248 and No. 61673266), the National Basic Research Program of China (Grant No. 2013CB329401), the Science and Technology Commission of Shanghai Municipality (Grant No. 13511500200), the Open Funding Project of National Key Laboratory of Human Factors Engineering (Grant No. HF2012-K-01), the Fundamental Research Funds for the Central Universities, and the European Union Seventh Framework Program (Grant No. 247619).