
SJTU Emotion EEG Dataset (SEED) is a collection of EEG datasets provided by the BCMI laboratory, which is led by Prof. Bao-Liang Lu. The name is inherited from the first version of the dataset, but we now provide not only emotion datasets but also a vigilance dataset. If you are interested in the datasets, take a look at the download page.

NEWS: Details of the stimulation materials used in SEED and SEED-IV have been released, including the name of each clip, the emotion it is intended to elicit, the website where the full video from which the clip was extracted can be viewed, and the corresponding start and end time points within the online video. Download access can be obtained by sending a request to the Administrator.



SEED

The SEED dataset contains subjects' EEG signals recorded while they were watching film clips. The film clips were carefully selected to induce three types of emotion: positive, negative, and neutral. Click here for details about the dataset.



SEED-IV

SEED-IV is an evolution of the original SEED dataset. The number of emotion categories changes to four: happy, sad, fear, and neutral. In SEED-IV, we provide not only EEG signals but also eye movement features recorded with SMI eye-tracking glasses, which makes it a well-formed multimodal dataset for emotion recognition. Click here for details about the dataset.



SEED-VIG

The SEED-VIG dataset is oriented toward the vigilance estimation problem. We built a virtual driving system in which a large screen is placed in front of a real car. Subjects can play a driving game in the car, just as if they were driving in a real-world environment. The SEED-VIG dataset was collected while the subjects drove in this system. The vigilance level is labelled with the PERCLOS indicator derived from SMI eye-tracking glasses. Click here for details about the dataset.
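For readers unfamiliar with PERCLOS: it is, in essence, the proportion of time within a window during which the eyes are (nearly) closed. The sketch below is a minimal illustration of that idea and is not the exact preprocessing pipeline used for SEED-VIG; the function name, closure threshold, window length, and sampling rate are illustrative assumptions.

```python
import numpy as np

def perclos(eye_openness, closed_threshold=0.2, window_sec=8.0, fs=60.0):
    """Illustrative PERCLOS-style score: the fraction of samples within a
    sliding window where the eye-openness signal falls below a threshold.

    eye_openness: 1-D array of per-frame eye openness (0 = fully closed,
                  1 = fully open), e.g. derived from eye-tracker output.
    closed_threshold, window_sec, fs: assumed values for illustration only.
    """
    eye_openness = np.asarray(eye_openness, dtype=float)
    closed = (eye_openness < closed_threshold).astype(float)
    win = int(window_sec * fs)
    # Moving average of the "closed" indicator = proportion of closed time.
    kernel = np.ones(win) / win
    return np.convolve(closed, kernel, mode="valid")
```

Scores near 1 correspond to prolonged eye closure (low vigilance), while scores near 0 correspond to alert, open-eyed periods.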



Acknowledgement

This work was supported in part by grants from the National Key Research and Development Program of China (Grant No. 2017YFB1002501), the National Natural Science Foundation of China (Grant No. 61272248 and No. 61673266), the National Basic Research Program of China (Grant No. 2013CB329401), the Science and Technology Commission of Shanghai Municipality (Grant No. 13511500200), the Open Funding Project of National Key Laboratory of Human Factors Engineering (Grant No. HF2012-K-01), the Fundamental Research Funds for the Central Universities, and the European Union Seventh Framework Program (Grant No. 247619).