135-class Emotional Facial Expression Dataset

Citation Author(s):
Keyu Chen, Netease Fuxi AI Lab
Changjie Fan, Netease Fuxi AI Lab
Wei Zhang, Netease Fuxi AI Lab
Yu Ding, Netease Fuxi AI Lab
Submitted by:
Yu Ding
Last updated:
Mon, 02/27/2023 - 01:37
DOI:
10.21227/8e31-3188

Abstract 

The ability to perceive human facial emotions is essential to many multi-modal applications, especially in intelligent human-computer interaction (HCI). In recent decades, considerable effort has been devoted to automatic facial emotion recognition (FER). However, most existing FER methods focus only on either basic emotions, such as the seven/eight categories (e.g., happiness, anger, and surprise), or abstract dimensions (valence, arousal, etc.), neglecting the rich variety of emotional states. In real-world scenarios, a far larger vocabulary describes human inner feelings and their reflection in facial expressions. This dataset addresses the issue of semantic richness in the FER problem, with an emphasis on the granularity of emotion concepts. In particular, we take inspiration from earlier psycho-linguistic research, which conducted a prototypicality rating study and selected 135 emotion names from hundreds of English emotion terms.

Based on these 135 emotion categories, we collected a large-scale 135-class FER image dataset. The paper [1] demonstrates the feasibility of advancing FER research to this fine-grained level through extensive evaluations of the dataset's credibility and an accompanying baseline classification model. To the best of our knowledge, this is the first dataset to exploit such a large semantic space for emotion representation in the FER problem.

[1] K. Chen, X. Yang, C. Fan, W. Zhang and Y. Ding, "Semantic-Rich Facial Emotional Expression Recognition," in IEEE Transactions on Affective Computing, vol. 13, no. 4, pp. 1906-1916, 1 Oct.-Dec. 2022, doi: 10.1109/TAFFC.2022.3201290.

Instructions: 

The 135-class Emotional Facial Expression dataset, abbreviated Emo135, provides 135 emotion categories and 696,168 facial images in total. Each emotion category contains between 994 and 12,794 facial images, each labeled with an emotion term.

The data is split into three JSON files: 'training_raw_data', 'validation_raw_data', and 'testing_raw_data'. The 'training_raw_data' file contains 556,803 facial images; the 'validation_raw_data' file contains 69,560; the 'testing_raw_data' file contains 69,805. This split corresponds to the one used in the related work [1]. The provided data is slightly smaller than that reported in [1], because a small number of incorrectly labeled samples have been removed.

These JSON files contain Python dicts. Each dict describes one image sample with the keys 'name', 'url', 'description', and 'label': 'name' is the image file name; 'url' provides the download link; 'description' is a linguistic description of the image (please refer to [1] for more details); 'label' is the annotated emotion category.
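As a minimal sketch, the splits can be loaded with the standard `json` module. The helper below assumes the top-level JSON structure is a list of per-sample dicts with the four documented keys; inspect your downloaded files to confirm the exact layout, as the file names and structure here are illustrative.

```python
import json


def load_emo135_split(path):
    """Load one Emo135 split file (e.g. 'training_raw_data') and return
    its samples. Assumes a top-level JSON list of dicts, each carrying
    the documented keys: 'name', 'url', 'description', 'label'."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)


def labels_histogram(samples):
    """Count how many images fall into each emotion category."""
    counts = {}
    for sample in samples:
        counts[sample["label"]] = counts.get(sample["label"], 0) + 1
    return counts
```

With the three splits loaded this way, the per-category histogram can be checked against the documented range of 994 to 12,794 images per emotion.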

[1] K. Chen, X. Yang, C. Fan, W. Zhang and Y. Ding, "Semantic-Rich Facial Emotional Expression Recognition," in IEEE Transactions on Affective Computing, vol. 13, no. 4, pp. 1906-1916, 1 Oct.-Dec. 2022, doi: 10.1109/TAFFC.2022.3201290.
