IIITM Face Emotion

Citation Author(s):
Rishi Raj Sharma, Defence Institute of Advanced Technology, India
K. V. Arya, ABV - Indian Institute of Information Technology and Management, India
Submitted by: Rishi Sharma
Last updated: Fri, 04/14/2023 - 02:10
DOI: 10.21227/rers-ck04

Abstract

With the expansion of machine learning and deep learning technology, facial expression recognition methods have become more accurate and precise. In real-world scenarios, however, the presence of facial attributes, weakly posed expressions, and variation in viewpoint can significantly reduce the performance of systems designed only for frontal faces without facial attributes. This paper proposes a facial landmark distance-based model to explore a new method that can effectively recognize emotions in oriented faces with facial attributes. The proposed model computes distance-based features from the inter-spaces between facial landmarks generated by a face mesh algorithm on pre-processed images. These features are normalized and ranked to find the optimal features for classifying emotions. The experimental results show that the proposed model can effectively classify different emotions in the IIITM Face dataset, which contains vertically oriented faces with different facial attributes, with an overall accuracy of 61% using an SVM classifier. The model classifies emotions posed in the front, up, and down orientations with 70%, 58%, and 55% accuracy, respectively. An efficacy test of the model on laterally oriented faces from the KDEF database yields an overall accuracy of 80%. Comparison with existing CNN- and facial landmark-based methods reveals that the proposed model achieves an improved recognition rate for oriented viewpoints with facial attributes. These results indicate that vertical viewpoints, the degree to which expressions are posed, and facial attributes can limit the performance of emotion recognition algorithms in realistic situations.
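For orientation, the pipeline described above can be illustrated with a short sketch. This is a minimal reconstruction, not the authors' code: it assumes MediaPipe's Face Mesh as the face mesh algorithm, SciPy's pdist for the inter-landmark distances, and scikit-learn's ANOVA F-score ranking (SelectKBest) as a stand-in for the paper's feature-ranking step.

import cv2
import numpy as np
import mediapipe as mp
from scipy.spatial.distance import pdist
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def landmark_distance_features(image_bgr):
    # Detect one face mesh (468 landmarks) and return all pairwise
    # Euclidean distances between the landmarks as a feature vector.
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
        result = mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None  # no face detected
    pts = np.array([(lm.x, lm.y) for lm in result.multi_face_landmarks[0].landmark])
    return pdist(pts)  # condensed distance vector of length n*(n-1)/2

# X: one feature vector per image; y: emotion labels (SM, SU, SO, NE, SA, YN).
# Normalize, rank/select features, then classify with an SVM:
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=500), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)

The number of selected features (k=500) and the RBF kernel are illustrative choices, not values reported in the paper.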

Instructions: 

The IIITM Face Emotion dataset originates from IIITM Face Data. It consists of a total of 1928 images collected from 107 participants (87 male and 20 female). These images were recorded in three different vertical orientations (i.e., Front, Up, and Down) while the participants exhibited six different facial expressions (Smile, Surprise, Surprise with Mouth Open, Neutral, Sad, and Yawning). The original IIITM Face dataset provides information about various other attributes, such as gender, the presence of mustaches, beards, and eyeglasses, the clothes worn by the subjects, and the density of their hair. The original IIITM Face dataset has been modified to study facial expressions in different orientations: in the IIITM Face Emotion dataset, only the facial region is segmented for each subject, and all images are then resized to fixed dimensions (800 x 1000 pixels) with an aspect ratio of 4:5. This ensures a uniform scale across the varying face positions of the same subject, as sketched below.
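A minimal sketch of this preprocessing, assuming OpenCV and a face bounding box obtained from any face detector (the authors' exact segmentation step is not specified here):

import cv2

def preprocess_face(image_bgr, face_box):
    # face_box = (x, y, w, h) from a face detector of your choice (assumption).
    x, y, w, h = face_box
    face = image_bgr[y:y + h, x:x + w]
    # cv2.resize takes (width, height); 800 x 1000 pixels gives the 4:5 ratio.
    return cv2.resize(face, (800, 1000), interpolation=cv2.INTER_AREA)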

The nomenclature of each image is 'SUB XX EE O' (the fields are concatenated without spaces in the actual file names), where:

XX -> denotes the subject ID

EE -> denotes the expressed emotion (Smile -> SM, Surprise -> SU, Surprise with mouth open -> SO, Neutral -> NE, Sad -> SA, Yawning -> YN)

O  -> denotes the orientation (Front -> F, Down -> D, Up -> U)

For example, 'SUB1NEF' shows subject 1 depicting the Neutral emotion in the Front-facing orientation.
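A short parsing sketch for these file names (the helper name and the assumption of a variable-width numeric subject ID are based only on the example above):

import re

EMOTIONS = {"SM": "Smile", "SU": "Surprise", "SO": "Surprise with mouth open",
            "NE": "Neutral", "SA": "Sad", "YN": "Yawning"}
ORIENTATIONS = {"F": "Front", "D": "Down", "U": "Up"}

def parse_name(stem):
    # e.g. parse_name('SUB1NEF') -> (1, 'Neutral', 'Front')
    m = re.fullmatch(r"SUB(\d+)([A-Z]{2})([FDU])", stem)
    if m is None:
        raise ValueError("Unexpected file name: " + stem)
    subject, emotion, orientation = m.groups()
    return int(subject), EMOTIONS[emotion], ORIENTATIONS[orientation]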

More details are available at: https://www.sensigi.com/resources/research

When using the dataset, please cite the following papers:

[1] U. Sharma, K. N. Faisal, R. R. Sharma, and K. V. Arya, “Facial Landmark-Based Human Emotion Recognition Technique for Oriented Viewpoints in the Presence of Facial Attributes,” SN Comput. Sci., vol. 4, no. 3, p. 273, 2023. https://doi.org/10.1007/s42979-023-01727-y

[2] K. V. Arya, S. Verma, R. K. Gupta, S. Agarwal, and P. Gupta, “IIITM Face: A Database for Facial Attribute Detection in Constrained and Simulated Unconstrained Environments,” in Proceedings of the 7th ACM IKDD CoDS and 25th COMAD, 2020, pp. 185-189.
