A person's emotions represent their mental states and feelings. Emotion recognition is currently a very active research topic, and emotion can be detected with a variety of languages, algorithms, and systems. Most detection systems work on frontal face images, and some on infant faces. The accuracy of today's emotion detection systems is only about 90%, because emotion identification techniques are still far from flawless. This study surveys emotion detection from images from the 1990s to the present, covering its history, an overview, and its main phases. Current systems can identify only six emotions: joy, sadness, fear, surprise, disgust, and anger.
I. INTRODUCTION
We express what we feel through our emotions, and our emotions depend on the situation we are in. A human's emotions represent mental states or feelings: happiness, sadness, fear, surprise, disgust, anger, and so on. Many systems can detect emotions, and emotion recognition technology is a trending topic in various fields. The demand for emotion recognition from voice or images is increasing daily. Many algorithms have been invented, and work on modifying earlier algorithms continues; many researchers have proposed algorithms for emotion detection systems with better accuracy and more reliable results. Only six emotions can be detected by current systems. Frontal faces are used most often for emotion detection, and facial images are the most frequently used input. Emotion detection is not a simple process if proper results are required: complex steps must be followed for better results, and extracting appropriate features from frontal images requires a complex algorithm [1].
II. TIMELINE OF EMOTION DETECTION
1872
Charles Darwin's The Expression of the Emotions in Man and Animals argued that humans and animals alike show emotions through behaviour.
1978
Paul Ekman and Wallace V. Friesen published the Facial Action Coding System (FACS), the first systematic method for measuring emotion from facial movement.
1996
The first research paper on recognizing emotion in speech was published.
1997
Rosalind Picard introduced the theory of affective computing.
1998
WordNet, a lexical database used in NLP, was published in book form.
2005
A paper on emotion and opinion in text was published.
2013
Neural networks were adopted for NLP tasks.
2018
Pre-trained language models were developed.
Now
Image-based emotion detection still recognizes only six emotions.
III. PHASE 1 (1990-2000)
The first phase was the period of emotion detection by voice, although some papers based on emotion detection from images were also published. The results of such systems were not accurate: only two or three emotions could be detected, and accuracy was approximately 20-30%. Many emotion detection systems were proposed to improve on this. De Silva and colleagues, for example, proposed a system that takes frames from a video and detects the emotion in each image [2]. For better results and accuracy, many researchers designed multimodal systems. A multimodal system takes two inputs for emotion detection, an image and a voice recording: it extracts features from the image, takes the pitch from the voice, and combines the two to reach a conclusion.
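To make this idea concrete, the following is a minimal sketch of decision-level fusion in Python. It assumes two hypothetical per-modality classifiers that each output one probability per emotion; the weighting scheme and probability values are illustrative, not taken from [2].

```python
# A minimal sketch of decision-level fusion, assuming two hypothetical
# per-modality classifiers that each return one probability per emotion.
import numpy as np

EMOTIONS = ["joy", "sadness", "fear", "surprise", "disgust", "anger"]

def fuse_decisions(face_probs, voice_probs, face_weight=0.5):
    """Weighted average of the two modality-level probability vectors."""
    fused = face_weight * np.asarray(face_probs) + \
            (1.0 - face_weight) * np.asarray(voice_probs)
    return EMOTIONS[int(np.argmax(fused))]

# Illustrative outputs: the face model leans toward surprise,
# the voice model toward fear; the fused decision is "surprise".
face_probs  = [0.05, 0.05, 0.20, 0.55, 0.05, 0.10]
voice_probs = [0.05, 0.10, 0.45, 0.25, 0.05, 0.10]
print(fuse_decisions(face_probs, voice_probs))
```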
IV. PHASE 2 (2001-2010)
By the end of phase 1, emotion detection from images had begun. Many researchers used frontal and infant faces for emotion detection, with almost every study relying on the frontal face. A representative algorithm comprises three main stages: an image processing stage, a facial feature extraction stage, and an emotion detection stage. In the image processing stage, the face region and facial components are extracted using a fuzzy colour filter, a virtual face model, and a histogram analysis method. In the facial feature extraction stage, the features for emotion detection are extracted from the facial components. In the emotion detection stage, a fuzzy classifier recognizes the emotion from the extracted features. Experimental results show that this algorithm detects emotion well [1].
After successful work on frontal-face emotion detection, some researchers turned to detecting emotion in infants' faces, mainly to find out why an infant is crying [3]. An infant has only one tool for communication, crying, and often only the infant's mother can understand why he or she is crying. In the system of [3], an image and a sound recording represent the same cry event. The image processing module determines the state of certain facial features, particular combinations of which indicate the reason for crying. The sound processing module analyses the fundamental frequency and the first two formants of the cry and uses k-means clustering to determine its cause. The decisions from the image and sound processing modules are then fused with a decision-level fusion system. The accuracies of the image and sound processing modules are 64% and 74.2%, respectively, and that of the fused decision is 75.2% [3].
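The sound-processing step can be sketched in a few lines of Python with scikit-learn. All feature values below are made-up placeholders, and the two-cluster setup is a toy simplification; [3] distinguishes more cry causes and extracts real acoustic features from audio.

```python
# A minimal sketch of the sound-processing step: cluster cry segments by
# fundamental frequency (F0) and the first two formants (F1, F2) with
# k-means. A real system would estimate these values from audio with a
# pitch tracker and a formant estimator (e.g., LPC analysis).
import numpy as np
from sklearn.cluster import KMeans

# Each row is one cry segment: [F0, F1, F2] in Hz (hypothetical values).
features = np.array([
    [450.0,  900.0, 2400.0],
    [460.0,  950.0, 2450.0],
    [520.0, 1100.0, 2700.0],
    [530.0, 1150.0, 2750.0],
])

# Two clusters for this toy example; the fused system in [3] combines the
# resulting cluster label with the image-based decision.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(kmeans.labels_)  # inferred cry-cause cluster per segment
```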
V. PHASE 3 (2011-2022)
The last few years have seen a conspicuous move toward emotion detection systems. Python, deep learning, AI-based systems, and machine learning have been added to emotion detection pipelines for better accuracy. The end of phase 2 focused mainly on children's faces, and its main goal was to find the reason for an infant's cry. In phase 3, many researchers instead focused on better accuracy and on applying new languages and algorithms for more accurate results.
Emotion classification in images has applications in automatically tagging images with emotional categories and in categorizing video sequences into genres such as thriller, comedy, and romance [4]. One study used only Flickr images and proposed a system that labels an image as positive or negative: a light-coloured image is labelled positive, suggesting love or happiness, while a dark-coloured image is labelled negative, suggesting sadness. The accuracy of that system was 75%.
To achieve higher accuracy, the Viola-Jones algorithm and a KNN classifier were used; experimental results show the efficiency of that face and emotion recognition system to be 94.5-97% [5] (see the code sketch below). Determining an autistic child's emotions from facial expressions is a particularly difficult task. For it, two techniques were introduced, SVM and neural networks; the experiments achieved different performances, with an overall accuracy of 90% obtained with local binary patterns combined with a support vector machine and with neural networks [6].
Deep learning has also been applied to emotion detection for better results: an AI-based system was proposed and evaluated on two datasets, JAFFE and FERC-2013. The experiments show that the proposed model produces state-of-the-art results on both datasets and outperforms previous models reported in the literature [7]. Python, one of the most popular languages today, is also used for emotion detection; Python 2.7 offers a huge ecosystem of libraries such as NumPy, pandas, and OpenCV, and the overall accuracy of that work was nearly 83% [8].
The geometry of the face relative to the neutral expression can also be used to identify other emotions. The detected geometry shows that the proposed modified eyemap–mouthmap algorithm is efficient: the geometry varies with the emotion of the person, and experiments on different databases show that the algorithm generalizes across databases. Tuning the algorithm's parameters produced optimal results for identifying emotion, and the results also show that the algorithm is gender-independent. In the future, it could be improved with machine learning algorithms and with TensorFlow on a graphics processing unit (GPU) [9].
Such systems assume high-resolution input, but real input images are often of low or poor quality. To fill this gap, commercial emotion detection systems were benchmarked. Many emotion recognition systems are available on the web, including Amazon Rekognition, Baidu Research, Face++, Microsoft Azure, and Affectiva. The study first compared the systems' accuracy in classifying images drawn from three standardized facial expression databases. In a second experiment, it identified several common scenarios (e.g., a partially visible face) that lead to poor-quality pictures during smartphone use, manipulated the images from the first experiment to simulate these scenarios, and compared the systems' classification performance again, finding that the systems varied in how well they handled manipulated images simulating realistic distortion.
Based on these findings, the authors offer recommendations for developers and researchers who would like to use commercial facial emotion recognition technologies in their applications [10].
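As a simple illustration of the detection-plus-classification pipeline used in studies such as [5], the sketch below combines OpenCV's Haar-cascade face detector (an implementation of the Viola-Jones algorithm) with a k-nearest-neighbour classifier from scikit-learn. The training data, image file name, and feature representation are hypothetical stand-ins; the cited work trained on real labelled face images.

```python
# A minimal Viola-Jones + KNN sketch: detect the face with OpenCV's Haar
# cascade, then label its emotion with a k-nearest-neighbour classifier.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Haar cascade file shipped with the opencv-python package.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

def face_vector(image_bgr, size=(48, 48)):
    """Detect the largest face and return it as a flattened grayscale vector."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face = cv2.resize(gray[y:y + h, x:x + w], size)
    return face.flatten().astype(np.float32) / 255.0

# Hypothetical training set: random placeholder vectors plus emotion labels.
# A real system would use vectors computed by face_vector on labelled images.
X_train = np.random.rand(60, 48 * 48).astype(np.float32)
y_train = np.random.choice(["joy", "sadness", "anger"], size=60)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

image = cv2.imread("face.jpg")  # path to any frontal-face photo (assumption)
if image is not None:
    vec = face_vector(image)
    if vec is not None:
        print(knn.predict([vec])[0])
```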
Facebook, with its huge Ad Library, is one of the most commonly used platforms for promotion. One tool, FBAdLibrarian, collects images from the Facebook Ad Library; a second, Pykognition, simplifies facial and emotion detection in images using computer vision. Using these tools, it was found that candidates most often display happiness and calm in their facial expressions and rarely attack opponents in image-based ads from their official Facebook pages; when candidates do attack, opponents are portrayed with emotions such as anger, sadness, and fear [11].
VI. WHERE WE ARE NOW
The major gap we found is that only six or seven emotions can be detected by current systems, and their overall accuracy is only about 90%. Such systems are still unable to recognize complex emotions.
References
[1] M. H. Kim, Y. H. Joo, and J. B. Park, "Emotion Detection Algorithm Using Frontal Face Image," ICCA, 2005.
[2] L. C. De Silva, T. Miyasato, and R. Nakatsu, "Facial Emotion Recognition Using Multi-modal Information," ICICS, 1997.
[3] P. Pal, A. N. Iyer, and R. E. Yantorno, "Emotion Detection from Infant Facial Expressions and Cries," IEEE, vol. 2, 2006.
[4] V. Gajarla and A. Gupta, "Emotion Detection and Sentiment Analysis of Images," 2015.
[5] D. Reney and N. Tripathi, "An Efficient Method to Face and Emotion Detection," IEEE, 2015.
[6] P. Rani, "Emotion Detection of Autistic Children Using Image Processing," IEEE, 2019.
[7] A. Jaiswal, A. K. Raju, and S. Deb, "Facial Emotion Detection Using Deep Learning," IEEE, 2020.
[8] R. Puri, A. Gupta, and M. Sikri, "Emotion Detection using Image Processing in Python," IEEE, 2020.
[9] A. Joseph and P. Geetha, "Facial emotion detection using modified eyemap–mouthmap algorithm on an enhanced image and classification with TensorFlow," Springer, 2020.
[10] K. Yang, C. Wang, Z. Sarsenbayeva, B. Tag, T. Dingler, G. Wadley, et al., "Benchmarking commercial emotion detection systems using realistic distortions of facial image datasets," Springer, 2021.
[11] R. Schmøkel and M. Bossetta, "FBAdLibrarian and Pykognition: open science tools for the collection and emotion detection of images in Facebook political ads with computer vision," Journal of Information Technology & Politics, vol. 12, 2022.