In this project, emotion is detected from facial expressions. The input can be a live feed from the system's camera or a pre-existing image available in memory. Machine recognition of human emotion has a vast scope of study in the computer vision industry, and considerable research has already been done on it. The work has been implemented using Anaconda (Jupyter Notebook, Python 3.10), the Open-Source Computer Vision Library (OpenCV) and NumPy. The code compares the video (testing dataset) against the training dataset, and the emotion is predicted from that comparison. The objective of this paper is to develop a system that can analyze images and run-time video and predict the expression of the person. The study shows that the project and code are workable and produce valid results. In this project, we have improved the accuracy of the running system by using different Python and deep-learning models.
I. INTRODUCTION TO IMAGE PROCESSING
In order to obtain a good image and extract useful information from it, the method of Image Processing can be used. It is the process by which an image is converted into digital form and various operations are then performed on it. Emotion detection and recognition is one such technique: the input is an image or a video frame, which is a grid of numbers ranging from 0 to 255, each denoting the corresponding pixel value.
Conversion of Color Image to Gray Scale
There are two common methods for converting an RGB image to grayscale.
A. Average Method
In this method, the mean is given for the three colors i.e., Red, Blue & Green present in a color image. Thus, we have
Grayscale= (R+G+B)/3;
However, this sometimes produces a dull, unnatural grayscale image rather than a faithful one. This is because each of Red, Green and Blue contributes an equal 33%, whereas the human eye does not perceive the three colors with equal sensitivity.
To solve the problem of the first method, we use another method, called the Weighted or Luminosity Method.
B. Weighted or Luminosity Method
To overcome the problem of the Average Method, we use the Luminosity Method. In this method the channels are weighted unequally: the weight of Red is decreased, the weight of Green is increased, and Blue receives the smallest weight.
Thus, by the equation [8],
Grayscale = (0.299 × R) + (0.587 × G) + (0.114 × B),
where the weights are the standard ITU-R BT.601 luminance coefficients, which sum to 1.
We use these weights because of the way the human eye perceives the colors: the eye is most sensitive to green light and least sensitive to blue, so green receives the largest weight and blue the smallest.
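Both conversion methods can be sketched in plain NumPy, assuming the image is an H×W×3 RGB array of 8-bit values (the weights used are the standard BT.601 coefficients, which OpenCV's `cv2.cvtColor` also applies for `COLOR_RGB2GRAY`):

```python
import numpy as np

# Tiny 1x2 RGB image: one pure-red pixel and one pure-green pixel.
rgb = np.array([[[255, 0, 0], [0, 255, 0]]], dtype=np.uint8)

# Average method: each channel gets an equal 1/3 weight.
avg = rgb.mean(axis=2)

# Luminosity method: green dominates, blue contributes least.
weights = np.array([0.299, 0.587, 0.114])
lum = rgb @ weights

print(avg)  # both pixels -> 85.0
print(lum)  # red pixel -> 76.245, green pixel -> 149.685
```

Note how the average method cannot distinguish a pure-red pixel from a pure-green one, while the luminosity method maps the green pixel to a much brighter gray, matching perception.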
II. REVIEW OF LITERATURE
Murugappan, Rizon, Nagarajan et al. (2008) [1] note that in recent years, the need for and importance of automatically recognizing emotions from EEG signals has grown with the increasing role of brain-computer interface applications.
Plutchik (2003) [2] is a comprehensive textbook for instructors teaching a college-level or graduate course on emotion; the author brings together materials that stimulate students' interest and serve as a focus for thought-provoking discussion and learning experiences.
Petrantonakis and Hadjileontiadis (2010) [3] describe electroencephalogram (EEG)-based emotion recognition, a relatively new field in affective computing with challenging issues regarding the induction of emotional states and the extraction of features to achieve optimum classification performance.
Canli, Desmond, Zhao, Glover et al. (1998) [4] report that current brain models of emotion processing hypothesize that positive (approach-related) emotions are lateralized towards the left hemisphere, whereas negative (withdrawal-related) emotions are lateralized towards the right hemisphere.
Chanel, Kronegg, Grandjean and Pun (2006) [5] assess the arousal dimension of human emotions from two different physiological sources: peripheral signals and electroencephalographic (EEG) signals from the brain.
Ekman (1999) [6] consolidates his previous writings about basic emotions (Ekman, 1984, 1992a, 1992b) and introduces a few changes in his thinking.
Grocke and Wigram (2007) [7] describe the specific use of receptive (listening) methods and techniques in music therapy clinical practice and research, including relaxation with music for children and adults, the use of visualisation and imagery, music and collage, song-lyric discussion, vibroacoustic applications, music and movement techniques, and other forms of aesthetic listening to music.
III. INTRODUCTION TO OPENCV
OpenCV stands for Open Source Computer Vision Library. It is a free, extensive library of more than 2500 algorithms specifically designed for Computer Vision and Machine Learning projects. These algorithms can be used for tasks such as Face Recognition, Object Identification, Camera Movement Tracking and Scenery Recognition. It has a large community, with an estimated 47,000 active contributors, and its usage extends to various companies, both private and public.
A GPU Acceleration module was added alongside the pre-existing libraries. This module can handle most of the operations, though it is not yet complete. The GPU code runs on CUDA and thus takes advantage of libraries such as NPP, i.e., NVIDIA Performance Primitives. A benefit of the module is that anyone can use GPU acceleration without strong knowledge of GPU coding. In the GPU module, we cannot change the features of an image directly; rather, we must copy the original image and then edit the copy.
IV. STEPS INVOLVED IN INSTALLING PYTHON AND THE NECESSARY PACKAGES
Let us begin with a sample image in either .jpg or .png format and apply image processing to detect the emotion of the subject in the sample image. (The word 'subject' refers to any living being from which emotions can be extracted.)
A. Importing Libraries
For successful implementation of this project, the following must be available: Python (this work used Python 3.10 under Anaconda), NumPy, and the standard-library modules glob and random. Python is installed in the default location, the C drive in this case. Open Python IDLE (or a Jupyter Notebook), import all the packages and start working.
B. NumPy
NumPy is a Python library used for complex technical computation. It implements multidimensional arrays along with a large collection of mathematical routines to process them.
Each dimension of an array declared in a program is called an axis.
The number of axes present in an array is known as its rank.
For example, A = [1, 2, 3, 4, 5].
The array A has 5 elements and rank 1, because it is one-dimensional.
Let us take another example for better understanding:
B = [[1, 2, 3, 4], [5, 6, 7, 8]]
In this case the rank is 2 because it is a two-dimensional array: the first axis has 2 elements, and the second axis has 4 elements. [10]
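The axes and rank of the two example arrays can be inspected directly with NumPy's `ndim` and `shape` attributes:

```python
import numpy as np

A = np.array([1, 2, 3, 4, 5])
print(A.ndim)   # 1 -> rank 1 (one axis)
print(A.shape)  # (5,) -> 5 elements along that axis

B = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
print(B.ndim)   # 2 -> rank 2 (two axes)
print(B.shape)  # (2, 4): 2 rows along axis 0, 4 columns along axis 1
```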
C. Glob
The glob module follows the pattern-matching rules of the Unix shell: given a pattern, it finds the files that match it and returns their full path names.
Wildcards
Wildcards are used to match files or parts of a directory. Several wildcards [5] are available, of which two are the most useful: '*', which matches any sequence of characters, and '?', which matches any single character.
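As a small sketch (the file names here are made up for illustration), glob can pull every image of one type out of a directory:

```python
import glob
import os
import tempfile

# Create a throwaway directory with a few files to match against.
d = tempfile.mkdtemp()
for name in ["happy1.png", "happy2.png", "notes.txt"]:
    open(os.path.join(d, name), "w").close()

# '*' matches any run of characters; '?' matches exactly one character.
pngs = sorted(glob.glob(os.path.join(d, "*.png")))
one_digit = sorted(glob.glob(os.path.join(d, "happy?.png")))

print([os.path.basename(p) for p in pngs])       # ['happy1.png', 'happy2.png']
print([os.path.basename(p) for p in one_digit])  # ['happy1.png', 'happy2.png']
```

Note that glob returns full path names, which is convenient when the matched images are fed straight into an image loader.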
D. Random
The random module picks a random number or element from a given list of elements, and provides the functions that give access to such operations.
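A minimal sketch, using the emotion labels from the next section as the example list:

```python
import random

emotions = ["neutral", "happy", "anger", "disgust", "surprise", "fear", "sad"]

random.seed(0)  # fix the seed so the draws are repeatable
picked = random.choice(emotions)     # one random element
sample = random.sample(emotions, 3)  # three distinct random elements
print(picked, sample)
```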
V. DIFFERENT EMOTIONS THAT CAN BE DETECTED FROM AN IMAGE OR VIDEO
A. Neutral
B. Happy
C. Anger
D. Disgust
E. Surprise
F. Fear
G. Sad
VI. STEPS INVOLVED TO PERFORM EMOTION DETECTION USING OPENCV-PYTHON
After successfully installing all the necessary software, we start by creating a dataset. We can create our own dataset by analysing a group of images, so that the result is accurate and there is enough data to extract sufficient information, or we can use an existing database.
The dataset is then organized into two directories: the first contains all the images, and the second contains the information about the different types of emotions.
After running the sample images through the Python code, all the output images are stored in another directory, sorted by emotion and its corresponding encoding.
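The sorting step can be sketched with the standard library alone; the file names and the prediction mapping below are hypothetical, standing in for the output of the classifier:

```python
import os
import shutil
import tempfile

# Hypothetical classifier output: image file name -> predicted emotion label.
predictions = {"img_001.png": "happy", "img_002.png": "sad", "img_003.png": "happy"}

src = tempfile.mkdtemp()  # directory holding the processed images
dst = tempfile.mkdtemp()  # directory to receive the sorted copies
for name in predictions:
    open(os.path.join(src, name), "w").close()  # placeholder files

# Copy each image into a sub-directory named after its predicted emotion.
for name, emotion in predictions.items():
    out_dir = os.path.join(dst, emotion)
    os.makedirs(out_dir, exist_ok=True)
    shutil.copy(os.path.join(src, name), out_dir)

print(sorted(os.listdir(dst)))                         # ['happy', 'sad']
print(sorted(os.listdir(os.path.join(dst, "happy"))))  # ['img_001.png', 'img_003.png']
```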
Different OpenCV classes can be used for emotion recognition, but we will mainly be using the Fisher Face classifier.
Extracting faces: OpenCV provides four predefined face classifiers, so to detect as many faces as possible, we apply these classifiers in sequence.
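One way this sequence could look with OpenCV's bundled Haar cascades is sketched below. The cascade file names are the stock ones shipped in OpenCV's data directory, and `subject.jpg` is a hypothetical input image; the detection parameters shown are common defaults, not values prescribed by this work.

```python
import cv2

# The four frontal-face Haar cascades that ship with OpenCV.
cascade_files = [
    "haarcascade_frontalface_default.xml",
    "haarcascade_frontalface_alt.xml",
    "haarcascade_frontalface_alt2.xml",
    "haarcascade_frontalface_alt_tree.xml",
]

def detect_faces(gray):
    """Try each classifier in turn; return the first non-empty detection."""
    for f in cascade_files:
        cascade = cv2.CascadeClassifier(cv2.data.haarcascades + f)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                         minNeighbors=5, minSize=(30, 30))
        if len(faces):
            return faces  # array of (x, y, w, h) rectangles
    return []

img = cv2.imread("subject.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in detect_faces(gray):
    # Crop the face and normalize its size, as required by the later steps.
    face = cv2.resize(gray[y:y + h, x:x + w], (350, 350))
```

Falling back through the cascades trades speed for recall: the stricter classifiers run first only if placed first, so the ordering above is one reasonable choice, not the only one.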
The dataset is split into a training set and a classification set. The training set is used to teach the classifier the types of emotions by extracting information from several images, and the classification set is used to estimate the classifier's performance.
For best results, the images should share exactly the same properties, i.e., size.
The subject in each image is analysed, converted to grayscale, cropped and saved to a directory.
Finally, we compile the training set from 80% of the data and classify the remaining 20% as the classification set. The process is repeated to improve efficiency.
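The 80/20 split can be sketched with the random module introduced earlier; the file names and labels below are made up for illustration:

```python
import random

# Hypothetical dataset: (image_path, emotion_label) pairs.
data = [("img_%03d.png" % i, "happy" if i % 2 else "sad") for i in range(100)]

random.seed(42)
random.shuffle(data)  # shuffle so the split is not biased by file order

cut = int(0.8 * len(data))
training_set = data[:cut]        # 80% used to train the classifier
classification_set = data[cut:]  # 20% held out to estimate accuracy

print(len(training_set), len(classification_set))  # 80 20
```

Repeating the shuffle-and-split with different seeds and averaging the resulting accuracies gives a more reliable performance estimate than a single split.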
Conclusion
1) The expected outcome of this project is improved accuracy in capturing the facial expression of a person using AI tools such as Python, machine learning, CNNs and OpenCV.
2) The main purpose of this project is to make a significant contribution by helping people recognise human facial expressions, through which human feelings can be easily understood.
3) Deep-learning classification has been successfully applied to many EEG tasks, including motor imagery, seizure detection, mental workload, sleep stage scoring, event-related potential, and emotion recognition tasks. The design of these deep-learning studies varied significantly in input formulation and network design.
4) Several public datasets were analysed in multiple studies, which allowed us to directly compare classification performances based on their design. Generally, CNNs, RNNs and DBNs outperformed other types of deep networks, such as SAEs and MLPNNs.
5) Hybrid designs incorporating convolutional layers with recurrent layers or restricted Boltzmann machines showed promise in classification accuracy and transfer learning when compared against standard designs.
6) We recommend more in-depth research into such designs, particularly the number and arrangement of the different layers, including RBMs, recurrent layers, convolutional layers, and fully connected layers.
References
[1] Murugappan, M., Rizon, M., Nagarajan, R., Yaacob, S., Zunaidi, I., Hazry, D.: Lifting scheme for human emotion detection using EEG. In: International Symposium on Information Technology, ITSim 2008, vol. 2 (2008)
[2] Plutchik, R.: Emotions and life: perspectives from psychology, biology, and evolution, 1st edn. American Psychological Association, Washington, DC (2003)
[3] Petrantonakis, P.C., Hadjileontiadis, L.J.: Emotion recognition from EEG using higher order crossings. IEEE Transactions on Information Technology in Biomedicine 14(2), 186–197 (2010)
[4] Canli, T., Desmond, J.E., Zhao, Z., Glover, G., Gabrieli, J.D.E.: Hemispheric asymmetry for emotional stimuli detected with fMRI. NeuroReport 9(14), 3233–3239 (1998)
[5] Chanel, G., Kronegg, J., Grandjean, D., Pun, T.: Emotion assessment: Arousal evaluation using EEG’s and peripheral physiological signals (2006)
[6] Ekman, P.: Basic emotions. In: Dalgleish, T., Power, M. (eds.) Handbook of Cognition and Emotion. Wiley, New York (1999)
[7] Grocke, D.E., Wigram, T.: Receptive Methods in Music Therapy: Techniques and Clinical Applications for Music Therapy Clinicians, Educators and Students, 1st edn. Jessica Kingsley Publishers (2007)