In this project, emotions are detected from facial expressions. These expressions are derived either from a live feed via the system's camera or from a pre-existing image available in memory. Recognition of human emotions is a vast field of study within the computer vision industry, on which considerable research has already been done. The work has been implemented using Python (3.10), the Open Source Computer Vision Library (OpenCV) and NumPy. The run-time video (testing dataset) is compared with the training dataset and the emotion is thus predicted. The target of this paper is to develop a system that can analyze images and run-time video and predict the expression of the person. The study shows that this procedure is workable and produces valid results. In this project we have improved the accuracy of the existing project by using various Python and deep-learning models.
I. INTRODUCTION TO IMAGE PROCESSING
In order to obtain an enhanced image and to extract useful information from it, the method of Image Processing is used. It is a very efficient way of converting a picture into its digital form and subsequently performing various operations on it. It is a method similar to signal processing, in which the input is a 2D image: a collection of numbers ranging from 0 to 255, each denoting the corresponding pixel value.
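As a minimal sketch (using NumPy, which this work adopts later), a grayscale image can be represented as a 2D array whose entries are pixel values in the range 0 to 255:

```python
import numpy as np

# A tiny 3x3 "image": each entry is a pixel intensity
# from 0 (black) to 255 (white)
img = np.array([[0, 128, 255],
                [64, 192, 32],
                [255, 0, 100]], dtype=np.uint8)

print(img.shape)             # dimensions of the image: (3, 3)
print(img.min(), img.max())  # pixel values stay within 0..255
```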
A. Conversion of Color Image to Gray Scale
There are two methods by which we can convert a color image to a grayscale image [8]:
1. Average Method
In this method, the mean of the three color channels present in a color image, i.e. Red, Green and Blue, is taken. Thus, we get
Grayscale = (R + G + B) / 3
But what sometimes happens is that instead of a grayscale image we get a blackish image. This is because in the converted image each of Red, Green and Blue contributes 33%.
Therefore, to solve this problem, we use the second method, called the Weighted or Luminosity Method.
2. Weighted or Luminosity Method
To solve the problem in the Average Method, we use the Luminosity Method. In this method, we decrease the contribution of the Red color, increase the contribution of the Green color, and give the Blue color a share in between these two.
Thus, by the equation [8],
Grayscale= ((0.3 * R) + (0.59 * G) + (0.11 * B)).
We use these weights because of the wavelength patterns of these colors: Blue has the smallest wavelength, while Red has the largest.
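Both conversion formulas can be sketched with NumPy (plain array arithmetic is used here to show the formulas directly; a real project would more commonly call OpenCV's cv2.cvtColor):

```python
import numpy as np

def average_gray(rgb):
    """Average method: Grayscale = (R + G + B) / 3."""
    return rgb.mean(axis=-1).astype(np.uint8)

def luminosity_gray(rgb):
    """Weighted (luminosity) method: 0.3*R + 0.59*G + 0.11*B."""
    weights = np.array([0.3, 0.59, 0.11])
    return (rgb @ weights).astype(np.uint8)

# A single pure-green pixel: the two methods disagree noticeably
pixel = np.array([[[0, 255, 0]]], dtype=np.float64)
print(average_gray(pixel)[0, 0])     # 85  = 255 / 3
print(luminosity_gray(pixel)[0, 0])  # 150 = 0.59 * 255, truncated
```

The luminosity weights give green the largest share, which is why the weighted result (150) is far above the plain average (85) for this pixel.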
II. REVIEW OF LITERATURE
Murugappan et al. [1] observe that in recent years the need for, and importance of, automatically recognizing emotions from EEG signals has grown with the increasing role of brain-computer interfaces.
Plutchik [2] provides a comprehensive textbook for instructors teaching a college-level or graduate course on emotion. In this volume, the author brings together materials that stimulate students' interests and serve as a focus for thought-provoking discussion and learning experiences.
Petrantonakis and Hadjileontiadis [3] describe electroencephalogram (EEG)-based emotion recognition as a relatively new field within the affective computing area, with challenging issues regarding the induction of emotional states and the extraction of features to achieve optimum performance.
Canli et al. [4] note that current brain models of emotion processing hypothesize that positive (or approach-related) emotions are lateralized towards the left hemisphere, whereas negative (or withdrawal-related) emotions are lateralized towards the right hemisphere.
Chanel et al. [5] assess the arousal dimension of human emotions from two different physiological sources: peripheral and electroencephalographic (EEG) signals from the brain.
Ekman [6] consolidates his previous writings about basic emotions (Ekman, 1984, 1992a, 1992b) and introduces some changes in his thinking.
Grocke and Wigram [7] describe the practical use of receptive (listening) methods and techniques in music therapy clinical practice and research, including relaxation with music for teenagers and adults, the use of visualisation and imagery, music and collage, song-lyric discussion, vibroacoustic applications, music and movement techniques, and different types of aesthetic listening to music.
III. INTRODUCTION TO OPENCV
OpenCV is the Open Source Computer Vision Library. It is a free, extensive library consisting of over 2500 algorithms specifically designed to carry out Computer Vision and Machine Learning related projects. These algorithms can be used for different tasks such as Face Recognition, Object Identification, Camera Movement Tracking, Scene Recognition, etc. It has a large community, with an estimated 47,000 people who are active contributors to this library. Its usage extends to various companies, both private and public.
A new feature called GPU Acceleration was added to the preexisting libraries. This feature works with almost every operation, even though it is not completely mature yet. The GPU module is implemented using CUDA and thus takes advantage of various libraries such as NPP, i.e. NVIDIA Performance Primitives. Using the GPU is helpful because anyone can use the GPU feature without having strong knowledge of GPU coding. In the GPU module, we cannot change the features of an image directly; rather, we have to copy the original image and then edit the copy.
IV. STEPS INVOLVED IN INSTALLING PYTHON 2.7 AND THE NECESSARY PACKAGES
Let us begin with a sample image in either .jpg or .png format and apply the method of image processing to detect the emotion of the subject in the sample image. (The word 'subject' refers to any living being from which emotions are extracted.)
Importing Libraries: For successful implementation of this project, the following packages for Python 2.7 must be downloaded and installed: Python 2.7.x, NumPy, Glob and Random. Python will be installed in the default location, the C drive in this case. Open Python IDLE, import all the packages and begin working.
NumPy: NumPy is one of the libraries of Python used for complex numerical computation. It is used for the implementation of multidimensional arrays and provides various mathematical functions to process them [9].
Each dimension of an array declared in a program is called an axis.
The number of axes present in an array is known as its rank.
For example, A = [1, 2, 3, 4, 5].
The given array A has 5 elements and rank 1, because of its one-dimensional property.
Let’s take another example for better understanding.
B= [[1,2,3,4], [5,6,7,8]]
In this case the rank is 2 because it is a two-dimensional array. The first dimension has 2 elements, and the second dimension has 4 elements. [10]
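The two example arrays can be checked directly with NumPy, where ndim gives the rank and shape gives the number of elements along each axis:

```python
import numpy as np

A = np.array([1, 2, 3, 4, 5])
B = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])

print(A.ndim, A.shape)  # 1 (5,)   -> rank 1, five elements
print(B.ndim, B.shape)  # 2 (2, 4) -> rank 2: axis 0 has 2 elements, axis 1 has 4
```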
a. Glob: Based on the rules specified by the Unix shell, the Glob module perceives a pattern and, with regard to it, generates matching file names. It generates the full path name.
Wildcards
These wildcards are used to perform various operations on files or on a component of a directory. Only two of the various functional wildcards are useful here [5]: the asterisk (*), which matches any run of characters, and the question mark (?), which matches exactly one character.
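A minimal sketch of the two wildcards with Python's glob module (the file names are hypothetical, created in a throwaway temporary directory purely for the demonstration):

```python
import glob
import os
import tempfile

# Create a throwaway directory with a few sample files
d = tempfile.mkdtemp()
for name in ("happy1.jpg", "happy2.jpg", "sad1.png"):
    open(os.path.join(d, name), "w").close()

# '*' matches any run of characters
jpgs = sorted(glob.glob(os.path.join(d, "*.jpg")))
# '?' matches exactly one character
ones = sorted(glob.glob(os.path.join(d, "happy?.jpg")))

print([os.path.basename(p) for p in jpgs])  # ['happy1.jpg', 'happy2.jpg']
print([os.path.basename(p) for p in ones])  # ['happy1.jpg', 'happy2.jpg']
```

Note that glob returns full path names, matching the behaviour described above.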
b. Random: The Random module picks or chooses a random number or element from a given list of elements. This module provides the functions that give access to such operations.
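For example, random.choice picks one element from a list (the emotion labels below are taken from Section V) and random.shuffle reorders a list in place:

```python
import random

emotions = ["neutral", "happy", "anger", "disgust", "surprise", "fear", "sad"]

random.seed(0)                  # fixed seed so the sketch is repeatable
pick = random.choice(emotions)  # one randomly chosen label
random.shuffle(emotions)        # reorder the whole list in place

print(pick)           # one of the seven labels
print(len(emotions))  # shuffling keeps all 7 labels
```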
V. DIFFERENT EMOTIONS THAT CAN BE DETECTED FROM A VIDEO
Neutral
Happy
Anger
Disgust
Surprise
Fear
Sad
VI. STEPS TO USE EMOTION RECOGNITION USING OPENCV-PYTHON:
After successfully installing all the required software, we must start by creating a dataset. Here, we will create our own dataset by analysing a group of images, so that our result is accurate and there is enough data to extract sufficient information. Alternatively, we can use an existing database.
The dataset is then organised into two separate directories. The first directory will contain all the photographs, and the second directory will contain all the information about the various types of emotions.
After running the sample images through the Python code, all the output images will be stored in another directory, sorted in the order of the emotions and their subsequent encoding.
Different kinds of classifiers can be used in OpenCV for emotion recognition, but we will mainly be using the FisherFace one.
Extracting Faces: OpenCV provides four predefined classifiers, so to detect as many faces as possible, we use these classifiers in a sequence.
The dataset is further divided into a training set and a classification set. The training set is used to learn the types of emotions by extracting information from several images, and the classification set is used to estimate the classifier's performance.
For best results, the images should have exactly the same properties, i.e., size.
The subject in each image is analysed, converted to grayscale, cropped and saved to a directory.
Finally, we compile the training set using 80% of the data and classify the remaining 20% as the classification set. We repeat the method to improve efficiency.
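The 80/20 split described above can be sketched in plain Python (the image file names are hypothetical; in the real project the training part would then be fed to OpenCV's FisherFace recognizer):

```python
import random

def split_dataset(files, train_fraction=0.8, seed=42):
    """Shuffle the files and split them into training and classification sets."""
    files = list(files)
    random.Random(seed).shuffle(files)  # seeded so the sketch is repeatable
    cut = int(len(files) * train_fraction)
    return files[:cut], files[cut:]     # (training set, classification set)

# Hypothetical image file names
images = ["img%02d.jpg" % i for i in range(10)]
train, classify = split_dataset(images)
print(len(train), len(classify))  # 8 2
```

Repeating the split with different seeds and averaging the classification accuracy gives a more reliable performance estimate than a single run.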
Conclusion
1) The expected output of this project is improved accuracy in capturing the facial expression of a person using AI tools such as Python, machine learning, CNNs, OpenCV, etc.
2) The main purpose of this project is to make a significant contribution to the field and to help people recognise facial expressions, so that human feelings can be easily understood.
3) Deep-learning classification has been successfully applied to several EEG tasks, including motor imagery, seizure detection, mental workload, sleep stage scoring, event-related potential, and emotion recognition tasks. The design of these deep-network studies varied significantly in input formulation and network architecture.
4) Several public datasets were analysed in multiple studies, which allowed us to directly compare classification performance based on their design. Generally, CNNs, RNNs, and DBNs outperformed other styles of deep networks, such as SAEs and MLPNNs.
5) Hybrid designs incorporating convolutional layers with recurrent layers or restricted Boltzmann machines showed promise in classification accuracy and transfer learning when compared against standard designs.
6) We recommend more in-depth research into these combinations, particularly the number and arrangement of the various layers, including RBMs, recurrent layers, convolutional layers, and fully connected layers.
References
[1] Murugappan, M., Rizon, M., Nagarajan, R., Yaacob, S., Zunaidi, I., Hazry, D.: Lifting scheme for human emotion recognition using EEG. In: International Symposium on Information Technology, ITSim 2008, vol. 2 (2008)
[2] Plutchik, R.: Emotions and life: perspectives from psychology, biology, and evolution, 1st edn. American Psychological Association, Washington, DC (2003)
[3] Petrantonakis, P.C., Hadjileontiadis, L.J.: Emotion recognition from EEG using higher order crossings. IEEE Transactions on Information Technology in Biomedicine 14(2), 186–197 (2010)
[4] Canli, T., Desmond, J.E., Zhao, Z., Glover, G., Gabrieli, J.D.E.: Hemispheric asymmetry for emotional stimuli detected with fMRI. NeuroReport 9(14), 3233–3239 (1998)
[5] Chanel, G., Kronegg, J., Grandjean, D., Pun, T.: Emotion assessment: Arousal evaluation using EEG’s and peripheral physiological signals (2006)
[6] Ekman, P.: Basic emotions. In: Dalgleish, T., Power, M. (eds.) Handbook of Cognition and Emotion. Wiley, New York (1999)
[7] Grocke, D.E., Wigram, T.: Receptive Methods in Music Therapy: Techniques and Clinical Applications for Music Therapy Clinicians, Educators and Students, 1st edn. Jessica Kingsley Publishers (2007)