Sentiment is one of the few terms in the English language that lacks a precise definition yet is readily understood. The inability to recognise facial expressions of emotion greatly hampers our understanding of others in society. We have opted to investigate video inputs and to build a model that compiles data from these sources and presents its findings in an understandable manner. Modern artificial intelligence systems urgently require human sentiment detection so that they can mimic and decipher facial expressions. Such a capability could support informed decisions in areas such as determining user intent, targeting advertising, and identifying security-related threats. The human eye can easily identify emotions in images or videos, but machines find this far more challenging, necessitating the use of image processing techniques for feature extraction.
I. INTRODUCTION
The term sentiment analysis, often known as opinion mining, is frequently misunderstood. Fundamentally, it is a method for determining the intensity of the tone behind a series of words, employed to comprehend the mental and emotional states expressed in online or electronic form. The task at hand is to classify a person's mood and feelings from a given input into groups such as happy, sad, and angry. Many applications could benefit greatly from automatic sentiment estimation. For instance, an online shopping system can use sentiment analysis to categorise a customer's emotional state and present offers that are more appealing given that state of mind. It can also be utilised in healthcare applications; for example, a patient's mental health could be monitored and a suitable course of treatment or therapy suggested. It is helpful in other fields as well, such as educational technology, and it is particularly valuable in online social networking, where it gives a general idea of how widely popular certain topics are. Making machines behave as closely as feasible to real people has attracted increasing interest, both to give their actions a hint of human emotion and to make them mimic human behaviour. Facial expression analysis is frequently employed in telecommunications, behavioural science, video games, and other human-computer interaction systems that rely on facial emotion decoding for communication. It has been stated that for a true human-computer connection, the computer and the person must interact naturally. People convey their feelings and views in many ways, and the most direct way to detect human emotion is to look for the visual cues a person exhibits when experiencing that feeling. Sentiment analysis therefore has a wide range of applications and great promise. This project describes the creation of such a system, which consists of three primary subsystems: the first gathers data on various moods, the second uses deep learning to train the prediction model, and the third uses the trained model to forecast the mood.
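As an illustration of the second subsystem (training the prediction model with deep learning), the sketch below defines a small convolutional network in Keras for classifying face crops into emotion categories. The 48x48 grayscale input size, the seven-class label set, and the layer sizes are assumptions made for illustration, not the exact architecture used in this project.

# Minimal sketch of an emotion-classification CNN in Keras.
# Assumption: 48x48 grayscale face crops and 7 emotion classes.
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # e.g. angry, disgust, fear, happy, sad, surprise, neutral

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=NUM_CLASSES):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # helps limit over-fitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training then reduces to a single call on pre-processed face crops, e.g.:
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=30)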
II. LITERATURE SURVEY
Sentiment Analysis of Images using Machine Learning Techniques [1]. Several machine learning and deep learning methods are applied, including SVM, Naive Bayes, Haar cascade, LBPH, and CNN. Pictures attached to each user's comment are used to classify the comment as neutral, negative, or positive. The approach gives an accuracy of 80.46%.
A Convolutional Neural Network (CNN) Approach to Detect Face Using TensorFlow and Keras [2]. A deep convolutional neural network extracts features from input photographs. The CNN is implemented using Keras, while D-lib and OpenCV are used to detect and align faces in the input images. The effectiveness of face recognition is measured on a custom dataset; a KNN classifier achieves an accuracy of 95% and an SVM classifier 97%.
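The general pipeline described in [2], a CNN used as a feature extractor followed by KNN or SVM classification, can be illustrated roughly as follows. The embedding dimensionality, class count, and classifier settings here are placeholders, and the random arrays only stand in for embeddings produced by a real CNN; this is not the configuration used by the cited authors.

# Rough sketch: classifying CNN face embeddings with KNN and SVM (scikit-learn).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 128))   # placeholder 128-D face embeddings
labels = rng.integers(0, 5, size=200)      # placeholder identity labels

x_train, x_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(x_train, y_train)
svm = SVC(kernel="linear").fit(x_train, y_train)

print("KNN accuracy:", knn.score(x_test, y_test))
print("SVM accuracy:", svm.score(x_test, y_test))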
Facial Emotion Detection Using Neural Network [3]. Machine learning techniques, deep learning models, and neural network algorithms are all used for emotion recognition. The eye and mouth regions of a face are located using the Viola-Jones method. The approach gives an accuracy of 81%.
Emotion Recognition Using Convolutional Neural Network (CNN) [4]. KNN and CNN approaches, based on deep learning, are used to predict human facial expressions (happy, sad, surprised, and disgusted) in real time. The paper reports an accuracy of about 85%.
Facial Emotion Detection Using Convolutional Neural Network [5]. Facial emotion detection is performed using a CNN together with geometric features derived from the shape of the face and of regions such as the brows, mouth, nose, lips, and eyes. The accuracy is only about 62%.
Optimal Facial Feature Based Emotional Recognition Using Deep Learning Algorithm [6]. Deep learning algorithms such as CNN, SVM, and ECNN are used to recognise emotions such as Neutral, Happy, Sad, Surprise, Cry, and Horror. The work uses a geometric feature-based approach on RGB images. Accuracy differs for each algorithm: CNN 79.98%, SVM 85-90%, ECNN 97%.
Emotion Recognition from Facial Expression using Deep Learning [7]. Deep recurrent networks, namely LSTM and bi-directional LSTM, are modelled on audio-visual features with a spectrogram representation for facial sentiment recognition. An accuracy of about 39% is achieved.
Emotion Recognition and Drowsiness Detection using Digital Image Processing and Python [8]. To anticipate emotions and sleepiness, artificial intelligence and digital image processing tools (OpenCV, Dlib, and the SciPy package together with a CNN) are used. Accuracy is about 50% to 60%.
Driver Drowsiness Detection [9]. Several methodologies are compared, such as PERCLOS, CAMShift, Haar training, and the Viola-Jones algorithm.
Driver Drowsiness Detection Using Machine Learning [10]. From a series of webcam images, OpenCV extracts the face and the region of interest (ROI), namely the eyes. The Eye Aspect Ratio (EAR) method is employed. Using EAR together with machine learning techniques, a performance accuracy of roughly 90-95% was attained.
Real Time Drivers Drowsiness Detection and Alert System by Measuring EAR [11]. The algorithm used here is the Eye Aspect Ratio (EAR); the authors report good accuracy.
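The EAR used in [10] and [11] is a simple ratio of vertical to horizontal eye-landmark distances. A minimal sketch of the standard formulation is shown below; the six-point landmark ordering and the 0.25 drowsiness threshold are common conventions assumed here for illustration, not values taken from the cited papers.

# Minimal sketch of the Eye Aspect Ratio (EAR) used for drowsiness detection.
# Assumes the usual six-point eye landmark ordering p1..p6 (outer corner,
# two upper points, inner corner, two lower points); the 0.25 threshold is
# only an illustrative value.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of shape (6, 2) holding (x, y) landmark coordinates."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])   # ||p2 - p6||
    vertical_2 = np.linalg.norm(eye[2] - eye[4])   # ||p3 - p5||
    horizontal = np.linalg.norm(eye[0] - eye[3])   # ||p1 - p4||
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

EAR_THRESHOLD = 0.25  # EAR below this for several consecutive frames -> drowsy

example_eye = np.array([[0, 3], [2, 5], [4, 5], [6, 3], [4, 1], [2, 1]], float)
print("EAR:", eye_aspect_ratio(example_eye))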
III. PROBLEM STATEMENT
Sentiment Analysis Using Deep Learning seeks to create an application that performs real-time sentiment analysis on live camera or webcam input to detect a person's sentiment. The programme will report results in the categories Angry, Happy, Sad, Surprise, Neutral, and Drowsiness. It will use facial expression recognition algorithms to analyse and classify emotions accurately in real time. The system will handle variations in facial expressions, remain robust in varied contexts, and give quick and precise results.
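A rough outline of such a real-time loop, using an OpenCV Haar cascade for face detection and a trained Keras classifier for the emotion labels, is sketched below. The model file name "emotion_model.h5", the label order, and the 48x48 pre-processing are hypothetical placeholders rather than the exact configuration of this project.

# Hedged sketch of a real-time webcam sentiment loop: Haar cascade face
# detection followed by a trained Keras classifier.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = ["Angry", "Happy", "Sad", "Surprise", "Neutral"]   # assumed order
model = load_model("emotion_model.h5")                      # assumed trained model
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        label = LABELS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Sentiment", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()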
IV. PROPOSED METHODOLOGY
Paul Ekman first used the term FER (facial expression recognition) in the mid-1980s. Since then, researchers have employed numerous machine learning methods to identify the seven fundamental emotions, including random forest classifiers and artificial neural networks, and have reported positive and fruitful outcomes. Today, automated human emotion recognition is crucial for security and surveillance applications, and researchers continue to examine this area in order to enhance its performance. While implementing FER, a number of issues such as occlusion in datasets and over-fitting of models must be addressed. To the authors' knowledge and from the literature reviewed, there is no survey that fully compares FER techniques from an AI standpoint. Motivated by this, we present a thorough study of FER employing AI techniques, in which we investigate the most cutting-edge machine learning and deep learning (DL) methodologies along with their benefits and drawbacks.
V. CONCLUSION
In this project, we used facial recognition to perform sentiment analysis on human faces. Sentiment analysis has many uses, including security, analysing employee mindsets, determining patients' states of mind, and performing investigations.
It can be utilised for a variety of examination purposes. We employed Haar cascades for feature extraction and classification, which offers a varied backdrop for future work. Eight sentiment characteristics ("Angry", "Happy", "Sad", "Surprise", "Neutral", "Disgust", "Fear", and "Drowsiness") can be recovered from human faces after implementation, and additional features may be extracted in the future. There is a growing demand for consumer sentiment data so that brands can market and build their businesses; having access to this data will give brands an advantage.
Limitations and Future Scope:
1) Emotion analysis has practical applications and is not restricted to enthusiast use.
2) Increasing the precision of recommendation engines like Netflix, YouTube, etc.
3) Automating recommendation mechanisms by analysing content on blogs, streaming services, and other platforms where recommendations are needed.
4) Improving user interfaces for virtual reality (VR) and augmented reality (AR) products.
REFERENCES
[1] Y. Gherkar, P. Gujar, A. Gaziyani and S. Kadu, "Sentiment Analysis of Images using Machine Learning Techniques," ITM Web of Conferences, vol. 44, 2022.
[2] R. Jose, "A Convolutional Neural Network (CNN) Approach to Detect Face Using TensorFlow and Keras," International Journal of Emerging Technologies and Innovative Research, ISSN 2349-5162, 2019.
[3] M. F. Ali, M. Khatun and N. A. Turzo, "Facial Emotion Detection Using Neural Network," International Journal of Scientific and Engineering Research, 2020.
[4] N. S. Badrulhisham and N. A. Mangshor, "Emotion Recognition Using Convolutional Neural Network (CNN)," Journal of Physics: Conference Series, vol. 1962, no. 1, 2021.
[5] P. R. Dachapally, "Facial Emotion Detection Using Convolutional Neural Networks and Representational Autoencoder Units," arXiv preprint arXiv:1706.01509, 2017.
[6] T. K. Arora, P. K. Chaubey, M. S. Raman, B. Kumar, Y. Nagesh, P. K. Anjani, H. M. Ahmed, A. Hashmi, S. Balamuralitharan and B. Debtera, "Optimal Facial Feature Based Emotional Recognition Using Deep Learning Algorithm," Computational Intelligence and Neuroscience, 2022.
[7] N. Roopa, "Emotion Recognition from Facial Expression Using Deep Learning," International Journal of Engineering and Advanced Technology (IJEAT), ISSN 2249-8958, 2019.
[8] R. Pramoda, P. S. Arun, S. A. B and B. Bharath, "Emotion Recognition and Drowsiness Detection using Digital Image Processing and Python," International Journal of Scientific Research in Science and Technology, vol. 8, no. 3, pp. 1037-1043, 2021.
[9] V. Kiran, R. Raksha, R. Anisoor, K. Varsha and N. Nagamani, "Driver Drowsiness Detection," International Journal of Engineering Research & Technology (IJERT), vol. 8, no. 15, pp. 33-35, 2020.
[10] M. Rode, S. AnirudhBethi, B. Kondaveti, T. Gadipally and N. Katroth, "Driver Drowsiness Detection Using Machine Learning," International Journal for Research in Applied Science & Engineering Technology (IJRASET), vol. 10, no. 11, pp. 1117-1120, 2022.
[11] A. Goraya and G. Singh, "Real Time Drivers Drowsiness Detection and Alert System by Measuring EAR," International Journal of Computer Applications, vol. 181, no. 25, pp. 38-45, 2018.