Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Janhavi Deshpande
DOI Link: https://doi.org/10.22214/ijraset.2023.57029
Automated emotion detection is a diverse field with applications ranging from software engineering to web customization, education, and beyond. Several methods and approaches have been devised for automatic emotion recognition, taking inspiration from human emotion recognition. I have studied and discussed categorical and dimensional models (the latter subdivided into the Circumplex, PANA, vector, and Plutchik's models) for defining a myriad of emotions under varied circumstances. I have stratified the approaches used in emotion detection into three groups: lexicon-based, statistical, and hybrid methods. I have also presented, in a systematized way, the different types of classifiers and classes of neural networks that fall under statistical methods. I have observed that Support Vector Machines provide the most accurate and clear-cut outcomes.
I. INTRODUCTION
Humans vary immensely when it comes to identifying and recognizing emotions. This ability, along with the biological and physiological processes involved, is known as emotional perception. Emotional perception is subject to environmental influence and is believed to be a crucial component of social interactions. It is because of this ability that we can understand a person’s internal state and feelings and communicate efficiently. Though emotions are perceived in many ways (auditory, visual, olfactory, gustatory, and physiological sensory processes), the primary mode of perception is the visual system. People receive information about emotions through emotional cues such as facial expressions and bodily postures; the face in particular is said to provide cues about one’s subjective emotional state. To achieve accurate results, automated systems draw on a multitude of methods, bringing together knowledge and concepts from disciplines such as machine learning, speech processing, and signal processing.
Combining the analysis of human expressions from multimodal sources such as audio, video, and physiological signals leads to improved emotion recognition [1]. According to psychological theory, there are six basic emotions: happiness, sadness, fear, disgust, anger, and surprise, in addition to the neutral facial expression [2]. Then there are compound emotions, which combine primary and complementary emotions (e.g., happily sad, sadly angry). The classification of facial expressions depends on the algorithm used, with separate algorithms for simple and compound emotions. Knowledge-based techniques, statistical methods, and hybrid approaches are the three main families of techniques that can be used to classify these emotions [3].
To detect certain emotion types, knowledge-based or lexicon-based techniques make use of domain knowledge and the semantic and syntactic characteristics of a language. These can be further classified into corpus-based and dictionary-based approaches [4]. Then there are statistical approaches, in which different supervised and unsupervised machine learning algorithms are implemented; these yield more precise results than the former [3]. Support Vector Machines (SVM), Naïve Bayes, and Maximum Entropy are all examples of machine learning algorithms [5]. Deep learning, which can be applied in supervised as well as unsupervised settings, is extensively used in emotion recognition [6-8]. Some commonly used deep learning algorithms are Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Extreme Learning Machine (ELM), which are different architectures of Artificial Neural Networks (ANN) [5]. Finally, hybrid approaches combine knowledge-based techniques and statistical approaches, drawing complementary characteristics from the two [3].
The main objective of this paper is to present a precise overview of the different models used to define emotions, as well as the approaches, both lexicon-based and machine-learning-based, used to detect them.
The highlights of the paper are as follows:
II. CLASSIFICATION
Emotions have been largely defined using two types of models: categorical and dimensional. Let us review these quickly.
A. Categorical Models
Categorical models treat emotions as discrete categories. According to the theory of discrete emotions, every human being has a small number of core emotions, which are the same for everyone regardless of cultural differences or ethnicity. These basic or core emotions are believed to be distinguishable by a person’s biological processes and facial expressions, hence the name ‘discrete’ [9]. Most basic-emotion theories share one theme: there should be functional signatures that help us distinguish and differentiate between these emotions. That is, by looking at a person’s brain activity and/or physiology, we should be able to make out the person’s feelings [5].
The initial theory proposed in [2] identified six basic emotions: anger, disgust, fear, happiness, sadness, and surprise. Later, an expanded list of basic emotions was proposed [10], not all of which are encoded in facial expressions, including a plethora of positive and negative emotions such as amusement, contempt, contentment, embarrassment, excitement, guilt, pride in achievement, satisfaction, relief, sensory pleasure, and shame.
B. Dimensional Models
Dimensional models define emotions according to where they lie in two or three dimensions. They adopt two dimensions: valence, or hedonic tone, which specifies whether affects/feelings are positive, negative, or neutral [11], and arousal, the psychological and physiological state of being awoken [12]. The most common dimensional models are reviewed below.
C. Circumplex Model
According to this model, emotions are distributed in a circular 2-D space, as shown in Figure 1, in which arousal and valence represent the vertical and horizontal axes respectively. Neutral valence and a medium level of arousal sit at the centre of the circle [13]. The developers of this model say that it represents the core, most elementary emotions or feelings. They typically examine emotional facial expressions, emotional words, and affective states [14].
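As a concrete illustration of the two axes, the sketch below places a few emotions in the valence/arousal plane and names the quadrant each falls into. The coordinates are made-up illustrative values, not empirical measurements.

```python
# Toy sketch of the circumplex model: place an emotion by its
# (valence, arousal) coordinates and name the quadrant it falls in.

def circumplex_quadrant(valence: float, arousal: float) -> str:
    """Map a point in the 2-D valence/arousal space to a quadrant label."""
    if valence == 0 and arousal == 0:
        return "neutral"          # centre of the circle
    if valence >= 0:
        return "excited/elated" if arousal >= 0 else "calm/content"
    return "tense/angry" if arousal >= 0 else "sad/depressed"

# Hypothetical coordinates, each in [-1, 1]
emotions = {"excited": (0.8, 0.9), "content": (0.7, -0.5),
            "angry": (-0.8, 0.8), "sad": (-0.7, -0.6)}

for name, (v, a) in emotions.items():
    print(name, "->", circumplex_quadrant(v, a))
```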
III. EMOTION DETECTION APPROACHES
Approaches to emotion detection can be chiefly grouped into the following categories:
A. Knowledge-Based Approaches
Also known as lexicon-based approaches, these rely on a sentiment lexicon, a collection of precompiled and known sentiment terms. During the emotion classification process, different knowledge-based resources are put to use, such as WordNet [19], SenticNet [20], ConceptNet [21], and EmotiNet [22].
These knowledge-based resources can further be classified as dictionary-based and corpus-based. Dictionary-based approaches expand the initial list of opinions or emotions by searching for the synonyms and antonyms of an opinion or a seed word in a dictionary. Alternatively, corpus-based approaches expand the initial database by finding other words with context-specific characteristics similar to a given seed list of opinions or emotion words in a large corpus. The corpus-based approach is further classified into statistical and semantic approaches [9].
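The dictionary-based expansion can be sketched in a few lines, using a toy stand-in for a real dictionary such as WordNet: synonyms inherit the seed word's polarity and antonyms receive the opposite one.

```python
# Minimal sketch of dictionary-based lexicon expansion from a seed list.
# TOY_DICT is an illustrative stand-in for a real lexical resource.

TOY_DICT = {
    "happy":  {"synonyms": ["glad", "joyful"], "antonyms": ["sad"]},
    "glad":   {"synonyms": ["pleased"],        "antonyms": []},
    "sad":    {"synonyms": ["unhappy"],        "antonyms": ["happy"]},
}

def expand(seeds: dict) -> dict:
    """seeds maps word -> polarity (+1 / -1); returns the expanded lexicon."""
    lexicon = dict(seeds)
    frontier = list(seeds)
    while frontier:
        word = frontier.pop()
        entry = TOY_DICT.get(word, {})
        for syn in entry.get("synonyms", []):
            if syn not in lexicon:
                lexicon[syn] = lexicon[word]       # same polarity
                frontier.append(syn)
        for ant in entry.get("antonyms", []):
            if ant not in lexicon:
                lexicon[ant] = -lexicon[word]      # opposite polarity
                frontier.append(ant)
    return lexicon

print(expand({"happy": +1}))
```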
The statistical approach is used for finding co-occurrence patterns or seed opinion words. This could be done by deriving posterior polarities by using the co-occurrence of adjectives in a corpus [23]. If the corpus is not large enough, it poses the problem of unavailability of root words. This problem is overcome by using the entire set of indexed documents on the web as a corpus for dictionary construction [19].
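The co-occurrence idea can be sketched as follows; the corpus and seed words are toy data, and a real system would compute PMI-style statistics over a much larger corpus.

```python
# Sketch of the statistical corpus-based idea: an unknown adjective
# inherits polarity from how often it co-occurs (in the same sentence)
# with known positive vs negative seed words.

corpus = [
    "the film was superb and enjoyable",
    "a superb and delightful performance",
    "the plot was dreadful and boring",
    "boring dreadful dialogue throughout",
]
POS, NEG = {"enjoyable", "delightful"}, {"boring"}

def polarity(word: str) -> int:
    pos = neg = 0
    for sentence in corpus:
        tokens = set(sentence.split())
        if word in tokens:
            pos += len(tokens & POS)
            neg += len(tokens & NEG)
    return (pos > neg) - (pos < neg)   # +1 positive, -1 negative, 0 unknown

print(polarity("superb"), polarity("dreadful"))
```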
The semantic approach makes use of different principles to compute the similarity between words and assigns sentiment values directly, giving similar sentiment values to semantically close words; WordNet is one example of such a resource [9].
B. Statistical Approaches
As illustrated in [9], statistical methods involve the use of numerous supervised and unsupervised models. Examples of supervised machine learning algorithms include decision-tree classifiers, rule-based classifiers, probabilistic classifiers (like Naïve Bayes, Bayesian Networks, Maximum Entropy) as well as Linear classifiers (like Support Vector Machines (SVM) and neural networks).
1. Decision Tree Classifiers
This is a supervised learning approach employed regularly in data mining, statistics, and machine learning. The training data space is decomposed hierarchically, with a condition or predicate on an attribute value used to divide the data; this predicate is typically the presence or absence of one or more words. Other kinds of predicates depend on the similarity of documents to correlated sets of terms and may be used for further division of documents [9]. There are several kinds of splits, such as the single-attribute split, the similarity-based multi-attribute split, and the discriminant-based multi-attribute split [9].
There are two types of decision trees: classification and regression tree models [9].
a. Classification tree models: Classification trees are those tree models where the variable can take a discrete set of values, i.e., Classification analysis is done when the predicted outcome to which the data belongs is discrete.
b. Regression tree models: In regression tree models, the target variable can take continuous values, which means that this type of analysis is done when the predicted outcome can be considered as a real number.
One of the biggest advantages of this method is that it is highly intelligible: simple to understand and interpret. It performs well with large datasets, provides accurate results with flexible modelling, and has built-in feature selection [9].
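A hand-rolled sketch of the simplest predicate named above, a single-attribute split on word presence; the words and labels are illustrative assumptions, not a learned tree.

```python
# Toy decision tree for emotion labels: each internal node tests the
# presence of one word (a single-attribute split), and leaves carry labels.

def classify(document: str) -> str:
    tokens = set(document.lower().split())
    if "love" in tokens:            # root split: presence of "love"
        return "joy"
    if "hate" in tokens:            # second split: presence of "hate"
        return "anger"
    return "neutral"                # leaf reached with no predicate firing

print(classify("I love this song"))
print(classify("I hate waiting"))
```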
2. Rule-based Classifiers
As described in [9], rule-based classifiers are those that identify, evolve, or learn rules to store, manipulate, or apply knowledge; in these classifiers, the data space is modelled with a set of rules. Rule-based approaches include artificial immune systems, learning classifier systems, association rule learning, or any other method that encodes contextual knowledge and relies on a set of rules. Although these rules can be generated using several criteria, the two most frequently used are support and confidence. Support refers to the number of instances in the training dataset pertinent to the rule; confidence refers to the conditional probability that the right-hand side of the rule is satisfied when the left-hand side is satisfied.
The fundamental difference between decision-tree and rule-based classifiers is that a decision tree imposes a strict hierarchical partitioning of the data space, whereas rule-based classifiers allow overlaps in the decision space [9].
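The two rule-selection criteria can be computed directly. In this sketch, a rule of the form "if the document contains word w, then the label is l" is scored over a toy training set; the data are illustrative.

```python
# Support = number of training instances matching the rule's left-hand side.
# Confidence = P(label | word present), estimated from those instances.

train = [("great fun ride", "positive"),
         ("great acting overall", "positive"),
         ("great idea poor execution", "negative"),
         ("dull and slow", "negative")]

def support_confidence(word: str, label: str):
    matches = [lab for text, lab in train if word in text.split()]
    support = len(matches)
    confidence = matches.count(label) / support if support else 0.0
    return support, confidence

print(support_confidence("great", "positive"))
```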
3. Probabilistic Classifiers
Probabilistic classifiers involve the usage of mixture models, wherein the mixture model assumes that each class forms a component of the mixture, with each mixture model being a generative model. Hence, they are also called generative classifiers [9].
a. Bayesian Network
It is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph. It is also known as a Bayes net, Bayes network, belief network, or decision network. These networks are ideal for taking an event that has already occurred and predicting the likelihood that it was due to any of several possible known causes [24,25]. They perform three main inference tasks: inferring unobserved variables and answering probabilistic queries about them, learning the parameters of the conditional distributions, and learning the structure of the network itself.
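The diagnostic use described above can be sketched with plain Bayes' rule over two hypothetical causes; the priors and likelihoods are made-up illustrative numbers.

```python
# Given an observed event, infer which of several known causes most
# likely produced it: P(cause | event) ∝ P(cause) * P(event | cause).

priors      = {"spam": 0.3, "ham": 0.7}   # P(cause), assumed
likelihoods = {"spam": 0.8, "ham": 0.1}   # P(event | cause), assumed

evidence = sum(priors[c] * likelihoods[c] for c in priors)        # P(event)
posterior = {c: priors[c] * likelihoods[c] / evidence for c in priors}

print(posterior)   # P(cause | event) for each cause
```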
b. Naïve Bayes
These are simple but highly scalable probabilistic classifiers. A Naïve Bayes classifier evaluates the posterior probability of a class based on the distribution of words in the document, under the assumption that the features are independent. It uses Bayes’ theorem to predict whether a given feature set belongs to a particular label [9].
They are commonly used for text classification tasks such as spam filtering and sentiment analysis.
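A toy Naïve Bayes sentiment classifier, sketching the posterior computation described above with add-one (Laplace) smoothing; the training data are illustrative.

```python
# Posterior ∝ prior × Π P(word | class), computed in log space.

import math
from collections import Counter

train = [("i love this movie", "pos"), ("wonderful happy story", "pos"),
         ("i hate this movie", "neg"), ("awful sad story", "neg")]

classes = {"pos", "neg"}
counts = {c: Counter() for c in classes}   # word counts per class
docs = Counter()                           # document counts per class
for text, c in train:
    docs[c] += 1
    counts[c].update(text.split())
vocab = {w for c in classes for w in counts[c]}

def predict(text: str) -> str:
    def log_post(c):
        total = sum(counts[c].values())
        score = math.log(docs[c] / sum(docs.values()))   # log prior
        for w in text.split():
            # add-one smoothing keeps unseen words from zeroing the product
            score += math.log((counts[c][w] + 1) / (total + len(vocab)))
        return score
    return max(classes, key=log_post)

print(predict("i love this story"))
```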
c. Multinomial Logistic Regression (Maximum Entropy)
This technique is known by a diverse set of names, including multiclass LR, polytomous LR, multinomial logit (mlogit), and maximum entropy (MaxEnt). This type of model is used when the dependent variable has three or more possible outcomes [9].
A key highlight of this technique is that, unlike Naïve Bayes, it does not assume that the features are independent of one another.
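The core of a multinomial logistic regression prediction is a softmax over per-class linear scores; the scores below are made-up assumptions rather than the output of a trained model.

```python
# Map per-class scores to a probability distribution with a softmax,
# then pick the most probable class.

import math

def softmax(scores):
    m = max(scores.values())
    exps = {c: math.exp(s - m) for c, s in scores.items()}   # stable softmax
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

# hypothetical linear scores for one document over three emotion classes
scores = {"joy": 2.0, "anger": 0.5, "sadness": -1.0}
probs = softmax(scores)
print(max(probs, key=probs.get))
```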
d. Linear classifiers
As elaborated in [26,27], linear classifiers make a classification decision based on a linear combination of the features. They take less time to train and use while maintaining accuracy comparable to non-linear classifiers. The linear predictor is a separating hyperplane, and its output is the output of the classifier. Linear classifiers are frequently used where classification speed is an issue, since they are often the fastest classifiers.
e. Support Vector Machines
As illustrated in [5,9], the main purpose of Support Vector Machines is to determine the linear separator in the search space that best separates the different classes, maximizing the margin between the decision hyperplane and the examples in the training set. SVMs are supervised learning models that can be used for both classification and regression tasks.
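The separating-hyperplane idea can be made concrete with a perceptron-style sketch on toy 2-D data; note that a real SVM additionally maximizes the margin of the separator, which this simple update rule does not do.

```python
# Find *a* separating hyperplane w·x + b = 0 on linearly separable toy
# data with perceptron updates, then classify by the sign of w·x + b.

data = [((2.0, 2.0), +1), ((3.0, 1.5), +1),       # class +1
        ((-2.0, -1.0), -1), ((-1.5, -2.5), -1)]   # class -1

w, b = [0.0, 0.0], 0.0
for _ in range(20):                               # a few perceptron epochs
    for (x1, x2), y in data:
        if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified point
            w[0] += y * x1
            w[1] += y * x2
            b += y

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1

print(predict(2.5, 2.0), predict(-2.0, -2.0))
```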
4. Neural Networks
Next, architectures of Artificial Neural Networks (ANN) such as Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and Extreme Learning Machines (ELM), all widely used, fall under the category of deep learning algorithms. Deep learning approaches, which can be trained in supervised or unsupervised settings, are very popular in the domain of emotion recognition [5,9].
a. Artificial Neural Networks
Artificial Neural Networks (ANNs) are composed of artificial neurons, each with inputs and a single output that can be sent to multiple other neurons. These artificial neurons are conceptually derived from biological neurons. ANNs are known for their ability to reproduce and model non-linear processes, which is why they are extensively employed in emotion recognition [5,9]. Some of the architectures of ANN:
b. Convolutional Neural Networks
Inspired by biological processes [9], these are among the most popular deep learning models, specifically designed to process pixel data, with applications in image and video recognition, image classification, etc. In a CNN, convolution layers filter the image to produce a feature map. This map is then fed into fully connected layers, and according to the output of the facial expression (FE) classifier, the expression is recognized as belonging to a particular class.
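The convolution step can be sketched in a few lines; the "image" and the filter below are toy values, and a real CNN learns many such filters.

```python
# Slide a small filter over a toy image to produce a feature map; in a
# CNN this map would then feed the fully connected layers.

image = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

kernel = [[1, 0],
          [0, 1]]          # a 2x2 diagonal-pattern filter (assumed)

def convolve(img, ker):
    kh, kw = len(ker), len(ker[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(img[i + di][j + dj] * ker[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)
```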
c. Long Short-Term Memory
This is an architecture of another class of ANN called the Recurrent Neural Network (RNN) [5,9]. A common Long Short-Term Memory (LSTM) unit is composed of a cell and three gates: an input gate, an output gate, and a forget gate. Unlike standard feedforward neural networks, it has feedback connections and can process not only single data points but entire data sequences. This makes LSTMs well suited to processing and predicting sequential data in tasks such as speech recognition, speech activity detection, and human activity detection.
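A single step of a one-unit LSTM cell, spelling out the three gates named above with made-up scalar weights; real models learn weight matrices over vector states.

```python
# One LSTM cell step: the forget gate decides what old memory to keep,
# the input gate decides what new memory to add, and the output gate
# decides how much of the cell state to expose.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev):
    # hypothetical scalar weights (input weight, recurrent weight, bias)
    f = sigmoid(0.5 * x + 0.5 * h_prev + 0.1)    # forget gate
    i = sigmoid(0.6 * x + 0.4 * h_prev + 0.0)    # input gate
    o = sigmoid(0.7 * x + 0.3 * h_prev + 0.0)    # output gate
    c_tilde = math.tanh(0.8 * x + 0.2 * h_prev)  # candidate cell state
    c = f * c_prev + i * c_tilde                 # keep old + add new memory
    h = o * math.tanh(c)                         # exposed hidden state
    return h, c

h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.3]:                        # process a short sequence
    h, c = lstm_step(x, h, c)
print(round(h, 3))
```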
d. Extreme Learning Machines
Extreme Learning Machines (ELMs) [5] are feedforward artificial neural networks often used for classification, feature learning, etc. They can have single or multiple layers of hidden nodes, whose parameters are randomly assigned and need not be tuned. Some studies have shown that ELMs can outperform support vector machines in both classification and regression applications.
C. Hybrid Approaches
As stated in [3,9,18,28], hybrid approaches bring together characteristics from both techniques and are very common with sentiment lexicons. They are computationally complex, but they are known to deliver superior classification performance compared with employing knowledge-based and statistical approaches independently, and they play a crucial role in the majority of methods. Knowledge-based resources like SenticNet, together with systems that combine linguistic and statistical elements such as sentic computing and iFeel, are indispensable in the emotion classification process.
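One simple hybrid scheme is to blend a lexicon score with a statistical model's score. The stand-in model, the tiny lexicon, and the mixing weight below are all illustrative assumptions, not components of any system named above.

```python
# Blend a knowledge-based signal (lexicon score) with a statistical
# signal (stand-in model score) via a weighted combination.

LEXICON = {"love": +1.0, "great": +1.0, "hate": -1.0, "awful": -1.0}

def lexicon_score(text):
    words = text.split()
    return sum(LEXICON.get(w, 0.0) for w in words) / max(len(words), 1)

def model_score(text):
    """Stand-in for a trained classifier's signed confidence in [-1, 1]."""
    return 0.6 if "movie" in text.split() else -0.2   # made-up behaviour

def hybrid_score(text, alpha=0.5):
    # alpha is an assumed mixing weight between the two signals
    return alpha * lexicon_score(text) + (1 - alpha) * model_score(text)

text = "i love this great movie"
print("positive" if hybrid_score(text) > 0 else "negative")
```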
IV. RESULTS AND DISCUSSION
In the first instance, the basic six-emotions theory put forth in [29] proposed that human beings show six basic emotions which can be linked to facial expressions. This list was later expanded to 15 emotions ranging from envy and pride to nostalgia. Next, researchers at the University of California, Berkeley identified many more emotions, 27 to be precise, which were in turn modelled into a ‘map’ [2,10,29,30].
Then comes the category of dimensional approaches - a concept that contrasts the former. Contrary to what the theory of basic emotions says (that different emotions arise from separate neural systems), these theories opine that there’s a common and interconnected neurophysiological system that’s responsible for all affective states [31].
We have discussed four of the most widely used classification models: the circumplex model, the vector model, the PANA model, and Plutchik’s model [3,31].
We have already discussed the lexicon-based, statistical, and hybrid approaches in detail in the sections above; now let us look at the situations in which they prove most useful and their accuracy levels.
A. Accuracy Levels
Let us now review the accuracy results for these three categories as observed through various experiments.
1. Support Vector Machines
TABLE I
A comparison of the accuracy levels of different SVM algorithms (in descending order)

Sr. No. | Algorithm | Accuracy (in %) | Year
1 | SVM with information gain feature extraction [33] | 91.15 | 2009
2 | SVM with features based on unigrams [34] | 82.9 | 2002
3 | SVM with a linear kernel [35] | 80.29 | 2011
4 | Speaker-dependent SVM with thresholding fusion [36] | 75.67 | 2015
5 | SVM with features such as WordNet-Affect, General Inquirer, etc. [37] | 73.89 | 2007
As can be seen from Table I, the information gain feature extraction method gives the most precise results.
2. Naïve Bayes Classifier
Let us now review the accuracy levels of the Naïve Bayes Classifier through Table II.
TABLE II
A comparison of the accuracy levels of different algorithms of the Naïve Bayes classifier

Sr. No. | Algorithm | Accuracy (in %) | Year
1 | NBC and Naïve search [38] | ~85 | 2012
2 | NBC with features based on unigrams [34] | 78.7 | 2002
3 | NBC with Facebook query, language query, etc. [39] | NA | 2013
4 | ERR-based NBC [40] | NA | 2014
5 | Multinomial NBC with features [41] | NA | 2014
Here we can see that the combination of the Naïve Bayes Classifier (NBC) and the Naïve search algorithm provides the most accurate results, at approximately 85%.
3. Hybrid
Let us review the accuracy levels for hybrid approaches, in descending order of accuracy, as outlined in Table III below.
TABLE III
A comparison of different hybrid algorithms

Sr. No. | Algorithm | Accuracy (in %) | Year
1 | SVM and CRF with applied rules [42] | 91 | 2015
2 | Hybrid SVM, 100 folds [43] | 89 | 2004
3 | Multinomial NBC with greedy search [44] | 85 | 2013
4 | NBC and SVM using information gain and Chi-square methods [45] | 71 | 2014
5 | Keyword-spotting and rule-based methods [46] | NA | 2013
It is evident from the above table that SVM and CRF with applied rules work best, with 91% accuracy. CRF, or Conditional Random Fields, is a class of statistical modelling methods frequently used for structured prediction.
V. ACKNOWLEDGMENT
The author acknowledges the significant contribution of all the researchers on this subject.
When it comes to dimensional models, I believe the least effective is Plutchik’s wheel of emotions, because it is too simplistic and may fail to incorporate larger emotional nuances. Moreover, the vector and PANA models work better if the stimuli used resemble events, autobiographical memories, etc.

The keyword-based or lexicon-based approach is the most widely used one and precisely detects emotions at the basic word level. The statistical methods require a training set to detect emotions. One of these approaches is the SVM, a binary classification technique for which several algorithms have been developed over time, such as the SVM with a linear kernel, the speaker-dependent SVM with thresholding fusion, and the SVM with features based on unigrams; the most accurate results are provided by SVM with information gain feature extraction. Then we have the Naïve Bayes classifier: varying the representation of the input text yields different types of NBCs, the most accurate being the combination of NBC and Naïve search. Finally, we have the hybrid approach, which combines different approaches; the data show that the most accurate hybrid algorithm is the one combining SVM and CRF with applied rules.

Hence, different approaches and algorithms work best under different circumstances and conditions. However, as is evident from Table I, Support Vector Machines seem to perform best among the studied approaches.
[1] S. Poria, E. Cambria, R. Bajpai, and A. Hussain, “A Review of affective computing: From unimodal analysis to multimodal fusion,” Information Fusion, vol. 37, pp. 98–125, 02 2017. [2] P. Ekman, “Facial expressions of emotion: New findings, new questions,” Psychological Science, vol. 3, no. 1, pp. 34–38, 1992. [Online]. Available: https://doi.org/10.1111/j.1467-9280.1992.tb00253.x [3] E. Cambria, “Affective computing and sentiment analysis,” IEEE Intelligent Systems, vol. 31, pp. 102–107, 03 2016. [4] Z. Madhoushi, A. R. Hamdan, and S. Zainudin, “Sentiment analysis techniques in recent works,” in 2015 Science and Information Conference (SAI), 2015, pp. 288–291. [5] S. Sun, C. Luo, and J. Chen, “A review of natural language processing techniques for opinion mining systems,” Information Fusion, vol. 36, pp. 10–25, 2017. [6] P. Mahendhiran and K. Subramanian, “Deep learning techniques for polarity classification in multimodal sentiment analysis,” International Journal of Information Technology & Decision Making, vol. 17, 03 2018. [7] H. Yu, L. Gui, M. Madaio, A. Ogan, J. Cassell, and L.-P. Morency, “Temporally selective attention model for social and affective state recognition in multimedia content,” 10 2017, pp. 1743–1751. [8] N. Majumder, S. Poria, A. Gelbukh, and E. Cambria, “Deep learning based document modeling for personality detection from text,” IEEE Intelligent Systems, vol. 32, pp. 74–79, 03 2017. [9] A. Hassan Yousef, W. Medhat, and H. Mohamed, “Sentiment analysis algorithms and applications: A survey,” Ain Shams Engineering Journal, vol. 5, 05 2014. [10] P. Ekman, Basic Emotions. Wiley, 1999, pp. 4–5. [11] J. Vazard, “Feeling the unknown: Emotions of uncertainty and their valence,” Erkenntnis, 07 2022. [12] N. Remmington, L. Fabrigar, and P. Visser, “Reexamining the circumplex model of affect,” Journal of personality and social psychology, vol. 79, pp. 286–300, 09 2000. [13] J. 
Russell, “A circumplex model of affect,” Journal of Personality and Social Psychology, vol. 39, pp. 1161–1178, 12 1980. [14] D. Rubin and J. Talarico, “A comparison of dimensional models of emotion: Evidence from emotions, prototypical events, autobiographical memories, and words,” Memory (Hove, England), vol. 17, pp. 802–8, 09 2009. [15] R. Plutchik, “Chapter 1 - a general psychoevolutionary theory of emotion,” in Theories of Emotion, R. Plutchik and H. Kellerman, Eds. Academic Press, 1980, pp. 3–33. [16] “The nature of emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice,” American Scientist, vol. 89, no. 4, pp. 344–350, 2001. [17] R. Plutchik, The Emotions, 1991. [18] A. Balahur, J. M. Hermida, and A. Montoyo, “Detecting implicit expressions of emotion in text: A comparative analysis,” Decision Support Systems, vol. 53, no. 4, pp. 742–753, 2012, 1) Computational Approaches to Subjectivity and Sentiment Analysis 2) Service Science in Information Systems Research : Special Issue on PACIS 2010. [19] G. A. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K. J. Miller, “Introduction to WordNet: An On-line Lexical Database*,” International Journal of Lexicography, vol. 3, no. 4, pp. 235–244, 12 1990. [Online]. Available: https://doi.org/10.1093/ijl/3.4.235. [20] E. Cambria, Q. Liu, S. Decherchi, F. Xing, and K. Kwok, “SenticNet 7: A commonsense-based neurosymbolic AI framework for explainable sentiment analysis,” in Proceedings of the Thirteenth Language Resources and Evaluation Conference. Marseille, France: European Language Resources Association, Jun. 2022, pp. 3829–3839. [Online]. Available: https://aclanthology.org/2022.lrec-1.408. [21] H. Liu and P. Singh, “Conceptnet—a practical commonsense reasoning tool-kit,” BT technology journal, vol. 22, 06 2004. [22] A. Balahur, J. M. Hermida, A. Montoyo, and R. 
Muñoz, “Emotinet: A knowledge base for emotion detection in text built on the appraisal theories,” in Natural Language Processing and Information Systems, R. Muñoz, A. Montoyo, and E. Métais, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 27–39. [23] M. Klenner, S. Petrakis, and A. Fahrni, “Robust compositional polarity classification,” in Proceedings of the International Conference RANLP-2009. Borovets, Bulgaria: Association for Computational Linguistics, Sep. 2009, pp. 180–184. [Online]. Available: https://aclanthology.org/R09-1034 [24] I. Ben-Gal, Bayesian Networks. John Wiley & Sons, Ltd, 2008. [25] N. Friedman, D. Geiger, and M. Goldszmidt, “Bayesian network classifiers,” Machine Learning, vol. 29, pp. 131–163, 11 1997. [26] C. Cortes and V. Vapnik, “Support vector networks,” Machine Learning, vol. 20, pp. 273–297, 1995. [27] V. N. Vapnik, The nature of statistical learning theory. Springer-Verlag New York, Inc., 1995. [28] M. Araújo, P. Gonçalves, M. Cha, and F. Benevenuto, “ifeel: a system that compares and combines sentiment analysis methods,” 04 2014, pp. 75–78. [29] P. Ekman, “Are there basic emotions?” Psychological Review, vol. 99, no. 3, pp. 550–553, 1992. [30] P. Ekman, “An argument for basic emotions,” Cognition and Emotion, vol. 6, no. 3-4, pp. 169–200, 1992. [31] J. Posner, J. A. Russell, and B. S. Peterson, “The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology,” Development and Psychopathology, vol. 17, pp. 715–734, 2005. [32] H. Avetisyan, O. Bruna, and J. Holub, “Overview of existing algorithms for emotion classification. Uncertainties in evaluations of accuracies.” Journal of Physics: Conference Series, vol. 772, no. 1, p. 012039, Nov 2016. [Online]. Available: https://dx.doi.org/10.1088/1742-6596/772/1/012039 [33] W. Zheng and Q.
Ye, “Sentiment classification of Chinese Traveler reviews by support vector machine algorithm,” in 2009 Third International Symposium on Intelligent Information Technology Application, vol. 3, 2009, pp. 335–338. [34] B. Pang, L. Lee, and S. Vaithyanathan, “Thumbs up? Sentiment classification using machine learning techniques,” EMNLP, vol. 10, 06 2002. [35] R. Burget, J. Karasek, and Z. Smékal, “Recognition of emotions in Czech newspaper headlines,” Radioengineering, vol. 20, pp. 39–47, 03 2011. [36] S. Gupta, A. Mehra, and Vinay, “Speech emotion recognition using SVM with thresholding fusion,” in 2015 2nd International Conference on Signal Processing and Integrated Networks (SPIN), 2015, pp. 570–574. [37] S. Aman and S. Szpakowicz, “Identifying expressions of emotion in text,” 09 2007, pp. 196–205. [38] M. M. Itani, R. N. Zantout, L. Hamandi, and I. Elkabani, “Classifying sentiment in Arabic social networks: Naïve search versus naïve Bayes,” in 2012 2nd International Conference on Advances in Computational Tools for Engineering Applications (ACTEA), 2012, pp. 192–197. [39] C. Troussas, M. Virvou, K. Espinosa, K. Llaguno, and J. Caro, “Sentiment analysis of Facebook statuses using naive Bayes classifier for language learning,” vol. 4, 07 2013, pp. 1–6. [40] S. Shaheen, W. El-Hajj, H. Hajj, and S. Elbassuoni, “Emotion recognition from text based on automatically generated rules,” in 2014 IEEE International Conference on Data Mining Workshop, 2014, pp. 383–392. [41] S. Yoshida, J. Kitazono, S. Ozawa, T. Sugawara, T. Haga, and S. Nakamura, “Sentiment analysis for various SNS media using naïve Bayes classifier and its application to flaming detection,” in 2014 IEEE Symposium on Computational Intelligence in Big Data (CIBD), 2014, pp. 1–6. [42] D. S. Nair, J. P. Jayan, R. R.R, and E.
Sherly, “Sentiment analysis of Malayalam film review using machine learning techniques,” in 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2015, pp. 2381–2384. [43] T. Mullen and N. Collier, “Sentiment analysis using support vector machines with diverse information sources.” 01 2004, pp. 412–418. [44] N. Chirawichitchai, “Sentiment classification by a hybrid method of greedy search and multinomial naïve Bayes algorithm,” in 2013 Eleventh International Conference on ICT and Knowledge Engineering, 2013, pp. 1–4. [45] X. Sun and C. Li, “Hybrid model based sentiment classification of Chinese micro-blog,” in 2014 International Conference on Audio, Language and Image Processing, 2014, pp. 358–361. [46] U. Krcadinac, P. Pasquier, J. Jovanovic, and V. Devedzic, “Synesketch: An open source library for sentence-based emotion recognition,” IEEE Transactions on Affective Computing, vol. 4, no. 3, pp. 312–325, 2013.
Copyright © 2023 Janhavi Deshpande. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET57029
Publish Date : 2023-11-26
ISSN : 2321-9653
Publisher Name : IJRASET