IJRASET Journal for Research in Applied Science and Engineering Technology
Authors: Asha Sukumaran
DOI Link: https://doi.org/10.22214/ijraset.2023.56566
Face detection and face analysis have been the basis of much research in recent years. The outline of an object conveys its structure and appearance, and in applications such as face recognition the shape of the face plays a significant role. This paper reviews methods adopted in different works to frame the shape of the human face, which may be useful for future work. Existing machine learning and deep learning algorithms for face shape analysis are discussed and reviewed, along with the results and accuracy of the intermediate steps involved in obtaining the shape of a human face and a number of suggestions for optimizing the workflow. Performance measures such as time taken, error rate, classification rate, and memory usage are also reviewed.
I. INTRODUCTION
Face detection and analysis is a highly significant application of machine learning (ML). It plays a vital role in applications such as security systems, future prediction, criminal personality prediction, hairstylist recommendation, eyeglass design, and race detection. Image processing is applicable in all these areas, but machine learning is currently a vigorous area of research owing to its rapid advances. Researchers are therefore interested in combining image processing with ML to attain fast and accurate results.
The shape of an entity is important in many tasks, since it conveys the structure and appearance of that entity. In the medical field, for example, knowing a structure's shape and observing its alterations helps in diagnosing numerous diseases. In the same way, human faces vary in shape and size. People may sometimes look alike, and differentiating them becomes difficult; when recognizing faces from their features is hard, the outline of the face helps in prediction [1] [2] [3].
Faces come in different shapes such as round, rectangle, square, oval, diamond, and heart. According to experts, the shape can be identified geometrically. For an oval face, the face length is greater than the width of the cheekbones and the forehead is wider than the jawline. For a round face, the cheekbone width and face length have similar measurements, and both are larger than the forehead and jawline. In a square outline, every dimension is similar. For a rectangle, the face length is larger and all other measurements are similar. Deciding face shape is a complex task, and automating it is our goal. This article presents the significance of face shape prediction for applications such as morphological prediction and reviews methodologies employed by different authors.
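To make these geometric rules concrete, the following is a minimal, illustrative sketch (not taken from any reviewed work) of face-shape classification from four measurements; the function name, the 8% similarity tolerance, and the sample values are assumptions chosen for readability.

```python
# Illustrative sketch: classify face shape from four measurements using the
# geometric rules described above. Thresholds are assumptions, not from the paper.

def classify_face_shape(face_length, cheekbone_width, forehead_width, jaw_width,
                        tol=0.08):
    """Return a coarse face-shape label from four measurements (same units)."""
    def similar(a, b):
        # Two measurements are "similar" if they differ by less than `tol` (8%).
        return abs(a - b) <= tol * max(a, b)

    if similar(face_length, cheekbone_width) and similar(forehead_width, jaw_width) \
            and similar(face_length, forehead_width):
        return "square"          # every dimension roughly equal
    if similar(cheekbone_width, forehead_width) and similar(forehead_width, jaw_width) \
            and face_length > cheekbone_width:
        return "rectangle"       # length dominates, other widths similar
    if similar(face_length, cheekbone_width) and cheekbone_width > forehead_width \
            and cheekbone_width > jaw_width:
        return "round"           # length ~ cheekbones, both larger than forehead/jaw
    if face_length > cheekbone_width and forehead_width > jaw_width:
        return "oval"            # longer than wide, forehead wider than jaw
    return "undetermined"


print(classify_face_shape(19.0, 14.0, 13.5, 11.0))   # -> "oval"
```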
II. REVIEW ON EXISTING APPROACHES
Several recent approaches comprising deep learning and ML techniques related to face shape analysis are discussed and analyzed. Gavrilescu et al. [4] proposed a three-layered neural network architecture for predicting the 16 personality factors of an individual from facial features examined by the Facial Action Coding System. The proposed design is assembled in three layers: a base layer where facial features are extracted from the captured frame and, from this frame, a multi-state face representation and the intensity levels of 27 Action Units (AUs) are calculated. The second, intermediate layer builds an AU activity map that holds all AU intensity levels obtained from the base layer in a frame-by-frame manner. Finally, the top layer comprises 16 feed-forward neural networks (FNNs) trained through backpropagation; these examine the patterns in the AU activity map and compute scores from 1 to 10, predicting each of the 16 personality traits.
Rodriguez et al. [5] described pain assessment rooted in facial features only, where performance is enhanced by feeding raw frames to Deep Learning (DL) models, outperforming the most recent state-of-the-art results while also directly addressing the problem of imbalanced data. The approach first uses a Convolutional Neural Network (CNN) to learn facial features from VGG_Faces, which are then connected to a long short-term memory (LSTM) network to exploit the temporal relation between video frames.
Sivaram et al. [6] gave a method that employs a Recurrent Neural Network (RNN) and a Deep Neural Network (DNN) to obtain face outlines. A CNN was first designed to obtain the landmark assessment of faces; an FNN was then utilized for localized search, where a segment-based searching technique was applied. By exploiting an LSTM-based CNN-RNN, the core evaluation becomes more reliable, making the associated segment-based search feasible and precise.
Danelakis et al. [7] introduced a retrieval methodology that automatically detects precise facial landmarks and employs them to produce a descriptor. This descriptor is the product of three sub-descriptors that capture topological and geometric data of the 3D face scans. The proposed retrieval system benefits from the Dynamic Time Warping method to compare descriptors corresponding to dissimilar 3D facial sequences.
Hariri et al. [8] tried to recognize personality characteristics by determining components of facial features such as the shape of the face, the ears, the eye distance, and the forehead length. Here the output of the code matches the character of the person by approximately 72%.
Facial landmark detection and tracking from recorded sequences are described in [9]. The authors used 26 landmarks in the face area to evaluate facial expressions. These landmarks are perceived in the initial input frame using scale-invariant feature-based vectors and are then tracked throughout the recorded sequence by means of Multiple Differential Evolution-Markov Chain (DE-MC). To detect variations in the facial appearance, they proposed a kernel correlation analysis technique that maximizes the resemblance criterion between the target points and the candidate points. This method achieves an average performance rate of 90.8% on a diverse community dataset.
Paper [10] proposed a technique for extracting facial features, for example nostrils, pupils, and mouth edges, from active images. This method achieved high positional accuracy at a low cost by merging shape extraction using separable filters with pattern matching rooted in the subspace method.
Automatic hairstyle recommendation from a face image is proposed in [11] to show how a person would look with a selected hairstyle. It uses a multi-kernel learning system and vector concatenation methods.
In 2018, Petpairote et al. [12] used a best-matched face shape pattern to develop a personalized face neutralization technique. Closed eyes could be identified and opened by deploying the best-matched face shape pattern. This system performs well in enhancing face detection accuracy and minimizing errors.
A. Reviews on Face Detection Algorithms
The face is the main part of the human being that reflects a person's mind. The study of faces is important for many purposes such as identification, personality prediction, race detection, and criminology. However, analyzing a face from an image is somewhat tedious; automatic face analysis eliminates human intervention from these tasks. For any application, the faces in an image must first be detected. There are different methods for face detection, such as sliding windows, Haar features, and the Histogram of Oriented Gradients (HOG). The Viola-Jones object detector, created by Paul Viola and Michael Jones [13], uses Haar features for face detection. This algorithm has several stages: creating the integral image, Haar feature selection, AdaBoost training, and cascade classifiers. It is very fast and accurate. Haar features encode the difference in average intensities between two rectangular areas. Based on Viola and Jones's work, many extensions were proposed to boosting and to the size of the Haar feature set [14]. HOG examines each pixel of an image in turn; for every pixel it considers the surrounding pixels and assigns a gradient direction pointing toward the darker region. To compute HOG, vertical and horizontal gradients are first obtained by filtering the image with appropriate kernels [15].
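As a concrete illustration of Haar-based detection, the following is a minimal OpenCV sketch of Viola-Jones face detection; the input file name is a placeholder, and the cascade file ships with the opencv-python package.

```python
# Minimal Viola-Jones face detection with a pre-trained Haar cascade (OpenCV).
import cv2

img = cv2.imread("face.jpg")                      # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # cascades operate on grayscale

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# scaleFactor and minNeighbors trade off speed against false positives.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                  minSize=(30, 30))

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_detected.jpg", img)
print(f"Detected {len(faces)} face(s)")
```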
Ma et al. [16] introduced a general Statistical Shape Modeling (SSM) approach that can model nonlinear distributions and is robust to outliers in the training data. Without losing generality and assuming sparsity in the nonlinear distribution, a Robust Kernel Principal Component Analysis (RKPCA) for SSM is given, with the intent of assembling a low-rank nonlinear subspace in which outliers are discarded.
Obulesu et al. [17] gave a Facial Expression Recognition (FER) scheme that extracts unique and robust face features. This work proposed Cross Diagonal Neighborhood Patterns (CDNP) for facial expressions. The CDNP features are further processed through the Gray Level Co-occurrence Matrix (GLCM), and the derived CDNP-GLCM features are fed to a CNN to train on various expressions.
B. Reviews on Feature Extraction Algorithms
The feature extraction (FE) technique for face shape recognition is reviewed next. From the detected face, features are extracted for further processing. For face identification, all facial features such as the nose, eyes, and mouth should be detected and selected for recognition. For shape prediction, however, there is no need to mine the entire facial image; only the landmark points are extracted. The accuracy and number of landmark points can vary across applications. Landmarks on the face rim cannot be localized accurately on their own, whether manually or automatically; they can be localized with the help of primary landmark points such as the eyes, mouth, or nose tip. The accuracy of landmark localization improves as the number of landmark points increases [18] [19] [20].
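For illustration, the sketch below extracts facial landmarks with dlib's 68-point shape predictor; this is an assumed, generic pipeline rather than the method of any single reviewed paper, and the model file "shape_predictor_68_face_landmarks.dat" must be downloaded separately.

```python
# Generic landmark extraction sketch with dlib's 68-point shape predictor.
import dlib
import cv2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")                       # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for rect in detector(gray, 1):                     # detect faces first
    shape = predictor(gray, rect)                  # then fit the 68 landmarks
    # Points 0-16 lie on the jaw line / face rim, which is what face shape
    # prediction needs; the remaining points cover brows, eyes, nose and mouth.
    rim = [(shape.part(i).x, shape.part(i).y) for i in range(17)]
    print("face-rim landmarks:", rim[:3], "...")
```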
Landmark detection methods fall into two categories: model-based and texture-based. Model-based methods learn face shapes from labeled training images and then try to fit the shape model to an unknown face, whereas texture-based methods find these points without the supervision of a model. The Active Appearance Model (AAM) and the Active Shape Model (ASM) are examples of model-based methods.
AAM provides a way to find an associated point set on an image. It is rooted in shape and appearance, where the appearance is made up of all the pixels enclosed by that shape. It combines a statistical model of the shape and the gray-level appearance of the object of interest.
In Alabort-i-Medina et al. [21], various model-based approaches were reviewed. These include a brain model that can deform elastically [22] and a more expensive, viscous-flow deformation model [23]. Eigenfaces generated from Principal Component Analysis (PCA) were employed on face images, and it was found that PCA could not recognize changes of shape accurately [14]. The 3D gray-level surface model [24] helps in combining outline and appearance but fails in classification, while a combination of physical and statistical modes of variation with a 3D gray-level surface was found to be a valuable model [25]. To construct face images [26], i.e., a model of shape and local gray-level appearance, ASM [27] was utilized. Models were produced by combining a model of shape variation with a model of appearance variation in the shape-normalized frame. For this, [21] used face images marked with points on the main features to describe the shape of the object.
In ASM, a shape is represented as x ≈ x̄ + Pb, where x̄ is the mean shape, P contains the main modes of shape variation obtained from PCA, and b is a vector of weights; by varying the weights b, novel shape examples can be produced. There have been many attempts to improve the classical ASM. In [28], improvement is obtained by increasing the width of the search profile to decrease noise and by grouping the landmarks to avoid shape changes of the mouth. When testing ASM and AAM on face images and brain structures, ASM was found to be faster and to achieve more accurate feature-point positions than AAM, while AAM provides a better match to the texture [29].
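The following numpy sketch illustrates this statistical shape model on synthetic data; the training shapes, the number of retained modes, and the ±3√λ limits on b are assumptions chosen for illustration.

```python
# Minimal statistical shape model behind ASM: x = x_mean + P @ b, with new
# shapes generated by varying b. The training array below is synthetic.
import numpy as np

# 50 training shapes, each a flattened vector of 17 (x, y) rim landmarks.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(50, 34))

x_mean = shapes.mean(axis=0)
centered = shapes - x_mean

# PCA via SVD: columns of P are the main modes of shape variation.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
P = Vt[:5].T                               # keep the 5 strongest modes
eigvals = (s[:5] ** 2) / (len(shapes) - 1)

# Generate a plausible new shape by varying b within +/- 3*sqrt(eigenvalue),
# the usual ASM limit that keeps shapes close to the training distribution.
b = 3.0 * np.sqrt(eigvals) * rng.uniform(-1, 1, size=5)
new_shape = x_mean + P @ b
print(new_shape.reshape(17, 2)[:3])        # first three generated landmarks
```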
C. Reviews on Classification Algorithms
After feature extraction, the face-shape images are classified based on the mined features. This subsection reviews some existing classification approaches. Classification is the procedure of placing observed objects into predefined classes by comparing input-image features with target-image features. Classification approaches can be distinguished by the training samples used, the pixel information used, and the number of outputs per spatial element [30], [31]. There are diverse types of classifiers, such as Naive Bayes, decision trees, logistic regression, K-nearest neighbors, neural networks, DL, and the Support Vector Machine (SVM).
The SVM is a discriminative classifier in which separation is done by a hyperplane. SVMs are supervised learning models that group objects based on their feature distance. Good classification accuracy can be attained by maximizing the margin between the classes through the choice of an optimal separating hyperplane. The SVM is fundamentally a binary classifier, but it can also be extended to a multi-class classifier. In [32], linear and nonlinear SVM training models are employed for face identification; comparing training samples with face images in diverse orientations, it was found that the nonlinear machine classifies better than the linear machine.
KNN (K-nearest neighbors) is a supervised classification model. KNN is used in [33] to classify color face images. The authors compared classification by KNN alone and by PCA combined with KNN and found KNN to be a good classifier.
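The sketch below compares an SVM and a KNN classifier with scikit-learn on placeholder feature vectors; the data, labels, and hyperparameters are assumptions, not the settings used in [32] or [33].

```python
# Hedged scikit-learn sketch comparing SVM and KNN on pre-extracted
# face-shape feature vectors. X and y are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 34))            # placeholder feature vectors
y = rng.integers(0, 5, size=300)          # 5 face-shape classes (oval, round, ...)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)          # nonlinear SVM
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
print("KNN accuracy:", accuracy_score(y_te, knn.predict(X_te)))
```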
An Artificial Neural Network (ANN) comprises a set of layers, each consisting of neurons (nodes). The neurons of each layer are connected to the neurons of the next layer through weighted connections. Like the human brain, the network makes decisions by activating each node using an activation function. An ANN-based human FER method is described in [34]; it classifies the facial appearance using a feed-forward, backpropagation neural network.
DL techniques are used in many applications. A DL model is like a neural network with more layers between input and output, and it is generally found that as the layer count increases, the accuracy of the task increases. Different DL architectures can be chosen depending on the nature of the problem. In image processing the most used architecture is the CNN, proposed by Yann LeCun and inspired by properties of the visual cortex. It can process a raw image and give more accurate results by passing it through different intermediate layers, each layer having a different task.
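A minimal Keras sketch of such a CNN is shown below; the input size, layer widths, and five face-shape classes are assumptions for illustration, not the architecture of any reviewed work.

```python
# Small CNN for face-image classification: stacked convolution/pooling layers
# followed by fully connected layers. Sizes are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(96, 96, 1)),                 # grayscale face crop
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),           # oval, round, square, ...
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```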
Choi et al. [35] proposed FER using a CNN, one of the DL technologies. The proposed structure has general classification performance for any environment or subject. For this purpose, the authors collected diverse databases and organized them into six expression classes: 'expressionless', 'happy', 'sad', 'angry', 'surprised', and 'disgusted'.
Pre-processing and data augmentation methods are applied to improve training efficiency and classification results. In the existing CNN organization, the optimal arrangement that best represents the features of the six facial expressions is established by tuning the number of feature maps of the convolutional layers and the number of nodes of the fully connected layer. Table 1 tabulates the advantages and disadvantages of these methods. Murugappan and Mutawa [36] described a triangulation technique for extracting a geometric feature set to categorize six expressions using computer-generated markers. The subject's face is recognized using Haar features. Eight virtual markers were placed automatically at defined locations on the subject's face. Five triangles are created by using the eight markers' locations as the vertices of each triangle. The eight markers were then repeatedly tracked via the Lucas-Kanade optical flow algorithm while subjects expressed facial expressions.
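The following numpy sketch illustrates the triangulation idea on placeholder marker coordinates; the marker positions and triangle grouping are assumptions, not those of [36].

```python
# Illustrative sketch: group virtual facial markers into triangles and use
# simple geometric quantities (perimeter, area) of each triangle as features.
import numpy as np

markers = np.array([[30, 40], [70, 40], [50, 60], [35, 80],
                    [65, 80], [50, 95], [40, 110], [60, 110]], dtype=float)

triangles = [(0, 1, 2), (0, 2, 3), (1, 2, 4), (3, 4, 5), (5, 6, 7)]

def triangle_features(pts):
    a, b, c = pts
    sides = [np.linalg.norm(a - b), np.linalg.norm(b - c), np.linalg.norm(c - a)]
    # Area via the shoelace/determinant formula for a triangle.
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    return sum(sides), area

features = [triangle_features(markers[list(t)]) for t in triangles]
print(features)   # one (perimeter, area) pair per triangle
```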
In [37] [38], an integration of methods in feature extraction and classification was used to obtain accurate results for classifying face shape and race. In [37], a model was built that used ASM-AAM for feature extraction and a combination of classifiers such as CNN and NN for classification. An optimization technique was incorporated for feature selection and for weight optimization in the classifier using the Fitness Sorted Grey Wolf Update (FS-GU) algorithm. Comparing the model with AAM and ASM used separately showed that the combined AAM-ASM features work well in prediction. Similarly, the performance of the FS-GU optimization algorithm was compared with other existing optimization algorithms, and FS-GU was shown to give better accuracy than the others.
In [38], race was predicted from facial features by extracting a combination of features such as maximally stable extremal regions (MSER), color, and the speeded-up robust features (SURF) transform, with a deep belief neural network (DBN) classifier. An optimization algorithm for feature selection, the lion mutated and updated dragon algorithm (LMUDA), was developed. It was also identified that more accurate classification and recognition results can be achieved by integrating different methodologies in parallel or sequentially. This is known as ensemble learning (EL) and is classified into homogeneous EL and heterogeneous EL. The result of EL can be obtained by bagging, boosting, voting, stacking, etc. Some of the works related to EL are [39] [40] [41] and [42].
Table 1: Comprehensive Review

Paper | Method | Advantages | Disadvantages
Face detection algorithms review
[10] | New 3D face recognition method | Generic and other feature types can be added to the framework | Low discrimination power of covariance descriptors
[14] | Set of six Local Binary Pattern (LBP)-like features derived | High performance with low complexity | -
Feature extraction algorithms review
[21] | AAM | Provided better results with a reduced cost function | Does not fit parametric models well
[24] | Automatic detection approach using cone-beam computed tomography (CBCT) | High localization accuracy and short computation time | -
Classification algorithms review
[35] | Unique and robust facial expression classification method | Extracts all expression features | Does not provide better results for dynamic cases
[36] | Triangulation method | High classification rate | -
Table 1 reviews the pros and cons of some of the existing approaches. The pros and cons are evaluated with parameters such as accuracy, performance, complexity, automatic detection, extraction of all face features, and computation time. Based on these parameters, the results of some existing approaches are reviewed in the subsequent section, where a comparison of experimental results is also made.
III. RESULTS
Many researchers have tried to classify face shape using existing approaches like SVM, PCA, and NN, but it is still found to be a tedious task due to the complexities of face shape. This work therefore reviews methods that may lead to more accurate results, and for this purpose some comparisons are made. The results discussed below use images downloaded from adonis@eee.upd.edu.ph (https://drive.google.com/file/d/1KZKaex8jVRs_i58g1sVyR3ZN5nupxDzU/view?usp=sharing) and from Google search.
For face detection, as in Fig. 1, a comparison of schemes such as Haar and LBP has been made. From this review it is seen that the Haar algorithm provides a higher true positive rate, i.e. accuracy, than LBP. However, the computation time is higher for Haar at 2.327 seconds compared with LBP at 0.029 seconds, while the memory used by Haar is lower (134.6054) than that of LBP (241.1875). These details are tabulated in Table 2. Since accuracy is considered the most important parameter for classification, Haar is identified as the best choice for face detection. The Viola-Jones face detection algorithm using Haar features is a successful face detection algorithm. For some blurred images, as in Fig. 2, the LBP face detection technique fails, whereas Haar gives more accurate results.
The classification accuracy also depends on the brightness or intensity values of the image, so pre-processing operations such as resizing and histogram equalization are applied to the classification images as in Fig. 3. It is observed that an image without pre-processing provides lower accuracy; after pre-processing, face detection provides high classification accuracy, as in Fig. 4.
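A minimal OpenCV sketch of these pre-processing steps is given below; the 96x96 target size and the file names are assumptions.

```python
# Pre-processing sketch: resize to a fixed size and apply histogram
# equalization to normalize the intensity distribution.
import cv2

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder input
img = cv2.resize(img, (96, 96))                       # fixed input size
img_eq = cv2.equalizeHist(img)                        # spread intensity values
cv2.imwrite("face_preprocessed.jpg", img_eq)
```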
Feature extraction follows pre-processing to mine the facial features. For shape extraction, landmark points are set correctly on the face rim as illustrated in Fig. 5, and these coordinate points and other features are extracted with the aid of algorithms such as ASM and AAM. A comparison of the FE algorithms on parameters such as error rate and time taken is made in Table 3. The table shows that ASM has an error rate of 3.32, which is high compared to AAM's error rate of 2.89; the time taken by ASM is also higher at 0.12 sec compared to AAM's 0.09 sec. This shows that the AAM feature extraction algorithm provides better results than ASM.
For classification, ML algorithms such as SVM [32] and KNN [33] and DL algorithms such as the CNN [35] are reviewed. Both ML and DL algorithms provided good classification results. From this review, it is found that the selection of methodologies in each step is important for a better result.
The selection of classifiers and their optimized performance are also important. A comparison of these algorithms is made in Table 4. From the table, it is evident that the CNN, the DL algorithm, provides a better classification rate of 96.62% compared with the ML algorithms SVM at 84.16% and KNN at 83.24%. The execution time is also better for the CNN at 0.1534 sec than for the SVM at 0.1956 sec and the KNN at 0.2347 sec. This shows that a DL algorithm provides better accuracy than ML algorithms.
Table 4: Comparison of classification methods

Metric | SVM | KNN | CNN
Classification rate (%) | 84.16 | 83.24 | 96.62
Time taken (sec) | 0.1956 | 0.2347 | 0.1534
IV. CONCLUSION
This paper has reviewed different methods for obtaining face shape. From Table 2, Haar-based features are found to be best for face detection, and the pre-processing stage increases detection accuracy. Features extracted from the rim of the face image can be employed to decide the shape of the face either geometrically or by using classifiers. The AAM algorithm is observed to perform better than ASM. The selection of the classifier is another important factor, and the DL algorithm provided better classification results. The performance metrics thus give a review of the pros and cons of face shape prediction. In the future, classifier performance can be increased further by including an optimization technique with DL approaches.
A. Conflict of Interest
The authors have no conflicts of interest to declare.
[1] Keser, Vafa, Jean-François Boisclair Lachance, Sabrina Shameen Alam, Youngshin Lim, Eleonora Scarlata, Apinder Kaur, Tian Fang Zhang, "Snap29 mutant mice recapitulate neurological and ophthalmological abnormalities associated with 22q11 and CEDNIK syndrome", Communications Biology, vol. 2, no. 1, pp. 1-11, 2019.
[2] Hutchings, Rosalind, Romina Palermo, Olivier Piguet, and Fiona Kumfor, "Disrupted face processing in frontotemporal dementia: a review of the clinical and neuroanatomical evidence", Neuropsychology Review, vol. 27, no. 1, pp. 18-30, 2017.
[3] Liu, Haifeng, Dong Liu, Xiaoyan Sun, Feng Wu, and Wenjun Zeng, "On-line fall detection via a boosted cascade of hybrid features", In 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2017.
[4] Gavrilescu, Mihai, and Nicolae Vizireanu, "Predicting the Sixteen Personality Factors (16PF) of an individual by analyzing facial features", EURASIP Journal on Image and Video Processing, no. 1, pp. 1-19, 2017.
[5] Rodriguez, Pau, Guillem Cucurull, Jordi Gonzàlez, Josep M. Gonfaus, Kamal Nasrollahi, Thomas B. Moeslund, and F. Xavier Roca, "Deep pain: Exploiting long short-term memory networks for facial expression classification", IEEE Transactions on Cybernetics, 2017.
[6] Sivaram, M., V. Porkodi, Amin Salih Mohammed, and V. Manikandan, "Detection of Accurate Facial Detection Using Hybrid Deep Convolutional Recurrent Neural Network", ICTACT Journal on Soft Computing, vol. 9, no. 2, 2019.
[7] Danelakis, Antonios, Theoharis Theoharis, Ioannis Pratikakis, and Panagiotis Perakis, "An effective methodology for dynamic 3D facial expression retrieval", Pattern Recognition, vol. 52, pp. 174-185, 2016.
[8] Hariri, Walid, Hedi Tabia, Nadir Farah, Abdallah Benouareth, and David Declercq, "3D face recognition using covariance-based descriptors", Pattern Recognition Letters, vol. 78, pp. 1-7, 2016.
[9] Yu, Xiang, Junzhou Huang, Shaoting Zhang, and Dimitris N. Metaxas, "Face landmark fitting via optimized part mixtures and cascaded deformable model", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 11, pp. 2212-2226, 2015.
[10] Pasupa, Kitsuchart, Wisuwat Sunhem, and Chu Kiong Loo, "A hybrid approach to building face shape classifier for hairstyle recommender system", Expert Systems with Applications, vol. 120, pp. 14-32, 2019.
[11] Petpairote, Chayanut, Suthep Madarasmi, and Kosin Chamnongthai, "Personalized face neutralization using best-matched face shape with a neutral-face database", IET Computer Vision, vol. 12, no. 3, pp. 252-260, 2018.
[12] Wang, Shengzheng, Dacheng Tao, and Jie Yang, "Relative attribute SVM+ learning for age estimation", IEEE Transactions on Cybernetics, vol. 46, no. 3, pp. 827-839, 2015.
[13] Chaudhari, Monali Nitin, Mrinal Deshmukh, Gayatri Ramrakhiani, and Rakshita Parvatikar, "Face detection using Viola Jones algorithm and neural networks", In 2018 Fourth International Conference on Computing Communication Control and Automation, 2018.
[14] Görres, Joseph, Michael Brehler, Jochen Franke, Sven Y. Vetter, Paul A. Grützner, Hans-Peter Meinzer, and Ivo Wolf, "Articular surface segmentation using active shape models for intraoperative implant assessment", International Journal of Computer Assisted Radiology and Surgery, vol. 11, no. 9, pp. 1661-1672, 2016.
[15] Liu, Li, Paul Fieguth, Guoying Zhao, Matti Pietikäinen, and Dewen Hu, "Extended local binary patterns for face recognition", Information Sciences, vol. 358, pp. 56-72, 2016.
[16] Ma, Jingting, Anqi Wang, Feng Lin, Stefan Wesarg, and Marius Erdt, "A novel robust kernel principal component analysis for nonlinear statistical shape modeling from erroneous data", Computerized Medical Imaging and Graphics, vol. 77, pp. 101638, 2019.
[17] Obulesu, A., and R. Keerthi, "Facial expression classification using Cross Diagonal Neighborhood Pattern", In Journal of Physics: Conference Series, 2019.
[18] Yuan, Xiaohui, Longbo Kong, Dengchao Feng, and Zhenchun Wei, "Automatic feature point detection and tracking of human actions in time-of-flight videos", IEEE/CAA Journal of Automatica Sinica, vol. 4, no. 4, pp. 677-685, 2017.
[19] Al-dahoud, Ahmad, and Hassan Ugail, "A method for location-based search for enhancing facial feature detection", In Advances in Computational Intelligence Systems, 2017.
[20] Alabort-i-Medina, Joan, and Stefanos Zafeiriou, "A unified framework for compositional fitting of active appearance models", International Journal of Computer Vision, vol. 121, no. 1, pp. 26-64, 2017.
[21] Luo, Linbo, Qi Wan, Jun Chen, Yongtao Wang, and Xiaoguang Mei, "Drone image stitching guided by robust elastic warping and locality preserving matching", In IGARSS IEEE International Geoscience and Remote Sensing Symposium, 2019.
[22] Wang, Hao, Xiaoqing Jin, Ye Zhang, and Jinhui Wang, "Single-subject morphological brain networks: connectivity mapping, topological characterization, and test-retest reliability", Brain and Behavior, vol. 6, no. 4, pp. e00448, 2016.
[23] Machidon, Alina L., Octavian M. Machidon, and Petre L. Ogrutan, "Face Recognition Using Eigenfaces, Geometrical PCA Approximation and Neural Networks", In 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), 2019.
[24] Ayensa-Jiménez, Jacobo, Mohamed H. Doweidar, Jose A. Sanz-Herrera, and Manuel Doblaré, "An unsupervised data completion method for physically-based data-driven models", Computer Methods in Applied Mechanics and Engineering, vol. 344, pp. 120-143, 2019.
[25] Zhang, Jiansong, and Nora M. El-Gohary, "Integrating semantic NLP and logic reasoning into a unified system for fully-automated code checking", Automation in Construction, vol. 73, pp. 45-57, 2017.
[26] Morel-Forster, Andreas, Thomas Gerig, Marcel Lüthi, and Thomas Vetter, "Probabilistic fitting of active shape models", In International Workshop on Shape in Medical Imaging, 2018.
[27] Lu, Huchuan, and Fan Yang, "Active shape model and its application to face alignment", In Subspace Methods for Pattern Recognition in Intelligent Environment, 2014.
[28] Iqtait, M., F. S. Mohamad, and M. Mamat, "Feature extraction for face recognition via Active Shape Model (ASM) and Active Appearance Model (AAM)", In IOP Conference Series: Materials Science and Engineering, 2018.
[29] Li, Ying, Haokui Zhang, Xizhe Xue, Yenan Jiang, and Qiang Shen, "Deep learning for remote sensing image classification: A survey", Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 8, no. 6, pp. e1264, 2018.
[30] Rajesh, K. M., and M. Naveenkumar, "A robust method for face recognition and face emotion detection system using support vector machines", In 2016 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques, 2016.
[31] Panda, Bichitra, and Chandra Sekhar Panda, "A review on brain tumor classification methodologies", International Journal of Scientific Research in Science and Technology, pp. 346-359, 2019.
[32] Eyupoglu, Can, "Implementation of color face recognition using PCA and k-NN classifier", In 2016 IEEE NW Russia Young Researchers in Electrical and Electronic Engineering Conference (EIConRusNW), 2016.
[33] Giannopoulos, Panagiotis, Isidoros Perikos, and Ioannis Hatzilygeroudis, "Deep learning approaches for facial emotion recognition: A case study on FER-2013", In Advances in Hybridization of Intelligent Methods, 2018.
[34] Zhang, Ting, Ri-Zhen Qin, Qiu-Lei Dong, Wei Gao, Hua-Rong Xu, and Zhan-Yi Hu, "Physiognomy: Personality traits prediction by learning", International Journal of Automation and Computing, vol. 14, no. 4, pp. 386-395, 2017.
[35] Choi, In-kyu, Ha-eun Ahn, and Jisang Yoo, "Facial expression classification using deep convolutional neural network", Journal of Electrical Engineering and Technology, vol. 13, no. 1, pp. 485-492, 2018.
[36] Murugappan, M., and A. Mutawa, "Facial geometric feature extraction based emotional expression classification using machine learning algorithms", PLOS ONE, vol. 16, no. 2, pp. e0247131, 2021.
[37] Sukumaran, Asha, and Thomas Brindha, "Optimal feature selection with hybrid classification for automatic face shape classification using fitness sorted Grey wolf update", Multimedia Tools and Applications, pp. 1-22, 2021.
[38] Sukumaran, Asha, and Thomas Brindha, "Nature-inspired hybrid deep learning for race detection by face shape features", International Journal of Intelligent Computing and Cybernetics, 2020.
[39] Moustafa, Nour, Benjamin Turnbull, and Kim-Kwang Raymond Choo, "An ensemble intrusion detection technique based on proposed statistical flow features for protecting network traffic of internet of things", IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4815-4830, 2018.
[40] Tsai, Chih-Fong, and Wei-Chao Lin, "Feature Selection and Ensemble Learning Techniques in One-Class Classifiers: An Empirical Study of Two-Class Imbalanced Datasets", IEEE Access, vol. 9, pp. 13717-13726, 2021.
[41] Xue, Dan, Xiaomin Zhou, Chen Li, Yudong Yao, Md Mamunur Rahaman, Jinghua Zhang, Hao Chen, Jinpeng Zhang, Shouliang Qi, and Hongzan Sun, "An application of transfer learning and ensemble learning techniques for cervical histopathology image classification", IEEE Access, vol. 8, pp. 104603-104618, 2020.
[42] Zhao, Qing, Guohong Yao, Faheem Akhtar, Jianqiang Li, and Yan Pei, "An Automated Approach to Diagnose Turner Syndrome Using Ensemble Learning Methods", IEEE Access, vol. 8, pp. 223335-223345, 2020.
Copyright © 2023 Asha Sukumaran. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET56566
Publish Date : 2023-11-07
ISSN : 2321-9653
Publisher Name : IJRASET