Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Harsh J. Baldha
DOI Link: https://doi.org/10.22214/ijraset.2023.51421
The process of categorizing facial pictures or videos into specified age groups is known as age group classification. It is a crucial task owing to its many applications, including recruitment, security, health, and intelligent social robots. The developed methodology preprocesses the input image before performing feature extraction with a deep convolutional neural network (DCNN). This network extracts D-dimensional features from the source face image, after which feature selection is applied: a hybrid particle swarm optimization (HPSO) method chooses the features that make the face distinctive and recognizable. Age and gender are then classified by a Support Vector Machine (SVM), and the predicted age and gender categories feed a nutrition recommendation system. Evaluated with classification rate, precision, and recall on the Adience and UTKFace datasets, and tested on real-world images, the suggested system achieves good prediction results with reasonable computation time.
I. INTRODUCTION
A person's face can reveal a variety of characteristics such as gender, age, emotion, and ethnicity. Among these traits, facial gender and age prediction have garnered particular attention because of their widespread applications. Facial gender recognition is the task of assigning a person's sex to the appropriate class (male, female) based on facial features, while facial age prediction is the ability to automatically estimate a person's biological age or age group, such as child, adult, or senior citizen. Real-time commercial applications of facial gender prediction and age estimation include non-invasive forensic profiling of victims and criminals, monitoring of a particular gender or age group, human-computer interaction, law enforcement, access control, and interactive systems. They can be used in surveillance and access control to restrict entry into restricted areas for people of a certain sex or age group, to allow website access based on age group or gender, and to control access to physical zones (smoking areas, restrooms) as well as risky zones such as theme parks [2, 3]. For instance, vending machines in Japan employ facial age assessment to decide whether age-restricted products (alcohol, cigarettes) may be offered to customers [1].
It can be utilized for demographic analysis or any access violation in the crowd for commercial CCTV applications. For instance, access to particular genders is restricted in train compartments (or seats), metros, buses, washrooms, and hostels. Passengers or visitors can also be automatically directed and checked for legal violations. These forecasts can also be utilized for business planning, sales and marketing strategies, and identifying the proportion of visitors that fit particular demographics (male, female, juvenile, young, and adult). It can be applied to targeted advertising on electronic noticeboards for demographics that are dynamically changing in terms of age and gender [4]. The customization of services like automated healthcare systems (Robotic nurses) in healthcare facilities can also be done using facial gender and age estimation prediction algorithms [5]. Gender and age estimation has recently been added to smartphones as amusing features. These can be utilized for automatically reorganizing albums to manage features like rearranging, retrieving, and deleting the captured photos based on the selection of age and gender. In systems based on human-computer interaction, a person's face attribute such as gender or age is recognized automatically during an analysis of their physiological behavior. In biometric systems, gender recognition can also be used to decrease the database's search index. With age and gender as face attributes, it also improves the reliability of person identification [6].
These estimates can also be utilized in forensic art and information retrieval, for example to determine which missing persons best match a query in a face recognition application [7]. Additionally, facial age synthesis can be used to create an updated facial image of a deceased family member or missing child. There are two approaches to predicting face attributes: (1) single-task learning (STL), also called single-attribute learning (SAL), and (2) multi-task learning (MTL), also called multi-attribute learning (MAL). In the SAL/STL approach, each attribute (gender or age) is trained and predicted independently, with no relationships modelled between the attributes, whereas the MAL/MTL approach uses a shared model to learn multiple attributes (e.g., gender and age) in parallel. A person's gender can be estimated from the face, voice, gait (running, jogging, etc.), fingerprints, hand skin, or handwriting, while age can be predicted from anthropological studies of the face or bones.
Due to its easy visibility (it is not covered by clothing), collectability, acceptability, and universality, the face is the most suitable modality. The two categories of current state-of-the-art techniques for recognizing facial age and gender are (a) the traditional hand-crafted feature engineering approach and (b) the deep learning-based approach [8-10].
The emphasis of the suggested technique is on extracting age-specific traits from facial images, followed by age classification. Using ageing indicators found in a face photograph, one can determine a person's age. Skin changes can also be used to determine an adult's age.
Age identification [8] is a difficult procedure that depends on a person's gender, race, ethnicity, way of life, physical characteristics, and other external factors. As the actual age differs from the predicted age, accurate facial age prediction remains difficult. Classifications including kid, teen, adolescent, intermediate, and senior citizens are included in several public age recognition datasets [11].
The dataset or a live camera serves as the input source for the suggested system. To get it ready for more processing, preprocessing is done. To obtain the crucial characteristics, DCNN is applied to the preprocessed image. The next step uses hybrid particle swarm optimization (HPSO) to pick the features. The age of a person is divided into 8 age groups as follows: "0-2," "4-6," "8-13," "15-20," "25-32," "38-43," "48-53," and "60+" using support vector machines (SVM). The gender has two classes (male and female). For feature extraction, DCNN is used, which retrieves distinguishing features to learn pertinent properties. The image's best features are chosen using HPSO. The accuracy and computation speed are improved when DCNN and HPSO are combined. Age and gender are classified by SVM. Experiments on the Adience dataset and real images show that the model is reliable and outperforms the standard scheme in terms of classification rate, precision, and recall.
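To make the composition of these stages concrete, a minimal hedged sketch is shown below; the helper names (preprocess, extract_features, select_features) and the interface are illustrative assumptions rather than the actual implementation.

```python
# Hedged sketch of the overall pipeline: preprocess -> DCNN features ->
# HPSO feature selection -> SVM classification (helper names are illustrative).
from sklearn.svm import SVC

AGE_GROUPS = ["0-2", "4-6", "8-13", "15-20", "25-32", "38-43", "48-53", "60+"]
GENDERS = ["male", "female"]

def predict_age_gender(image, preprocess, extract_features, select_features,
                       age_svm: SVC, gender_svm: SVC):
    face = preprocess(image)                  # noise removal, detection, alignment
    features = extract_features(face)         # D-dimensional DCNN descriptor
    selected = select_features(features)      # HPSO-chosen subset of the features
    age_label = age_svm.predict([selected])[0]        # index into the 8 age groups
    gender_label = gender_svm.predict([selected])[0]  # 0 or 1
    return AGE_GROUPS[age_label], GENDERS[gender_label]
```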
II. APPLICATION DOMAINS FOR AGE ESTIMATION
There are a variety of significant real-world uses for describing differences in facial appearance throughout the ages. When a person's age needs to be ascertained, computer-based age estimation is helpful. Age estimation has a variety of use cases, some of which are as follows:
A. Simulation of Age
Characterizing facial features at various ages can be a useful tool for modelling or mimicking one's appearance at a certain period. Estimated ages at various points in time could be used to study a person's ageing pattern, which could help in simulating the person's facial appearance at an unseen age; such simulated appearances could then be used, for example, to find missing people. [12, 13] provide further information on facial ageing simulation.
B. Security and Surveillance
Age estimation can be utilized in the surveillance and monitoring of bars and vending machines for alcohol and cigarettes to prevent minors from acquiring these products and to limit kids' access to adult content on the internet and in movies [14, 15]. By keeping an eye on a certain age group that is predisposed to the vice, age estimation can also be useful in reducing ATM money transfer fraud [16]. Age estimation can also be used to increase the reliability and accuracy of facial recognition, enhancing national security. A robotic nurse and doctor expert system that provides personalized medical care can also incorporate age estimation. For interaction with patients from various age groups, for example, a personalized avatar can be automatically chosen from a database based on preferences.
C. Content Access
Age estimation can be used to limit children's access to inappropriate content, given the explosion of content on TV and the Internet. For example, a camera mounted on a TV could identify who is watching, so that the TV turns off when children are present while inappropriate content is being broadcast.
D. Missing Persons
Age simulation takes the job of age estimation one step further in helping locate missing people. Older adults can be recognized from their prior photographs using age simulation as a means of identification.
E. Electronic Customer Relationship Management (ECRM)
The term "ECRM" refers to the use of Internet-based technology, including websites, emails, forums, and chat rooms, for the effective management of unique client encounters and one-on-one communication with them. Customers of various ages may have varying tastes and expectations in terms of a product [17]. In order to tailor their products and services to the needs and tastes of clients in various age groups, businesses may utilize automated age estimation to track market trends. How to get and examine relevant personal data from all client groups without violating their right to privacy is the issue at hand. A camera can take photographs of customers while also automatically estimating their age groups and collecting demographic information via automatic age estimation.
F. Biometrics
A soft biometric called age estimation using faces [18] can be used to supplement biometric methods like face recognition, fingerprints, or iris recognition in order to increase the accuracy of identification, verification, or authentication. To increase the accuracy of hard (primary) biometric systems, age estimates can be used in age-invariant face recognition [19], iris recognition, hand geometry recognition, and fingerprint recognition [20].
III. LITERATURE REVIEW
The deep neural network (DNN) was the first deep learning technology to be used in a machine learning algorithm [21-24]. DNNs take a long time to train and are prone to overfitting. Deep belief networks (DBN) and restricted Boltzmann machines (RBM) were introduced to improve learning over plain DNNs [25]. Because RBMs are used within a DBN, learning completes more quickly than in a DNN; a DBN is formed by stacking RBMs with undirected connections between layers [26-29].
S. D. Sapkal and M. D. Malkauthekar [30] presented an experimental analysis of face image classification. Facial photos from two classes and three classes, each with a variety of emotions and angles, are used for categorization. Results for two and three classes are compared using the Fisher discriminant method, and matching is done with Euclidean distance. A neural-network-based, upright-invariant frontal face detection system was presented by G. Mallikarjuna Rao et al. [31] to classify gender using facial information. The accuracy depends on geometric and pixel-based facial traits, while cyclic shift invariance and the pi-sigma neural network give the categorization its robustness.
To overcome the problem of lighting variation using trigonometric functions, Anil Kumar Sao and B. Yegnanarayana [32] suggested an analytic phase-based representation for face recognition. Template-matching eigenvalues are used to determine the weights applied to the projected coefficients. Shape from Shading (SFS) was suggested for gender classification by Jing Wu et al. [33]. To differentiate between the genders of the test faces, linear discriminant analysis (LDA) is employed on the Principal Geodesic Analysis parameters. The SFS technique is used to enhance classification performance for grayscale face photographs.
Age group classification using face photos captured in various lighting scenarios was given by Kazuya Ueki et al. [34]. A technique was put forth by Carmen Martinez and Olac Fuentes [35] to increase precision when just a small number of labelled instances are available. The dimensionality of the image space is reduced using the Eigenface methodology, and unlabeled data is classified using ensemble methods. For each class, ensemble unlabeled data selects the three or five instances that most closely resemble that class. To increase accuracy, these instances are added to the training set, and the procedure is repeated until there are no more examples to categorize. K-nearest-neighbour, Artificial Neural Networks (ANN), and locally weighted linear regression learning were used in the tests.
Based on factors including the geometric arrangement and luminance of facial photos, Ryotatsu Iga et al. [36] created an algorithm to predict gender and age using SVM. The position of the face is found using a graph-matching approach with the Gabor wavelet transform (GWT). For gender estimation, GWT traits such as geometric arrangement, hair colour, and moustache are considered, while age is estimated using GWT traits such as texture patches, wrinkles, and flaps. The Min-Max Modular Support Vector Machine (M3-SVM) was suggested by Hui-Cheng Lian and Bao-Liang Lu [37] to predict age. To extract features from face photos, GWT-based facial point detection and the retina sampling method are employed. In M3-SVM, gender information within age samples is classified using a task decomposition technique.
Using face image data, Young H. Kwon and Niels Da Vitoria Lobo [38] demonstrated age-group categorization. To distinguish young individuals from seniors, ratios are computed among the main facial features: eyes, nose, mouth, chin, the virtual top of the head, and the sides of the face. A wrinkle index is used in a secondary feature analysis to differentiate elderly people from children and young adults. The three classes of babies, young adults, and seniors are determined by a combination of primary and secondary facial features.
A technique to categorize age groups was proposed by Wen-Bing Horng et al. in [39]. Facial features are extracted using the Sobel edge operator, and back-propagation neural networks classify facial photographs into babies, young adults, middle-aged adults, and old people. When identifying baby images, the first network analyzes the geometric features of facial images without wrinkles; a second network divides adults into three groups based on the wrinkle characteristics of an image.
The automatic simulation of ageing effects for face recognition systems was proposed by Andreas Lanitis et al. [40]. The system is used to forecast the facial appearance of adults or children who have been missing for a while. For subjects who supply age-progressive photographs, three formulations of the ageing function (linear, quadratic, and cubic) are evaluated using mean absolute error and standard deviation over the years. According to the appearance-based method of Baback Moghaddam and Ming-Hsuan Yang [41], which classifies gender from facial images using nonlinear SVMs, the difference in classification performance between low-resolution images and the corresponding higher-resolution images is only 1%. They compared their performance with traditional classifiers and modern techniques such as Radial Basis Function (RBF) networks and large ensemble-RBF classifiers.
Face recognition using contour matching was proposed by S. T. Gandhe et al. in [42]; the contour of the face is taken into account when matching face images. An investigation of gender classification using automatically detected and aligned faces was given by Erno Makinen and Roope Raisamo [43]. The gender classification techniques considered include discrete AdaBoost with Haar-like features, a multilayer neural network with image pixels as input, SVM with image pixels as input, SVM with LBP features as input, and combinations of SVM with multilayer neural networks. The best classification rate was attained by SVM using image pixels as input. LBP was suggested by Guillaume Heusch et al. [23] as an image preprocessing step for face authentication under lighting variations; face authentication uses LDA and HMM algorithms. Pan-tilt-zoom (PTZ) and fast face identification algorithms for face detection and tracking were proposed by Jang-Seon Ryu and Eung-Tae Kim [26]; in a Digital Video Recorder (DVR) system, face recognition is performed using a DCT-based HMM method. For age-invariant face recognition, Unsang Park et al. [37] suggested an automated ageing simulation technique: a 3D deformable model of the face is created for pose and expression invariance, and the age modelling method simulates shape and texture information to mimic adult ageing and growth patterns. Face recognition systems evaluated by Alice J. O'Toole et al. [19] outperform humans at matching faces despite variations in lighting; seven facial recognition algorithms were tested against humans to determine whether pairs of face images taken in various lighting conditions were of the same person or different persons.
IV. METHODOLOGY
Preprocessing, feature extraction, feature selection, and age and gender classification are all aspects of the research process that are covered in more detail below.
A. Image Preprocessing
Image preprocessing significantly improves the accuracy of feature retrieval and the quality of subsequent image analysis. For a face recognition pipeline, this combination of enrichment and enhancement steps is necessary. The preprocessing tasks therefore include noise suppression, contrast enhancement, and removal of unfavourable effects introduced during capture, such as motion blur and colour alterations [44].
B. Noise Removal Using Mean Filter
Filtering is a method for enhancing and modifying images. Although the primary goal here is to reduce noise, filters can also be used to emphasize certain qualities. In image processing, 2D filtering methods are frequently seen as an extension of 1D signal processing theory, and the choice of filter is typically influenced by the type of task as well as the nature and features of the data. A mean filter is a simple linear filter that can smooth images easily and efficiently; it reduces the degree of pixel-to-pixel intensity variation and is frequently applied to reduce visual noise. The main idea of mean filtering is to replace each pixel's value in an image with the mean value of its surrounding pixels, including itself, which suppresses pixel values that do not fit their local context. The filter is defined by a kernel that represents the shape and size of the neighbourhood over which the mean is computed. A 3x3 square kernel is frequently used, although a 5x5 square kernel gives stronger smoothing [44].
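As a concrete illustration of this step, the following is a minimal sketch of 3x3 mean filtering with OpenCV; the input file name is a placeholder, and the kernel sizes simply mirror the discussion above.

```python
# Minimal sketch: mean (box) filtering with OpenCV.
import cv2

image = cv2.imread("face.jpg")               # placeholder input path
denoised = cv2.blur(image, (3, 3))           # each pixel becomes the mean of its 3x3 neighbourhood
heavily_smoothed = cv2.blur(image, (5, 5))   # a larger kernel smooths more but blurs facial detail
```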
C. Face Detection and Alignment Using Landmark Localisation
Any face recognition process must start with facial detection. A face detection technique aids in locating any facial features in an image. A face detection system must be impervious to alterations in posture, lighting, mood, scale, skin tone, occlusions, disguises, make-up, and other factors.
Using the Dlib library, the suggested method locates the 68 landmark points on the face. The nasal tip, ear margins, lip borders, eye shapes, and other features are examples of facial key points. For face orientation, which is necessary for facial registration, specific face landmarks must be present. The centre of the face and the position of the eyes are used for face alignment [67]. Based on these variables, the input picture is scaled and cropped, with the image size set at 110x110. Biometrics, gender classification, and age estimation all depend on facial alignment and recognition [43].
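A minimal sketch of this detection-and-alignment step is shown below, assuming the standard dlib 68-point shape predictor file and an illustrative input path; eye-centre-based rotation is only indicated in the comments.

```python
# Sketch: face detection, 68-point landmark localisation, and a simple crop to 110x110.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # standard dlib model

img = cv2.imread("face.jpg")                      # placeholder input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for rect in detector(gray, 1):                    # detected face rectangles
    shape = predictor(gray, rect)                 # 68 (x, y) landmark points
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    # Landmarks 36-41 (left eye) and 42-47 (right eye) give the eye centres used
    # to rotate the face upright; here we only crop and resize to the 110x110 input.
    face = img[rect.top():rect.bottom(), rect.left():rect.right()]
    face = cv2.resize(face, (110, 110))
```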
D. Deep Convolutional Neural Network (DCNN)
DCNN is a deep neural network architecture that extracts distinctive and differentiating features from the preprocessed input. It maps the image into a compact, lower-dimensional representation while still preserving the information needed to characterize the source image. Studies show that DCNNs [45] successfully capture visual characteristics through multilayer neural networks, are highly effective at learning image features, and have many applications in face recognition. The proposed work extracts age and gender features from the face using a DCNN, and the results are encouraging.
The designed DCNN architecture is used to address the proposed age and gender recognition task. The network has a six-layer architecture made up of five convolutional layers and two fully connected layers, and it carries out both feature extraction and classification. Based on the detected landmarks, the input image is preprocessed and cropped to 110x110; with zero padding, the input to the network becomes 112x112. Each of the five convolutional layers is accompanied by a dropout layer, ReLU, batch normalization (BN), and max-pooling. After the fifth convolutional block comes the first fully connected layer, followed by ReLU, BN, dropout, and a second fully connected layer, which outputs 512 features. The dropout ratio is set to 0.5. The conv1 layer uses a 7x7 filter with a stride of 4, the conv2 layer a 5x5 filter, and the final convolutional layers 3x3 filters; all max-pooling layers use a 3x3 window with a stride of 2. The 512 features are then passed to a softmax function for classification. A schematic of the proposed DCNN architecture is shown in Figure 1.
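A hedged Keras sketch of a network matching this description follows. The per-layer filter counts (32 doubling to 512, as mentioned in the Results section), the three-channel 112x112 input, and the class counts are assumptions drawn from the surrounding text rather than the exact published configuration.

```python
# Hedged sketch of the described DCNN (filter counts, input channels and class
# counts are assumptions; see the lead-in above).
from tensorflow.keras import layers, models

def build_dcnn(num_classes=8, input_shape=(112, 112, 3)):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Five convolutional blocks: conv + ReLU -> batch norm -> 3x3 max pool (stride 2) -> dropout.
    filters = [32, 64, 128, 256, 512]
    kernels = [7, 5, 3, 3, 3]             # conv1 7x7, conv2 5x5, remaining layers 3x3
    strides = [4, 1, 1, 1, 1]             # conv1 uses a stride of 4
    for f, k, s in zip(filters, kernels, strides):
        model.add(layers.Conv2D(f, k, strides=s, padding="same", activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D(pool_size=3, strides=2, padding="same"))
        model.add(layers.Dropout(0.5))
    # Two fully connected layers; the second one yields the 512-dimensional feature vector.
    model.add(layers.Flatten())
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model

age_model = build_dcnn(num_classes=8)      # eight age groups
gender_model = build_dcnn(num_classes=2)   # male / female
```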
In order to help with categorization, feature extraction using DCNN recovers the distinctive, precise, and informative qualities included in a person's face image.
E. Particle Swarm Optimization (PSO)
PSO [22] is a well-known optimization method for selecting the best solution from a range of potential options. It was inspired by the cooperative behaviour of a flock of birds: when searching for food, the flock is able to estimate how far the food is from the current positions of its members. In PSO, the population of candidate solutions is called a "swarm" and each candidate solution is a "particle." Each particle is represented as a point in the search space, and an optimal value is sought; particles move across the search space based on their own best positions as well as the best position found by the swarm or by neighbouring particles.
Particles shift their positions by comparing their own best solution with the best solution found by their neighbours, and every particle remembers the location where its best answer was found. The algorithm has been applied, for example, to training artificial neural networks. In this work, the PSO algorithm is used to choose relevant features for the classification problem, and it provides comparatively superior candidate solutions (features).
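For reference, this behaviour corresponds to the standard PSO velocity and position updates below, written with the conventional symbols (inertia weight w, acceleration coefficients c1 and c2, and random numbers r1, r2 in [0, 1]); these are the textbook update rules rather than parameter values reported in this paper.

```latex
v_i^{t+1} = w\,v_i^{t} + c_1 r_1 \left(p_i^{\mathrm{best}} - x_i^{t}\right) + c_2 r_2 \left(g^{\mathrm{best}} - x_i^{t}\right),
\qquad
x_i^{t+1} = x_i^{t} + v_i^{t+1}
```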
F. Hybrid Particle Swarm Optimization (HPSO)
Hybrid PSO is a feature selection method that combines PSO with the genetic algorithm (GA) [9-12]. Plain PSO tends to converge prematurely, becoming trapped in a local optimum of the search space, so it often settles for a locally optimal region. PSO and GA are integrated to get around this problem: combining them lets particles share information, which aids the computation. The hybrid PSO performs a crossover operation on the global best particles produced by PSO. One disadvantage of stochastic approaches is that their performance depends on the given problem, so different parameter settings are required to achieve high performance; the variation of velocity with inertia likewise makes PSO problem-dependent. Hybrid PSO helps to mitigate this.
The best particles obtained from PSO are used in the hybrid PSO's crossover operation. The best particles are discovered by evaluating a fitness function for each particle and comparing it with the others to determine the personal best values; these personal bests are then compared across particles to find the global best particles. The global best particles are provided as input to the crossover, after which the positions and velocities are updated and the results obtained.
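The sketch below illustrates this scheme under stated assumptions: continuous particle positions thresholded into feature masks, a caller-supplied fitness function (for example, SVM accuracy on a validation split), and a single-point crossover on the two best particles. The parameter values are illustrative, not the paper's settings.

```python
# Hedged sketch of HPSO-style feature selection: standard PSO updates plus a
# GA-style crossover on the best particles (assumptions noted in the lead-in).
import numpy as np

def hpso_select(num_features, fitness, n_particles=20, n_iter=50,
                w=0.7, c1=1.5, c2=1.5, seed=0):
    """Return a boolean mask over feature dimensions.
    `fitness(mask)` is assumed to score a candidate feature subset."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_particles, num_features))   # continuous positions in [0, 1]
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_score = np.array([fitness(xi > 0.5) for xi in x])
    gbest = pbest[pbest_score.argmax()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, num_features))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # PSO velocity update
        x = np.clip(x + v, 0.0, 1.0)                               # PSO position update
        scores = np.array([fitness(xi > 0.5) for xi in x])
        improved = scores > pbest_score
        pbest[improved], pbest_score[improved] = x[improved], scores[improved]
        gbest = pbest[pbest_score.argmax()].copy()

        # GA step: single-point crossover of the two best particles, re-injected
        # into the swarm if the offspring beats the current worst personal best.
        top2 = pbest[np.argsort(pbest_score)[-2:]]
        cut = rng.integers(1, num_features)
        child = np.concatenate([top2[0][:cut], top2[1][cut:]])
        if fitness(child > 0.5) > pbest_score.min():
            x[pbest_score.argmin()] = child

    return gbest > 0.5                             # selected feature mask
```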
G. Age and Gender Classi?cations
Identity refers to the features that set one face apart from another; age, gender, facial expressions, and facial landmarks can all contribute. The proposed method takes identity into account along with the age and gender categories. Based on the input image, the suggested technique uses classification to determine a person's age and gender, employing SVM for the task [15]. SVM simplifies both the interpretation of the attributes present in an image and the classification itself. To classify images into two gender classes and eight age classes, SVM constructs an optimal hyperplane in the multidimensional space in which the HPSO results are represented; the classes are separated using the maximum marginal hyperplane (MMH).
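A small hedged sketch of this classification stage with scikit-learn is shown below; the random arrays merely stand in for HPSO-selected feature vectors and their labels, and the RBF kernel is an assumption since the paper does not state the kernel used.

```python
# Hedged sketch: SVM classification of HPSO-selected features into 8 age groups
# and 2 gender classes (random placeholder data; RBF kernel is an assumption).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.random((200, 64))                  # placeholder selected feature vectors
y_age = rng.integers(0, 8, 200)                  # labels for the eight age groups
y_gender = rng.integers(0, 2, 200)               # labels for male / female

age_svm = SVC(kernel="rbf").fit(X_train, y_age)        # multi-class via one-vs-one
gender_svm = SVC(kernel="rbf").fit(X_train, y_gender)  # maximum-margin hyperplane

X_test = rng.random((10, 64))
print(age_svm.predict(X_test), gender_svm.predict(X_test))
```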
V. RESULTS
Publicly accessible datasets like Adience and real images are employed for experimentation in order to judge the efficacy of the current study. The age and gender of an input facial image are determined through experiments.
The proposed system is implemented with the Python TensorFlow framework. OpenCV is used to load the input images, and the dataset is divided into train and test sets. The first step is to preprocess each image in the dataset to produce an image of size 112x112. dlib and OpenCV are used to find and extract facial landmarks; the dlib package estimates the locations of the 68 coordinate points that map to the structures of the face, after which keypoint alignment and localization are handled. The deep convolutional neural network (DCNN) is built and implemented in Python with the TensorFlow framework. Starting at 32, the number of filters doubles with each convolutional layer until it reaches 512.
Each max-pooling layer uses a 2x2 window with a stride of 2, and the dropout rate is set to 0.5. Real-time images and publicly accessible datasets such as Adience are used to evaluate how effective the suggested approach is.
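A minimal sketch of this training setup is given below, reusing the `build_dcnn` sketch from the methodology section; the 80/20 split, optimizer, batch size, and epoch count are assumptions, since the paper does not report them, and the zero arrays are placeholders for the real preprocessed face crops.

```python
# Hedged sketch of the training setup (split ratio, optimizer, batch size and
# epochs are assumptions; arrays below are placeholders for real data).
import numpy as np
from sklearn.model_selection import train_test_split

X = np.zeros((100, 112, 112, 3), dtype="float32")   # preprocessed 112x112 face crops
y = np.zeros(100, dtype="int64")                     # integer age-group labels (0-7)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = build_dcnn(num_classes=8)                    # from the earlier DCNN sketch
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          batch_size=32, epochs=10)
```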
The Adience dataset [1] is made up of images uploaded to Flickr from smartphones. Age and gender identification are the two main uses of the Adience dataset, which serves as a benchmark of face images. The collection comprises shots that were not carefully planned or posed, with varying degrees of appearance, noise, pose, and lighting. It includes photographs of people of both sexes across the following eight age categories: 0-2, 4-6, 8-13, 15-20, 25-32, 38-43, 48-53, and 60+. Table 1 displays the distribution of faces in the Adience dataset by age group and the total number of images per category for both men and women. Real-world pictures are also used, such as pictures of people taken with a webcam in real time and pictures found online.
Table 1: Adience dataset distribution of face images by age group and gender
| Gender | 0-2 | 4-6 | 8-13 | 15-20 | 25-32 | 38-43 | 48-53 | 60+ | Total |
| Male | 990 | 1018 | 889 | 656 | 2903 | 1156 | 212 | 566 | 8390 |
| Female | 756 | 1335 | 1100 | 893 | 2901 | 1009 | 344 | 522 | 8860 |
For automated approaches, feature extraction is a crucial phase that helps draw distinctive characteristics out of the supplied image. By mapping a multidimensional space into a space with fewer dimensions, it also aids dimensionality reduction. The proposed DCNN-based feature extraction is compared with several existing methods, including SIFT [31], histograms of oriented gradients (HOG) [32], LBP [22], and ICA [18]. Figure 3 displays the empirical results of these feature extraction techniques.
The Adience dataset and a few real-world photos were used to evaluate the developed age and gender classification method. The VGG network architecture, first described by Simonyan and Zisserman [11], consists of a plain network of 3x3 convolutional layers stacked on top of one another, together with max pooling, fully connected layers, and a softmax classifier. The 16 and 19 in VGG16 and VGG19 denote the total number of weight (hidden) layers. The VGG networks suffer from slow convergence, require a long training period, and have a very large design. The Inception network was introduced by Szegedy et al. [14]; with parallel convolutional layers of sizes 1x1, 3x3, and 5x5, it extracts multilevel features. The suggested system is compared statistically with the VGG16, VGG19, and InceptionV3 models using recall, precision, and classification rate. Table 2 shows that the developed technique produces improved results in recall, precision, and classification rate.
Table 2: Performance evaluation of the proposed model in comparison to existing models
| Model | Classification rate (%) | Precision (%) | Recall (%) |
| VGG16 | 89.19 | 87.23 | 87.11 |
| VGG19 | 90.50 | 89.28 | 89.99 |
| Inception V3 | 93.00 | 93.22 | 91.30 |
| Proposed age classification module | 95.96 | 96.88 | 96.78 |
| Proposed gender classification module | 97.22 | 97.55 | 96.19 |
| Proposed age and gender module | 98.34 | 98.57 | 97.94 |
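The metrics reported in Table 2 (classification rate, precision, and recall) can be reproduced from predictions and ground-truth labels; the hedged sketch below uses scikit-learn with placeholder label arrays, and macro averaging is an assumption since the paper does not state how the multi-class precision and recall were averaged.

```python
# Hedged sketch: computing classification rate (accuracy), precision and recall
# for multi-class age predictions (placeholder labels; macro averaging assumed).
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 3, 4, 4, 7, 1, 2, 5]     # ground-truth age-group indices
y_pred = [0, 3, 4, 5, 7, 1, 2, 5]     # model predictions

classification_rate = accuracy_score(y_true, y_pred) * 100
precision = precision_score(y_true, y_pred, average="macro", zero_division=0) * 100
recall = recall_score(y_true, y_pred, average="macro", zero_division=0) * 100
print(f"{classification_rate:.2f} {precision:.2f} {recall:.2f}")
```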
VI. CONCLUSION
Age and gender are important considerations for a variety of applications, and the scientific community has shown growing interest in determining them from facial images. Building on age and gender identification from a face image, this research offers a dietary recommendation method. In this context, the paper introduces a recommender system that automatically captures the face and determines an individual's age and gender without any physical contact, combining DCNN-based feature extraction, HPSO-based feature selection, and SVM classification. Experiments show that the proposed system's age and gender recognition outperforms existing methods in terms of accuracy and processing efficiency. The development of a group recommendation system for users in public spaces is envisaged as future work.
REFERENCES
[1] Allison C. Lamont, Steve Stewart-Williams and John Podd, "Face Recognition and Aging: Effects of Target Age and Memory Load," Journal of Memory and Cognition, vol. 33, no. 6, pp. 1017-1024, September 2005.
[2] Young H. Kwon and Niels Da Vitoria Lobo, "Age Classification from Facial Images," Journal of Computer Vision and Image Understanding, vol. 74, no. 1, pp. 1-21, April 1999.
[3] Wen-Bing Horng, Cheng-Ping Lee and Chun-Wen Chen, "Classification of Age Groups based on Facial Features," Journal of Science and Engineering, vol. 4, no. 3, pp. 183-192, 2001.
[4] Andreas Lanitis, Chris J. Taylor, and Timothy F. Cootes, "Toward Automatic Simulation of Aging Effects on Face Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 442-455, April 2002.
[5] Baback Moghaddam and Ming-Hsuan Yang, "Learning Gender with Support Faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, pp. 707-711, May 2002.
[6] Ara V. Nefian and Monson H. Hayes, "An Embedded HMM based approach for Face Detection and Recognition," Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 6, pp. 3553-3556, 1999.
[7] Zehang Sun, George Bebis, Xiaojing Yuan, and Sushil J. Louis, "Genetic Feature Subset Selection for Gender Classification: A Comparison Study," IEEE Workshop on Applications of Computer Vision, pp. 165-170, 2002.
[8] Ming-Hsuan Yang and Baback Moghaddam, "Support Vector Machines for Visual Gender Classification," Fifteenth International Conference on Pattern Recognition, vol. 1, pp. 5115-5118, 2000.
[9] Shyh-Shiaw Kuo and Oscar E. Agazzi, "Keyword Spotting in Poorly Printed Documents using Pseudo 2-D Hidden Markov Models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 8, pp. 842-848, August 1994.
[10] Praseeda Lekshmi V. and M. Sasikumar, "RBF Based Face Recognition and Expression Analysis," Proceedings of World Academy of Science, Engineering and Technology, vol. 32, pp. 589-592, August 2008.
[11] S. T. Gandhe, K. T. Talele and A. G. Keskar, "Face Recognition using Contour Matching," International Journal of Computer Science, May 2008.
[12] Erno Makinen and Roope Raisamo, "Evaluation of Gender Classification Methods with Automatically Detected and Aligned Faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 3, pp. 541-547, March 2008.
[13] Xiaoguang Lu and Anil K. Jain, "Deformation Modeling for Robust 3D Face Matching," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 8, pp. 1346-1357, August 2008.
[14] Guillaume Heusch, Yann Rodriguez and Sebastien Marcel, "Local Binary Patterns as an Image Preprocessing for Face Authentication," Proceedings of the Seventh International Conference on Automatic Face and Gesture Recognition, pp. 6-14, April 2006.
[15] Jang-Seon Ryu and Eung-Tae Kim, "Development of Face Tracking and Recognition Algorithm for DVR (Digital Video Recorder)," International Journal of Computer Science and Network Security, vol. 6, no. 3A, pp. 17-24, March 2006.
[16] Unsang Park, Yiying Tong and Anil K. Jain, "Face Recognition with Temporal Invariance: A 3D Aging Model," Eighth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 1-7, September 2008.
[17] Hussein Rady, "Face Recognition using Principle Component Analysis with Different Distance Classifiers," IJCSNS International Journal of Computer Science and Network Security, vol. 11, no. 10, pp. 134-144, 2011.
[18] S. Sankarakumar, A. Kumaravel and S. R. Suresh, "Face Detection through Fuzzy Grammar," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, no. 2, 2013.
[19] A. Lanitis, C. J. Taylor, and T. F. Cootes, "Toward automatic simulation of aging effects on face images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 4, pp. 442-455, Apr. 2002.
[20] X. Geng, Z.-H. Zhou, and K. Smith-Miles, "Automatic age estimation based on facial aging patterns," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 12, pp. 2234-2240, Dec. 2007.
[21] X. Geng, Z.-H. Zhou, Y. Zhang, G. Li, and H. Dai, "Learning from facial aging patterns for automatic age estimation," in Proc. 14th Annu.
[22] Z. Sun, G. Bebis, X. Yuan and S. J. Louis, "Genetic feature subset selection for gender classification: a comparison study," in IEEE Proceedings on Applications of Computer Vision, pp. 165-170, 2002.
[23] M. Nakano, F. Yasukata and M. Fukumi, "Age and gender classification from face images using neural networks," in Signal and Image Processing, 2004.
[24] H.-C. Kim, D. Kim, Z. Ghahramani and S. Y. Bang, "Appearance based gender classification with Gaussian processes," Pattern Recognition Letters, vol. 27, no. 6, pp. 618-626, April 2006.
[25] P. Viola and M. Jones, "Robust Real-time Object Detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2002.
[26] G. Yang and T. S. Huang, "Human face detection in complex background," Pattern Recognition, vol. 27, no. 1, pp. 53-63, 1994.
[27] S. Bodhe, P. Kapse and A. Singh, "Real-time Age-Invariant Face Recognition in Videos Using the ScatterNet Inception Hybrid Network (SIHN)," International Conference on Computer Vision Workshop (IEEE/CVF), pp. 1112-1120, 2019.
[28] Y. Zhang, L. Liu, C. Li and C. C. Loy, "Quantifying Facial Age by Posterior of Age Comparisons," The British Machine Vision Conference, 2017.
[29] A. Salihbašic and T. Orehovacki, "Development of Android Application for Gender, Age and Face Recognition using OpenCV," 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics, pp. 1635-1640, 2019.
[30] S. E. Choi, J. Jo, S. Lee, H. Choi, I. J. Kim and J. Kim, "Age face simulation using aging functions on global and local features with residual images," Expert Systems with Applications, vol. 80, pp. 107-125, 2017.
[31] Q. Tian and S. Chen, "Joint gender classification and age estimation by nearly orthogonalizing their semantic spaces," Image and Vision Computing, vol. 69, pp. 9-21, 2018.
[32] M. Duan, K. Li, C. Yang and K. Li, "A hybrid deep learning CNN-ELM for age and gender classification," Neurocomputing, vol. 275, pp. 448-461, 2018.
[33] I. Rafique, M. Asad, A. Hamid, M. Awais, S. Naseer and T. Yasir, "Age and Gender Prediction using Deep Convolutional Neural Networks," International Conference on Innovative Computing (IEEE), 2019.
[34] L. Boussaad and A. Boucetta, "An effective component-based age-invariant face recognition using Discriminant Correlation Analysis," Journal of King Saud University - Computer and Information Sciences, 2020.
[35] K. Zhang, C. Gao, L. Guo, M. Sun, X. Yuan, T. X. Han, Z. Zhao and B. Li, "Age Group and Gender Estimation in the Wild With Deep RoR Architecture," The Chinese Conference on Computer Vision, IEEE Access, vol. 5, pp. 22492-22503, 2017.
[36] H. Zhang, X. Geng, Y. Zhang and F. Cheng, "Recurrent age estimation," Pattern Recognition Letters, 2019.
[37] M. S. Shakeel and K.-M. Lam, "Deep-feature encoding-based discriminative model for age-invariant face recognition," Pattern Recognition, vol. 93, pp. 442-457, 2019.
[38] S. Taheri and Ö. Toygar, "On the use of DAG-CNN architecture for age estimation with multi stage features fusion," Neurocomputing, vol. 329, pp. 300-310, 2019.
[39] N. Liu, F. Zhang and F. Duan, "Facial Age Estimation Using a Multi-Task Network Combining Classification and Regression," IEEE Access, vol. 8, pp. 92441-92451, 2020.
[40] S. Chen, C. Zhang, M. Dong, J. Le and M. Rao, "Using Ranking-CNN for Age Estimation," Conference on Computer Vision and Pattern Recognition (IEEE), pp. 742-751, 2017.
[41] Y. Huang, Y. Wang, Y. Tai, X. Liu, P. Shen, S. Li, J. Li and F. Huang, "CurricularFace: Adaptive Curriculum Learning Loss for Deep Face Recognition," Conference on Computer Vision and Pattern Recognition, 2020.
[42] B. Amos, B. Ludwiczuk and M. Satyanarayanan, "OpenFace: A general-purpose face recognition library with mobile applications," pp. 1-18, 2016.
[43] J. Cheng, Y. Li, J. Wang, L. Yu and S. Wang, "Exploiting Effective Facial Patches for Robust Gender Recognition," Tsinghua Science and Technology, vol. 24, pp. 333-345, 2019.
[44] S. Masood, S. Gupta, S. Gupta and M. Ahmad, "Prediction of human ethnicity from facial images using neural networks," Data Engineering and Intelligent Computing, 2018.
[45] S. Gong, X. Liu and A. K. Jain, "Jointly De-biasing Face Recognition and Demographic Attribute Estimation," European Conference on Computer Vision (Springer, Cham), pp. 330-347, 2020.
Copyright © 2023 Harsh J. Baldha. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET51421
Publish Date : 2023-05-02
ISSN : 2321-9653
Publisher Name : IJRASET