Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Badal Pokharel, Shweta Bandhekar
DOI Link: https://doi.org/10.22214/ijraset.2023.57412
Efficient communication is crucial to human interaction, yet individuals with speech and hearing challenges encounter persistent obstacles. This article centers on Nepali Sign Language (NSL) and introduces the Hand Gesture Smart Recognition initiative, whose goal is to translate NSL into English. The research surveys hand gesture recognition technology, emphasizing the cultural subtleties inherent in both vision-based and glove-based approaches. Recent developments, such as compact models and convolutional neural networks (CNNs), mark notable progress in gesture recognition. The article offers insights into the development and application of this technology, fostering optimism about the future of culturally appropriate human-computer interaction technologies.
I. INTRODUCTION
A. Sign Language Communication
Effective human interaction relies on communication, and for individuals facing speech and hearing challenges, sign language becomes a crucial means of expression. However, a substantial portion of the population lacks proficiency in sign language, leading to a dependency on interpreters.
Recent findings from the Nepal Census reveal that approximately 23.63% of the population, equivalent to 2,295,249 individuals, encounters difficulties in hearing or speaking. Globally, there are more than 290 sign languages catering to the communication needs of those with hearing impairments, yet applications that translate Nepali Sign Language (NSL) remain scarce. The Hand Gesture Smart Recognition initiative targets the translation of NSL into English words, offering visual representations of signs with associated meanings. The primary goal is to alleviate communication challenges faced by individuals dealing with hearing or speaking difficulties.
B. Hand Gestures
Nonverbal communication through gestures is a ubiquitous form of expression, involving visible bodily actions that complement or replace spoken words. Hand gestures, in particular, play a significant role in conveying ideas or messages. Recognizing the importance of hand gestures, especially in sign language, has sparked interest in hand gesture recognition technology.
Hand gesture recognition is crucial for various applications, including interactive human-machine interfaces and virtual environments. In the context of sign language translation, the complexity of gestures conveying essential communication information and emotions necessitates a comprehensive approach. Existing recognition systems often focus on either local finger configurations or global body configurations, presenting challenges in accurately interpreting genuine sign language gestures.
Traditional methods, such as hidden Markov models (HMMs) for dynamic hand gesture recognition, have been complemented by 3D convolutional neural networks (3DCNNs) that model spatiotemporal data directly. While a 3DCNN offers hierarchical representations, its parameter-intensive nature makes it difficult to train. As an alternative, domain adaptation on pretrained models is favoured over training a 3DCNN from scratch.
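To make this domain-adaptation idea concrete, the following is a minimal sketch, assuming PyTorch and the Kinetics-pretrained R3D-18 video model from torchvision; the class count, dummy batch, and training step are illustrative placeholders, not any of the systems surveyed here.

```python
# Sketch: adapting a Kinetics-pretrained 3D CNN to a small gesture dataset
# instead of training a 3DCNN from scratch (assumes PyTorch/torchvision).
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

NUM_GESTURES = 10  # hypothetical number of dynamic gesture classes

model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)

# Freeze the pretrained spatiotemporal feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and replace only the classification head, which stays trainable.
model.fc = nn.Linear(model.fc.in_features, NUM_GESTURES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch of 8 clips,
# each 16 frames of 112x112 RGB, shaped (N, C, T, H, W).
clips = torch.randn(8, 3, 16, 112, 112)
labels = torch.randint(0, NUM_GESTURES, (8,))
loss = criterion(model(clips), labels)
loss.backward()
optimizer.step()
```

Because only the final linear layer is updated, the approach sidesteps the heavy data and compute demands of full 3DCNN training while reusing the pretrained spatiotemporal features.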
Hand gesture recognition thus plays a crucial role in human-machine interaction, with applications spanning robotics, communications, and medical assistance. The literature survey below traces the progression of recognition methods, covering both vision-based and glove-based approaches.
II. OBJECTIVE
The Hand Gesture Smart Recognition initiative aims to develop a real-time vision-based system for the recognition of Nepali Sign Language (NSL) alphabets and numbers, as shown in Fig. 1. The primary objectives are to evaluate the feasibility of a vision-based system for sign language recognition and to assess the effectiveness of hand features applicable to real-time sign language recognition systems.
The chosen methodology involves the utilization of a single camera, and participants are required to adhere to specified limitations and distance ranges during system operation. The system places a significant emphasis on accurately defining hand posture, specifically with a bare palm, ensuring that it remains unobstructed by any external objects. Additionally, the design of the system is tailored for indoor use, taking into account the typical environmental conditions found indoors.
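As a rough illustration of this single-camera setup, the sketch below shows an OpenCV capture loop that confines the signer's bare palm to a fixed region of interest; the ROI coordinates and the commented-out classify() call are hypothetical placeholders, not part of the published system.

```python
# Sketch: single-camera capture loop for the vision-based setup described
# above (assumes OpenCV; ROI box and classify() are illustrative only).
import cv2

cap = cv2.VideoCapture(0)  # the single camera the methodology relies on

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Constrain the signer to a fixed region, mirroring the position and
    # distance limitations the system imposes during operation.
    x, y, w, h = 100, 100, 300, 300  # hypothetical ROI for the bare palm
    roi = frame[y:y + h, x:x + w]
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # label = classify(roi)  # a hand-posture classifier would be called here
    cv2.imshow("Hand Gesture Smart Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```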
A pivotal aspect of the project is the endeavour to create a cost-effective system capable of converting sign language into text format. This is a crucial step in enhancing communication accessibility for the deaf and mute community. The emphasis on cost-effectiveness aligns with the project's overarching goal of providing an inclusive and efficient solution to address communication barriers faced by individuals with hearing and speech impairments.
III. LITERATURE SURVEY
Recognizing hand gestures holds significant importance in human-machine interaction, finding diverse applications in fields like robotics, communications, and medical assistance. This review is dedicated to exploring the development of methods for hand gesture recognition, specifically emphasizing vision-based and glove-based approaches.
Within the domain of vision-based methods, various techniques have been proposed and experimentally assessed. In 1994, J. Davis and M. Shah introduced model-based approaches utilizing state machines and vector models to identify coordinated hand movements [1]. In 1996, Y. Cui and J. J. Weng presented non-HMM-based systems that achieved an impressive 93.1% recognition rate for 28 distinct movements, even in intricate scenarios [2].
Various technological approaches shed light on the diverse landscape of hand gesture recognition. An advanced system capable of real-time recognition of single-hand gestures, featuring a vocabulary of 46 gestures, demonstrates its effectiveness in controlling a windowed operating system and executing operations with minimal errors [3]. In a distinct domain, Hidden Markov Models (HMMs) gain prominence for recognizing sentence-level American Sign Language (ASL) with an impressive 99.2% word accuracy, showcasing their efficiency in handling intricate hand gestures [4]. Another facet is explored through the development of a 3D deformable Point Distribution Model of the human hand, emphasizing real-time tracking and the potential for enhanced computational efficiency [5]. In the broader context, hand gesture recognition emerges as a pivotal field in computer vision, especially in human-computer interaction (HCI). Vision-based methodologies, diverging from sensor-based approaches, are thoroughly examined, covering acquisition, detection, pre-processing, representation, feature extraction, and recognition. This comprehensive exploration delves into databases, recent advancements, applications, and classifier schemes, providing a nuanced understanding and paving the way for further research in this dynamic and evolving area [6].
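To illustrate the HMM-based strategy used in systems like [4] and [10], the following sketch trains one Gaussian HMM per gesture and classifies an observation sequence by maximum log-likelihood; it assumes the hmmlearn library, and the per-frame feature vectors are presumed extracted elsewhere.

```python
# Sketch: HMM-based gesture classification in the spirit of [4] and [10]
# (assumes hmmlearn; per-frame feature vectors are computed elsewhere).
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_gesture_hmms(train_data, n_states=5):
    """train_data: {gesture_name: list of (T_i, D) feature sequences}."""
    models = {}
    for name, sequences in train_data.items():
        X = np.vstack(sequences)               # stack all frames
        lengths = [len(s) for s in sequences]  # sequence boundaries
        m = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models

def classify(models, sequence):
    # Score the observation sequence under every gesture model and
    # return the label with the highest log-likelihood.
    return max(models, key=lambda name: models[name].score(sequence))
```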
The introduction of the flexible object recognition and modelling system (FORMS) presents a robust methodology for the representation and identification of animate objects based on their silhouettes. This system adopts a formalism that addresses the inverse problem of object recognition across three complexity levels: primitives, mid-grained shapes, and objects constructed through a grammar-based approach. Noteworthy is its resilience and adaptability, demonstrating stability in diverse scenarios such as noise, the absence or addition of parts, and variations in articulation and viewpoint [7]. Additionally, a statistical and variational approach named region competition is outlined. This method, derived from minimizing a generalized Bayes/minimum description length criterion, proves versatile by converging to a local minimum and is applicable to multiband segmentation, encompassing grey level, colour, and texture images. The integration of a novel colour model further enhances segmentation based on object albedos and facilitates the detection of highlight regions [8]. A further contribution introduces a distinctive technique for describing and recognizing human gestures, emphasizing the representation of gestures as ordered sequences of basic motions. Utilizing optical flow for estimating motion direction, the real-time recognition system is highlighted for its simplicity in assigning gesture models and achieving high recognition rates without performer-specific analysis [9]. Another introduced system focuses on hand gesture recognition designed for continuous recognition against a stationary background. Employing real-time hand tracking, Fourier descriptor-based feature extraction, hidden Markov model (HMM) training, and gesture recognition, the system attains a recognition rate exceeding 90% for 20 different gestures [10]. Lastly, a proposed real-time vision system for hand gesture recognition in visual interaction environments utilizes general-purpose hardware and low-cost sensors. The effectiveness of this approach is demonstrated in recognizing 30 hand gestures from Chinese sign language, achieving a commendable 90% average recognition rate [11].
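The idea in [9] of representing a gesture as an ordered sequence of basic motions estimated from optical flow can be sketched with OpenCV's dense Farneback flow; the four-way direction quantization and magnitude threshold below are illustrative choices, not the original method's parameters.

```python
# Sketch: turning dense optical flow into a sequence of quantized "basic
# motions", loosely following the idea in [9] (assumes OpenCV and NumPy).
import cv2
import numpy as np

DIRECTIONS = ["right", "up", "left", "down"]  # 4-way quantization (illustrative)

def motion_sequence(video_path, mag_threshold=1.0):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    seq = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        dx, dy = flow[..., 0].mean(), flow[..., 1].mean()
        if np.hypot(dx, dy) > mag_threshold:    # ignore near-still frames
            angle = np.arctan2(-dy, dx)         # image y axis points down
            idx = int(np.round(angle / (np.pi / 2))) % 4
            seq.append(DIRECTIONS[idx])
        prev_gray = gray
    cap.release()
    return seq  # e.g. ["right", "right", "up", ...] fed to a sequence matcher
```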
In the realm of computer vision, addressing the intricate challenges of real-time hand pose estimation and shape classification, especially in the context of sign language recognition, has been a noteworthy endeavour. A novel method, grounded in an object recognition-by-parts approach, utilizes a sophisticated 3D hand model featuring 21 distinct parts. This approach integrates a random decision forest (RDF) trained on synthetic depth images to execute per-pixel classification, assigning each pixel to a specific hand part. Impressively, the system achieves real-time processing of depth images captured by Kinect without relying on temporal information, and an SVM-based recognition module dedicated to the ten digits of American Sign Language (ASL) attains an outstanding recognition rate of 99.9% on live depth images [12]. Within human-computer interaction, a vision-based gesture recognition system tackles the challenge of detecting gestures from varying angles. Leveraging a combination of the Affine Transform and Discrete Fourier Transform (DFT), this system achieves average recognition rates of 95.28% and 90.30% for gestures executed at ±30° and ±45°, respectively [13]. Another notable approach centers on gesture recognition utilizing a YOLOv3 and DarkNet-53 convolutional neural network, delivering heightened accuracy in complex environments without additional preprocessing; the proposed model reports 97.68%, 94.88%, 98.66%, and 96.70% for accuracy, precision, recall, and F1 score, respectively [14]. Exploring the integration of wearable devices equipped with surface electromyography for monitoring muscle activity unveils new avenues for hand gesture recognition: a biosensing system featuring adaptive learning capabilities excels in real-time gesture classification, model training, and updating under diverse conditions, achieving an impressive accuracy of 97.12% for recognizing 13 distinct hand gestures [15]. Lastly, a highly efficient deep convolutional neural network approach emerges as a compelling solution for hand gesture recognition. Employing transfer learning, this approach attains recognition rates of 98.12%, 100%, and 76.67% in signer-dependent mode across three datasets, and 84.38%, 34.9%, and 70% in signer-independent mode [16].
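As a loose illustration of the recognition-by-parts pipeline in [12], the sketch below trains a random forest to classify individual pixels into hand parts; the 32-dimensional depth-context features and synthetic labels are stand-ins, and scikit-learn's RandomForestClassifier is used here in place of the paper's custom RDF.

```python
# Sketch: per-pixel hand-part classification with a random forest, loosely
# following the recognition-by-parts idea in [12] (assumes scikit-learn;
# depth-difference features are a common choice for such pipelines).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_PARTS = 21  # the hand model in [12] uses 21 distinct parts

# X: one row of depth-context features per sampled pixel; y: its hand part.
X_train = np.random.rand(5000, 32)             # placeholder features
y_train = np.random.randint(0, N_PARTS, 5000)  # placeholder part labels

rdf = RandomForestClassifier(n_estimators=50, max_depth=20)
rdf.fit(X_train, y_train)

# At run time every pixel is classified independently; per-part summaries
# of the labelled pixels would then feed a downstream digit classifier,
# analogous to the SVM stage described for [12].
parts = rdf.predict(np.random.rand(100, 32))
```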
Sign language systems are vital for aiding communication among the hearing impaired and vocally disabled in India. One study presents an innovative vision-based gesture recognition system tailored for signer independence, proficient in recognizing various facets of Indian Sign Language (ISL) from live video. The system optimizes computation speed with Zernike moments for key-frame extraction and introduces an improved method for co-articulation elimination in fingerspelling alphabets. With preprocessing, feature extraction, and classification stages, the system utilizes skin colour segmentation for sign extraction and employs a Support Vector Machine (SVM) for gesture classification. Experimental outcomes showcase the system's effectiveness, achieving 91% accuracy for fingerspelling alphabets and 89% for single-handed dynamic words, surpassing some existing methods [17]. Another gesture recognition contribution focuses on a human-computer interface adept at recognizing Indian Sign Language gestures, addressing challenges related to hand involvement and overlapping. It successfully recognizes alphabets and numbers using Principal Component Analysis (PCA) and neural networks, with proposed enhancements for robustness that consider fingertip counts and their distances from the hand centroid [18]. Additionally, hand gesture recognition has been explored for Nepali Sign Language identification using Freeman chain code, Vertex Chain Code (VCC), blob analysis, and k-NN classification, with the system's accuracy validated through a confusion-matrix approach, emphasizing the efficiency of the proposed methods [19].
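The skin colour segmentation stage described for [17] can be sketched with OpenCV as follows; the YCrCb threshold values are commonly used defaults, not the thresholds reported in that paper.

```python
# Sketch: skin-colour segmentation for sign extraction, as used in [17]
# (assumes OpenCV; the YCrCb thresholds below are illustrative, not tuned).
import cv2
import numpy as np

def extract_hand_mask(bgr_frame):
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # common skin range
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological clean-up before contour or feature extraction.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask  # the segmented hand then feeds the classification stage
```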
Recent strides in the field of gesture recognition technologies incorporate diverse methodologies, featuring the utilization of YOLO v3 and DarkNet-53 convolutional neural networks to achieve lightweight and precise gesture recognition without the need for extensive preprocessing [14]. Wearable biosensors, equipped with in-sensor adaptive learning capabilities, significantly enhance gesture classification performance in varying conditions, achieving an impressive accuracy of 97.12% for 13 hand gestures and maintaining accuracy at 92.87% for 21 gestures [15]. The focus in automatic hand gesture recognition has shifted towards deep convolutional neural networks, showcasing efficiency with recognition rates of 98.12%, 100%, and 76.67% across three datasets in signer-dependent mode, and 84.38%, 34.9%, and 70% in signer-independent mode [16]. Furthermore, innovative systems tailored for diverse sign languages, such as Nepali Sign Language (NSL) and Bangla Sign Language (BdSL), have emerged. The NSL translation system, leveraging a 2D Convolutional Neural Network, achieves precise translation of static hand gestures into meaningful words, demonstrating higher accuracy for a model recognizing 5 signs [20]. A CNN-based system designed for American Sign Language (ASLA) showcases outstanding predictability, achieving 99.38% accuracy even under variable conditions, presenting potential applications in the medical field [21]. In Bangladesh, a real-time BdSL interpreter harnesses deep machine learning models, showcasing an exceptional accuracy of 99.99% with ResNet18, making a substantial contribution to the field and providing a valuable dataset for further exploration [22]. These collective advancements underscore the continuous progress and adaptability of gesture recognition systems across various contexts and sign languages.
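For the kind of 2D CNN used in the NSL translation system [20], a minimal sketch in PyTorch is shown below; the layer sizes and the 64x64 grayscale input are illustrative assumptions, not the published architecture.

```python
# Sketch: a minimal 2D CNN for static NSL signs in the spirit of [20]
# (assumes PyTorch; layer sizes are illustrative, not the published model).
import torch.nn as nn

class NSLNet(nn.Module):
    def __init__(self, num_signs=5):  # [20] reports a 5-sign model
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # For 64x64 grayscale input the feature map is 32 x 16 x 16.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_signs),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```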
In conclusion, this literature review presents a thorough perspective on hand gesture recognition methods, charting their development from initial techniques to recent technological progress. The exploration of vision-based and glove-based approaches, coupled with the recognition of sign languages in diverse contexts, underscores the extensive applicability and potential significance of these systems. The ongoing advancements in this field create promising prospects for the development of more advanced and culturally sensitive human-computer interaction technologies.
IV. COMPARATIVE STUDY
| Sr. No | Paper Title | Author(s) | Methodology | Advantage | Disadvantage |
| --- | --- | --- | --- | --- | --- |
| 1 | Gesture Recognition: A Chaotic Dynamics Approach [1994] | Davis J., Shah M. | Model-based approaches, state machines, vector models | Early groundwork in coordinated hand movement identification [1] | Dependency on lighting conditions; complexity relative to advanced models like CNNs |
| 2 | Learning Shape Models for Hand Shape Recognition [1996] | Cui Y., Weng J. J. | Non-HMM-based recognition system | 93.1% recognition rate for 28 distinct movements, even in intricate scenarios [2] | - |
| 3 | Real-time Tracking and Recognition of Human Hand [2002] | Lockton R., Fitzgibbon A. W. | Real-time tracking, deterministic boosting | Real-time recognition of a 46-gesture vocabulary, controlling a windowed operating system [3] | Challenges in dynamic environments, real-time processing |
| 4 | Real-time American Sign Language Recognition [1995] | Starner T., Pentland A. | Hidden Markov Models | 99.2% word accuracy for sentence-level ASL recognition from video [4] | Sensitivity to variability in sign language gestures, limited training data availability |
| 5 | Shape from Silhouette [1996] | Hogg D., Heap T. | Shape from silhouette, 3D deformable Point Distribution Model | Real-time hand tracking from silhouette images [5] | Sensitivity to occlusions, limited depth information |
| 6 | Vision-Based Hand Gesture Recognition: A Comprehensive Review [2021] | Sharma P., Bhuyan M. | Survey of vision-based methods: acquisition, detection, feature extraction, recognition | Comprehensive coverage of databases, applications, and classifier schemes [6] | Challenges in complex scenarios, dependence on specific algorithms |
| 7 | FORMS: A Flexible Object Recognition and Modelling System [1996] | Zhu S. C., Yuille A. L. | FORMS framework, PCA, image grammar | Efficient representation of shapes using the FORMS framework [7] | Limited flexibility in capturing dynamic hand movements, resource-intensive development |
| 8 | Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for Multiband Image Segmentation [1996] | Zhu S. C., Yuille A. L. | Region competition, image segmentation | Unifies snakes, region growing, and Bayes/MDL for multiband segmentation [8] | Challenges in handling complex scenes, limited generalization to diverse imagery |
| 9 | Gesture Description and Recognition through Optical Flow Analysis [1998] | Nishikawa S., Ohnishi N., Miyazaki F. | Optical flow analysis | Gestures represented as ordered sequences of basic motions, recognized in real time [9] | Sensitivity to motion variations, limited discriminative power |
| 10 | Real-time Hand Gesture Recognition System Using HMM [2003] | Chen H. T., Fu L. C., Huang C. L. | Real-time tracking, Fourier descriptors, Hidden Markov Models | Recognition rate above 90% for 20 gestures [10] | Challenges in handling real-world variations, sensitivity to background noise |
| 11 | Vision System for Chinese Sign Language Recognition [2005] | Teng X., Wu D., Yu Z., Liu Y. | Vision system, local linear embedding | 90% average recognition rate for 30 Chinese Sign Language gestures [11] | Challenges in handling diverse sign language gestures, limited scalability |
| 12 | Real Time Hand Pose Estimation Using Depth Sensors [2013] | Keskin C., Kıraç F., Kara Y. E., Akarun L. | 3D hand model (21 parts), random decision forest, SVM | Real-time pose estimation with a 99.9% recognition rate on ASL digits [12] | Challenges in cultural adaptation, varied perspectives in gesture recognition |
| 13 | Gesture Recognition for Multiple Viewing Angles [2015] | Baruah N., Talukdar A., Sarma K. K. | Affine Transform, Discrete Fourier Transform | 95.28% and 90.30% recognition at ±30° and ±45° viewing angles [13] | Challenges in handling varied perspectives, limited generalization to unseen angles |
| 14 | Compact Gesture Recognition Model Using CNN YOLOv3 and DarkNet-53 [2021] | Mujahid S. M., Ahmed S. I., Azam M. A., Hassan M. M. | CNN, YOLOv3, DarkNet-53 | Lightweight, accurate recognition in complex environments without extra preprocessing [14] | Challenges in handling varied hand gestures, limited adaptability to dynamic environments |
| 15 | Wearable Surface-EMG Biosensing System for Hand Gesture Recognition [2020] | Moin A. et al. | Wearable biosensors, surface electromyography, in-sensor adaptive learning | 97.12% accuracy for 13 hand gestures under diverse conditions [15] | Challenges in user comfort, limited recognition precision for certain gestures |
| 16 | A Novel Deep Convolutional Neural Network Approach for Hand Gesture Recognition [2020] | Al-Hammadi M. M., Al-Ali A. R., Hamed R. I. | Deep convolutional neural network, transfer learning | Recognition rates up to 98.12% (signer-dependent) across three datasets [16] | Challenges in training deep models, limited explainability of network decisions |
| 17 | Signer-Independent Sign Language Recognition with Co-articulation Elimination [2019] | Athira P. K., Sruthi P. S., Lijiya P. | Zernike moments, skin-colour segmentation, SVM | 91% accuracy for fingerspelling alphabets, 89% for single-handed dynamic words [17] | Challenges in handling diverse Indian Sign Language gestures, limited generalization to other languages |
| 18 | Human-Computer Interface Framework for Indian Sign Language Gesture Recognition [2012] | Deora S., Bajaj M. | PCA, neural networks | Successful recognition of ISL alphabets and numbers [18] | Challenges in adapting to dynamic environments, limited robustness to varied lighting conditions |
| 19 | Hand Gesture Recognition for Nepali Sign Language Using Freeman and Vertex Chain Coding [2018] | Thapa R., Sunuwar L., Pradhan G. | Freeman chain code, Vertex Chain Code (VCC), blob analysis, k-NN | NSL fingerspelling recognition validated via confusion-matrix analysis [19] | Challenges in handling diverse Nepali Sign Language gestures, limited generalization to other contexts |
| 20 | Smart Handsigns: An Application of CNN in Sign Language Recognition [2018] | Sipai S., Mali D., Mali R., Panday P. S. | 2D convolutional neural network | Accurate translation of static NSL gestures into meaningful words [20] | Challenges in handling diverse sign languages, limited adaptability to real-time scenarios |
| 21 | CNN-based Human-Computer Interface for American Sign Language [2022] | Kasapbasi A., Eltayeb A., Elbushra A., Al-Hardanee O., Yilmaz A. | CNN, human-computer interface | 99.38% accuracy under variable conditions [21] | Challenges in background adaptation, varied cultural contexts |
| 22 | Real-time Bangla Sign Language (BdSL) Interpreter Using Convolutional Neural Network [2022] | Podder K. K., Chowdhury M. E. H., Tahir A. M., Mahbub Z. B., Hossain M. S., Kadir M. A. | Convolutional neural network (ResNet18) | 99.99% accuracy with ResNet18 and a valuable public dataset [22] | Challenges in handling diverse Bangla Sign Language gestures, limited generalization to other languages |
This comparative study provides insights into the diverse methodologies, advantages, and disadvantages of various hand gesture recognition methods. The evolution from early model-based approaches to recent advancements in deep learning and CNN applications demonstrates the continuous progress in this field. Each approach has its strengths and weaknesses, and ongoing research aims to address challenges and enhance the applicability of gesture recognition systems across different scenarios and cultural contexts.
V. CONCLUSION
The progression of hand gesture recognition technology encompasses a variety of methodologies, transitioning from early model-based approaches to contemporary deep learning techniques. This evolution extends beyond technological advancement into cultural and linguistic dimensions, particularly in the recognition of sign languages such as Nepali Sign Language (NSL). The Hand Gesture Smart Recognition initiative, which translates NSL into English words, directly confronts communication challenges faced by the deaf community in Nepal, thereby enhancing accessibility.
The literature review covers applications ranging from robotics to medical assistance and highlights significant developments, such as the FORMS framework and real-time depth-based hand pose estimation, that have contributed to more accurate, real-time recognition models. Cultural sensitivity plays a prominent role in the study of sign languages such as Indian Sign Language and NSL, underscoring the importance of tailoring gesture recognition systems to linguistic nuances.
In summary, the ongoing development of hand gesture recognition technology holds promising prospects for advanced and culturally sensitive human-computer interaction. These innovations have the potential to remove communication barriers, foster inclusion, and improve access for individuals with speech and hearing difficulties.
REFERENCES
[1] Davis J. & Shah M. (1994). Gesture Recognition: A Chaotic Dynamics Approach. International Journal of Computer Vision, 51-66.
[2] Cui Y. & Weng J. J. (1996). Learning Shape Models for Hand Shape Recognition from Depth Images. Proc. IEEE International Conference on Image Processing, pp. 33-36.
[3] Lockton R. & Fitzgibbon A. W. (2002). Real-time gesture recognition using deterministic boosting. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 122-129.
[4] Starner T. & Pentland A. (1995). Real-time American Sign Language Recognition from Video Using Hidden Markov Models. Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition, pp. 189-194.
[5] Hogg D. & Heap T. (1996). Towards 3D hand tracking using a deformable model. Proceedings of the 2nd European Conference on Computer Vision, pp. 237-246.
[6] Sharma P. & Bhuyan M. (2021). Methods, Databases and Recent Advancement of Vision-Based Hand Gesture Recognition for HCI Systems: A Comprehensive Review. Journal of Ambient Intelligence and Humanized Computing, 13155-13177.
[7] Zhu S. C. & Yuille A. L. (1996). FORMS: A flexible object recognition and modelling system. Proceedings of the 4th European Conference on Computer Vision, pp. 640-650.
[8] Zhu S. C. & Yuille A. L. (1996). Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for Multiband Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 884-900.
[9] Nishikawa S., Ohnishi N. & Miyazaki F. (1998). Description and recognition of human gestures based on the transition of curvature from motion images. Pattern Recognition Letters, 931-939.
[10] Chen H. T., Fu L. C. & Huang C. L. (2003). Hand gesture recognition using a real-time tracking method and hidden Markov models, 1215-1226.
[11] Teng X., Wu D., Yu Z. & Liu Y. (2005). A hand gesture recognition system based on local linear embedding. Proceedings of the 2nd International Conference on Machine Learning and Cybernetics, pp. 1328-1333.
[12] Keskin C., Kıraç F., Kara Y. E. & Akarun L. (2013). Real Time Hand Pose Estimation Using Depth Sensors. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 67-74.
[13] Baruah N., Talukdar A. & Sarma K. K. (2015). A robust viewing angle independent hand gesture recognition system. International Journal of Advanced Computer Research, 150-154.
[14] Mujahid S. M., Ahmed S. I., Azam M. A. & Hassan M. M. (2021). Real-Time Hand Gesture Recognition Based on Deep Learning YOLOv3 Model. Journal of Ambient Intelligence and Humanized Computing, 8115-8132.
[15] Moin A. et al. (2020). A wearable biosensing system with in-sensor adaptive machine learning for hand gesture recognition. Biosensors and Bioelectronics.
[16] Al-Hammadi M. M., Al-Ali A. R. & Hamed R. I. (2020). Hand Gesture Recognition for Sign Language Using 3DCNN. Journal of Computer Science and Technology, 15-26.
[17] Athira P. K., Sruthi P. S. & Lijiya P. (2019). A Signer Independent Sign Language Recognition with Co-articulation Elimination from Live Videos: An Indian Scenario. International Journal of Innovative Technology and Exploring Engineering, 2062-2067.
[18] Deora S. & Bajaj M. (2012). Human-Computer Interface Framework for Indian Sign Language Gesture Recognition. International Journal of Advanced Computer Science and Applications, 232-240.
[19] Thapa R., Sunuwar L. & Pradhan G. (2018). Finger Spelling Recognition for Nepali Sign Language. Journal of Electrical Engineering and Automation, 14-22.
[20] Sipai S., Mali D., Mali R. & Panday P. S. (2018). Two Dimensional (2D) Convolutional Neural Network for Nepali Sign Language Recognition. International Journal of Computer Applications, 23-26.
[21] Kasapbasi A., Eltayeb A., Elbushra A., Al-Hardanee O. & Yilmaz A. (2022). A CNN-based human computer interface for American Sign Language recognition for hearing-impaired individuals. Journal of Artificial Intelligence and Systems, 45-58.
[22] Podder K. K., Chowdhury M. E. H., Tahir A. M., Mahbub Z. B., Hossain M. S. & Kadir M. A. (2022). Bangla Sign Language (BdSL) Alphabets and Numerals Classification Using a Deep Learning Model. Journal of Electrical and Computer Engineering Innovations, 1-11.
Copyright © 2023 Badal Pokharel, Shweta Bandhekar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET57412
Publish Date : 2023-12-07
ISSN : 2321-9653
Publisher Name : IJRASET