Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Prof. K. K. Sukhadan, Vaishnavi D. Bakhade, Gauri S. Thakare, Komal G. Dhanbhar, Aboli C. Deshmukh
DOI Link: https://doi.org/10.22214/ijraset.2024.58758
Sign language serves as a primary mode of communication for the Deaf and hard-of-hearing community. This paper presents a Sign Language Recognition System (SLRS) designed to facilitate seamless communication between individuals proficient in sign language and those who may not share this proficiency. The system employs a multi-faceted approach, integrating computer vision, machine learning, and signal processing techniques to accurately interpret and recognize sign language gestures. The methodology involves data collection from diverse signing styles, preprocessing to enhance data quality, feature extraction capturing key aspects of sign language expressions, and the selection and training of machine learning models. The system aims to represent sign language gestures effectively, providing a foundation for real-time recognition. Integration into hardware or software platforms ensures practical applications in various settings, including education, employment, and public spaces.
I. INTRODUCTION
A. Basic Definition
The increased public acceptance of, and funding for, international projects emphasizes the importance of sign language. In this age of technology, the desire for a computer-based solution for deaf people is significant. Researchers have been working on the problem for quite some time, and the results are showing promise. Although mature technologies are available for voice recognition, there is currently no comparable commercial solution for sign recognition on the market. The goal is to make computers understand human language and to develop a user-friendly human-computer interface (HCI). More than 63 million people in India live with hearing and speech impairments.
Sign language detection, as implemented in this project, is a model in which a web camera captures images of hand gestures and processes them with OpenCV; the interaction between the user and the machine is intended to be as natural as open, human-to-human dialogue. Gesture analysis is a scientific field that recognizes hand, arm, head, and even whole-body motions that usually entail a certain posture and/or movement. Using hand gestures, an individual can convey more information in a shorter amount of time. Several approaches have been explored to apply computer vision ideas to the real-time processing of gestures. This work concentrates on gesture recognition in the OpenCV framework using the Python language. Language is a huge part of communication, and gesture is a vital and meaningful mode of communication for hearing- and speech-impaired people. This system therefore provides a computer-based method for others to understand what a differently abled individual is trying to say, allowing the identification of gestures and overcoming the boundaries and limitations of earlier systems.
This model uses a pipeline that takes input through a web camera from a user signing a gesture; by extracting individual frames from the video, it estimates the most likely sign for each gesture.
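A minimal sketch of this capture-and-classify loop is shown below, assuming OpenCV (cv2) is installed; the classify_frame() function is a hypothetical placeholder standing in for the trained recognition model described later, not a component specified in the paper:

import cv2

def classify_frame(frame):
    # Placeholder for the trained gesture classifier (assumption for illustration).
    return "unknown"

def run_pipeline(camera_index=0, frame_stride=5):
    # Open the default web camera.
    cap = cv2.VideoCapture(camera_index)
    if not cap.isOpened():
        raise RuntimeError("Could not open web camera")
    frame_id = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Classify only every Nth frame so the loop stays real-time.
            if frame_id % frame_stride == 0:
                label = classify_frame(frame)
                cv2.putText(frame, label, (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
            cv2.imshow("Sign Language Recognition", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
            frame_id += 1
    finally:
        cap.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    run_pipeline()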
B. Related Work
With the continuous development of information technology, the ways in which computers and humans interact have also evolved. A lot of work has been done in this field to help deaf and able-bodied people communicate more effectively. Because sign language is a collection of gestures and postures, any effort to recognize sign language falls under the purview of human-computer interaction. Sign language detection approaches fall into two broad categories. The first is the data-glove approach, in which the user wears a glove fitted with electromechanical sensors that digitize hand and finger motion into processable data. The second is the vision-based approach, which relies only on cameras and computer vision techniques and requires no instrumented glove.
In sign language, a simultaneous structure exists, with parallel temporal and spatial configurations. Because of these characteristics, the syntax of sign language sentences is not as strict as that of spoken language. The formation of a sign language sentence includes or refers to time, location, person, and base. In spoken languages, a letter represents a sound; for the deaf, nothing comparable exists.
Hence, people who are deaf from birth or who became deaf early in their lives often have a very limited vocabulary in the spoken language and face great difficulties in reading and writing. The problem can be addressed through two broad approaches: first, static image recognition techniques combined with preprocessing procedures, and second, deep learning models.
State-of-the-art techniques focus on deep learning models to achieve better accuracy and lower execution time. One model used special hardware, a depth camera, to obtain information about depth variation in the image as an additional feature for comparison, and then applied a convolutional neural network (CNN) to produce the results. Another system performed feature extraction with SIFT and classification with a neural network. In a further model, the images were converted into an RGB scheme, information was derived from a motion and depth channel, and a 3D recurrent convolutional neural network (3DRCNN) was developed for the working system. Hidden Markov models (HMMs) are another prominent technique for sign language recognition. Since these papers use different datasets with different numbers and types of images, comparing them is a tedious task. It was found that most of the projects utilized images that were nearly free of noise.
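As an illustration of the SIFT-based feature extraction mentioned above, the following sketch uses OpenCV's SIFT implementation to compute keypoint descriptors for a single gesture image; the file name and the idea of passing the descriptors to a separate classifier are assumptions for illustration, not details from the cited works:

import cv2

# Load a gesture image in grayscale (hypothetical file name).
image = cv2.imread("gesture_sample.jpg", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("gesture_sample.jpg not found")

# Detect SIFT keypoints and compute their 128-dimensional descriptors.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)

# Each descriptor row describes one keypoint; in the surveyed systems such
# feature vectors are then fed to a classifier (e.g. a neural network).
print(f"{len(keypoints)} keypoints, descriptor array shape: {descriptors.shape}")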
II. OBJECTIVES
The objective of a sign language recognition system is to enable communication between individuals who use sign language and those who may not be familiar with it. Sign language is a visual-gestural language used by the Deaf community to convey information and express thoughts and emotions. The main goals of a sign language recognition system include accurately interpreting gestures, translating them into written or spoken language, and promoting inclusive communication.
Developing an effective sign language recognition system involves utilizing technologies such as computer vision, machine learning, and gesture recognition to accurately interpret and translate sign language gestures into written or spoken language. The ultimate goal is to bridge the communication gap and promote inclusivity for the Deaf community.
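For the final translation step, one simple option, shown below as a hedged sketch rather than a component specified by the authors, is to speak the recognized label aloud with an off-the-shelf text-to-speech engine such as pyttsx3:

import pyttsx3

def speak_label(label: str) -> None:
    # Convert the recognized sign's text label into spoken output.
    engine = pyttsx3.init()
    engine.say(label)
    engine.runAndWait()

speak_label("hello")  # e.g. the label predicted for a greeting gesture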
III. PROPOSED METHODOLOGY
IV. METHODOLOGY
Developing a sign language recognition system involves a multi-step methodology that combines computer vision, machine learning, and signal processing techniques. In general, the methodology comprises data collection across diverse signing styles, preprocessing to enhance data quality, feature extraction that captures the key aspects of sign language expressions, selection and training of machine learning models, real-time recognition, and integration into hardware or software platforms.
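To make these stages concrete, the sketch below outlines preprocessing and model training with a small convolutional network in Keras; the dataset directory, image size, class count, and network layout are illustrative assumptions rather than the configuration used in this work:

import tensorflow as tf

IMG_SIZE = (64, 64)   # assumed input resolution
NUM_CLASSES = 26      # e.g. one class per fingerspelled letter (assumption)

# 1. Data collection and preprocessing: load gesture images from folders,
#    one sub-folder per sign, and rescale pixel values to [0, 1].
train_ds = tf.keras.utils.image_dataset_from_directory(
    "signs/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "signs/val", image_size=IMG_SIZE, batch_size=32)
normalize = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (normalize(x), y))
val_ds = val_ds.map(lambda x, y: (normalize(x), y))

# 2. Feature extraction and model selection: a small CNN learns features
#    directly from the preprocessed frames.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# 3. Training: fit the classifier on the collected gesture data.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)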
V. ADVANTAGES
A Sign Language Recognition System offers several advantages, particularly in facilitating communication and improving accessibility for the Deaf and hard-of-hearing community. Key advantages include inclusive communication, enhanced educational support, efficient interpreting, preservation of sign language culture, real-time interaction, integration with existing technology, and greater independence for Deaf users.
VI. CONCLUSION
In conclusion, a Sign Language Recognition System represents a transformative and inclusive technological solution with the potential to significantly improve communication and accessibility. The system's ability to interpret and translate sign language gestures into written or spoken language addresses longstanding barriers and opens doors to more seamless interactions across various domains. The advantages of such systems are manifold, ranging from fostering inclusive communication to enhancing educational support, enabling efficient interpreting, and promoting cultural preservation. Real-time communication, technological integration, and the potential for increased independence contribute to a more inclusive and accessible society. However, the development of effective sign language recognition systems comes with challenges, including the need for robust datasets, a nuanced understanding of diverse signing styles, and continued adaptation to different contexts. As technology advances, sign language recognition systems have the potential to become integral components of various applications, from educational tools to workplace communication solutions. Their positive impact extends beyond individual interactions, contributing to a broader societal shift toward greater inclusivity and understanding of diverse communication modalities. Continued collaboration between researchers, developers, and the Deaf community will play a crucial role in shaping the future of sign language recognition systems and in further breaking down communication barriers.
REFERENCES
[1] Mekala, P.; Gao, Y.; Fan, J.; Davari, A. Real-time sign language recognition based on neural network architecture. In Proceedings of the IEEE 43rd Southeastern Symposium on System Theory, Auburn, AL, USA, 14–16 March 2011.
[2] Huang, J.; Zhou, W.; Li, H. Sign language recognition using 3D convolutional neural networks. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Turin, Italy, 2015; pp. 1–6.
[3] Athavale, S.; Deshmukh, M. Dynamic hand gesture recognition for human computer interaction: a comparative study. International Journal of Engineering Research and General Science 2014, 2(2), 38–55.
[4] Harshith, C.; Shastry, K. R.; Ravindran, M.; Srikanth, M. V. V. N. S.; Lakshmikhanth, N. Survey on various gesture recognition techniques for interfacing machines based on ambient intelligence. International Journal of Computer Science & Engineering Survey (IJCSES) 2010, 1(2).
[5] Chen, L.; Lin, H.; Li, S. Depth image enhancement for Kinect using region growing and bilateral filter. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR 2012), IEEE, 2012; pp. 3070–3073.
[6] Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: an efficient alternative to SIFT or SURF. In Proceedings of the IEEE International Conference on Computer Vision, 2011.
[7] Yasir, F.; Prasad, P. W. C.; Alsadoon, A.; Elchouemi, A. SIFT-based approach on Bangla sign language recognition. In Proceedings of the IEEE 8th International Workshop on Computational Intelligence and Applications (IWCIA), Hiroshima, Japan, 2015.
[8] Jin, C. M.; Omar, Z.; Jaward, M. H. A mobile application of American sign language translation via image processing algorithms. In Proceedings of the 2016 IEEE Region 10 Symposium (TENSYMP), 2016.
[9] Jalal, M. A. American Sign Language posture understanding with deep neural networks. In Proceedings of the International Conference on Information Fusion (FUSION), IEEE, 2018.
Copyright © 2024 Prof. K. K. Sukhadan, Vaishnavi D. Bakhade, Gauri S. Thakare, Komal G. Dhanbhar, Aboli C. Deshmukh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET58758
Publish Date : 2024-03-03
ISSN : 2321-9653
Publisher Name : IJRASET