IJRASET: International Journal for Research in Applied Science and Engineering Technology
Authors: Himanshu Tambuskar, Gaurav Khopde, Snehal Ghode, Sushrut Deogirkar, Er. Manisha Vaidya
DOI Link: https://doi.org/10.22214/ijraset.2023.49007
Communication is an essential part of human life for expressing feelings and thoughts. Deaf and mute people face constant difficulty because they cannot speak in their regional languages. Language plays a central role in communication; it can be verbal, i.e. using words to speak, read, and write, or non-verbal, using facial expressions and sign language. Deaf and mute people therefore have only the non-verbal option of sign language, which is a vital mode of communication within their community but is hard to follow for people who do not know it. Hence, we present a system, "Sign Language Recognition System Using OpenCV and Convolutional Neural Network", that converts sign language into the corresponding alphabets and words of a standard language so that it can be easily understood by all. We also include some default gestures that are used in day-to-day life. The project is based on a learning algorithm and requires the collection of a dataset containing images of each alphabet and digit to train the model; a Convolutional Neural Network is used for the classification of the images.
I. INTRODUCTION
Deaf and mute people, who cannot speak or hear properly, have only one mode of communication, which is non-verbal: sign language, facial expressions, gestures, and electronic devices. It is difficult for them to convey what they mean to hearing people, and finding an experienced interpreter on a regular basis is both difficult and expensive.
We aim to develop a system that converts sign language into text using a vision-based approach, so that it remains cost-effective. Sign language consists of a variety of hand shapes, orientations, facial expressions, and hand movements that are used to transmit messages, and each sign is assigned a particular alphabet or meaning. Several sign languages are used around the world, such as American Sign Language (ASL), British Sign Language (BSL), and Japanese Sign Language (JSL).
Hearing people rarely learn sign language to communicate with deaf and mute people, which leads to their isolation. This isolation can be reduced if a computer is programmed to translate sign language into text. This paper describes how we created such a system, its requirements, and the kind of data used for training and testing; it also surveys previous research on sign language recognition and ends with a conclusion.
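As a concrete illustration of how the dataset for such a vision-based system can be collected, the sketch below captures labelled hand images from a single webcam with OpenCV. The directory layout, region-of-interest box, 64x64 image size, and key bindings are our own illustrative assumptions, not the exact setup used in this project.

```python
# Minimal sketch: collecting training images for one sign class with OpenCV.
# Directory layout, ROI box, image size, and key bindings are assumptions.
import os
import cv2

label = "A"                               # sign class being recorded (assumed)
out_dir = os.path.join("dataset", label)
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(0)                 # single webcam, per the vision-based setup
count = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Fixed region of interest where the signer places the hand (assumed box).
    roi = frame[100:400, 100:400]
    cv2.rectangle(frame, (100, 100), (400, 400), (0, 255, 0), 2)
    cv2.imshow("capture", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):                   # press 's' to save a sample
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, (64, 64))  # assumed network input size
        cv2.imwrite(os.path.join(out_dir, f"{label}_{count}.png"), gray)
        count += 1
    elif key == ord("q"):                 # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Repeating this loop once per alphabet and digit yields the per-class image folders that a learning algorithm needs for training and testing.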
II. OBJECTIVE
The goal of the Sign Language Recognition project, a real-time vision-based system, is to recognize the American Sign Language alphabet shown in Fig. 1. The prototype's goals were to assess the feasibility of a vision-based system for sign language recognition and, concurrently, to evaluate and choose hand features that could be fed to machine learning algorithms to enable real-time sign language recognition.
The adopted approach makes use of only a single camera and builds on the notions outlined above.
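A rough sketch of such a single-camera, real-time recognition loop is shown below. The model file name (asl_cnn.h5), input size, region of interest, and label set are hypothetical placeholders, assuming a CNN classifier already trained on the collected dataset.

```python
# Minimal sketch of a single-camera, real-time recognition loop.
# Model file, ROI, input size, and label set are illustrative assumptions.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

labels = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # ASL alphabet classes
model = load_model("asl_cnn.h5")                          # hypothetical trained model

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:400, 100:400]                          # assumed hand region
    x = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    x = cv2.resize(x, (64, 64)).astype("float32") / 255.0  # match training preprocessing
    x = x.reshape(1, 64, 64, 1)
    pred = labels[int(np.argmax(model.predict(x, verbose=0)))]
    cv2.putText(frame, pred, (100, 90), cv2.FONT_HERSHEY_SIMPLEX,
                2, (0, 255, 0), 3)                         # overlay predicted letter
    cv2.imshow("sign recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```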
III. LITERATURE SURVEY
In the surveyed literature, the authors use different techniques to implement and develop their models, based on vision-based approaches, sensors, MOPGRU (MediaPipe-optimized gated recurrent unit) [4], and CNNs (Convolutional Neural Networks) [1][2][3][5], which are used for image recognition and tasks that involve the processing of pixel data. LSTM networks are used to learn, process, and classify sequential data because they can learn long-term dependencies between time steps. The CNN is the most frequently used algorithm in the surveyed papers, since it is the one used for model building.
The surveyed articles demonstrate several techniques for building a sign language recognition model that converts hand signs into their corresponding alphabets and digits in standard languages such as American Sign Language, Indian Sign Language, Japanese Sign Language, and Turkish Sign Language. A closer look at these papers shows that the most widely used data acquisition components were the camera and the Kinect. Most work on sign language recognition systems has been performed on static characters that were already captured, or on isolated signs, and the majority of it uses single-handed signs across the different sign language systems. It has also been found that most of the work relies on convolutional neural networks, which are suited to image recognition and other tasks involving image processing.
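For illustration, a small Keras CNN of the kind the surveyed papers describe might look as follows. The layer sizes and the 36-class output (26 letters plus 10 digits) are assumptions for this sketch, not a reconstruction of any specific paper's architecture.

```python
# Minimal sketch of the kind of CNN classifier the surveyed papers rely on.
# Layer sizes and the 36-class output (26 letters + 10 digits) are assumptions.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),               # grayscale hand images (assumed size)
    layers.Conv2D(32, (3, 3), activation="relu"),  # learn local edge/shape filters
    layers.MaxPooling2D((2, 2)),                   # downsample, add translation tolerance
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                           # guard against overfitting small datasets
    layers.Dense(36, activation="softmax"),        # one class per alphabet/digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The convolution and pooling stages extract increasingly abstract hand-shape features from the raw pixels, which is why this architecture dominates the image-based approaches in the survey.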
REFERENCES
[1] Ahmed Kasapbasi, Ahmed Eltayeb, Ahmed Elbushra, Omar Al-Hardanee, Arif Yilmaz, "A CNN based human computer interface for American Sign Language recognition for hearing-impaired individuals", 2022. https://www.sciencedirect.com/science/article/pii/S2666990021000471?via%3Dihub
[2] Kanchon K. Podder, Muhammad E. H. Chowdhury, Anas M. Tahir, Zaid Bin Mahbub, Md Shafayet Hossain, Muhammad Abdul Kadir, "Bangla Sign Language (BdSL) Alphabets and Numerals Classification Using a Deep Learning Model", 2022. https://www.mdpi.com/1424-8220/22/2/574
[3] Pooja M. R., Meghana M., Praful Koppalkar, Bopanna M. J., Harshith Bhaskar, Anusha Hullali, "Sign Language Recognition System", 2022. ijsepm.C9011011322
[4] Bekhzod Olimov, Shraddha M. Naik, Sangchul Kim, Kil-Houm Park, Jeonghong Kim, "An integrated mediapipe-optimized GRU model for Indian sign language recognition", 2022. https://www.nature.com/articles/s41598-022-15998-7
[5] Satwik Ram Kodandaram, N. Pavan Kumar, Sunil G. L., "Sign Language Recognition", 2021. https://www.researchgate.net/publication/354066737_Sign_Language_Recognition
[6] Mathieu De Coster, Mieke Van Herreweghe, Joni Dambre, "Isolated Sign Recognition from RGB Video using Pose Flow and Self-Attention", 2021. CVPRW 2021
[7] Arpita Halder, Akshit Tayade, "Real-time Vernacular Sign Language Recognition using MediaPipe and Machine Learning", 2021. IJRPR462
[8] Songyao Jiang, Bin Sun, Lichen Wang, Yue Bai, Kunpeng Li, Yun Fu, "Sign Language Recognition via Skeleton-Aware Multi-Model Ensemble", 2021. arXiv:2110.06161
[9] Ishika Godage, Ruvan Weerasinghe, Damitha Sandaruwan, "Sign Language Recognition for Sentence-Level Continuous Signing", 2021. csit112305
[10] N. Mukai, N. Harada, Y. Chang, "Japanese Fingerspelling Recognition Based on Classification Tree and Machine Learning", 2017 Nicograph International (NicoInt), Kyoto, Japan, 2017, pp. 19-24. doi:10.1109/NICOInt.2017
[11] Jayshree R. Pansare, Maya Ingle, "Vision-Based Approach for American Sign Language Recognition Using Edge Orientation Histogram", International Conference on Image, Vision and Computing, pp. 86-90, 2016.
[12] Nagaraj N. Bhat, Y. V. Venkatesh, Ujjwal Karn, Dhruva Vig, "Hand Gesture Recognition using Self Organizing Map for Human-Computer Interaction", International Conference on Advances in Computing, Communications, and Informatics, pp. 734-738, 2013.
Copyright © 2023 Himanshu Tambuskar, Gaurav Khopde, Snehal Ghode, Sushrut Deogirkar, Er. Manisha Vaidya. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET49007
Publish Date : 2023-02-05
ISSN : 2321-9653
Publisher Name : IJRASET