Sign language recognition systems have become increasingly important. Such systems enable easy communication between speech-impaired people and the hearing population, and the focus of developing them is to create a communication bridge between the two groups for conveying information. This paper presents a survey of recent sign language recognition technology, both static and dynamic, and summarizes the methods used for sign language recognition in the surveyed research papers.
I. INTRODUCTION
The idea of sign language interpretation is to develop a technique that can identify predefined sign gestures or movements and use them to transfer information. A sign language translator relies on hand-gesture recognition: a camera captures the human body movements and communicates the data to a computer, which uses the gestures as input and produces text or speech as output. The aim of developing a sign language recognition procedure is to enable interaction between speech-impaired people and the hearing population, with the recognized signs communicating meaningful information. The principal components of a sign-gesture recognition process are data acquisition, hand localization, hand feature extraction, and gesture identification. A gesture is an indication of physical behavior or emotional expression, and it comprises body gestures and hand gestures. Gestures divide into two categories: static and dynamic. Static hand-gesture recognition is performed without any additional devices: a person commands the machine with bare hands, and images of the person's hands are captured and analyzed to determine the meaning of each gesture. To recognize static gestures, a common classifier or template matcher is used, as in the sketch below. A dynamic gesture is treated as a trajectory between a first phase and a last phase; dynamic gesture recognition characterizes hand movements by four features: velocity, shape, location, and orientation. Such systems can be used in places like hospitals, malls, and bus and railway ticket counters.
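As a concrete illustration of the static pipeline just described (capture, grayscale conversion, template matching), the following is a minimal sketch using OpenCV. The file names, the one-template-per-sign dictionary, and the normalized cross-correlation method are illustrative assumptions, not a specific surveyed system.

```python
# Minimal sketch of a static-gesture pipeline: load a captured frame,
# convert to grayscale, and match it against labelled sign templates.
import cv2

def classify_static_gesture(frame_path, templates):
    """Return the template label that best matches the hand image."""
    gray = cv2.cvtColor(cv2.imread(frame_path), cv2.COLOR_BGR2GRAY)
    best_label, best_score = None, -1.0
    for label, template_path in templates.items():
        template = cv2.cvtColor(cv2.imread(template_path), cv2.COLOR_BGR2GRAY)
        # Resize the template to the frame size so one score is produced.
        template = cv2.resize(template, (gray.shape[1], gray.shape[0]))
        # Normalised cross-correlation: 1.0 means a perfect match.
        score = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED).max()
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# Hypothetical usage, one template image per sign:
# label, score = classify_static_gesture("frame.png", {"A": "a.png", "B": "b.png"})
```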
II. LITERATURE REVIEW
Sharma et al. [1] compiled a data set of static signs from a pre-existing, open-source GitHub repository. The paper uses a CNN built with the Keras deep learning library, offering high accuracy: model accuracy ranges from 85-95% on the training data set and from 75-85% on the validation data set. A minimal sketch of such a network follows this paragraph. Kunjumon and Megalingam [2] developed a sign language translation system that translates Indian Sign Language (ISL) into English and Malayalam. The system converts sign language with the help of flex sensors attached to a glove and an Arduino UNO microcontroller, with the output displayed on an Android phone via a Bluetooth HC-05 module. The system had an accuracy of 90%.
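The sketch below shows a small Keras CNN of the kind used in [1]. It is an illustration, not the authors' exact architecture: the input size, layer widths, and the number of classes (26 letters) are assumptions.

```python
# A minimal Keras CNN for static sign images: two convolution/pooling
# stages followed by a dense classifier over the sign alphabet.
from tensorflow.keras import layers, models

def build_sign_cnn(num_classes=26, input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```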
Fernandes et al. [3] developed a recognition system for ISL and BSL consisting of both hardware and software approaches. The hardware approach used gloves with flex sensors and an accelerometer, but its accuracy was poor. They later developed a software model: a custom data set was created by capturing images and preprocessing each one from an RGB image to a grayscale image and then to an inverse binary image using a suitable threshold value (sketched below). A CNN was used for prediction, achieving an accuracy of 99.98%. Truong, Yang, and Tran [4] created a system that converts American Sign Language (ASL) to text and speech. Preprocessing first extracts a feature vector from the hand-posture image through noise reduction, grayscale conversion, and histogram equalization; a Haar-like classifier then recognizes the sign and produces text output, and SAPI 5.3 converts the text into speech. The system had an accuracy of 98.7%.
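The preprocessing chain described for [3] can be sketched in a few lines of OpenCV. The threshold value 127 is an assumption, since [3] only states that "a suitable threshold value" was used; the histogram-equalization line mirrors the step mentioned for [4].

```python
# RGB -> grayscale -> inverse binary image, as in the pipeline of [3].
import cv2

def preprocess_sign_image(path, threshold=127):
    bgr = cv2.imread(path)                        # OpenCV loads images as BGR
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    gray = cv2.equalizeHist(gray)                 # histogram equalization ([4])
    # Inverse binary threshold: pixels above the threshold become 0,
    # the rest 255, so the hand appears white on a black background.
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)
    return binary
```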
Karayılan and Kılıç [5] developed a sign language recognition system using a multilayer perceptron neural network, trained on the Marcel Static Hand Posture Database. Two classifiers were used: a raw-features classifier and a histogram-features classifier. Each network was trained with the backpropagation algorithm to recognize which letter was shown, and the system gave accuracy rates of 70% and 85% for the raw-features and histogram-features classifiers respectively (see the sketch after this paragraph). Mahesh Kumar [6] developed a system for static sign images of ISL: after preprocessing the images, the sign is recognized with the LDA algorithm. This system recognizes only the alphabets of ISL.
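The following sketch illustrates the two feature types compared in [5]: raw pixels versus an intensity histogram, each fed to a multilayer perceptron trained by backpropagation. scikit-learn's MLPClassifier stands in for the original network; the bin count and hidden-layer size are assumptions.

```python
# Histogram features vs. raw pixels for an MLP sign classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def histogram_features(gray_image, bins=32):
    """Reduce a grayscale image to a normalised intensity histogram."""
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 255))
    return hist / hist.sum()

def train_mlp(feature_rows, labels):
    # MLPClassifier is trained with backpropagation (SGD/Adam) internally.
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    clf.fit(feature_rows, labels)
    return clf
```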
Kumar, Thankachan, and Dominic [7] created a system that accepts both static and dynamic input. Features are extracted using AdaBoost and HSV color segmentation, and the system is then trained with an SVM classifier to predict signs.
The classification model was built with the Java machine learning library Java-ML, using two feature representations: a Zernike-moment feature vector and a trajectory feature vector. Output was given as text; a sketch of this SVM classification stage follows below. Mariappan and Gomathi [8] created a real-time sign language translation system for ISL. The regions of interest are identified and signs are tracked using the segmentation features of OpenCV, and training and prediction are performed with the fuzzy c-means (FCM) clustering algorithm, with an accuracy of 75%.
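A minimal sketch of the classification stage in [7]: hand-crafted feature vectors (Zernike moments for static signs, trajectories for dynamic ones) fed to an SVM. scikit-learn replaces the Java-ML library used in the paper, and the RBF kernel and split ratio are assumptions.

```python
# Train an SVM on precomputed sign feature vectors and report accuracy.
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def train_sign_svm(feature_vectors, labels):
    X_train, X_test, y_train, y_test = train_test_split(
        feature_vectors, labels, test_size=0.2, random_state=0)
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    return clf
```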
Dixit and Jalal [9] present a methodology that recognizes ISL and translates it into normal text. Combinational parameters of Hu invariant moments and structural shape descriptors are combined into a new feature vector for recognizing signs (a Hu-moment sketch is given below), and a multi-class support vector machine (MSVM) is used for training and recognition; accuracy was 96%. Shenoy, Dastane et al. [10] developed a software-based Indian sign language recognition system built on image recognition. Face detection and elimination are done with a histogram of oriented gradients followed by an SVM classifier; the system then uses an HMM chain for each pose and a KNN model to classify hand poses. The system had an average accuracy of 97.2%. Pankajakshan and Thilagavathi [11] built a real-time recognition system that uses a Canny edge detector for feature extraction; after segmentation and hand tracking, an ANN model recognizes the sign and displays the corresponding text. The whole system was implemented in MATLAB 2014.
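Hu invariant moments, the shape descriptor used in [9], are available directly in OpenCV. The sketch below computes them from a binary hand silhouette; the log-scaling step is a common convention to compress their dynamic range, assumed here rather than taken from [9].

```python
# Seven rotation- and scale-invariant Hu moments from a binary image.
import cv2
import numpy as np

def hu_feature_vector(binary_image):
    moments = cv2.moments(binary_image)
    hu = cv2.HuMoments(moments).flatten()   # 7 invariant shape values
    # Log-scale the moments so all seven lie on a comparable scale.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)
```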
Chen and Zhang [12] implemented a sign language recognition system with the Kinect sensor launched by Microsoft, which can capture color, depth, and skeleton frames. The Kinect sensor captures the signs, and HOG features with an SVM are used to recognize hand-shape features (sketched below); the recognition rate was 89.6%. Kumar, Gupta et al. [13] developed a system for recognizing American Sign Language. After preprocessing the MNIST sign language data set, a model was built in Keras from a combination of convolutional, max-pooling, and flattening layers, so the model mainly depended on a CNN. The accuracy of the model was 99.6%; their data set did not include the letters J and Z.
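HOG feature extraction for hand shapes, as in [12], can be sketched with scikit-image; the patch size and cell/block parameters are assumptions. The resulting vectors would feed an SVM as in the earlier sketch.

```python
# Histogram-of-oriented-gradients features for a grayscale hand patch.
from skimage.feature import hog
from skimage.transform import resize

def hog_features(gray_image):
    patch = resize(gray_image, (64, 64))    # normalise to a fixed size
    return hog(patch, orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))
```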
Rao, Kishore, and Sastry [14] proposed a sign recognition system based on CNN architectures. They trained the system three times with different data-set sizes, and a stochastic technique was also used. The paper further suggests that CNN models perform much better than conventional SLR models. Huang, Zhou et al. [15] developed a 3D CNN model for sign language recognition. The 3D convolutional neural network extracts discriminative spatio-temporal features from the raw video stream. To boost performance, multi-channel video streams, including color information, depth clues, and body-joint positions, are used as input so that the 3D CNN integrates color, depth, and trajectory information; a multilayer perceptron classifier then classifies these feature representations (a minimal sketch follows). For comparison, they evaluated both the 3D CNN and a GMM-HMM on the same data set: accuracy was 90.8% for the GMM-HMM and 94.2% for the CNN.
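The sketch below illustrates the 3D-convolution idea in [15]: a stack of video frames convolved over space and time, with the channel axis available for extra modalities such as depth or joint maps. The frame count, spatial size, and layer widths are assumptions; this is not the authors' exact network.

```python
# A minimal Keras 3D CNN over a clip of stacked video frames.
from tensorflow.keras import layers, models

def build_3d_cnn(num_classes=25, frames=16, size=64, channels=3):
    model = models.Sequential([
        layers.Conv3D(32, (3, 3, 3), activation="relu",
                      input_shape=(frames, size, size, channels)),
        layers.MaxPooling3D((1, 2, 2)),          # pool space, keep time
        layers.Conv3D(64, (3, 3, 3), activation="relu"),
        layers.MaxPooling3D((2, 2, 2)),          # pool space and time
        layers.Flatten(),
        layers.Dense(128, activation="relu"),    # the MLP classifier stage
        layers.Dense(num_classes, activation="softmax"),
    ])
    return model
```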
III. CONCLUSION
Sign language recognition systems are developed to bridge the communication gap between speech-impaired people and the hearing population. In this survey, we briefly discussed various systems. These systems fall into two types, hardware and software, of which the software systems are more cost-effective and accurate; hardware systems rely on devices such as flex sensors and the Kinect sensor. A software system can be implemented with various algorithms and architectures such as SVM, GMM-HMM, FCM, ANN, and CNN. Among the software systems, we found that the LDA and CNN algorithms had the highest recognition rates and that CNN was used most often. Most systems recognized sign language alphabets rather than actual words or sentences.
REFERENCES
[1] Aishwarya Sharma, Siba Panda, and Saurav Verma, "Sign Language to Speech Translation," 11th ICCCNT, 2020.
[2] Jinsu Kunjumon, Rajesh Kannan Megalingam, "Hand Gesture Recognition System for Translating Indian Sign Language into Text and Speech," Second International Conference on Smart Systems and Inventive Technology (ICSSIT), 2019.
[3] Lance Fernandes, Prathamesh Dalvi, Akash Junnarkar, and Manisha Bansode, "Convolutional Neural Network Based Bidirectional Sign Language Translation System," Third International Conference on Smart Systems and Inventive Technology (ICSSIT), 2020.
[4] Vi N.T. Truong, Chuan-Kai Yang, and Quoc-Viet Tran, "A Translator for American Sign Language to Text and Speech," IEEE 5th Global Conference on Consumer Electronics, 2016.
[5] Tülay Karayılan, Özkan Kılıç, "Sign Language Recognition," Fifth International Conference on Intelligent Computing and Control Systems (ICICCS), 2021.
[6] Mahesh Kumar N B, "Conversion of Sign Language into Text," International Journal of Applied Engineering Research, 2018.
[7] Anup Kumar, Karun Thankachan, and Mevin M. Dominic, "Sign Language Recognition," 3rd International Conference on Recent Advances in Information Technology (RAIT), 2016.
[8] Muthu Mariappan H, Gomathi V, "Real-Time Recognition of Indian Sign Language," Second International Conference on Computational Intelligence in Data Science (ICCIDS), 2019.
[9] Karishma Dixit, Anand Singh Jalal, "Automatic Indian Sign Language Recognition System," 3rd IEEE International Advance Computing Conference (IACC), 2013.
[10] Kartik Shenoy, Tejas Dastane, Varun Rao, and Devendra Vyavaharkar, "Real-time Indian Sign Language (ISL) Recognition," 9th ICCCNT, 2018.
[11] Priyanka C Pankajakshan, Thilagavathi B, "Sign Language Recognition System," IEEE Sponsored 2nd International Conference on Innovations in Information Embedded and Communication Systems, 2015.
[12] Yuqian Chen, Wenhui Zhang, "Research and Implementation of Sign Language Recognition Method Based on Kinect," 2nd IEEE International Conference on Computer and Communications, 2016.
[13] Mayand Kumar, Piyush Gupta, Rahul Kumar Jha, Aman Bhatia, Khushi Jha, and Bickey Kumar Shah, "Sign Language Alphabet Recognition Using Convolution Neural Network," Fifth International Conference on Intelligent Computing and Control Systems (ICICCS), 2021.
[14] G. Anantha Rao, K. Syamala, P.V.V. Kishore, and A.S.C.S. Sastry, "Deep Convolutional Neural Networks for Sign Language Recognition," Conference on Signal Processing and Communication Engineering Systems (SPACES), 2018.
[15] Jie Huang, Wengang Zhou, Houqiang Li, and Weiping Li, "Sign Language Recognition Using 3D Convolutional Neural Networks," IEEE 5th Global Conference on Consumer Electronics, 2019.