It is challenging to address the problems experienced by persons with disabilities, such as those who are visually, audibly, or verbally impaired, with a single technology. The project's goal is to provide a single-device solution that is easy to use, fast, accurate, and economical. This study presents a Raspberry Pi-based assistive device for the blind, deaf, and mute. The main goal of the technology is to give persons with disabilities a sense of independence and confidence by seeing, hearing, and speaking for them. With the proposed system, vocally impaired persons can stand in front of the camera and communicate using sign language. The captured gesture is converted to text, and the text is then converted into audio using speech synthesis. For those who have trouble hearing, the input comes as speech picked up by the microphone; the recorded audio is converted into text that is presented to the user in a window on the device's screen. The audio message can also be played through the speaker for people who are blind.
I. INTRODUCTION
Living with conditions such as blindness, deafness, or muteness is a matter of growing concern. Science and innovation have made life increasingly convenient, yet a disadvantaged group is still waiting for a technique that makes communication easier for them. The World Health Organization estimates that there are 285 million blind individuals, 300 million people who are hard of hearing, and 1 million people who are mute around the world. Communication is a major issue in the daily lives of those who are blind, deaf, or mute.
This article focuses on the problem described above. It attempts to create a new tool that helps persons with disabilities (blind, deaf, and mute) converse easily with other people in the real world.
This paper's main objective is to bridge the communication gap and offer solutions that can help people who are blind, deaf, mute, or affected by any combination of these disabilities. We propose a technology that addresses this issue by using a camera to convert the sign language of a mute person into text and voice, thereby helping the deaf and the blind.
II. RELATED WORK
Netchanok Tanyawiwat and Surapa Thiemjarus created a new glove design that uses five flex sensors and a 3D accelerometer on the back of the hand to interpret finger gestures in American Sign Language. Each flex sensor and its pair of contact sensors are integrated into a single input channel on the BSN node to reduce the number of channels and the setup space; the combined signal is then analyzed in software and separated back into its contact and flex components. By using fabric-based electrical contacts and conductive yarns, the glove design has become smaller and more flexible. Experiments on ASL finger gestures were conducted using signals gathered from six speech-impaired individuals and validated on typical subjects. According to the results, the novel sensor glove design produced significant improvements in classification accuracy.
M. Mohandes, S. A-Buraiky, T. Halawani and S. Al-Baiyat have shown that the interfaces of sign language systems can be classified as direct-device or vision-based. The direct-device technique uses measurement tools placed close to the hand, such as instrumented gloves, flexible sensors, and position-tracking devices. The vision-based approach uses a camera to capture hand and finger movement while the user gestures, sometimes wearing coloured gloves to simplify detection. Its major drawback is the large amount of computation required to analyze the images. Their study focused on direct-device approaches.
Bourbakis et al. proposed a paper titled "Multi-modal Interfaces for Interaction and Communication between Hearing and Visually Impaired Individuals: Problems & Issues" in an effort to provide answers to these problems. According to the authors, communication between blind and deaf people is one of the significant and difficult issues in human interaction. The paper presents a study of multi-modal interfaces and the challenges and problems involved in building communication and engagement between blind and deaf people.
Motivation: We surveyed a variety of technologies that can make it easier for people with disabilities to communicate with one another and with the general public, but every technology we examined addressed only one particular aspect or degree of disability among the three conditions of blindness, deafness, and muteness. None of the available technologies is advanced enough to serve as a universal solution for any combination of these three limitations. To accomplish this, we devised a simple approach that anyone with any combination of these three disabilities can use to participate fully in the world around them.
III. OBJECTIVES
As is widely known, one of the challenges people face in life is communication among the deaf, the mute, and the blind. This challenge consists of three cases:
Those who are deaf cannot hear anyone speaking to them.
Those who are blind cannot see the sign language used by a deaf or mute person.
Those who are unable to speak cannot talk in a way that a person who is blind can hear.
To handle the issues and circumstances mentioned above, the following system prototype is proposed.
IV. PROPOSED SYSTEM FUNCTIONALITY
The device's general layout is depicted in the block diagram below. The Raspberry Pi serves as the core of the device and connects the LCD display, speaker, SD card, and camera. The system serves both the vocally and the visually impaired: the camera captures the sign language used by the vocally impaired person, the output is produced in audio form through the speaker for the visually impaired, and the message is shown on the LCD module for the audibly impaired person.
The camera captures the hand gestures of the mute user. An algorithm translates the different gestures into text and audio, which are delivered through the LCD display and the speaker. The various hand-gesture patterns are identified in this way. Blind people can therefore hear the message over the speaker, and deaf people can read it on the LCD.
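This camera path can be sketched in Python with OpenCV roughly as follows. It is a minimal illustration rather than the prototype's actual algorithm: the gesture classifier is a placeholder, the label set is assumed for the example, and text-to-speech is delegated to the espeak engine, which must be installed separately on Raspbian.

```python
# Minimal sketch of the camera path: capture a frame, classify the gesture,
# show the text on screen (standing in for the LCD) and speak it.
# The classifier below is a placeholder; the paper does not specify the
# recognition algorithm, and the label set is assumed for illustration.
import subprocess

import cv2

LABELS = ["hello", "thank you", "yes", "no", "help"]  # assumed gesture vocabulary


def classify_gesture(roi):
    """Placeholder gesture classifier; a trained model would be called here."""
    # e.g. return LABELS[int(model.predict(roi[None, ..., None]).argmax())]
    return LABELS[0]


def speak(text):
    """Play the recognised word through the speaker using the espeak TTS engine."""
    subprocess.run(["espeak", text])


cap = cv2.VideoCapture(0)  # Pi camera exposed as video device 0
last_text = None
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Grayscale, fixed-size region of interest fed to the classifier
        roi = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (64, 64))
        text = classify_gesture(roi)
        cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                    1.0, (0, 255, 0), 2)
        cv2.imshow("Sign output", frame)  # text shown for the deaf user
        if text != last_text:
            speak(text)                   # audio output for the blind user
            last_text = text
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```

Speaking only when the recognised label changes avoids repeating the same word on every frame; a real deployment would also smooth noisy predictions over several frames.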
Additionally, the microphone captures the voice of any speaker, which an algorithm then transforms into text displayed on the LCD, helping deaf users understand the message correctly.
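A minimal sketch of this microphone path is given below. It assumes the SpeechRecognition Python package and a USB microphone, neither of which is named in the paper; the recognised text is printed here and would be rendered in the LCD window on the actual device.

```python
# Hypothetical sketch of the speech-to-text path using the SpeechRecognition
# package; the transcript printed here would be rendered on the LCD window.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:                  # USB microphone on the Pi
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Speak now...")
    audio = recognizer.listen(source)

try:
    # Google Web Speech API backend; an offline engine such as PocketSphinx
    # (recognize_sphinx) could be substituted when the device has no network.
    text = recognizer.recognize_google(audio)
    print("Recognised text:", text)              # shown to the deaf user
except sr.UnknownValueError:
    print("Speech was not understood.")
except sr.RequestError as err:
    print("Speech service unavailable:", err)
```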
V. TOOLS REQUIRED
A. Hardware
Raspberry Pi
Pi Camera
SD card
LCD Display
Speaker
B. Software
Program: Python
Platform: Python 3 IDE
Operating system: Raspbian OS
Library: OpenCV
VNC viewer
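As a quick sanity check of this software stack, a short script such as the following can be run from the Python 3 IDE to confirm that OpenCV is installed and the camera is reachable. The installation method (for example, pip3 install opencv-python or the Raspbian apt package) is an assumption, not something specified in the paper.

```python
# Quick environment check for the software listed above.
import sys

import cv2

print("Python version:", sys.version.split()[0])
print("OpenCV version:", cv2.__version__)

cap = cv2.VideoCapture(0)  # Pi camera exposed through V4L2 as device 0
if cap.isOpened():
    ok, frame = cap.read()
    print("Camera frame captured:", ok, "shape:", frame.shape if ok else None)
else:
    print("Camera not detected; enable the camera interface via raspi-config.")
cap.release()
```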
VI. EXPERIMENTAL SETUP AND RESULTS
The experimental setup of the system prototype and the system analysis are shown below. This paper presents the project at approximately 50% completion.
VII. CONCLUSION
This work has been designed to facilitate communication between disabled and non-disabled people while accounting for the three types of disabilities that may arise. Each individual can convey and receive messages according to their ability and preference. A mute person can communicate using sign language, and the device produces the output in audio form for sighted or blind listeners who cannot understand sign language. For deaf persons, the message is shown on the LCD screen in written form. The microphone likewise converts audio input into text, which is then shown on the LCD. This approach can therefore address any type of challenge that might arise in communication between people with varied abilities and the rest of society.
REFERENCES
[1] N. Tanyawiwat and S. Thiemjarus, "Design of an Assistive Communication Glove using Combined Sensory Channels," Ninth International Conference on Wearable and Implantable Body Sensor Networks, 2012.
[2] M. Mohandes and S. Buraiky, "Automation of the Arabic sign language recognition using the power glove," AIML Journal, vol. 7, no. 1, pp. 41–46, 2007.
[3] N. Bourbakis, A. Esposito, and D. Kabraki, "Multi-modal Interfaces for Interaction-Communication between Hearing and Visually Impaired Individuals: Problems & Issues," 19th IEEE International Conference on Tools with Artificial Intelligence.
[4] American Sign Language, Wikipedia. Available at: https://en.wikipedia.org/wiki/American_Sign_Language.
[5] How do Deaf-Blind People Communicate? Available at: http://www.aadb.org/factsheets/db_communications.html.
[6] "Indian Sign Languages using Flex Sensor Glove," International Journal of Engineering Trends and Technology (IJETT), vol. 4, no. 6, June 2013.
[7] "Implementation of Flex Sensor and Electronic Compass for Hand Gesture Based Wireless Automation of Material Handling Robot," International Journal of Scientific and Research Publications, vol. 2, no. 12, December 2012, ISSN 2250-3153.
[8] Sangeetha P. et al., "Novel Approaches for Robotic Control Using Flex Sensor," International Journal of Engineering Research and Applications (IJERA), ISSN 2248-9622, vol. 5, no. 2, part 2, February 2015, pp. 79-8.