Social distancing has been an essential measure for limiting COVID-19 transmission, yet even after repeated warnings many people fail to practice it. We describe a unique method for automatically recognizing pairs of people who do not follow social distancing rules, such as keeping a distance of around 6 feet between them, including in congested traffic or dense pedestrian areas. We use a portable robot equipped with an ultrasonic sensor, an RGB-D camera, and a 2-D shutter to move through the crowd without collisions and to compute the distance between all humans identified in the camera's field of view. With the use of a heat-sensing sensor, our robot can also detect a person's body temperature and notify safety/healthcare specialists. Indoors, our mobile robot may be linked to the location's CCTV cameras to boost performance in terms of the number of community violations identified, accuracy of pedestrian tracking, and other factors. We underline the benefits of employing our technology in a unique and adaptable manner against COVID-19, which helps to mitigate the impact of COVID-19 cases caused by a lack of social distancing.
I. INTRODUCTION
Our objective is to develop a robotic vehicle that can be readily controlled using pre-programmed human voice instructions. Such systems are commonly called Speech Controlled Automation Systems (SCAS), and the system described here is an instance of that design. Our final goal is to create a robot that is controlled by voice instructions and that can also estimate the distance between two people in a crowded place and check whether they are following social distancing. A cell phone is used to operate the robot remotely by passing the voice-command-based signals through it. In this article we explore the different aspects that relate the robot to the smartphone, as well as the camera and ultrasonic sensing algorithms. For constructing remote robots, a smartphone is an ideal visual link with many handy functions. In this design, an Android controller with limited control is employed for the required functions. Bluetooth technology facilitates communication between the app and the robot: voice commands are converted into Bluetooth signals and transferred to the robotic car's station, where they are received by the Bluetooth module. The voice-controlled robotic vehicle (VCRV) is designed to follow and implement human commands. This requires extensive training so that the machine can map voice-based commands to code signals, decode the signals supplied to it, and carry out all of the activities based on voice commands transmitted over Bluetooth. Our robot transmits speech commands translated to text over a Bluetooth network. This article also discusses computer techniques for face, object, and speech recognition that are basic and straightforward to use.
II. HARDWARE REQUIREMENTS
1. Raspberry Pi: The Raspberry Pi is a small single-board computer that may be used for a variety of purposes, including learning programming skills, building hardware projects, home automation, Kubernetes clusters, edge computing, and even industrial applications.
2. L293D Motor Driver: The L293D is a 16-pin motor driver IC that can operate two DC motors in either direction at the same time. At supply voltages ranging from 4.5 V to 36 V (at pin 8), the L293D is capable of bidirectional drive currents of up to 600 mA per channel. It can be used to control small DC motors.
3. DC Motors: A DC motor is any rotary electrical motor that converts direct current (DC) electrical energy into mechanical energy.
4. RGB-D Camera Sensor: RGB-D sensors are a type of depth sensor that works in conjunction with an RGB (red, green, and blue colour) camera. They add depth data (the distance to the sensor) to a conventional picture on a per-pixel basis. Many difficult tasks, such as object identification, scene parsing, pose estimation, visual tracking, semantic segmentation, shape analysis, image-based rendering, and 3D reconstruction, may benefit from the use of depth information.
5. Ultrasonic Sensor: An ultrasonic sensor is a device that uses ultrasonic sound waves to determine the distance to an object. A transducer is used to emit and receive ultrasonic pulses that relay information about the proximity of an object. A minimal sketch of reading such a sensor from the Raspberry Pi is given after this list.
6. Buzzer: A beeper or buzzer is an audio signalling device that can be electromechanical, piezoelectric, or mechanical. Its main purpose is to convert an electrical signal into sound. It is commonly used in timers, alarms, printers, computers, and other equipment powered by DC voltage, and it may produce various sounds such as alerts, music, bells, and sirens depending on the design.
7. Bluetooth Module (HC-05): The HC-05 is a wireless Bluetooth transceiver module based on the CSR BC417143 that is frequently used in a variety of embedded projects. It comes preinstalled with Serial Port Profile (SPP) firmware and is designed for serial communication. It may be powered from 3.3 V to 6 V. In this project, the module is used only to receive commands from the smartphone.
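As referenced in item 5, the sketch below shows one way to read an HC-SR04-style ultrasonic sensor from the Raspberry Pi in Python. This is a minimal illustration, not the exact code used on our robot; the GPIO pin numbers and the sensor model are assumptions.

```python
# Hypothetical sketch: reading distance from an HC-SR04-style ultrasonic sensor
# wired to the Raspberry Pi. The pin numbers below are assumptions for illustration.
import time
import RPi.GPIO as GPIO

TRIG_PIN = 23   # assumed trigger pin (BCM numbering)
ECHO_PIN = 24   # assumed echo pin (BCM numbering)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def read_distance_cm() -> float:
    """Send a 10-microsecond trigger pulse and time the echo to estimate distance."""
    GPIO.output(TRIG_PIN, True)
    time.sleep(10e-6)
    GPIO.output(TRIG_PIN, False)

    pulse_start = time.time()
    while GPIO.input(ECHO_PIN) == 0:      # wait for the echo pulse to start
        pulse_start = time.time()
    pulse_end = pulse_start
    while GPIO.input(ECHO_PIN) == 1:      # wait for the echo pulse to end
        pulse_end = time.time()

    # Sound travels roughly 34300 cm/s; halve the round-trip time.
    return (pulse_end - pulse_start) * 34300 / 2

if __name__ == "__main__":
    try:
        while True:
            print(f"Obstacle at {read_distance_cm():.1f} cm")
            time.sleep(0.5)
    finally:
        GPIO.cleanup()
```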
III. PROPOSED SYSTEM
The robot can be controlled easily through voice commands without any physical contact. These voice commands are used to control the robot remotely; the other module connects the robot with an ultrasonic sensor, an RGB-D camera, and a 2-D shutter so that it can capture images of people, automatically monitor the distance between two people in a crowded place, and release a beeping sound from the buzzer with the help of image depth sensing and OpenCV algorithms loaded on the Raspberry Pi.
The system is a combination of three phases:
Phase 1: The first step is to link the Raspberry Pi to the AMR Voice application using the Bluetooth module HC-05, so that commands can be given wirelessly to the robot's Bluetooth module. The AMR Voice application converts voice instructions into text using the smartphone's integrated speech recognition engine. The Bluetooth module then receives the translated text-based commands from the application. The Raspberry Pi converts the commands from the Bluetooth module into data signals, which are sent to the motor driver (L293D IC) of the system.
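The snippet below is a minimal sketch of how this command-handling step might look in Python, assuming the HC-05 is wired to the Pi's UART. The serial device path, baud rate, and the exact command words sent by the app are assumptions for illustration.

```python
# Hypothetical sketch of Phase 1: reading text commands forwarded by the HC-05
# over the Raspberry Pi UART and dispatching them to the motion layer.
import serial  # pyserial

PORT = "/dev/serial0"   # assumed UART device the HC-05 is wired to
BAUD = 9600             # common HC-05 default baud rate in SPP mode

def dispatch(command: str) -> None:
    """Map a recognised voice command (already converted to text by the app)
    to a motion request handled by the motor-driver code in Phase 2."""
    actions = {
        "forward": "MOVE_FORWARD",
        "back": "MOVE_BACKWARD",
        "left": "TURN_LEFT",
        "right": "TURN_RIGHT",
        "stop": "STOP",
    }
    print("Command:", actions.get(command, "UNKNOWN"))

with serial.Serial(PORT, BAUD, timeout=1) as link:
    while True:
        line = link.readline().decode("utf-8", errors="ignore").strip().lower()
        if line:
            dispatch(line)
```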
Phase 2: In the second phase, the signals sent to the motor driver IC are used to operate the DC motors in either direction. The L293D is a 16-pin IC that can drive two DC motors in either direction at the same time. It is based on the H-bridge concept, which allows voltage to flow in either direction, making it an excellent choice for controlling DC motors. Based on the commands supplied, the motors begin spinning in the specified direction, allowing the system to move about independently. In this way the robotic vehicle is mobilised.
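A minimal sketch of driving one L293D channel from the Raspberry Pi is shown below. The GPIO pin assignments and the PWM frequency are assumptions, not the actual wiring of our prototype.

```python
# Hypothetical sketch of Phase 2: driving one channel of the L293D H-bridge
# from the Raspberry Pi. Pin numbers are illustrative assumptions.
import RPi.GPIO as GPIO

IN1, IN2, EN = 17, 27, 22   # assumed direction inputs and enable pin of one channel

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, EN], GPIO.OUT)
pwm = GPIO.PWM(EN, 1000)    # 1 kHz PWM on the enable pin for speed control
pwm.start(0)

def drive(direction: str, speed: int = 70) -> None:
    """Spin the motor forward or backward at the given duty cycle (H-bridge logic)."""
    GPIO.output(IN1, direction == "forward")
    GPIO.output(IN2, direction == "backward")
    pwm.ChangeDutyCycle(speed)

def stop() -> None:
    GPIO.output(IN1, False)
    GPIO.output(IN2, False)
    pwm.ChangeDutyCycle(0)
```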
Phase 3: In the final phase, the distance is determined using the ultrasonic sensor, and photos of individuals are acquired using the camera sensors (an RGB-D camera and a 2-D shutter); these images are further analysed using OpenCV on the Raspberry Pi. The processed image is used as input to the detection step. We employ the EAST (Efficient and Accurate Scene Text) method with OpenCV to analyse a picture and determine whether social distance is observed between persons. If social distance is not maintained, signals are transmitted to the buzzer, which emits a beeping sound to remind the individuals to follow social distancing.
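To illustrate the distance-check logic, the sketch below uses OpenCV's built-in HOG pedestrian detector as a stand-in for the detection model named above; it is not the exact pipeline of our system. The 1.8 m threshold and the pixels-per-metre calibration constant are assumptions for illustration.

```python
# Hypothetical sketch of the Phase 3 social-distance check using OpenCV's
# HOG pedestrian detector as a stand-in detection model.
import itertools
import math
import cv2

PIXELS_PER_METRE = 120          # assumed calibration (would come from RGB-D depth data)
MIN_DISTANCE_M = 1.8            # roughly the 6-foot social-distancing rule

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def breaches_in(frame) -> int:
    """Detect people, then count pairs whose centre-to-centre distance is too small."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    centres = [(x + w / 2, y + h / 2) for (x, y, w, h) in boxes]
    breaches = 0
    for (ax, ay), (bx, by) in itertools.combinations(centres, 2):
        if math.hypot(ax - bx, ay - by) / PIXELS_PER_METRE < MIN_DISTANCE_M:
            breaches += 1
    return breaches

cap = cv2.VideoCapture(0)       # RGB stream of the camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if breaches_in(frame) > 0:
        print("Social-distance breach: sound the buzzer")  # e.g. raise a GPIO pin
cap.release()
```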
IV. EXPERIMENTAL RESULTS
To check the voice recognition capability of the robot, we set voice commands on our phone and checked whether the robot moved correctly in the specified direction. The robot was found to move precisely in accordance with the voice commands supplied by the user.
For image sensing, we considered an outdoor area with more than 30 people moving around the robot in our evaluations. Thirty of the sampled locations are inside the RGB-D camera's sensing region.
Based on these locations, five different scenarios were created to test the ability of the robot.
Scenario 1: One individual stands at various spots within the RGB-D camera's sensing range. At each spot, the person is free to face any arbitrary direction.
Scenario 2: Two people stand together at one of 40 equally sampled sites in the environment (distance between them 2 metres). Each person can face any arbitrary direction.
Scenario 3: A person walks 5 metres in a direction perpendicular to the robot's orientation. The person walks at a different pace in each trial. The robot is stationary and can only rotate in place.
Scenario 4: For 60 seconds, a given number of people wander in random directions in the environment. The number of persons ranges from two to six. For the full 60 seconds, they may walk alone or in a group with someone else. The number of non-compliant groups (people walking together) remains constant in this scenario, and members of one group do not interact with or join another group.
Scenario 5: Two persons move together through the environment in a random direction. Their paths may be smooth or feature abrupt sharp bends. The robot is free to move around and follow the walking group.
Results of the social distance breaches detected by the robot, based on the scenarios:

Condition                                | Breaches detected
Robot stationary, people in motion       | 26 out of 30
Robot in motion, people stationary       | 24 out of 30
People stationary, robot in motion       | 25 out of 30
Both robot and people in motion          | 21 out of 30
Both robot and people stationary         | 28 out of 30

Table: Social distance breaches reported by the robot.
V. CONCLUSION
Using inexpensive sensors such as RGB-D cameras, we describe a unique method for detecting social distance breaches in indoor scenes. We employ a mobile robot to attend to people who are not adhering to the social distancing norm and to encourage them to move apart by emitting a beeping sound. We show that our system is effective at detecting breaches, locating pedestrians, and following walking pedestrians.
There are a couple of drawbacks to our approach. For example, it does not distinguish between strangers and members of the same community, and at present it is only 90% accurate at detecting breaches. We also need to strengthen the enforcement of social distancing by developing better human-robot interaction approaches. In the future, we might look into better exploration approaches to reduce the risk of missing breaches. We'd also like to create methods for recognising whether or not the humans around the robot are wearing masks.