Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Lakshya Jain, Upendra Bhanushali, Rajashree Daryapurkar
DOI Link: https://doi.org/10.22214/ijraset.2022.47141
This paper presents the design and implementation of a multi-level, multi-faceted software and hardware architecture that we have used to create a swarm of intelligent robots capable of communicating basic instructions to each other. The architecture consists of several layers, each controlling a particular part of the system, with humans as the primary drivers of the whole system. The individual rovers can navigate autonomously and avoid obstacles in their way. Our system utilizes several microcontrollers and companion computers, namely the Raspberry Pi, the Pixhawk flight controller, and the Arduino Mega microcontroller. We have also carried out the circuit design and PCB implementation for the rovers. We conducted this project as a proof of concept for the feasibility of surveillance robots. A swarm of intelligent robots would undoubtedly be helpful in situations such as disaster management and search and rescue missions.
I. INTRODUCTION
The main aim of our project is to design a swarm of cost-effective, reliable, and intelligent surveillance robots. The individual robots in our system are autonomous rovers equipped with a control/actuation system that can capture footage of their surroundings for recording and processing. The system follows a master-slave architecture in which the Raspberry Pi acts as the master and the Arduino as the subordinate: most of the sensors are interfaced with the Arduino, which takes commands from the Raspberry Pi, while the Raspberry Pi handles command generation and image processing.
Surveillance and monitoring systems aim to produce high-quality, reliable data, providing measurement, reporting, and verification functions on the captured data. One use case of our surveillance system is monitoring forest fires, which can then be detected early and precisely. Building a surveillance and monitoring system using stationary cameras for such areas can be challenging, as the structures need to be designed around the surroundings to be monitored. Fixed surveillance cameras also leave blind spots, since their positions cannot change. We therefore propose a swarm of mobile autonomous surveillance robots that can effectively cover a large area.
A swarm of robots is a cluster of robots that works in unison towards a particular objective. Such swarms can be used for numerous applications, such as light shows, disaster management, surveillance, mapping, and search and rescue missions.
II. LITERATURE REVIEW
Krauss [1] developed a cost-efficient autonomous vehicle platform that combines an Arduino microcontroller, a Zumo robot chassis, and a Raspberry Pi. The Raspberry Pi provides a web interface for control and debugging and streams data wirelessly, while also adding computing power; the Arduino executes the control law at precise time intervals. The reason for pairing the two boards is to overcome the Arduino's limitation in streaming data back: the Raspberry Pi, which works as a mini-computer, solves this by providing Wi-Fi access via VNC or SSH. This synergistic combination lets the Arduino read the sensor signals and drive the motors with PWM signals in real time, while the Raspberry Pi handles communication and heavier computation.
Rohith B N [2] combines the concepts of IoT and computer vision in a system intended for surveillance of large farms and forests. Since a mobile surveillance camera gives much more flexibility than a fixed or static one, it is the preferred solution. The author briefly describes the hardware used, an ESP32 with a sensor array; the ESP32 connects over Wi-Fi, with an external antenna as an alternative for long-range communication. The author also plans a fire alert system based on readings from the temperature, humidity, and gas sensors: readings are taken from the sensors, and a notification is sent if a fire is detected, while a live video feed can be viewed to identify false alarms. If the ultrasonic sensor detects an object, an obstacle alert is raised, which helps the robot avoid obstacles. A microphone detects uncommon noises, and a notification is sent to the user if sounds such as tree cutting, passing vehicles, or other high-decibel noises are sensed. The author has thus created a mobile surveillance system using IoT and computer vision.
Arabi et al. [3] have designed a GPS-based navigation system for robot path planning. A control station feeds the GPS coordinates of target points to the rover, which uses the GPS signal to create a path to the target location. A PID control algorithm corrects the position and heading of the robot to compensate for any deviation or error, so that it follows the path and reaches the destination. A Raspberry Pi performs the necessary calculations and drives the rover. The notable features of this project are the calculations between the destination and current locations and the handling of complications that may occur, such as path formation and localization. For distance measurement, the authors apply the haversine formula to the latitude and longitude of the two points; it closely approximates the great-circle distance, ensuring high accuracy.
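The paper does not reproduce the formula itself; the following is a minimal Python sketch of the haversine distance computation described above (the function name, Earth-radius constant, and example coordinates are our own, not taken from [3]):

```python
import math

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points given in degrees."""
    R = 6371000.0  # mean Earth radius in metres (assumed constant)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlambda = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlambda / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Example: remaining distance from the rover's current fix to a target waypoint
print(haversine_distance(19.0760, 72.8777, 19.0790, 72.8800))  # roughly 400 m
```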
G. Anandravisekar et al. [4] have proposed an IoT-based surveillance robot in which a Wi-Fi module is interfaced with an Arduino to achieve an effectively unlimited range of operation. The surveillance robot can be operated in both automatic and manual modes. Using the Arduino microcontroller reduces the complexity and cost of the project, and communication with the robot is carried out more securely; this gives it several advantages over existing systems, which use robots with a limited communication range and short-range wireless cameras. The project uses the Cayenne software, a drag-and-drop IoT project builder that allows devices to connect to the internet easily and is used to design prototypes and IoT-based applications. Both manual and automatic modes can be operated through this software.
III. PROJECT DESIGN
This section covers the design and planning stages of our project. In our brainstorming sessions, we formulated a problem statement for the project, followed by the proposed solution that we would work on further. We have explained the working principles with a generalized block diagram.
A. Problem Statement
B. Proposed Solution
IV. METHODOLOGY
A. Multi-Layered Architecture For Swarm Of Robots
Since we have a swarm of platforms that need to accomplish a task by coordinating, it is essential to have a definite structure and protocols to operate every component in the system. The architecture that we have proposed is as follows:
1. Low Level Control Layer: The low-level control layer handles actuation and obstacle avoidance on each rover. It is built around the Arduino Mega, which interfaces with most of the sensors and drives the motors, executing the instructions it receives from the high-level control layer.
2. High Level Control Layer: The high-level control layer is responsible for visual perception, generating instructions for the low-level control layer, communicating with the higher layers, and taking instructions from them. It consists of a Raspberry Pi with a Pi Camera as the imaging device and a ZigBee antenna for communication. The relationship between the low-level and high-level control layers can be described as a unidirectional master-slave system in which the Raspberry Pi can communicate with the Arduino but not the other way around. To establish this communication, we connected four GPIO output pins of the Raspberry Pi to digital input pins of the Arduino. This way, we can send a 4-bit binary signal in a very simple manner, encoding up to 16 distinct instructions (a sketch of this signaling is given after this list).
3. Communication Layer: The communication layer is responsible for communication between the ground station, the leader robot, and the minor robots. Depending on the mission profile, it can be switched between Wi-Fi, ad-hoc network, or ZigBee-based communication. The leader robot handles all interactions with the ground station, relays communications to the minor robots, and coordinates between them. The Raspberry Pi on the leader robot can broadcast instructions to all minor robots in its group (see the broadcast sketch after this list).
4. Human Interaction Layer: The human interaction layer consists of the ground station and the interface running on it to communicate with the leader robots and receive any communication from them. In our current implementation, a laptop running an app designed in MATLAB serves as the ground station. Using this app, we can seamlessly navigate the different functions and send communications to, and receive communications from, the leader robots. This layer acts as the point of contact between the human operator and the rest of the system.
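As a minimal sketch of the 4-bit GPIO signaling between the high-level and low-level control layers (the BCM pin numbers and instruction codes below are our own assumptions, not values from the paper), the Raspberry Pi side could look like this:

```python
import RPi.GPIO as GPIO

# BCM pin numbers used for the 4-bit instruction bus (assumed for illustration)
INSTRUCTION_PINS = [17, 27, 22, 23]

GPIO.setmode(GPIO.BCM)
GPIO.setup(INSTRUCTION_PINS, GPIO.OUT, initial=GPIO.LOW)

def send_instruction(code):
    """Drive the four GPIO lines with the bits of a 0-15 instruction code."""
    for bit, pin in enumerate(INSTRUCTION_PINS):
        GPIO.output(pin, (code >> bit) & 1)

# Hypothetical instruction codes, e.g. 0b0001 = move forward, 0b0010 = turn left
send_instruction(0b0001)
```

On the Arduino side, the four lines are read as digital inputs and decoded back into the 0-15 instruction code.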
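The paper does not specify the message format used on the communication layer; as one possible sketch of the leader's broadcast when the layer is switched to Wi-Fi mode (the port number and instruction string are illustrative assumptions), a UDP broadcast from the leader's Raspberry Pi could be written as:

```python
import socket

BROADCAST_ADDR = ("255.255.255.255", 5005)  # assumed port for the swarm

def broadcast_instruction(message):
    """Send a short text instruction to every minor robot on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message.encode("utf-8"), BROADCAST_ADDR)

broadcast_instruction("SURVEY_AREA_2")
```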
B. Block Diagram of Entire System
V. CONCLUSION
A. Thus, we have created a small-scale project to be used as a swarm surveillance and intelligence solution. This project uses a swarm of Raspberry Pi and Arduino-powered robots for surveillance.
B. This system illustrates a low-cost robotic surveillance solution that can be built using components found at most electronics stores.
C. An automated surveillance system gives us convenience and a wider field of view and makes the surveillance task autonomous.
D. The multi-layered, master-slave architecture we have implemented divides complex tasks across the available resources and helps us optimize the system for the best performance without overloading it.
E. We also created a simulation, using the ArduRover firmware on a Pixhawk flight controller board, to test our system and to make two rovers survey two different areas autonomously and simultaneously (a rough sketch of how such a simulated mission can be commanded is given below).
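The simulation scripts themselves are not included in this paper; the following is a rough sketch, assuming the DroneKit-Python library and two ArduPilot Rover SITL instances listening on the UDP ports shown, of how two simulated rovers could each be sent towards a separate survey area:

```python
from dronekit import connect, VehicleMode, LocationGlobalRelative
import time

# Connect to two ArduPilot Rover SITL instances (connection strings are assumptions)
rover1 = connect('udp:127.0.0.1:14550', wait_ready=True)
rover2 = connect('udp:127.0.0.1:14560', wait_ready=True)

def send_to(vehicle, lat, lon):
    """Arm the rover and drive it towards a target coordinate in GUIDED mode."""
    vehicle.mode = VehicleMode("GUIDED")
    vehicle.armed = True
    while not vehicle.armed:
        time.sleep(1)
    vehicle.simple_goto(LocationGlobalRelative(lat, lon, 0))

# Hypothetical starting points of the two survey areas
send_to(rover1, -35.3632, 149.1652)
send_to(rover2, -35.3640, 149.1662)
```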
REFERENCES
[1] R. Krauss, "Combining Raspberry Pi and Arduino to form a low-cost, real-time autonomous vehicle platform," 2016 American Control Conference (ACC), 2016, pp. 6628-6633, doi: 10.1109/ACC.2016.7526714.
[2] B. N. Rohith, "Computer Vision and IoT Enabled Bot for Surveillance and Monitoring of Forest and Large Farms," 2021 2nd International Conference for Emerging Technology (INCET), 2021, pp. 1-8, doi: 10.1109/INCET51464.2021.9456180.
[3] A. Al Arabi, H. Ul Sakib, P. Sarkar, T. P. Proma, J. Anowar and M. A. Amin, "Autonomous Rover Navigation Using GPS Based Path Planning," 2017 Asia Modelling Symposium (AMS), 2017, pp. 89-94, doi: 10.1109/AMS.2017.22.
[4] G. Anandravisekar, A. Anto Clinton, T. Mukesh Raj, L. Naveen and M. Mahendran, "IOT Based Surveillance Robot," International Journal of Engineering Research & Technology (IJERT), vol. 7, no. 03, 2018, pp. 84-87.
[5] B. N. Rao, R. Sudheer, M. A. Sadhanala, V. Tibirisettti and S. Muggulla, "Movable Surveillance Camera using IoT and Raspberry Pi," 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), 2020, pp. 1-6, doi: 10.1109/ICCCNT49239.2020.9225491.
[6] N. Sakali and G. Nagendra, "Design and Implementation of Web Surveillance Robot for Video Monitoring and Motion Detection," International Journal of Engineering Science and Computing, 2017, pp. 4298-4301.
[7] R. Washington, K. Golden, J. Bresina, D. E. Smith, C. Anderson and T. Smith, "Autonomous rovers for Mars exploration," 1999 IEEE Aerospace Conference Proceedings (Cat. No.99TH8403), 1999, pp. 237-251 vol. 1, doi: 10.1109/AERO.1999.794236.
Copyright © 2022 Lakshya Jain, Upendra Bhanushali, Rajashree Daryapurkar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET47141
Publish Date : 2022-10-20
ISSN : 2321-9653
Publisher Name : IJRASET