Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Manasa S, Dr. Swetha Rani T, Sneha R, Sonal Jain, Swathi C L
DOI Link: https://doi.org/10.22214/ijraset.2022.45380
This paper presents the design and implementation of a robotic arm integrated with computer vision. As robotics becomes an ever more integral part of industry, there is a need for automated systems that require little or no user training to operate. With this motivation in mind, we have developed a robotic arm embedded with face-tracking technology that can be operated offline. We achieved this using an Arduino Uno for the embedded part, combined with the OpenCV library for face tracking, thereby developing a multipurpose robotic arm.
I. INTRODUCTION
The design, development, and implementation of a live face-tracking robotic arm are described in this work. Robots are machines that are faster and more powerful than humans; robotics is the study, design, and application of such machines. Robots with rotating joints are referred to as articulated robots, and the term "axes" is used in robotics to refer to those joints. A servo motor is the kind of motor widely used to drive articulated robots. A servo-driven robot can have just a few axes or as many as ten or more; four to six axes are common in industrial robots, and six axes are the most common in industrial applications. A face-tracking system is a camera-based tool that can follow the motion of one or more faces over time. The system lets you track human faces in fine detail. After the given face is initialized manually, a face-processing task inspects the succeeding video. Using the robotic arm to carry out face tracking has allowed us to precisely design, build, and test devices that must pan the sensor to a particular angular position while the computation is running. The fundamental goal of this project is to detect and track faces in real-time input video. The input video stream is acquired using a webcam or any other live video acquisition device. The video is processed by dividing it into frames, and each frame is examined for a face. Once a face is spotted, a bounding box is drawn around it. The algorithms allow the robot to automatically identify an object in a video and interpret its motion as a trajectory in order to predict where it will end up. The first step in tracking an object is to detect it; the tracking of the face itself is carried out with the help of an Arduino microcontroller. Face tracking is effectively applied to observe human activity in suburban areas, parking lots, and banks. In traffic transportation, object tracking is widely used for flow monitoring, accident detection, pedestrian counting, and so on. Another key application of object tracking is in video compression, to automatically detect and track moving objects in videos; as a consequence, more coding bytes are allocated to moving objects and fewer coding bytes are used for backgrounds. Object tracking also has numerous HCI applications such as hand-gesture recognition and mobile video conferencing.
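The frame-by-frame detection step described above can be illustrated with a short OpenCV sketch. This is only a minimal example of the idea, not the exact program used in the project; the Haar-cascade file bundled with OpenCV and camera index 0 are assumptions.

```python
import cv2

# Frontal-face Haar cascade bundled with OpenCV (assumed); any equivalent cascade file works.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)               # webcam, assumed to be device 0

while True:
    ok, frame = cap.read()              # the live stream is processed one frame at a time
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:          # draw a bounding box around every detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Face detection", frame)
    if cv2.waitKey(1) & 0xFF == 27:     # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```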
This study aims to enhance security, especially in strong rooms such as those used to store answer papers. Video can be captured by a CCTV control system; however, intruders cannot be stopped before they cause problems. Due to its wide applications in digital video management, surveillance, and human interaction, face tracking has been a hot topic in recent years. This work therefore discusses the design, development, and implementation of a live face-tracking robotic arm.
II. LITERATURE SURVEY
A. A human Tracking Mobile Robot with Face Detection
Authors: Satoru Suzuki, Yasue Mitsukura, Takanari Tanabata, Tsukuba, Ibaraki, Japan, Nobutaka Kimura, Tsukuba Ibaraki, Japan, Toshio Moriya
Abstract: This paper proposes a face detection method for tracking a human with a mobile robot. We obtain images from a web camera and detect faces by focusing on skin color and eyes as facial features. When a face is detected in the images, we trace the detected human, take a picture of him/her, and print it automatically using the mobile robot. To show the effectiveness of the proposed method, we present experimental results: first, the face detection accuracy, and then the tracking performance of the mobile robot based on face detection.
B. Moving object detection, tracking and following using an omnidirectional camera on a mobile robot
Authors: S Ranganatha, Y P Gowramma (Dept. of Computer Science and Engineering, Government Engineering College, Hassan, India)
Abstract: Face detection and tracking algorithms are used in computer vision applications because they provide reliable and fast results. This paper describes a model for face tracking in video sequences using the Open Source Computer Vision (OpenCV) software library. To increase the face tracking accuracy, a real-time face tracking algorithm is proposed based on the integration of Continuously Adaptive Mean Shift (CAMShift) and a Kalman filter. First, a Haar cascade detects the face in the video sequence; once the face is detected, other parts such as the eyes, nose, and mouth are detected. After successful face region detection, key points are extracted using the Speeded Up Robust Features (SURF) framework and the CAMShift algorithm is applied. The CAMShift algorithm calculates the width, height, and (x, y) coordinates of the face region, which are provided to the Kalman filter. The Kalman filter calculates the new center and size of the bounding box, and based on the center information, face tracking takes place in subsequent frames. The experimental results at the end of the paper clearly indicate that the proposed algorithm integrating CAMShift and the Kalman filter performs better than either single approach, and that faster face tracking can be achieved with it.
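To make the detect-then-track pipeline summarized above concrete, the sketch below shows a Haar cascade initializing a track window and CAMShift following the face with a hue back-projection, using OpenCV's Python API. It is only an illustration of the general approach, not the cited authors' implementation, and it omits the SURF key-point and Kalman-filter stages.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# 1. Grab frames until the Haar cascade finds a face, to initialise the track window.
faces = ()
while len(faces) == 0:
    ok, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
x, y, w, h = faces[0]
track_window = (x, y, w, h)

# 2. Build a hue histogram of the face region for back-projection.
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

# 3. Track the face in the following frames with CAMShift.
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)   # rotated box around the tracked face
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("CAMShift tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()
```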
C. Research on Image Processing Technology of Computer Vision Algorithm
Authors: Xin Zhang, St. Petersburg Polytechnic University, Gzhatskaya Ulitsa 38, St. Petersburg
Shuo Xu, University of Edinburgh, Shangrao City, Jiangxi Province
Abstract: With the gradual improvement of artificial intelligence technology, image processing has become a common technology and is widely used in various fields to provide people with high-quality services. Starting from computer vision algorithms and image processing technologies, a computer vision display system is designed, and image distortion correction algorithms are explored for reference.
III. METHODOLOGY
In the near future, robots will play an important role in human life. Their use will span a wide variety of areas, and one of their common functions will be interacting with human beings. So, in order to achieve human-like interaction, robots must be capable of detecting, tracking, and responding to objects or people. Therefore, a robust, accurate algorithm that can be executed quickly is required. A real-time algorithm is the first step toward genuine interaction, which is why researchers tend to use less complex algorithms, accepting that these algorithms may have some disadvantages. They also have to consider the accuracy of the result, as robots are nowadays used in many different professional settings. A combination of real-time operation and high accuracy is the desired outcome for many research areas.
IV. SYSTEM DESIGN
A face-tracking device is a camera-based tool that can track the motion of one or more faces over time. The device lets you track human faces in fine detail. After the given face is initialized manually, a face-processing task inspects the succeeding video. Using the robotic arm to carry out face tracking has allowed us to precisely design, build, and test devices that must pan the sensor to a selected angular position while the computation is running.
The face detector implemented in OpenCV is used in many face-based applications. OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. Using this library, we were able to capture the live video feed, i.e. the face, and process it. The basic techniques for face normalization, face subspace estimation, and face recognition can be found in the CSU face identification evaluation system. Once a face is detected by the web camera, that information is sent to the Arduino Uno through serial communication. The Arduino Uno then runs the Arduino program, written in the Arduino IDE, which in turn commands the motors to move accordingly.
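The PC side of this serial link can be sketched as follows, combining the OpenCV detection loop shown earlier with the pyserial package. The port name, baud rate, and the "X<col>Y<row>" message format are assumptions made for illustration; the actual Arduino program must parse whatever format is agreed on and map the face offsets to pan/tilt servo angles.

```python
import cv2
import serial  # pyserial

# Assumed serial settings: the port name depends on the operating system
# (e.g. "COM3" on Windows) and the baud rate must match the Arduino sketch.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        x, y, w, h = faces[0]
        cx, cy = x + w // 2, y + h // 2       # face centre in pixel coordinates
        # Hypothetical "X<col>Y<row>" message; the Arduino program parses it
        # and turns the pan/tilt servos so the face stays centred in the frame.
        arduino.write(f"X{cx}Y{cy}\n".encode())
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Face tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:           # Esc quits
        break

cap.release()
arduino.close()
cv2.destroyAllWindows()
```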
V. APPLICATION
Face tracking is effectively applied to observe human activity in suburban areas, parking lots, and banks. In traffic transportation, face tracking is widely used for flow monitoring, accident detection, pedestrian counting, and so on. Another key application of face tracking is in video compression, to automatically detect and track moving objects in videos; as a consequence, more coding bytes are allotted to moving objects and fewer coding bytes are used for backgrounds. Object tracking also has several HCI applications such as hand-gesture recognition and mobile video conferencing.
VI. FUTURE SCOPE
There is always more room for innovation in any study. As a further development, the robotic arm may be mounted on a mobile platform with four wheels to allow portability and navigation. This would increase the reach of the robotic arm, allowing it to easily pick up an object in one place and set it down in another. We can also change the type of gripper for different operations, as there are many types of end effectors available on the market.
In this work, we presented a completely automated real-time face tracking system mounted on a robot that can capture a live video feed from a web camera, use face tracking to locate face positions, and identify the detected faces. Some areas are worth investigating to achieve higher performance: in the face recognition process, if the background is too cluttered to capture a clean foreground, the recognition rate will decrease.
Copyright © 2022 Manasa S, Dr. Swetha Rani T, Sneha R, Sonal Jain, Swathi C L. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET45380
Publish Date : 2022-07-06
ISSN : 2321-9653
Publisher Name : IJRASET