Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Pranav Rathod, Ninad Garghate, Parth Gaware, Prof. Vandana Navale
DOI Link: https://doi.org/10.22214/ijraset.2022.47837
Researchers around the globe are concentrating on making devices more interactive and operable with minimal physical contact. This research proposes an interactive computer system that uses computer vision to create a virtual keyboard and mouse driven only by hand gestures. A built-in or external camera tracks images of the gestures performed by the user's hand, and different gestures trigger mouse operations such as right and left clicks as well as cursor movement. In addition, the keyboard can be controlled with gestures such as a single finger to select a particular letter and a four-finger motion gesture to perform a swipe in either the left or right direction. The proposed system, with no wires or other external devices, acts as a virtual keyboard and mouse. The only hardware component is the webcam used to capture images; the software is written in the Python programming language.
I. INTRODUCTION
Image processing, a division of signal processing, takes an image or a video as input and produces as output an image or a set of parameters derived from it. Gesture recognition and tracking is one such image-processing task. In recent years, a number of gesture recognition techniques have been proposed. Hand tracking has several applications, including motion capture, human-computer interaction and human behaviour analysis. Various sensors and instrumented gloves are used for hand motion detection and tracking; instead of such expensive sensors, a simple webcam can identify the gesture and track the motion.
Video conferencing is very popular nowadays. For this reason, most computer users have a webcam on their computer, and most laptops have one built in. The proposed webcam-based system could eliminate the need for a physical mouse and keyboard. Interacting with a computer through hand gestures is a very interesting and effective approach to HCI (Human-Computer Interaction), and there is substantial research on this topic. As technology develops day by day, devices are getting more compact; some have gone wireless, and some have become redundant. This paper proposes a system that could make more such devices redundant in the future, which is the direction HCI is heading. The proposal is the development of a virtual mouse and keyboard using gesture recognition. The aim is to control the mouse cursor and keyboard functions using only a simple camera instead of traditional devices. The virtual mouse works as a medium between the user and the machine using only a camera; it lets the user interact with the machine and control mouse functions without any mechanical or physical device. Ordinarily we use a mouse, keyboard or other interaction devices physically connected to the computer, and even wireless devices need a power source and connection technology, but in this paper the user's bare hand is the only input, captured by a webcam. It is therefore a very interactive way to control the mouse cursor and keyboard, and the system has the potential to replace the typical mouse and even the remote controller of machines. The main barrier is the lighting condition: the system still cannot fully replace the traditional mouse, since many computers are used in poor lighting. In particular, people with severe movement disabilities may have physical impairments that significantly limit their fine motor control; they may therefore be unable to type and communicate with a normal keyboard and mouse.
II. RESEARCH METHODOLOGY
III. PROPOSED SYSTEM
In the proposed system, implementation begins when the user's gesture is captured in real time by the webcam. The captured image then undergoes a segmentation process that compares each pixel's value against the values of a defined colour and separates the matching pixels. Once segmentation is done, the resulting image is converted to a binary image (having only two colours, white and black), where white represents the identified pixels and black represents the remaining pixels. The position of the mouse pointer is then set according to the position of the white pixels in the image, thereby simulating the mouse cursor without using a standard mouse.
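The segmentation-to-cursor pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' exact code: the colour range and the toy frame are placeholder assumptions, and in a real system the frame would come from `cv2.VideoCapture` and the thresholding would typically use `cv2.inRange`.

```python
import numpy as np

def segment_to_binary(frame_hsv, lower, upper):
    """Return a binary mask: 255 where a pixel falls inside the defined
    colour range (the 'identified' pixels), 0 elsewhere. This mirrors
    cv2.inRange, written here with plain NumPy."""
    mask = np.all((frame_hsv >= lower) & (frame_hsv <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)

def cursor_position(mask, screen_w, screen_h):
    """Map the centroid of the white pixels to screen coordinates."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no hand-coloured pixels found in this frame
    h, w = mask.shape
    cx, cy = xs.mean() / w, ys.mean() / h  # normalised to 0..1
    return int(cx * screen_w), int(cy * screen_h)

# Toy 4x4 "frame": one skin-coloured pixel at row 1, column 2.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1, 2] = (10, 150, 200)  # falls inside the placeholder range below
mask = segment_to_binary(frame, (0, 50, 80), (20, 255, 255))
print(cursor_position(mask, 1920, 1080))  # -> (960, 270)
```

In the full system, the returned coordinates would be handed to an automation library to move the real cursor.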
The mouse makes use of the convex-hull method: convexity defects between the hull and the hand contour are read, and the mouse features are mapped using these defects. This image-recognition approach relies entirely on the defects and on conditional statements; because the convex hull treats the gaps between the fingers as defects, it can be used for multiple gestures and for mapping commands. The method used for the keyboard feature is a bit different from the convex-hull method: here a hand-position approach is used, in which the location of the hand is captured by the computer from the video being recorded. In the open video window, a miniature virtual keyboard is drawn.
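The core of the defect-based gesture test can be sketched as below. In practice the contour, hull and defect points would come from `cv2.findContours`, `cv2.convexHull` and `cv2.convexityDefects`; the point triplets here are hard-coded placeholders, and only the angle test at the heart of the method is shown.

```python
import math

def is_finger_gap(start, end, far):
    """A convexity defect counts as a gap between two fingers when the
    angle at the 'far' point (the valley between fingertips) is acute."""
    a = math.dist(start, far)
    b = math.dist(end, far)
    c = math.dist(start, end)
    # Law of cosines gives the angle at the valley point.
    angle = math.acos((a * a + b * b - c * c) / (2 * a * b))
    return angle < math.pi / 2  # < 90 degrees => fingers spread apart

def count_fingers(defects):
    """defects: list of (start, end, far) triplets from the convex hull.
    n gaps between fingers => n + 1 extended fingers."""
    gaps = sum(is_finger_gap(s, e, f) for s, e, f in defects)
    return gaps + 1 if gaps else 0

# Placeholder defects: a shallow dent and one deep valley between fingers.
defects = [((0, 0), (4, 0), (2, 1)),   # obtuse angle: not a finger gap
           ((0, 0), (2, 0), (1, 4))]   # acute angle: a real finger gap
print(count_fingers(defects))  # -> 2
```

The finger count (or a specific defect pattern) is then matched by conditional statements to a mouse command such as left click, right click or move.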
Using this hand-position method, the keyboard keys that have been mapped can be identified and executed: a mapping function converts the location of the hand into a cell of a matrix region, which makes the location recognisable to the computer.
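One way to turn a fingertip location into a matrix cell of the miniature on-screen keyboard is sketched below. The layout, key size and keyboard origin are placeholder assumptions, not values from the paper.

```python
# Rows of the miniature on-screen keyboard (placeholder layout).
KEY_ROWS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]

KEY_SIZE = 60        # each key occupies a 60x60-pixel cell (assumed)
ORIGIN = (50, 100)   # top-left corner of the keyboard in the video window

def key_at(x, y):
    """Turn a fingertip position into a matrix cell (row, col) and
    return the key under it, or None if the finger is off the keyboard."""
    col = (x - ORIGIN[0]) // KEY_SIZE
    row = (y - ORIGIN[1]) // KEY_SIZE
    if 0 <= row < len(KEY_ROWS) and 0 <= col < len(KEY_ROWS[row]):
        return KEY_ROWS[row][col]
    return None

print(key_at(55, 105))   # fingertip over the top-left cell -> Q
```

A press would then be triggered only when a click gesture (e.g. two fingers pinched) coincides with the fingertip hovering over a cell.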
The algorithm used to classify the extracted features is the Haar cascade, a machine-learning approach to object detection in which a cascade function is trained on a limited set of positive images (images containing the object, e.g. faces) and negative images (images without it). The trained cascade is then used to find the object in other images. Haar cascade classifiers have numerous benefits, one of which is their fast execution time once trained. Much like a convolutional kernel, each Haar feature is a single value produced by subtracting the sum of pixels under the white rectangle from the sum under the black rectangle. Instead of computing this at every individual pixel, the image is split into sub-rectangles and array references (a summed-area representation) are created for each of them, from which the Haar features are computed. It is essential to note that, in practice, most Haar features are meaningless for detection; the only features that matter are those that respond to the object. AdaBoost is therefore used to select the best characteristics from among thousands of Haar features to represent an object. The system uses Haar cascades together with the OpenCV library, which maintains a repository of pre-trained cascades; the bulk of these serve purposes such as face detection, eye detection, mouth detection and full/partial body detection.
For the implementation of the keyboard, the cvzone library in Python is used. cvzone is a computer-vision package that makes image-processing and AI operations simple to use and is built around the MediaPipe and OpenCV libraries; to apply the mouse events, the MediaPipe library is used. MediaPipe is a framework for building machine-learning pipelines over video and audio, both of which fall under the category of time-series data.
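A minimal sketch of such a tracking loop with cvzone's `HandTrackingModule` follows, assuming cvzone, MediaPipe and a webcam are available; the 40-pixel pinch threshold is a placeholder tuning value, and only the small `is_click` helper runs without a camera.

```python
def is_click(pinch_distance, threshold=40):
    """Treat a small index-thumb distance (in pixels) as a mouse click.
    The 40-pixel threshold is an assumed tuning value."""
    return pinch_distance < threshold

if __name__ == "__main__":
    import cv2
    from cvzone.HandTrackingModule import HandDetector

    cap = cv2.VideoCapture(0)                      # default webcam
    detector = HandDetector(detectionCon=0.8, maxHands=1)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hands, frame = detector.findHands(frame)   # draws landmarks on frame
        if hands:
            lm = hands[0]["lmList"]                # 21 MediaPipe hand landmarks
            # Distance between index fingertip (8) and thumb tip (4):
            length, _, frame = detector.findDistance(lm[8][:2], lm[4][:2],
                                                     frame)
            if is_click(length):
                print("click")                     # here: fire a mouse click
        cv2.imshow("virtual mouse", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
```

The `print("click")` line is where an automation call would go to generate the actual click event.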
IV. FUTURE SCOPE
At present the system is not very efficient against cluttered backgrounds, and poor lighting also decreases accuracy. We plan to use a high-definition camera to increase overall accuracy; furthermore, a computer system with more than 8 GB of RAM can also increase accuracy by a certain percentage.
V. ACKNOWLEDGEMENT
We would like to thank Mrs. V. V. Navale for helping us select the topic and contents and for her valuable suggestions in preparing the seminar report and presentation 'Gesture Recognition Based Virtual Mouse and Keyboard'. Thanks to all our colleagues for their extended support and valuable guidance, and to all our friends for their consistent support, help and guidance.
VI. CONCLUSION
This paper provides a system to recognise hand gestures and replace the keyboard and mouse functions, including motion of the mouse cursor, click and drag, together with keyboard capabilities such as typing letters and other keyboard functions. Skin segmentation is applied to separate the colour of the hand from its background, and an arm-removal step efficiently handles the situation where the whole arm enters the camera frame. In general, the proposed algorithm can detect and recognise hand gestures so that it can operate mouse and keyboard features and create a real-world user interface, with potential applications such as 3D printing, architectural drawing, and even performing medical operations from any corner of the globe. The project can be implemented easily, and its utility may be very significant in medical science, where computation is needed but could not be fully applied because of the lack of interaction between computer and human.
Copyright © 2022 Pranav Rathod, Ninad Garghate, Parth Gaware, Prof. Vandana Navale. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET47837
Publish Date : 2022-12-03
ISSN : 2321-9653
Publisher Name : IJRASET