Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Vallabh Chapalgaonkar, Atharva Kulkarni, Amey Sonawale
DOI Link: https://doi.org/10.22214/ijraset.2022.41789
Real-time gesture recognition systems have received great attention in recent years because they enable efficient, device-free human-computer interaction. Human-computer interaction gains several advantages from the establishment of such natural forms of communication. Gestures are a natural form of action that we use in our daily lives to interact, so using them to communicate with computers creates a new paradigm of computing interaction. This project applies computer vision and gesture recognition techniques to develop a low-cost, vision-based input software for controlling a media player through gestures.
I. INTRODUCTION
Nowadays, gesture recognition plays an important part in the interaction between humans and computers. Hand gestures enable simple yet user-friendly communication between humans and machines without devices such as keyboards or laser pens. In the proposed system, users can use four simple gestures to control a media player without physically touching the PC. A gesture is a form of physical behavior or emotional expression, and includes body gestures and hand gestures. Gestures fall into two categories: static and dynamic. In a static gesture, the posture of the body or hand denotes a sign; in a dynamic gesture, the movement of the body or hand conveys a message. Gestures can thus serve as a tool of communication between computer and human, greatly different from traditional hardware-based methods, accomplishing human-computer interaction through gesture recognition. Gesture recognition determines the user's intent by recognizing the gesture or movement of the body or body parts. Over the past decades, many researchers have strived to improve hand gesture recognition technology, which has great value in applications such as sign language recognition, augmented and virtual reality, sign language interpretation for the disabled, and robot control.
II. LITERATURE REVIEW
A. Title: Superpixel-Based Hand Gesture Recognition with Kinect Depth Camera
Authors: Chong Wang
In 2015, Chong Wang et al., in "Superpixel-Based Hand Gesture Recognition with Kinect Depth Camera", proposed a system that uses the Kinect depth camera. It is based on a compact superpixel representation, which accurately captures shape, texture, and depth features. Because the system requires a Kinect depth camera, its cost is higher.
B. Title: Hand Gesture Recognition System
Authors: Swapnil D. Badgujar
In 2014, Swapnil D. Badgujar et al., in "Hand Gesture Recognition System", proposed a system that recognizes an unknown input gesture through hand tracking and feature extraction. The system assumes a fixed background so that the region to be searched during tracking is smaller. It only controls the mouse pointer with a finger using a webcam.
C. Title: MEMS Accelerometer Based Non-Specific-User Hand Gesture Recognition
Authors: Ruize Xu, Shengli Zhou and Wen J. Li
In 2012, Ruize Xu, Shengli Zhou and Wen J. Li, in "MEMS Accelerometer Based Non-Specific-User Hand Gesture Recognition", created a system that could identify various hand gestures such as up, down, right, left, crossing, and turning. Three different modules were developed to detect these gestures. A MEMS (Micro-Electromechanical System) three-axis accelerometer provided the inputs: hand movement in three perpendicular directions was captured by the accelerometers and sent to the system via Bluetooth. A segmentation algorithm was applied, and hand gestures were recognized by matching them against gestures already stored in the system. The system offers limited accuracy in obtaining the final gesture points due to the small size of the gesture vocabulary.
D. Title: A Vision based Hand Gesture Interface for Controlling VLC Media Player
Authors: Anupam Agrawal, Siddharth Swarup Rautaray
In 2010, Anupam Agrawal and Siddharth Swarup Rautaray, in "A Vision based Hand Gesture Interface for Controlling VLC Media Player", used the K-nearest-neighbor algorithm to recognize various gestures. Features of the VLC media player driven by hand gestures included play, pause, full screen, stop, volume increase, and volume decrease. The Lucas-Kanade pyramidal optical flow algorithm was used to detect the hand in the input video: it detects moving points in the input image, and K-means is then used to find the hand centre, through which the hand is detected. The system maintains a database of hand gestures; the input is compared against these stored images, and the VLC media player is controlled accordingly. The recognition phase of this application is not very robust.
E. Title: A Fast Algorithm for Vision-Based Hand Gesture Recognition for Robot Control
Authors: Asanterabi Malima, Erol Ozgur
In 2006, Asanterabi Malima and Erol Ozgur developed "A Fast Algorithm for Vision-Based Hand Gesture Recognition for Robot Control", which controlled a robot using hand gestures, though with a limited gesture set. First the hand region is segmented, then the fingers are located, and finally the gesture is classified. The algorithm is invariant to translation, rotation, and scale of the hand. The system was applied to a robot control application with reliable performance.
III. PROBLEM DEFINITION
To develop an interface between the system and its environment such that the system can identify particular colours as a reference point and take them as input, allowing the user to interact with the system to perform simple tasks such as controlling a media player and manipulating its various functions.
Gestures have the potential to solve some major real-world problems:
A. Communicate at Distance
It enables users to communicate with the system even when they are not close to it. If high-quality web cameras are used, they can capture gestures at an adequate distance; for example, a user can access the media player from anywhere in the room as long as the gesture is visible to, i.e., captured by, the camera.
B. Beneficial for Person with Disability
More than 1 billion people in the world have some form of disability, corresponding to about 15% of the world's population. Gestures can help address this problem: these people can directly show gestures and control the media environment.
C. Substitute for Keyboard and Mouse
Gesture control for media has the potential to replace computer hardware like mouse and keyboard. Due to this, the cost of computer hardware can be reduced. A large portion of e-waste is generated from computer hardware. With the help of this application, the e-waste generated due to these hardware components can be reduced.
IV. CHALLENGES IDENTIFIED
Our solution is to create a gesture system in which users interact through a given gesture, and the data is fed to the proposed system to produce the required output. In building the system, the main problem identified was capturing a very clear gesture; sometimes, because of technical limitations, the camera quality was not up to standard. Camera placement is also important for tracking gestures. While tracking gesture movements, it was found that adequate lighting is an important factor: tracking becomes difficult when lighting is insufficient. In practice, defining in advance what type of object to follow makes it easier for the system to track, so the system can directly detect the preferred object without scanning the entire frame, and can do so very quickly. Other shortcomings include the webcam lacking the required quality, unrecognized gestures, differing technical requirements, system errors and limitations, memory limits, difficulty tracking gestures due to insufficient brightness, gestures falling outside the camera's view, too many gestures of the same type within the frame, and users providing inputs faster than the system can track in quick succession.
V. METHODOLOGY
The project allows you to control a media player using hand gestures and provides the user with a new form of interaction that mirrors their experience in the real world. They feel natural and require neither interruption nor an additional device. Furthermore, they do not limit the user to a single point of input, but instead, offer various forms of interaction.
Here, when we capture an image, it is converted to RGB. We then check whether there are multiple hands in the image. The code keeps an initially empty list in which we store the elements of the hand detected using MediaPipe, i.e., the points on the hand.
To manipulate the volume, we use two loop variables: one for the landmark id and one for the hand landmark itself. Each landmark gives us x, y coordinates, and an id number is assigned to each hand point. We then get the height and width of the image, find the pixel position of each point, and finally draw all the landmarks of the hand.
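The landmark loop described above can be sketched as follows. MediaPipe returns normalized coordinates in the range [0, 1], so each point is scaled by the image width and height. The `landmarks` input here is hypothetical mock data standing in for MediaPipe's detector output; the function name is illustrative, not from the paper.

```python
# A minimal sketch: convert MediaPipe-style normalized hand landmarks
# (values in [0, 1] relative to the image size) to pixel coordinates.

def landmarks_to_pixels(landmarks, img_width, img_height):
    """Return a list of (id, cx, cy) tuples in pixel coordinates."""
    lm_list = []  # the initially empty list described in the text
    for lm_id, (x, y) in enumerate(landmarks):
        cx, cy = int(x * img_width), int(y * img_height)
        lm_list.append((lm_id, cx, cy))
    return lm_list

# Example with mock data: in MediaPipe's hand model, id 4 is the thumb tip
# and id 8 the index fingertip -- the two points used later for volume control.
mock = [(0.5, 0.5)] * 4 + [(0.25, 0.5)] + [(0.0, 0.0)] * 3 + [(0.75, 0.5)]
points = landmarks_to_pixels(mock, 640, 480)
print(points[4])  # → (4, 160, 240)
print(points[8])  # → (8, 480, 240)
```

In the real pipeline these tuples come from iterating over `results.multi_hand_landmarks`; only the scaling arithmetic is shown here.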
To control the volume we need the index finger and thumb. As discussed above, the list starts empty; when a hand is detected, we assign coordinates to the thumb and index finger. We then draw a circle on the tip of each finger and a line connecting the two tips, find the distance between the fingers as a hypotenuse, and increase or decrease the volume accordingly. Play and pause are controlled similarly; only their gestures and distance ranges differ.
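The distance computation and volume mapping can be sketched as below. The 30-200 pixel range is an assumed calibration for illustration; in the paper's system the mapped value is fed to the Windows master volume via pycaw rather than printed.

```python
import math

def finger_distance(thumb_tip, index_tip):
    """Length of the line joining the thumb and index fingertips (pixels)."""
    (x1, y1), (x2, y2) = thumb_tip, index_tip
    return math.hypot(x2 - x1, y2 - y1)

def distance_to_volume(length, min_len=30, max_len=200):
    """Linearly map the fingertip distance to a 0-100 volume level.
    min_len/max_len are assumed calibration bounds, not values from
    the paper; lengths outside the range are clamped."""
    length = max(min_len, min(max_len, length))
    return round(100 * (length - min_len) / (max_len - min_len))

# Fingers close together -> minimum volume; far apart -> maximum.
print(distance_to_volume(finger_distance((100, 100), (100, 130))))  # → 0
print(distance_to_volume(finger_distance((0, 0), (200, 0))))        # → 100
```

The linear map is the simplest choice; a real deployment might smooth the value over a few frames to avoid jitter from landmark noise.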
In the above image Fig. 1, you can see that the hand is detected and the frame is converted to RGB. With the help of MediaPipe and pycaw, we are able to achieve hand recognition and control the media player. When the palm is open, the video plays.
In Fig. 2, similarly, when the fist is shown, the video is paused.
In Fig. 3, when the distance between the thumb and index finger decreases, the volume decreases.
In Fig. 4, when the distance between the thumb and index finger increases, the volume of the media player increases.
VI. ALGORITHM OF WORKFLOW
This is the part where the user interacts with the application. It is by far the most exciting and fascinating part of the application. Here, the user can control different features of the media player using different gestures.
A. Fingers Recognition
a. If the length is in the range of 75-90, the media will be paused.
b. If the length is greater than 150, the media will resume playing.
Similarly, we find the distance between the tips of the thumb and index finger as a hypotenuse, by calling the math.hypot function on the difference between x2 and x1 and the difference between y2 and y1. Based on this length, the volume is controlled dynamically: if the distance between the fingers increases, the volume increases, and vice versa.
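The thresholds above can be sketched as a small classifier. The 75-90 and >150 pixel ranges are taken from the text; the function names are illustrative, and lengths outside both ranges are treated here as leaving playback unchanged.

```python
import math

def fingertip_length(x1, y1, x2, y2):
    """Hypotenuse between the thumb tip (x1, y1) and index tip (x2, y2)."""
    return math.hypot(x2 - x1, y2 - y1)

def classify_play_pause(length):
    """Map fingertip distance to a media action using the thresholds
    given in the text; other lengths produce no action."""
    if 75 <= length <= 90:
        return "pause"
    if length > 150:
        return "play"
    return "no-op"

print(classify_play_pause(fingertip_length(0, 0, 80, 0)))   # → pause
print(classify_play_pause(fingertip_length(0, 0, 160, 0)))  # → play
```

Keeping a dead zone between the two ranges prevents the player from rapidly toggling when the measured length hovers near a single threshold.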
VII. ACKNOWLEDGMENT
We offer profound gratitude to Dr. S. N. Gujar (Head of the Department, Computer Engineering) for providing us with the excellent academic facilities required to complete this work. We would like to thank him for his stimulating suggestions and encouragement, along with the areas for improvement, which helped us with the implementation and writing of this dissertation.
VIII. CONCLUSION
With the advancements in technology to provide novel, convenient and fast methods of human-computer interaction, gesture recognition has received wide appreciation. The various existing systems have good working features but have not been well received by customers. The main problem lies in the fact that these systems have low accuracy rates and complex algorithms. The proposed system aims to combat these issues and stand out in the crowd of gesture recognition systems. It provides a touchless user interface for controlling multimedia files and applications such as video players and music players. It acts as a helping aid for people who have disabilities, who cannot access their input devices, or anyone who prefers this more natural method of communication over other methods.
REFERENCES
[1] Chong Wang, Zhong Liu and Shing-Chow Chan, "Superpixel-Based Hand Gesture Recognition with Kinect Depth Camera", IEEE Trans. Multimedia, vol. 17, no. 1, Jan. 2015.
[2] Swapnil D. Badgujar, Gourab Talukdar, Omkar Gondhalekar and S. Y. Kulkarni, "Hand Gesture Recognition System", International Journal of Scientific and Research Publications, vol. 4, issue 2, Feb. 2014.
[3] Viraj Shinde, Tushar Bacchav, Jitendra Pawar and Mangesh Sanap, "Hand Gesture Recognition System Using Camera", International Journal of Engineering Research & Technology (IJERT), vol. 3, issue 1, Jan. 2014.
[4] N. Krishna Chaitanya and R. Janardhan Rao, "Controlling of Windows Media Player Using Hand Recognition System", The International Journal of Engineering and Science (IJES), vol. 3, issue 12, pp. 01-04, 2014.
[5] X. Liu, Y. Huang, X. Zhang, and L. Jin, "Fingertip in the Eye: A Cascaded CNN Pipeline for Real-Time Fingertip Detection in Egocentric Videos", CoRR, abs/1511.02282, 2015.
[6] Anupam Agrawal and Siddharth Swarup Rautaray, "A Vision based Hand Gesture Interface for Controlling VLC Media Player", International Journal of Computer Applications, vol. 10, no. 7, Nov. 2010.
[7] Asanterabi Malima, Erol Ozgur, and Mujdat Cetin, "A Fast Algorithm for Vision-Based Hand Gesture Recognition for Robot Control", 2006.
Copyright © 2022 Vallabh Chapalgaonkar, Atharva Kulkarni, Amey Sonawale. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET41789
Publish Date : 2022-04-23
ISSN : 2321-9653
Publisher Name : IJRASET