Abstract—In a world full of technologies, everyone is trying to make human life easier with the help of new techniques. Therefore, for the sake of humanitarian relief, we developed a technology that helps the user operate laptops and computers easily with less physical contact. The proposed system provides functions such as controlling the system with hand gestures and eye blinks, along with speech recognition used to issue mouse instructions and move the cursor. For this project, MediaPipe, OpenCV, SpeechRecognition, PyAutoGUI, and PyAudio have been used. This system will help people with disabilities do their work more easily and simply. Moreover, given the significant restrictions during the pandemic, contactless interaction is a priority in today's world, and this prototype provides one of the best ways to achieve it.
I. INTRODUCTION
A virtual mouse is a device used to control the functions of a system such as a computer, laptop, or smart-pad in the air, with functions corresponding to a mouse and a marker. The proposed system aims to provide a means of controlling a computer using hand gestures, eye blinks, and speech recognition. It is particularly useful for people with disabilities who have difficulty using conventional input devices such as a mouse or keyboard. By using hand gestures, eye blinks, and speech, the user can control the computer with ease and perform various tasks. The system uses OpenCV, an open-source library for image processing and computer vision tasks such as face detection and object tracking, while other Python packages such as AutoPy, MediaPipe, and PyAutoGUI are used to move the cursor and perform the functions of pointing, clicking, and scrolling. The proposed model works well in a real production environment and does not need the rendering power of a GPU (Graphics Processing Unit). The accuracy of the model is high and functional.
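As a rough illustration of the pipeline just described, the following sketch (not the authors' exact code) uses MediaPipe Hands to track the index fingertip in webcam frames and PyAutoGUI to move the cursor; the camera index and confidence threshold are assumed values.

import cv2
import mediapipe as mp
import pyautogui

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
screen_w, screen_h = pyautogui.size()

cap = cv2.VideoCapture(0)  # assumed default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror so the cursor follows the hand naturally
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip; coordinates are normalized [0, 1].
        tip = results.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(int(tip.x * screen_w), int(tip.y * screen_h))
    cv2.imshow("Virtual Mouse", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()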
The proposed system is designed to give individuals with disabilities an accessible means of controlling a computer using hand gestures, eye blinks, and speech recognition, empowering people with limited mobility or communication difficulties to interact with computers and perform various tasks with ease. For people with disabilities, traditional methods of computer interaction such as keyboards and mice may pose significant challenges or even be completely inaccessible. The proposed system offers an alternative solution that leverages hand gestures, eye blinking, and speech recognition to bridge the gap between individuals with disabilities and computer technology.
A. Artificial Intelligence
Artificial intelligence is a computing concept that helps a machine think and solve complex problems the way a human does. For example, we perform a task, make mistakes, and learn from our mistakes; similarly, an AI works on a problem, makes some mistakes in solving it, and learns from them. The three important aspects of AI are learning, reasoning, and self-correction. Basically, an AI system works on a huge amount of data: it takes various data as input, learns from that input, and gives outputs (predictions) based on various facts.
There are four basic approaches to building an AI system:
Think like a human
Act like a human
Think rationally
Act rationally
B. Problem Statement
Physically challenged individuals often face difficulties in using traditional computer mouse interfaces due to limited mobility in their hands, making it challenging for them to carry out everyday computing tasks. This project aims to address this issue by developing a multi-modal virtual mouse interface that can be controlled by hand gestures, eye movements, and voice commands, thus providing physically challenged individuals with an accessible and intuitive way to interact with their computers.
C. Objective
Conducting a literature review of existing virtual mouse interfaces and assistive technologies for physically challenged individuals.
Designing and developing a multi-modal virtual mouse interface that can be controlled by hand gestures, eye movements, and voice commands.
Evaluating the usability and effectiveness of the virtual mouse interface through user testing and feedback.
Implementing improvements and modifications based on user feedback.
Providing recommendations for future development and research in this area.
II. RELATED WORK
S. Vasanthagokul, K. Vijaya Guru Kamakshi, G. Mudbhari and T. Chithrakumar, "Virtual Mouse to Enhance User Experience and Increase Accessibility," 2022 4th International Conference on Inventive Research in Computing Applications. A graphical interface is user friendly and, as a result, is widely used. Contactless interaction surfaces have been introduced to decrease the spread of germs and combat diseases like COVID-19. Such a system can also be utilized by disabled people who still retain motor function in the hand and forearm. This paper puts forward an AI-assisted virtual mouse system in which these drawbacks are solved by utilizing a webcam or built-in camera to record the motions of the hand and translate them into mouse actions.
III. MODULES
A. Machine Learning and Predictive Analytics Module
The machine learning and predictive analytics module is a critical component of a multi-modal computer assistant tool for individuals with physical challenges. This module is designed to enable the tool to learn from user interactions and data inputs, making it more personalized and proactive in providing assistance and support. The module utilizes advanced algorithms and statistical models to analyze data, identify patterns, and make predictions about future events or behaviors. It can also be used to monitor and track the user's progress: by analyzing data from the user's interactions with the tool, such as completed tasks or exercise routines, the module can provide feedback and encouragement to help motivate and engage the user.
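As an illustration only, the sketch below shows how such a learning component might classify gestures from hand-landmark features. The scikit-learn k-NN model, the placeholder training arrays, and the gesture labels are all assumptions for demonstration, not the pipeline used in this paper.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each sample: 21 hand landmarks x (x, y) = 42 features; labels are gesture names.
# Random placeholders stand in for real recorded landmark data.
X_train = np.random.rand(100, 42)
y_train = np.random.choice(["click", "scroll", "move"], size=100)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)

def predict_gesture(landmarks):
    """Map a flattened (x, y) landmark vector to a predicted gesture label."""
    return clf.predict(np.asarray(landmarks).reshape(1, -1))[0]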
B. Computer Vision Module
The computer vision module is an essential component of a multi-modal computer assistant tool designed for individuals with physical challenges. It is a technology that enables machines to interpret, understand, and respond to visual input from the environment, using advanced algorithms and machine learning techniques to extract meaningful information from images, videos, and other forms of visual data. In the context of a computer assistant tool, the computer vision module can provide a range of functionalities to assist individuals with physical challenges. For example, it can enable the tool to recognize and respond to facial expressions, body language, and other nonverbal cues, which can improve communication and interaction between the user and the computer assistant. The computer vision module can also be used for navigation and wayfinding.
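For instance, eye-blink input can be detected from facial landmarks. The sketch below uses MediaPipe Face Mesh with the eye-aspect-ratio (EAR) heuristic; the landmark indices are contour points commonly used for one eye, and the 0.2 threshold is an assumed tuning value, not a figure from this paper.

import math
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
EYE = [33, 160, 158, 133, 153, 144]  # p1..p6 contour points around one eye

def _dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def eye_aspect_ratio(landmarks):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply during a blink.
    p = [landmarks[i] for i in EYE]
    return (_dist(p[1], p[5]) + _dist(p[2], p[4])) / (2 * _dist(p[0], p[3]))

def is_blinking(landmarks, threshold=0.2):
    """Usage: pass face_mesh.process(rgb_frame).multi_face_landmarks[0].landmark."""
    return eye_aspect_ratio(landmarks) < threshold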
IV. ARCHITECTURE DIAGRAM
The given steps outline a process involving an image frame, a camera, and various methods of interaction such as hand gestures, eye control, and voice commands. Additionally, the steps mention actions like double-clicking, right-clicking, and moving to the left or right.
Let's go through each step in detail:
Step 1: Image Frame. An image frame refers to a visual display or screen where information or content is presented. It could be a computer monitor, a mobile device screen, or any similar interface.
Step 2: Camera. The camera is used to capture visual input, either as a built-in camera in a device or as an external camera. It serves purposes such as image recognition, tracking movements, and capturing gestures.
Step 3: Hand Gesture, Eye Control, Voice Command. This step covers the different methods of interacting with the system. Hand gestures involve using specific movements or positions of the hand to perform actions or convey commands. Eye control uses eye movements or gaze to control or interact with the system. Voice commands use spoken instructions or prompts to trigger actions or navigate through the system.
Step 4: Double Click, Right Click, Left Move, Right Move. Double-clicking typically opens an item, while right-clicking uses the right button of a mouse or touchpad to access context-specific options. Left move and right move imply moving or dragging an object or pointer to the left or right on the screen.
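A minimal sketch of how Step 4 might dispatch a recognized command (from gesture, eye, or voice input) to the corresponding PyAutoGUI mouse action. The command strings and the 50-pixel nudge distance are assumptions; the voice path uses the SpeechRecognition library's documented Google Web Speech recognizer.

import pyautogui
import speech_recognition as sr

# Map recognized command strings to mouse actions (illustrative labels).
ACTIONS = {
    "double click": pyautogui.doubleClick,
    "right click": pyautogui.rightClick,
    "left move": lambda: pyautogui.moveRel(-50, 0),   # nudge cursor 50 px left
    "right move": lambda: pyautogui.moveRel(50, 0),   # nudge cursor 50 px right
}

def listen_for_command():
    """Capture one utterance and return it as lowercase text ("" on failure)."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return ""

def dispatch(command):
    """Run the mouse action bound to a recognized command, if any."""
    action = ACTIONS.get(command)
    if action:
        action()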
V. SYSTEM ANALYSIS
A. Existing System
The existing system consists of a mouse, either wireless or wired, to control the cursor; now hand gestures can be used to control the system instead.
Different hand gestures can be used in place of colored caps for the same purpose. The mouse operations controlled are single left click, double left click, right click, scrolling, finger counting, and volume control.
Various combinations of the colored caps are used for different operations.
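For reference, colored-cap tracking of this kind is typically done by thresholding the frame in HSV color space and taking the centroid of the largest contour, as in the sketch below; the HSV bounds shown are assumed values for a red cap and would need tuning.

import cv2
import numpy as np

def find_cap_center(frame_bgr, lower=(0, 120, 70), upper=(10, 255, 255)):
    """Return the (x, y) pixel centroid of the largest colored region, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])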
B. Limitations
The existing project uses only hand gestures, which is not convenient for all disabled people.
Limited precision: virtual mice may not be as precise as physical mice, especially when it comes to tasks that require fine motor skills or precision clicking.
Inconvenient: virtual mice require you to use a touchpad or a trackball on your laptop, which can be inconvenient for some users who prefer a traditional mouse.
C. Proposed System
To develop a system that allows people with disabilities to use a computer more easily.
To provide an alternative means of input to traditional devices like a mouse or keyboard.
D. Advantages
The main advantage of using hand gestures is interacting with the computer as a non-contact human-computer input modality.
Reduced hardware cost by eliminating the use of a mouse.
Convenient for users who are not comfortable with a touchpad.
The technology is user friendly, so people with disabilities can easily access the virtual mouse.
VI. SYSTEM SPECIFICATION
A. Software Specification
Windows OS
Visual Studio 2022 version 17.4
PyCharm version 2022.1.4
Anaconda version 2022.11
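Assuming the package set named in the abstract, the Python environment could be prepared as follows; the package names are the standard PyPI ones, and no versions are pinned because the paper does not state them.

pip install opencv-python mediapipe pyautogui SpeechRecognition PyAudio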
VII. RESULT ANALYSIS
A. Conclusion
The proposed system is a gesture and eye-based control system with speech recognition that provides an alternative means of input to people with disabilities. The system has immense potential and can be extended further in the future to enhance its accuracy, speed, and functionality. It has the potential to make a significant impact on the lives of people with disabilities, enabling them to access technology and perform tasks that were previously challenging.
B. Future Enhancement
The system can be developed for more accuracy.
It can be developed as a web application.
A combination of different algorithms may be used for better performance.
It can also be fully automated.
C. Output