A Virtual Drawing Board is a computer-vision-based system that lets a user draw on screen by detecting the motion of the index finger through the camera. This paper presents such a software tool, built with AI in Python, that is useful in online teaching, delivering presentations, and various other settings. Online teaching has become an alternative mode of instruction conducted through online meetings; with the Virtual Drawing Board, users can draw or write on screen without any external drawing or writing device, simply by opening their camera and moving their index finger.
I. INTRODUCTION
The Virtual Drawing Board is a computer-vision-based system that draws on screen by detecting the motion of the index finger with the camera. It is designed to be highly scalable and builds on artificial intelligence and machine learning techniques. We have created the virtual board with AI in Python and linked it to the device camera. On opening, the software requests access to the device's camera; after selecting a colour, the user can draw, write, or paint with his or her index finger. The camera detects the position of the index finger, and the application calculates the frames per second (FPS).
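As a minimal illustration of the capture-and-FPS step (the variable names and window title below are our own, not fixed by the application), the per-frame rate can be computed from the time elapsed between consecutive frames:

```python
import time

import cv2

cap = cv2.VideoCapture(0)      # open the default webcam
prev_time = 0.0

while True:
    ok, frame = cap.read()     # grab one frame from the camera
    if not ok:
        break

    # FPS = 1 / (seconds elapsed since the previous frame)
    curr_time = time.time()
    fps = 1.0 / (curr_time - prev_time) if prev_time else 0.0
    prev_time = curr_time

    cv2.putText(frame, f"FPS: {int(fps)}", (10, 40),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Virtual Drawing Board", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```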
The board offers various colour options as well as a clear-screen option. These help the user differentiate content and explain it more effectively, and they can replace the need for additional gadgets when drawing on screen.
We surveyed various literature papers and gathered insights about our topic. Teachers, students, and employees can use the tool in online as well as offline classrooms and offices. It is cheaper than the usual smart digital boards, requires less space, and needs nothing more than a hand to work with. We use the computer-vision techniques of OpenCV to build this application; the project can be improved further by adding a multicore module and hand-contour recognition, and OpenCV's deep-learning support can be used to improve hand-gesture tracking.

When the application starts, the camera is activated and the user can begin writing and drawing with simple hand gestures, which are tracked by the camera. The drawing is shown simultaneously on the output window, which is the virtual board. The user can choose any colour to draw with, and can also erase or clear the entire screen when needed. The painting module is created using OpenCV and Python, which are well suited to building applications and modules like this. Given real-time webcam data, a paint-style application in Python tracks an object by its coordinates (in our project, the coordinates of the finger) and allows the user to write and draw by moving that object.

The module demonstrates the image-processing versatility of OpenCV. It applies an effective hand-tracking method to extract air-writing trajectories captured by a single web camera. The ultimate objective is to develop a computer-vision and machine-learning application that supports human-computer interaction (HCI), also known as man-machine interaction (MMI), which refers to the relationship between the person and the computer, or more specifically the machine.
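To make the drawing step concrete, the sketch below shows one common way such a paint module can be structured: strokes are drawn onto a persistent canvas at the fingertip coordinates and then merged onto the live frame. The function name, stroke thickness, frame size, and masking approach are our illustrative assumptions, not the paper's exact implementation.

```python
import cv2
import numpy as np

# Persistent canvas, same size as the webcam frame (1280x720 assumed here)
canvas = np.zeros((720, 1280, 3), dtype=np.uint8)
draw_color = (255, 0, 255)   # BGR; one of several selectable colours
prev_point = None

def draw_stroke(frame, finger_xy, drawing):
    """Extend the current stroke to the fingertip and overlay the canvas."""
    global prev_point
    if drawing and finger_xy is not None:
        if prev_point is not None:
            cv2.line(canvas, prev_point, finger_xy, draw_color, 8)
        prev_point = finger_xy
    else:
        prev_point = None            # pen lifted: break the stroke

    # Merge the canvas onto the live frame so strokes appear over the video
    gray = cv2.cvtColor(canvas, cv2.COLOR_BGR2GRAY)
    _, inv_mask = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY_INV)
    inv_mask = cv2.cvtColor(inv_mask, cv2.COLOR_GRAY2BGR)
    blanked = cv2.bitwise_and(frame, inv_mask)   # black out the stroke pixels
    return cv2.bitwise_or(blanked, canvas)       # paint the strokes on top
```

Clearing the screen then amounts to resetting the canvas to black (canvas[:] = 0), and erasing can be implemented as drawing in black.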
II. METHODOLOGY/EXPERIMENTAL
We have implemented this project in Python using various modules and libraries such as OpenCV, NumPy, and MediaPipe.
System requirements:
Hardware: The minimum hardware required to run the system is an Intel i5 processor, 4 GB RAM, 1 GB of storage, and a working web camera.
Software: Windows 10 OS, Python, and OpenCV.
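Assuming the standard PyPI package names (opencv-python, mediapipe, numpy), the environment can be prepared and verified with a short check:

```python
# Install the assumed packages first:
#   pip install opencv-python mediapipe numpy
import cv2
import mediapipe
import numpy

print("OpenCV:", cv2.__version__)
print("MediaPipe:", mediapipe.__version__)
print("NumPy:", numpy.__version__)
```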
We have integrated MediaPipe into the system. The system uses MediaPipe to track hands by first identifying hand landmarks and then determining finger locations from those landmarks. In the first stage, we open the webcam device using the OpenCV library in Python.
Next, the webcam begins reading the video frame by frame and sends each frame to the handDetector class, which uses the findHand() method to detect and track the hand. Each frame is then compared against the MediaPipe hand-landmark model: finger positions are determined using the findPositions() method of the handDetector class, and which fingers are raised is determined using the fingersUp() method. The system thus detects the user's hand and monitors the fingers to draw or erase on the screen.
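The following sketch reconstructs such a handDetector class from the description above. The method names follow the paper; the constructor parameters, the thumb heuristic in fingersUp(), and the return formats are our assumptions.

```python
import cv2
import mediapipe as mp


class handDetector:
    """Sketch of the hand-tracking helper described in the text."""

    TIP_IDS = [4, 8, 12, 16, 20]  # thumb, index, middle, ring, pinky fingertips

    def __init__(self, max_hands=1, detection_conf=0.5, tracking_conf=0.5):
        self.hands = mp.solutions.hands.Hands(
            static_image_mode=False,
            max_num_hands=max_hands,
            min_detection_confidence=detection_conf,
            min_tracking_confidence=tracking_conf,
        )
        self.drawer = mp.solutions.drawing_utils
        self.results = None

    def findHand(self, frame, draw=True):
        # MediaPipe expects RGB input; OpenCV delivers BGR frames
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        self.results = self.hands.process(rgb)
        if draw and self.results.multi_hand_landmarks:
            for hand_lms in self.results.multi_hand_landmarks:
                self.drawer.draw_landmarks(
                    frame, hand_lms, mp.solutions.hands.HAND_CONNECTIONS)
        return frame

    def findPositions(self, frame, hand_no=0):
        """Return [(id, x_px, y_px)] for the 21 landmarks of one hand."""
        positions = []
        if self.results and self.results.multi_hand_landmarks:
            h, w = frame.shape[:2]
            hand = self.results.multi_hand_landmarks[hand_no]
            for lm_id, lm in enumerate(hand.landmark):
                # Landmark coordinates are normalised; scale to pixels
                positions.append((lm_id, int(lm.x * w), int(lm.y * h)))
        return positions

    def fingersUp(self, positions):
        """1 = finger raised, 0 = folded, for [thumb, index, middle, ring, pinky]."""
        if not positions:
            return []
        fingers = []
        # Thumb: compare x of the tip and the joint below it (mirrored right hand)
        fingers.append(1 if positions[4][1] > positions[3][1] else 0)
        # Other fingers: tip above (smaller y than) the PIP joint means raised
        for tip in self.TIP_IDS[1:]:
            fingers.append(1 if positions[tip][2] < positions[tip - 2][2] else 0)
        return fingers
```

In the main loop, each frame would then flow through frame = detector.findHand(frame), positions = detector.findPositions(frame), and fingers = detector.fingersUp(positions), with landmark 8 (the index fingertip) supplying the drawing coordinates.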
III. RESULTS AND DISCUSSIONS
The motive behind this software is to resolve issues that arise during online teaching and in other fields. It helps users draw, write, paint, and learn in a different way. Because it is easy to use and beginner-friendly, people of all age groups can use it and become familiar with AI-based technology. We have provided many colour options so the user has plenty of choice, and we are working on adding further features to make the software more interactive and easier to use.
IV. FUTURE SCOPE
This software can be integrated with online meeting platforms such as Google Meet and Zoom, making it easier for students and teachers to take notes and explain concepts with this tool. It can also be introduced to preschools and other teaching settings to make learning fun and easier.
V. ACKNOWLEDGMENT
We would like to express our warm gratitude to our respected Prof. Supriya Telsang, who guided us at every step of this project. It seems appropriate to say thank you at the end rather than at the beginning, as it marks the culmination of the project that she helped us bring about. Last but not least, we thank each of our group members for providing insights that greatly assisted in carrying the project work to its final outcome.
VI. CONCLUSION
This software resolves the difficulties users face in online teaching and makes it easy to draw on screen simply through the motion of the index finger. Users can choose their colours and draw whatever they want. It can be used in preschools to teach children basic alphabets and numbers, making their learning more interesting. One can also present a presentation and explain it well using this software, and teachers can use it in Google Meet sessions to explain concepts better.