Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Prof. Moumita Dey, Arindam Musib, Surajit Mandal, Shibsundar Chell, Mansaram Kar
DOI Link: https://doi.org/10.22214/ijraset.2023.57281
Yoga is an ancient science and discipline that originated in India around 5000 years ago. It brings harmony to body and mind through asanas, meditation and various breathing techniques, and it brings peace to the mind. With the rise of stress in modern lifestyles, yoga has become popular throughout the world. There are several ways to learn it: attending classes at a yoga centre, home tutoring, or self-learning with the help of books and videos. Most people prefer self-learning, but it is hard for them to spot the incorrect parts of their yoga poses by themselves. The idea behind this yoga pose detection project, built on deep learning with neural networks, is that yoga's popularity is growing day by day because of its benefits: it helps us physically, mentally and spiritually, so many people now practise it regularly. Yoga traditionally comprises eight limbs: Yama, Niyama, Asana, Pranayama, Pratyahara, Dharana, Dhyana and Samadhi. The main aim of this project is to help people recognize which yoga pose they are performing, from images or video recordings, by classifying the pose; with such feedback readily available, practitioners will be more inclined to practise, since they can easily check which pose they are doing.
I. INTRODUCTION
Human activity recognition is a well-established computer vision problem that has posed several challenges over the years. It is the problem of locating keypoints and estimating the posture of a human body from sensor data. Activity recognition is useful in many domains, including biometrics, video surveillance, human-computer interaction, assisted living, sports arbitration and in-home health monitoring. The health status of an individual can be evaluated and predicted by monitoring and recognizing their activities. Yoga posture recognition is a relatively newer application [15]. Humans are naturally vulnerable to a variety of health issues, with musculoskeletal illnesses being a critical area that needs prompt attention; as a result of accidents or age, a significant number of people experience musculoskeletal diseases every year. Yoga can help achieve greater physical health [14], [10].
The most common approach to real-time pose detection is to use convolutional neural networks (CNNs) to extract features from the input images, followed by a linear regression step to identify the corresponding poses [1]. However, this approach has several limitations, such as the difficulty of accurately differentiating between poses and the need to manually label each pose [6]. To address these limitations, we propose a novel method for real-time yoga pose detection using CNNs, OpenPose and linear regression in Python. Our method combines the benefits of both approaches, allowing us to accurately differentiate between poses while reducing the need for manual labelling. We evaluate our method on a dataset of yoga poses and show that it can accurately identify poses in real time.
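As a hedged illustration of this pipeline, the sketch below builds a small convolutional feature extractor in TensorFlow/Keras with a dense softmax head acting as the final linear classification step over pose classes. The input size, layer widths and number of pose classes are assumptions for illustration, not the network reported in this paper.

```python
# Minimal sketch (not the authors' exact network): a CNN feature extractor
# followed by a dense softmax head that maps features to pose scores.
import tensorflow as tf

NUM_POSES = 5                 # assumed number of yoga pose classes
INPUT_SHAPE = (128, 128, 3)   # assumed input frame size after resizing

def build_pose_classifier():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=INPUT_SHAPE),
        # convolutional feature extractor
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        # linear (dense) classification head over pose classes
        tf.keras.layers.Dense(NUM_POSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_pose_classifier()
model.summary()
```

Such a network would be trained on labelled pose images; the dense head can equivalently be viewed as a linear classifier over the CNN features.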
II. PROJECT OBJECTIVES
The objectives of this yoga pose detection project revolve around leveraging technology to support yoga practice, fitness and wellness. In particular, the system aims to recognize which yoga pose a user is performing in real time from images or webcam video, compare the detected posture against reference poses, and report how closely the two match, so that practitioners who learn yoga at home without a personal trainer can easily check and correct their own form.
III. LITERATURE REVIEW
This list is not exhaustive but represents a selection of key sources that informed our understanding of the topic.
IV. METHODOLOGY
The organization of the project is as follows:
The first step is to estimate the human posture in 2-D images by means of OpenCV [13] and MediaPipe. The system architecture consists of five stages: executing the entered commands (in a Jupyter Notebook), making the optic device (webcam) accessible, grabbing the input from that webcam, analysing the posture to obtain exact keypoints, and correlating those keypoints with the already existing data sets.
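A minimal sketch of the capture-and-landmark stages described above is shown below, assuming the opencv-python and mediapipe packages are installed. It opens the default webcam, runs MediaPipe Pose on each frame, and draws the detected keypoints with the styling options (colour, thickness, circle radius) mentioned later in this section; comparison with the data set is omitted here.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # make the optic device (webcam) accessible
with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()  # grab an input frame from the webcam
        if not ok:
            break
        # MediaPipe expects RGB images; OpenCV delivers BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # sketch the keypoints on the body with basic styling
            mp_drawing.draw_landmarks(
                frame,
                results.pose_landmarks,
                mp_pose.POSE_CONNECTIONS,
                mp_drawing.DrawingSpec(color=(0, 255, 0),
                                       thickness=2, circle_radius=2))
        cv2.imshow("Pose keypoints", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()  # close the device's camera window
```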
After this, the live video is converted into image frames [8]. Each frame is then compared against the built-in reference poses, the result is displayed as a percentage, and the difference is used to compute the accuracy for a specific exercise. The analysis and result calculation are carried out by the neural network functions and methods applied to each pixel [9]. A palm detection model operates on the entire image and returns an oriented hand bounding box, while the hand landmark model works on the cropped image region produced by the palm detector and returns high-fidelity 3-D coordinates [13]. MediaPipe provides 3-D landmarks from just a single frame. OpenCV offers a comprehensive set of classic as well as state-of-the-art computer vision and machine learning algorithms [7]. These algorithms can be used to detect incorrect body posture, identify body bends, and classify humans in live video; they can also stitch high-resolution images of the entire scene, find similar images previously entered in the existing database, and perform detection on feeds such as hand and body position.
The system uses cv2 (OpenCV) to obtain access to the device's camera and MediaPipe to sketch the keypoints on the body, hands and legs, applying styling options such as line thickness, circle radius and colour. After all body postures have been successfully recognized, the camera window of the device is closed with the command cv2.destroyAllWindows().
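The percentage comparison described above can be realised in several ways. One simple possibility, sketched below under our own assumptions rather than taken from the paper, is to compute joint angles from the detected keypoints and score how closely they match a stored reference pose; the joint names, reference angles and tolerance are illustrative only.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at point b formed by points a-b-c, each an (x, y) pair."""
    ang = math.degrees(math.atan2(c[1] - b[1], c[0] - b[0]) -
                       math.atan2(a[1] - b[1], a[0] - b[0]))
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

def pose_match_percentage(live_angles, reference_angles, tolerance=30.0):
    """Score how closely the live joint angles match the reference pose (0-100%)."""
    scores = []
    for name, ref in reference_angles.items():
        diff = abs(live_angles[name] - ref)
        scores.append(max(0.0, 1.0 - diff / tolerance))
    return 100.0 * sum(scores) / len(scores)

# Example with two illustrative joints; coordinates would normally come
# from the MediaPipe landmarks captured in the previous sketch.
reference = {"left_knee": 150.0, "right_elbow": 175.0}   # assumed reference angles
live = {"left_knee": joint_angle((0.4, 0.5), (0.45, 0.7), (0.4, 0.9)),
        "right_elbow": joint_angle((0.6, 0.3), (0.7, 0.3), (0.8, 0.3))}
print(f"match: {pose_match_percentage(live, reference):.1f}%")
```

A tighter tolerance makes the score stricter; other designs (for example, comparing normalized keypoint positions directly) would also fit the description in the text.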
V. FUTURE WORK
Yoga pose detection using deep learning can be taken in several directions from here. Future improvements could make the technology better at recognizing yoga poses accurately and at giving practitioners helpful advice to improve their practice, all while keeping their privacy in mind.
VI. CONCLUSION
In this paper, we have studied how AI-based smart systems work. We also learned about many related fields, such as psychology and its direct relation to daily physical exercise. The model suggests a better way of exercising, since proper posture avoids harm to the body. The system makes use of Python libraries such as OpenCV, TensorFlow, Matplotlib, OpenPose and MediaPipe, together with CNN-based models. We also observe that many people who want a personal trainer cannot afford one because of the high fees, so this could be a revolutionary system in the healthcare field. In the future, the system can be further developed to include additional postures and features, improving accuracy and providing more comprehensive feedback.
REFERENCES
[1] Ajay Chaudhari, Omkar Dalvi, Onkar Ramade, and Dayanand Ambawade. Yog-Guru: Real-time yoga pose correction system using deep learning methods. In 2021 International Conference on Communication Information and Computing Technology (ICCICT), pages 1-6, 2021.
[2] Haoming Chen, Runyang Feng, Sifan Wu, Hao Xu, Fengcheng Zhou, and Zhenguang Liu. 2D human pose estimation: A survey. Multimedia Systems, 29(5):3115-3138, 2023.
[3] Yu Chen, Chunhua Shen, Xiu-Shen Wei, Lingqiao Liu, and Jian Yang. Adversarial PoseNet: A structure-aware convolutional network for human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1212-1221, 2017.
[4] Muhammad Usama Islam, Hasan Mahmud, Faisal Bin Ashraf, Iqbal Hossain, and Md Kamrul Hasan. Yoga posture recognition by detecting human joint points in real time using Microsoft Kinect. In 2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC), pages 668-673. IEEE, 2017.
[5] Alex Kendall, Matthew Grimes, and Roberto Cipolla. PoseNet: A convolutional network for real-time 6-DOF camera relocalization. In Proceedings of the IEEE International Conference on Computer Vision, pages 2938-2946, 2015.
[6] Yan Li, Xinjiang Lu, Jingjing Gu, Haishuai Wang, and Dejing Dou. Towards unsupervised time series representation learning: A decomposition perspective. 2022.
[7] Naveenkumar Mahamkali and Vadivel Ayyasamy. OpenCV for computer vision applications. March 2015.
[8] Tanvi S. Motwani and Raymond J. Mooney. Improving video activity recognition using object recognition and text mining. In Proceedings of the 20th European Conference on Artificial Intelligence (ECAI-2012), pages 600-605, August 2012.
[9] Miklas Riechmann, Ross Gardiner, Kai Waddington, Ryan Rueger, Frederic Fol Leymarie, and Stefan Rueger. Motion vectors and deep neural networks for video camera traps. Ecological Informatics, 69:101657, 2022.
[10] Fazil Rishan, Binali De Silva, Sasmini Alawathugoda, Shakeel Nijabdeen, Lakmal Rupasinghe, and Chethana Liyanapathirana. Infinity Yoga Tutor: Yoga posture detection and correction system. In 2020 5th International Conference on Information Technology Research (ICITR), pages 1-6. IEEE, 2020.
[11] Yoli Shavit and Ron Ferens. Introduction to camera pose estimation with deep learning. arXiv preprint arXiv:1907.05272, 2019.
[12] Petru Soviany and Radu Tudor Ionescu. Continuous trade-off optimization between fast and accurate deep face detectors. In International Conference on Neural Information Processing, pages 473-485. Springer, 2018.
[13] Jiacheng Wu and Naim Dahnoun. A health monitoring system with posture estimation and heart rate detection based on millimeter-wave radar. Microprocessors and Microsystems, 94:104670, September 2022.
[14] Ze Wu, Jiwen Zhang, Ken Chen, and Chenglong Fu. Yoga posture recognition and quantitative evaluation with wearable sensors based on two-stage classifier and prior Bayesian network. Sensors, 19(23):5129, 2019.
[15] Santosh Yadav, Amitojdeep Singh, Abhishek Gupta, and Jagdish Raheja. Real-time yoga recognition using deep learning. Neural Computing and Applications, 31, 2019.
[16] Zhe Zhang, Jie Tang, and Gangshan Wu. Simple and lightweight human pose estimation. arXiv preprint arXiv:1911.10346, 2019.
Copyright © 2023 Prof. Moumita Dey, Arindam Musib, Surajit Mandal, Shibsundar Chell, Mansaram Kar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET57281
Publish Date : 2023-12-02
ISSN : 2321-9653
Publisher Name : IJRASET