Yoga is a holistic practice that combines physical postures, breath control, and meditation to promote physical and mental well-being. Yoga can be learnt by attending classes at a yoga center or through home tutoring, and books, podcasts, and videos can also be used to gain the knowledge. Many people prefer self-learning, but without assistance and feedback they may find it difficult to identify the inaccurate portions of their yoga poses and may be unable to determine whether the poses were executed correctly. Poor yoga practice might result in health problems including strokes and nerve damage. In order to help users practice yoga, we propose in this study a system that can track the movements of various body parts in yoga positions. By merging deep learning intelligence with Google's MediaPipe, the system detects various human body joint points in real time and infers the precision of specific yoga postures from those joint points.
I. INTRODUCTION
Yoga, an ancient practice that combines physical postures, breathing exercises, and meditation, has gained widespread popularity for its numerous physical and mental health benefits. As yoga continues to attract a diverse and growing community of practitioners, there is an increasing demand for tools and technologies that can aid individuals in their practice, particularly in terms of posture estimation and correction. In response to this demand, the fusion of computer vision and machine learning has opened up a new dimension in the field of yoga by providing innovative solutions for real-time assessment and improvement of yoga postures.

The practice of yoga traditionally relies on human instructors to guide practitioners in achieving proper alignment, balance, and form. While the guidance of experienced instructors is invaluable, it is not always readily available, and maintaining consistent feedback can be challenging. This is where technology steps in, offering the potential to democratize access to high-quality yoga instruction and personalized feedback. Yoga posture estimation and correction systems employ computer vision techniques and the capabilities of Google's MediaPipe to detect different joint points of the human body in real time and to determine the precision of a particular yoga pose by evaluating the angles between those joints. These technology-driven solutions hold significant promise in enhancing the overall yoga experience. They empower practitioners, from novices to advanced yogis, to develop a deeper understanding of their practice and refine their postures with precision. Moreover, these systems can help mitigate the risk of injury by identifying and correcting misalignments, promoting safer and more effective yoga sessions.

In this survey paper, we delve into the rapidly evolving landscape of yoga posture estimation and correction. We explore the cutting-edge techniques and tools that are transforming the way people engage with yoga, examining the various methods and technologies employed, their advantages and limitations, and the potential for their widespread adoption. Additionally, we highlight the broader implications of this field in promoting health and wellness, as well as its relevance in the context of modern lifestyles where access to in-person yoga instruction may be limited. By providing an in-depth overview of the current state of research and development in yoga posture estimation and correction, this survey aims to shed light on the exciting advancements in the field and inspire further exploration. As technology continues to merge with the ancient practice of yoga, we are witnessing a dynamic synergy that has the potential to revolutionize how individuals engage with their physical and mental well-being, ultimately enriching their lives and promoting a more balanced and harmonious existence.
II. EXISTING SYSTEM
In the realm of yoga posture estimation, several notable research endeavors have paved the way for innovative solutions. This section provides an overview of these significant contributions, highlighting their key findings and methodologies, which serve as the building blocks for the evolving landscape of yoga posture estimation and correction.
A. Pose Classification Using PoseNet
In this system, individuals can choose a yoga pose they wish to practice and submit a photo of themselves in that pose. The system employs PoseNet, a deep learning framework, to identify and extract 17 key points from the uploaded photo, representing different joints of the body. Each key point is labeled with a "Part ID" and comes with a confidence score ranging from 0.0 to 1.0, indicating how reliable the detection is.
The system then compares the user's pose with that of an expert, analyzing the differences in angles between various body joints. However, a limitation of this system is the absence of real-time interaction with the user. While it provides valuable feedback based on angle differences, users do not have the opportunity to receive immediate guidance and correction during their yoga practice. Nonetheless, the system serves as a useful tool for users to assess their poses and work on refining their yoga techniques.
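As a rough illustration of this angle-based comparison, the sketch below computes the angle at a joint from three 2D key points and flags joints whose angle deviates from an expert reference by more than a tolerance. The coordinates, joint names, and tolerance are hypothetical placeholders and are not taken from the surveyed system; in practice, the 17 PoseNet key points would typically be filtered by their confidence scores before any angles are computed.

```python
import math

def joint_angle(a, b, c):
    """Return the angle (in degrees) at joint b formed by points a-b-c.

    Each point is an (x, y) tuple, e.g. normalized image coordinates.
    """
    angle = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    angle = abs(angle)
    return 360.0 - angle if angle > 180.0 else angle

def joints_off_target(user_angles, expert_angles, tolerance_deg=15.0):
    """Return {joint: deviation} for joints differing from the expert pose
    by more than the tolerance (all angles in degrees)."""
    deviations = {}
    for joint, expert in expert_angles.items():
        measured = user_angles.get(joint)
        if measured is not None and abs(measured - expert) > tolerance_deg:
            deviations[joint] = abs(measured - expert)
    return deviations

# Hypothetical right-elbow angle from shoulder, elbow, and wrist key points.
elbow = joint_angle((0.42, 0.30), (0.50, 0.42), (0.58, 0.30))
print(f"Right elbow angle: {elbow:.1f} degrees")
print(joints_off_target({"right_elbow": elbow}, {"right_elbow": 160.0}))
```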
B. Pose Classification Using OpenPose
The system described is a yoga posture analysis application that utilizes OpenPose. OpenPose, with its real-time multi-person key point detection capabilities, identifies key points on the user's body, such as the positions of the hands, feet, head, and other body parts. However, the proposed models recognize only six yoga asanas.
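The surveyed work does not detail its classifier here, but a minimal way to discriminate a handful of asanas from detected key points is to compare a vector of joint angles against stored reference vectors. The sketch below illustrates this with a simple nearest-neighbor rule; the asana names and reference angles are invented placeholders, not values from the OpenPose-based system.

```python
import numpy as np

# Hypothetical reference joint-angle vectors (degrees) for a few asanas, ordered as
# [left_elbow, right_elbow, left_knee, right_knee, left_hip, right_hip].
REFERENCE_POSES = {
    "tadasana": np.array([175.0, 175.0, 178.0, 178.0, 175.0, 175.0]),
    "vrikshasana": np.array([170.0, 170.0, 178.0, 45.0, 175.0, 120.0]),
    "virabhadrasana": np.array([175.0, 175.0, 95.0, 175.0, 130.0, 160.0]),
}

def classify_asana(angle_vector):
    """Assign the asana whose reference angle vector is closest in Euclidean distance."""
    angles = np.asarray(angle_vector, dtype=float)
    best_name, best_dist = None, float("inf")
    for name, reference in REFERENCE_POSES.items():
        dist = float(np.linalg.norm(angles - reference))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist

label, dist = classify_asana([172, 174, 177, 50, 174, 125])
print(f"Predicted asana: {label} (distance {dist:.1f})")
```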
C. IoT-based Yoga Posture Recognition System
The designers of this system suggested a novel method for detecting yoga poses using low-resolution infrared sensors. The authors of that research combined a wireless sensor network (WSN) based on low-resolution infrared sensors with a deep convolutional neural network (DCNN). In contrast, the present study proposes a solution that uses a web camera or a mobile phone camera in order to accommodate a broad audience.
D. Yoga Posture Recognition Utilizing Microsoft Kinect
The Microsoft Kinect system used by the authors can capture key points of the human body in real time. However, compared to a mobile phone camera, it is relatively costly, and it also raises security and privacy concerns. Therefore, it is not very appropriate for a yoga posture detection system. In addition, the system the authors designed to identify yoga poses does not guide the user on how to correct erroneous poses.
III. AIM
The aim of this project is to build a one-stop solution for the entire yoga workout routine. The objective is to reduce the risk of injuries that may be caused by wrong postures and by the incomplete knowledge of the person practicing yoga. This is achieved by using Google's MediaPipe and 2D kinematic animation, which assist the user on each and every step of the exercise and act as a personal trainer guiding the user throughout the workout. This personal training software gives the user real-time feedback on the postures performed; the feedback is immediate, since the user receives it as soon as an exercise is performed, and alert messages are shown when an inexact posture is detected. The user also receives workout insights and a personalized workout plan that guides them toward their goals. Keeping track of progress in a person's fitness journey is a necessity: it keeps the user motivated and on track, since the user has a general roadmap, and sticking to it helps them attain their goals. This is realized through interactive real-time graphs that show the statistics of each workout, such as the time spent, the accuracy of each posture, and the number of calories burned.
IV. METHODOLOGY
The methodology for yoga posture correction and estimation takes a methodical approach to help practitioners maintain proper form and alignment throughout their practices.
Correction and Feedback: Give the user immediate feedback on how their present posture compares to the optimal position. Provide spoken instructions or visual cues to help them adjust their alignment and form.
Visual Aids: To demonstrate the proper alignment of body components, use visual aids like lines, overlays, or markers. Give the user immediate guidance by displaying feedback on the screen.
Auditory Guidance: Provide auditory cues or instructions to let the user know what changes are necessary for correct alignment.
Scoring and Gamification: Implement a scoring system or gamification elements to engage users and encourage them to maintain correct postures. Provide positive reinforcement for achieving and holding the correct pose.
User Engagement: Create an interactive and user-friendly interface to keep users engaged and motivated. Offer progress tracking and performance history to show improvements over time.
Adaptability: Design the system to accommodate users of different skill levels and body types. Allow for customization and adaptation based on individual needs and preferences.
Continuous Improvement: Gather user feedback to enhance the accuracy and user experience of the system. Use machine learning or AI techniques to improve pose estimation and feedback algorithms over time.
This methodology aims to create a technology-assisted yoga experience that helps practitioners maintain proper form and alignment, ultimately leading to a more effective and safe yoga practice. It combines pose detection, analysis, feedback, and user engagement to guide and assist individuals in their yoga journey, evaluating the user's posture through joint angles, alignment, and distances.
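As a minimal sketch of the correction, scoring, and auditory/visual feedback steps described above, the following snippet compares measured joint angles against target angles for one frame, produces human-readable correction messages, and derives a simple accuracy score. The target angles, tolerance, and joint names are illustrative assumptions, not values prescribed by this methodology.

```python
# Illustrative target angles (degrees) for one posture and a tolerance band.
TARGET_ANGLES = {"left_knee": 90.0, "right_knee": 175.0, "left_elbow": 175.0}
TOLERANCE_DEG = 12.0

def frame_feedback(measured_angles):
    """Return (accuracy score in percent, correction messages) for one frame."""
    messages = []
    joints_ok = 0
    for joint, target in TARGET_ANGLES.items():
        measured = measured_angles.get(joint)
        if measured is None:
            continue  # joint not visible in this frame
        error = abs(measured - target)
        if error <= TOLERANCE_DEG:
            joints_ok += 1
        else:
            action = "straighten" if measured < target else "bend"
            messages.append(f"{joint.replace('_', ' ')}: {action} by about {error:.0f} degrees")
    score = 100.0 * joints_ok / len(TARGET_ANGLES)
    return score, messages

score, tips = frame_feedback({"left_knee": 70.0, "right_knee": 176.0, "left_elbow": 160.0})
print(f"Pose accuracy: {score:.0f}%")
for tip in tips:
    print(" -", tip)  # the same strings could be sent to a text-to-speech engine
```

In a full system, a score like this could feed the gamification and progress-tracking elements described above, while the per-joint messages drive the on-screen overlays and auditory guidance.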
V. MEDIAPIPE
MediaPipe is a sophisticated open-source framework developed by Google, tailored to excel in real-time perception tasks. Its capabilities include precise face detection, hand tracking, pose estimation, and object recognition, all made possible through a combination of pre-trained machine learning models and adaptable pipelines. This versatile platform empowers developers to seamlessly integrate cutting-edge computer vision functionalities into their applications, allowing for an enhanced user experience. What sets MediaPipe apart is its ability to analyze video streams and image sequences in real time, recognizing facial expressions, tracking hand movements, and estimating body poses with remarkable accuracy. Its intuitive design and flexibility make it a preferred choice for developers looking to create interactive and immersive applications across various domains. MediaPipe's prowess in real-time perception technology significantly elevates the potential for innovative solutions in fields such as entertainment, education, and healthcare, enabling developers to explore new horizons in their applications and services. MediaPipe Pose provides 33 key points (landmark indices 0 to 32), which are represented in the diagram below.
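For context, the snippet below shows one common way to obtain these landmarks with MediaPipe's Python Pose solution (the legacy `mp.solutions` API) from a webcam stream. It assumes the `mediapipe` and `opencv-python` packages are installed and is only a minimal sketch, not the full posture-correction pipeline.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam
with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 landmarks, each with normalized x, y, z and a visibility score.
            shoulder = results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_SHOULDER]
            print(f"Left shoulder: ({shoulder.x:.2f}, {shoulder.y:.2f})")
            mp_drawing.draw_landmarks(frame, results.pose_landmarks, mp_pose.POSE_CONNECTIONS)
        cv2.imshow("Pose landmarks", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```

The normalized landmark coordinates returned here are the inputs from which joint angles, alignment, and distances can be computed for posture evaluation.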
VI. CONCLUSION
To sum up, yoga posture correction and estimation technologies, powered by cutting-edge computer vision frameworks like MediaPipe, have the potential to completely transform how yoga is taught and practiced. Yoga practitioners can improve alignment and develop their technique with the help of these tools, which provide real-time feedback and precise analysis of yoga poses. Through the identification of specific body points and their comparison with optimal poses, these technologies augment the learning process, offering insightful information to both novice and proficient yogis. Consequently, practitioners can train with confidence, knowing that these advances contribute to their journey toward improved health and well-being. Personalized, interactive, and transformative yoga experiences have even greater potential in the future thanks to ongoing breakthroughs in computer vision and machine learning.
REFERENCES
[1] Infinity Yoga Tutor: Yoga Posture Detection and Correction System. Authors: Prabath Lakmal Rupasinghe, Sri Lanka Institute of Information Technology; Chethana Lipyanapathirana, Sri Lanka Institute of Information Technology.
[2] Yoga Posture Recognition and Quantitative Evaluation with Wearable Sensors Based on Two-Stage Classifier and Prior Bayesian Network. Authors: Ze Wu, Jiwen Zhang, Department of Mechanical Engineering, Tsinghua University.
[3] Real-time Yoga Recognition Using Deep Learning. Authors: Santhosh Kumar Yadav, University of Galway; Amitojdeep Singh, University of Waterloo.
[4] Yoga Posture Recognition by Detecting Human Joint Points in Real Time Using Microsoft Kinect. Authors: Muhammad Usama Islam, University of Louisiana at Lafayette; Faisal Bin Ashraf, University of California, Riverside.
[5] MediaPipe: Yoga Pose Detection Using Deep Learning Models. Author: Sweety Patel, RK University, Rajkot, India.