Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Rugwed Nand, Atharva Chavan, Atharva Jadhav, Vedant Achole, Priyanshu Nikam, Prof. Prranjali Jadhav
DOI Link: https://doi.org/10.22214/ijraset.2023.57569
With the growing demand for personalised fitness experiences, this paper takes a fresh look at home workouts by creating a Personalised Gym Trainer built on the Mediapipe library. The system combines pose detection technology with voice assistance to provide users with real-time feedback and personalised instruction while exercising. By leveraging the capabilities of the Mediapipe library, with OpenCV handling camera tasks, the system identifies various exercises accurately and uses torso point detection for greater precision. Individual Python modules for specific workouts, such as pull-ups and the bench press, keep the solution flexible and scalable. Python, Git, GitHub, and Jenkins are among the tools and technologies used in the process. Furthermore, the Findpose library is used to calculate angles between torso points, providing a quantitative assessment of correct posture. The proposed Personalised Gym Trainer is a promising step forward in home fitness solutions, integrating pose detection with voice assistance for an interactive and personalised workout experience.
I. INTRODUCTION
Technology has ushered in a new era of fitness solutions, in which personalised instruction and interactive feedback are critical for effective home workouts. Traditional exercise apps frequently fall short of offering real-time, personalised help. To fill this void, our project presents a Personalised Gym Trainer, a system designed to transform home training habits. The trainer is built around the Mediapipe library, a powerful pose detection tool, supplemented by OpenCV for camera functions.
Accuracy of Pose Detection: To ensure accurate pose detection, our method goes beyond traditional approaches by using torso point detection. This not only improves the system's precision but also lays the groundwork for a more comprehensive understanding of user movements. Recognising the vast range of exercises, the system features a modular structure: individual exercises, such as pull-ups and the bench press, are represented by distinct Python modules, improving code organisation, scalability, and ease of maintenance.
Technological Framework: The development of the Personalised Gym Trainer draws on a variety of tools and technologies. Python is the core programming language, and Git and GitHub support version control and collaborative development. Jenkins is used for continuous integration, ensuring a smooth development workflow.
Angle Measurement for Correct Posture: We introduce angle measurement using the Findpose library to evaluate the accuracy of a user's posture. By calculating the angles between torso points on the body, the system can objectively determine whether the user has adopted the correct stance during each exercise.
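For illustration, the sketch below shows one common way such a joint angle can be computed from three 2D landmark coordinates using the arctangent; the function name and the sample coordinates are illustrative assumptions and do not reflect the exact Findpose interface.

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by points a-b-c.

    Each point is an (x, y) tuple, e.g. pixel coordinates of
    detected landmarks such as shoulder, hip and knee."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

# Example: hip angle from shoulder, hip and knee coordinates
shoulder, hip, knee = (320, 180), (330, 300), (325, 420)
print(joint_angle(shoulder, hip, knee))  # ~173 degrees -> torso roughly straight
```

An angle close to 180 degrees at the hip, for instance, indicates a straight torso, while smaller values can flag a bent posture during an exercise.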
II. LITERATURE REVIEW
There are various applications on the market that instruct users on how to execute exercises, but we employ computer vision to guide the user not only through which exercise to execute but also through correct posture, while counting repetitions. This application functions as a workout assistant, providing real-time posture monitoring and dietary advice. It can be used not just by individuals at home but also in gyms as a smart trainer, minimising the need for human interaction.
[1] Their goal was to provide a bottom-up strategy for estimating the user's pose and segmenting the user in real time from images containing multiple people, using an efficient single-shot approach. The proposed idea used a convolutional neural network (CNN) trained to detect and classify keypoints, studying relative displacements to cluster keypoints into groups and assemble pose instances. The model obtained a COCO [6] keypoint accuracy of 0.665 with single-scale inference and 0.687 with multi-scale inference. Part-based modelling is used, and training depends on the keypoint-level structure.
[2] Their goal was to develop BlazePose, a lightweight convolutional neural network architecture for human pose estimation optimised for mobile use. During inference on a Pixel 2 phone, the network generates 33 body keypoints (as illustrated in Fig. 1) for a single person and runs at over 30 frames per second, making it suitable for real-time applications such as fitness tracking and sign language recognition. Its most significant contributions are a novel body pose tracking pipeline and a lightweight pose estimation network; to locate the keypoints, the approach combines heatmaps with regression. The authors obtained a stable pose estimation technique by training BlazePose on a dataset of up to 25K images annotated with distinct body keypoints to improve accuracy. Combined with BlazeFace and BlazePalm, the proposed 33-keypoint topology is efficient. The strategy was designed primarily for upper-body key locations, and a solution covering lower-body pose analysis is planned as well.
[3] The researchers proposed an efficient solution to the multi-person challenge of recognising poses when numerous people appear in the real-time frame. The model is trained to recognise the keypoints in the frame and then group them according to the affinity between distinct points. This is known as the bottom-up strategy, and it is highly efficient in terms of accuracy and performance regardless of the number of individuals in the frame.
[4] In this research paper, a deep neural network was used to obtain the precise location of the points. The authors demonstrated DNN-based pose estimators, which allowed greater precision in predicting stance and improved overall efficiency.
[5] Human Activity Recognition is one of the most researched topics in the field of computer vision. It is a powerful tool mainly used to aid medical systems, smart homes, surveillance, and many more areas. In this paper, an RGB camera was used to record gym activities such as push-up, squat, plank, forward lunge, and sit-up. Features were extracted from the recorded videos and were fed into classification algorithms such as Support Vector Machines, Decision Tree classifier, K-Nearest Neighbor classifier, and Random Forest classifier. The developed models were evaluated using metrics such as accuracy, balanced accuracy, precision score, recall score, and F1 score. The Random Forest Classifier outperformed all the other attempted methods with an accuracy of 98.98%. A repetition counter was developed, which splits workouts based on local minima analysis, and correctness of the workout was calculated for each skeletal point using dynamic time warping. An interactive android application was built for the user to gain insights on the performed workouts.
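For context, the classification-and-evaluation pipeline described in [5] can be illustrated with a short scikit-learn sketch. The feature matrix below is synthetic and merely stands in for pose features extracted from recorded workout videos, so the reported scores carry no meaning; the sketch only shows the shape of such a pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Synthetic stand-in for pose features extracted from workout videos:
# 500 samples, 40 features, 5 exercise classes (push-up, squat, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = rng.integers(0, 5, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))
print("macro F1:", f1_score(y_test, pred, average="macro"))
```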
III. METHODOLOGY
The existing gap in personalized guidance during home workouts, coupled with the availability of advanced pose detection libraries, forms the basis for this project. Traditional fitness applications often lack real-time feedback and tailored guidance. This project aims to address these limitations by integrating pose detection technology, specifically leveraging the capabilities of the Mediapipe library, to provide users with personalized and accurate guidance during their workout routines. The core methodology involves the utilization of the Mediapipe library for accurate pose detection. This library offers a robust solution for tracking key body points, enabling the identification of various exercises. By leveraging the capabilities of Mediapipe, the personalized gym trainer can precisely detect the user's body movements and positions in real-time.
To capture and process the live feed from the user's webcam, OpenCV is employed. OpenCV provides essential functionalities for video capture and image processing, enabling the system to obtain frames and subsequently apply pose detection algorithms from the Mediapipe library.
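A minimal sketch of this capture-and-detect loop is given below, using the publicly documented Mediapipe Pose API together with OpenCV; the actual project modules may structure this differently.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam
with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Mediapipe expects RGB input; OpenCV captures BGR frames
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            mp_draw.draw_landmarks(frame, results.pose_landmarks,
                                   mp_pose.POSE_CONNECTIONS)
        cv2.imshow("Gym Trainer", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cap.release()
cv2.destroyAllWindows()
```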
In addition to standard body points, the system focuses on detecting torso points, a critical element in many exercises. This inclusion enhances the accuracy of pose detection, ensuring a more comprehensive understanding of the user's body positioning.
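As an illustration, the helper below extracts the four torso landmarks (shoulders and hips) in pixel coordinates from a Mediapipe Pose result; the function name and return format are assumptions made for this sketch, not the project's exact code.

```python
import mediapipe as mp

mp_pose = mp.solutions.pose
L = mp_pose.PoseLandmark  # enum of the 33 BlazePose landmarks

def torso_points(results, width, height):
    """Return pixel coordinates of the four torso landmarks
    (shoulders and hips) from a Mediapipe Pose result, or None
    if no pose was detected in the frame."""
    if not results.pose_landmarks:
        return None
    lm = results.pose_landmarks.landmark
    names = [L.LEFT_SHOULDER, L.RIGHT_SHOULDER, L.LEFT_HIP, L.RIGHT_HIP]
    return {n.name: (int(lm[n].x * width), int(lm[n].y * height))
            for n in names}
```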
To streamline the development and maintenance of the system, distinct Python modules (files) are created for each specific exercise. This modular approach allows for a scalable and organized codebase, facilitating the addition of new exercises in the future. The project utilizes a range of tools and technologies, including Python for coding the application, Git for version control, and GitHub for collaborative development and code management. Continuous integration tools, such as Jenkins, aid in automating testing processes. For a nuanced evaluation of correct posture, the findpose library is employed to measure angles between torso points on the body. This method allows the system to quantitatively assess the alignment of key body parts, determining whether the user has adopted the correct pose during exercises.
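To illustrate how a per-exercise module might turn a measured joint angle into feedback, the sketch below counts repetitions from a single angle using simple thresholds; the class name, threshold values, and sample angle sequence are illustrative only.

```python
class RepCounter:
    """Counts repetitions from a joint angle (e.g. the elbow angle for
    pull-ups): one rep = angle dropping below `down`, then rising above `up`."""

    def __init__(self, down=70.0, up=160.0):
        self.down, self.up = down, up
        self.reps = 0
        self.flexed = False

    def update(self, angle):
        if angle < self.down:
            self.flexed = True          # joint fully bent
        elif angle > self.up and self.flexed:
            self.reps += 1              # joint extended again -> one rep
            self.flexed = False
        return self.reps

# Usage inside an exercise module's frame loop (angles from pose landmarks):
counter = RepCounter()
for angle in [170, 120, 65, 80, 150, 175, 60, 168]:
    counter.update(angle)
print(counter.reps)  # 2
```

Each exercise module can reuse such a counter with its own landmark triplet and thresholds, which keeps the per-exercise files small and the overall codebase easy to extend.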
To ensure the effectiveness and user-friendliness of the personalized gym trainer, user studies are conducted to gather feedback on the system's usability and satisfaction levels.
Iterative testing and refinement processes are implemented based on the collected feedback. Additionally, the system is designed with modularity in mind, enabling easy integration of new exercises and adaptability to evolving fitness trends. The emphasis on torso points detection and angle measurement contributes to the system's accuracy in recognizing and guiding users through various exercises.
IV. CONCLUSION
Nowadays, our lives are increasingly busy, and we rarely find time in our schedules to stay healthy, fit, and exercise on a daily basis. This has resulted in a slew of diseases and health problems. Many of these challenges can be addressed by applying Artificial Intelligence in the field of fitness: health-related apps and technology make our lives easier and our fitness journey more enjoyable. Individuals can include this program in their own workouts, making them more efficient and error-free. During this project we learned how to use the OpenCV library and package, and how machine learning can benefit people.
This project has a lot of potential for growth. It can be expanded to include more exercises, and a user interface can be built to make it easier to move through the activities. The data collected by the AI trainer can be retained and processed for future sessions, and a daily step tracker can also be included. The trainer could then provide a workout schedule and intensity level based on the user's body type and weight. For ease of use, the application can be extended into a full Android/iOS application. Future work may include moving the camera vertically and horizontally to capture a wider variety of exercises, or using multiple cameras to capture the body pose from various angles in order to support templates for additional exercises, as well as identifying multiple failure modes across the whole body.
REFERENCES
[1] Ganesh, P., Idgahi, R. E., Venkatesh, C. B., Babu, A. R., & Kyrarini, M. (2020, June). Personalized system for human gym activity recognition using an RGB camera. In Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments (pp. 1-7).
[2] Schmidt, B., Benchea, S., Eichin, R., & Meurisch, C. (2015, September). Fitness tracker or digital personal coach: How to personalize training. In Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers (pp. 1063-1067).
[3] Kranz, M., Möller, A., Hammerla, N., Diewald, S., Plötz, T., Olivier, P., & Roalter, L. (2013). The mobile fitness coach: Towards individualized skill assessment using personalized mobile devices. Pervasive and Mobile Computing, 9(2), 203-215.
[4] Martin-Niedecken, A. L., Rogers, K., Turmo Vidal, L., Mekler, E. D., & Márquez Segura, E. (2019, May). ExerCube vs. personal trainer: Evaluating a holistic, immersive, and adaptive fitness game setup. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1-15).
[5] Xie, Y., Li, F., Wu, Y., & Wang, Y. (2021). HearFit+: Personalized fitness monitoring via audio signals on smart speakers. IEEE Transactions on Mobile Computing.
[6] Jang, K. J., Ryoo, J., Telhan, O., & Mangharam, R. (2015, June). Cloud Mat: Context-aware personalization of fitness content. In 2015 IEEE International Conference on Services Computing (pp. 301-308). IEEE.
[7] Kumar, P. S., Sagar, S., & Devraj, B. P. (2010, January). Mobile based Personalized Gym Training System. In Computer Applications II: Proceedings of the International Conference on Computer Applications, 24-27 December 2010, Pondicherry, India. Research Publishing Services.
[8] Nath, I., Shaw, A., Bhadra, A., Bhowmick, D. D., Ghosh, S., & Chatterjee, S. (2023). A novel personal fitness trainer and tracker powered by artificial intelligence enabled by MediaPipe and OpenCV. International Journal of Intelligent Systems and Applications in Engineering, 11(3), 1085-1094.
[9] Viana, P., Ferreira, T., Castro, L., Soares, M., Pinto, J. P., Andrade, T., & Carvalho, P. (2018, July). GymApp: A real time physical activity trainer on wearable devices. In 2018 11th International Conference on Human System Interaction (HSI) (pp. 513-518). IEEE.
[10] Fieraru, M., Zanfir, M., Pirlea, S. C., Olaru, V., & Sminchisescu, C. (2021). AIFit: Automatic 3D human-interpretable feedback models for fitness training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9919-9928).
Copyright © 2023 Rugwed Nand, Atharva Chavan, Atharva Jadhav, Vedant Achole, Priyanshu Nikam, Prof. Prranjali Jadhav. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET57569
Publish Date : 2023-12-15
ISSN : 2321-9653
Publisher Name : IJRASET