Kalyantra is an intelligent robotic assistant that can speak, understand and process natural language, and interact with users to provide practical assistance and entertainment in day-to-day life. The aim of the project is to develop a system that offers future-oriented solutions for homes, hospitals, schools, and workplaces, making people's daily lives more comfortable, safe, and convenient. Built on IoT technologies, the system can act within and effectively automate remote environments.
In terms of practical assistance, Kalyantra can perform a variety of tasks, including home automation, medical reminders, and mobility support. The robot's mobility control feature can drive an electric wheelchair or other mobility aid, making it easier for individuals with mobility impairments to move around their environment. The system can also be integrated with IoT devices such as smart home systems, making it possible to automate tasks such as turning on lights or adjusting the thermostat.
The project is built using Python for the robot's programming and Flutter for the mobile application. The robot's hardware includes sensors for obstacle detection, a camera for facial recognition, and a speaker for voice output. The mobile application serves as the user interface, allowing users to interact with the robot and control its features. In conclusion, Kalyantra is an intelligent robotic assistant that provides practical assistance and entertainment, making it well suited to individuals with disabilities or older adults who may require support. The system can be used in a variety of settings, including homes, hospitals, schools, and workplaces, and can be integrated with IoT devices to effectively automate remote environments. With advanced features such as emotion recognition and interactive storytelling, Kalyantra has the potential to improve the quality of life of many individuals.
I. INTRODUCTION
A well-being bot supports the self-management and self-care of mental and physical health that the industry increasingly demands. Well-being also encompasses factors such as feeling happy, working productively, managing time, coping with the normal stresses of life, and keeping a balanced life.
A well-being bot can provide personalized support to individuals seeking to improve their mental and physical health. By leveraging technology and data, these bots can offer customized recommendations and interventions based on an individual's unique needs and preferences. Additionally, well-being bots can help users track their progress over time, providing valuable insights into the effectiveness of different self-care strategies. Ultimately, the goal of a well-being bot is to help users achieve a sense of balance, fulfillment, and resilience in their daily lives.
Well-being bots can be especially beneficial in today's fast-paced and stressful world. Many people struggle to find the time and energy to prioritize their health and well-being, and may not know where to turn for support. By using a well-being bot, individuals can access resources and guidance anytime, anywhere, and in a way that feels comfortable and convenient for them.
Some well-being bots may focus specifically on certain aspects of health and wellness, such as sleep, nutrition, or stress management. Others may take a more holistic approach, addressing multiple dimensions of well-being at once. Regardless of their specific focus, well-being bots can be an effective tool for promoting self-awareness, building healthy habits, and improving overall quality of life.
As technology continues to evolve, we can expect to see even more sophisticated well-being bots in the future. These bots may incorporate advanced features such as machine learning, natural language processing, and biometric tracking, allowing them to provide even more personalized and precise recommendations. However, it's important to remember that well-being bots are just one tool in a larger toolkit for self-care, and should be used in conjunction with other strategies such as exercise, social support, and professional counseling when necessary.
A. Problem Formulation
How can we leverage technology to support individuals in managing their mental and physical health and promoting overall well-being? Specifically, how can we design and develop a well-being bot that is personalized, effective, and accessible to a wide range of users? How can we ensure that the bot provides useful, evidence-based recommendations while also respecting user privacy and security? Additionally, how can we evaluate the effectiveness of the bot in improving user outcomes and satisfaction? Finally, how can we integrate the well-being bot into existing healthcare systems and services, and ensure that it complements, rather than replaces, human support and care?
II. METHODOLOGY
The system consists of two main components: a hardware module and a software module. The hardware module is built from a schematic diagram, while the software module is developed using Python for the backend and Flutter for the frontend, which forms a mobile application. Communication between the frontend and backend is established using the MQTT protocol via AWS IoT Core. In addition to the existing features, the design is enhanced with GPT-based assistance and mood analysis. Overall, the system provides a comprehensive, integrated solution for controlling a bot using voice or text commands while incorporating obstacle detection, path tracing, and photo-capturing capabilities.
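A minimal sketch of the backend side of this MQTT link, assuming the AWSIoTPythonSDK package; the endpoint address, certificate file names, and topic names below are placeholders, not the project's actual values:

```python
# Minimal MQTT link to AWS IoT Core using AWSIoTPythonSDK.
# Endpoint, certificate paths, and topic names are placeholders.
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

def on_command(client, userdata, message):
    # Called whenever the Flutter app publishes a command.
    print("Command received:", message.payload.decode())

client = AWSIoTMQTTClient("kalyantra-bot")
client.configureEndpoint("xxxxxxxx-ats.iot.us-east-1.amazonaws.com", 8883)
client.configureCredentials("root-CA.pem", "private.key", "certificate.pem")

client.connect()
client.subscribe("kalyantra/commands", 1, on_command)   # commands from the app
client.publish("kalyantra/status", "online", 1)         # status back to the app
```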
III. WORKING
The system offers two ways to interact with the bot:
Voice commands can be given directly to the bot through a connected microphone. The bot uses Google speech recognition (speech-to-text) to actively listen for wake-up words such as "Okay yantra" or "Do something." Once a wake-up command is recognized, the bot responds with "yes," and the user can give further tasks.
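A sketch of this wake-word loop, using the SpeechRecognition Python package with Google's free speech-to-text endpoint (the package choice is an assumption; the paper names only the Google service):

```python
# Wake-word loop: listen continuously, and when a wake phrase is heard,
# acknowledge and treat the next utterance as the task.
import speech_recognition as sr

WAKE_WORDS = ("okay yantra", "do something")
recognizer = sr.Recognizer()

def listen_once(source) -> str:
    audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return ""  # nothing intelligible was heard

with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)
    while True:
        heard = listen_once(mic)
        if any(word in heard for word in WAKE_WORDS):
            print("yes")              # in the bot, spoken aloud via eSpeak
            task = listen_once(mic)   # the next utterance is the task
            print("task:", task)
```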
The second way to interact with the bot is through a mobile application that uses the native microphone and speech recognition to send text commands to the bot. The text is processed to determine how many tasks it contains, and the bot completes them in order. Multiple tasks in a single text are separated with "then", while the final command follows the keyword "and". For example, the command "Move forward 10 cm then click a photo and return" is processed as three separate tasks.
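This separator rule can be implemented with a few lines of string processing; the helper below is illustrative, not the project's actual parser:

```python
# Split a compound command into tasks: "then" separates tasks and
# "and" precedes the final one, per the grammar described above.
def split_tasks(command: str) -> list[str]:
    head, sep, last = command.lower().rpartition(" and ")
    parts = head.split(" then ") if sep else last.split(" then ")
    if sep:
        parts.append(last)
    return [p.strip() for p in parts if p.strip()]

print(split_tasks("Move forward 10 cm then click a photo and return"))
# -> ['move forward 10 cm', 'click a photo', 'return']
```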
Photos captured by the bot's camera are uploaded to cloud storage and a real-time database using Firebase; this database serves as the primary store for generic data. Text-to-speech synthesis is performed using eSpeak.
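A sketch of the upload and speech paths, assuming the firebase-admin Python SDK and the espeak command-line tool; the bucket name, database URL, and key file are placeholders:

```python
# Photo upload via firebase-admin and speech output via eSpeak.
# Bucket name, database URL, and key path are placeholders.
import subprocess
import firebase_admin
from firebase_admin import credentials, db, storage

cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred, {
    "storageBucket": "kalyantra.appspot.com",
    "databaseURL": "https://kalyantra-default-rtdb.firebaseio.com",
})

def upload_photo(path: str) -> None:
    blob = storage.bucket().blob(f"photos/{path}")
    blob.upload_from_filename(path)              # push file to Cloud Storage
    db.reference("photos").push({"file": path})  # log entry in the Realtime DB

def speak(text: str) -> None:
    subprocess.run(["espeak", text])             # offline text-to-speech
```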
The "Tell Me" keyword can be used while giving a task to obtain a ChatGPT response.
Sentiment and emotion analysis is performed on every sentence the user speaks.
Integrated GPT APIs for assistance: By integrating ChatGPT APIs, the system can provide conversational assistance to users. ChatGPT can assist with tasks such as answering questions, providing recommendations, or offering emotional support. The bot can interact with the user in a more human-like manner, which improves the overall user experience.
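A sketch of the "Tell Me" path using the openai Python package's chat-completions endpoint; the model name and system prompt are illustrative assumptions:

```python
# Forward the user's question to the Chat Completions API.
# Model name and system prompt are illustrative, not the paper's values.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tell_me(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are Kalyantra, a helpful well-being assistant."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```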
Integration of sentiment and emotion analysis: By integrating sentiment and emotion analysis, the system can understand the user's emotional state and adapt accordingly. The bot can use this analysis to offer personalized recommendations or adjust its tone or response to the user.
For example, if the bot detects that the user is feeling anxious or stressed, it can offer calming techniques or suggest taking a break to relax. Sentiment and emotion analysis can help create a more empathetic and supportive interaction between the user and the bot.
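The paper does not name its analyzer, so the sketch below substitutes NLTK's VADER sentiment scorer to show per-sentence scoring and the mood-dependent response it enables:

```python
# Per-sentence sentiment scoring with NLTK's VADER analyzer; this is a
# stand-in, since the paper does not identify its analyzer.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

def mood_of(sentence: str) -> str:
    score = analyzer.polarity_scores(sentence)["compound"]  # range -1..1
    if score <= -0.3:
        return "stressed"   # could trigger calming suggestions
    if score >= 0.3:
        return "positive"
    return "neutral"

print(mood_of("I am so worried about tomorrow"))  # likely -> "stressed"
```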
The system controls mobility using GPIO pins connected according to the schematic to manipulate motors for movement. The bot also uses IR obstacle sensors for obstacle detection.
When an obstacle is detected, the bot halts temporarily and resumes movement once the path is clear. The system uses an LM393 speed-encoder sensor to measure the distance travelled or remaining, giving the bot control over path tracing and memorization.
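A sketch of this mobility loop with the RPi.GPIO library; the pin assignments and centimetres-per-pulse constant are placeholders for the values fixed by the schematic and the wheel geometry:

```python
# Drive forward, halt while the IR sensor reports an obstacle, and count
# LM393 encoder pulses to estimate distance travelled.
import RPi.GPIO as GPIO

MOTOR_A, MOTOR_B = 17, 27   # L298 inputs (hypothetical pins)
IR_OBSTACLE = 22            # IR module output: LOW when obstacle present
ENCODER = 23                # LM393 speed sensor output
CM_PER_PULSE = 1.05         # depends on wheel diameter and encoder-disc slots

pulses = 0
def count_pulse(channel):
    global pulses
    pulses += 1

GPIO.setmode(GPIO.BCM)
GPIO.setup([MOTOR_A, MOTOR_B], GPIO.OUT)
GPIO.setup([IR_OBSTACLE, ENCODER], GPIO.IN)
GPIO.add_event_detect(ENCODER, GPIO.RISING, callback=count_pulse)

def move_forward(distance_cm: float) -> None:
    target = distance_cm / CM_PER_PULSE
    while pulses < target:
        blocked = GPIO.input(IR_OBSTACLE) == GPIO.LOW
        GPIO.output(MOTOR_A, not blocked)   # halt while the path is blocked
        GPIO.output(MOTOR_B, GPIO.LOW)
    GPIO.output(MOTOR_A, GPIO.LOW)          # stop at the target distance

move_forward(10)
GPIO.cleanup()
```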
IV. DESIGNS
V. COMPONENTS
Raspberry Pi 3 Model B+: A credit-card sized computer board that can be programmed to perform various tasks and interfaces with various electronic components.
L298 Dual H-Bridge Motor Driver: A module used to control the speed and direction of DC motors.
IR Sensor Module: A device that detects the presence of an object by emitting and receiving infrared radiation.
KIS-3R33S / LM2596 DC-DC Buck Converter Module: A power supply module that steps a higher voltage down to a lower voltage for use with electronic circuits.
Ultrasonic Distance Sensor (HC-SR04): A device that measures the distance to nearby objects using ultrasonic sound waves (see the ranging sketch after this list).
Mini Camera 5 MP 1080p Sensor OV5647: A small camera module that captures 5 megapixel images and 1080p video.
IR Speed Sensor Module Based on LM393: A device that measures the speed of rotating objects by detecting the interruptions in infrared light.
100 RPM Dual Shaft BO Motor-Straight: A DC motor with dual shafts that can rotate at a maximum speed of 100 rotations per minute.
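For the HC-SR04 listed above, ranging works by emitting a 10 µs trigger pulse and timing the width of the echo pulse; a sketch with placeholder pin numbers:

```python
# HC-SR04 ranging: trigger for 10 microseconds, then convert the echo
# pulse width to distance (speed of sound ~343 m/s, halved for the
# round trip). TRIG/ECHO pins are placeholders.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 5, 6
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm() -> float:
    GPIO.output(TRIG, GPIO.HIGH)
    time.sleep(10e-6)                 # 10 microsecond trigger pulse
    GPIO.output(TRIG, GPIO.LOW)
    start = end = time.time()
    while GPIO.input(ECHO) == GPIO.LOW:
        start = time.time()           # echo pulse begins
    while GPIO.input(ECHO) == GPIO.HIGH:
        end = time.time()             # echo pulse ends
    return (end - start) * 34300 / 2  # pulse width -> centimetres

print(f"Obstacle at {distance_cm():.1f} cm")
```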