Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Dr. Mrs. Archana V. Dehankar, Kunal Ghugare, Sahil Shinde, Prathamesh Rathod, Shruti Gadekar, Prajwal Chakole
DOI Link: https://doi.org/10.22214/ijraset.2025.67018
Autonomous mobile robots have gained significant prominence in recent times due to their widespread relevance and applications across various domains. Their ability to navigate environments independently, without relying on physical or electro-mechanical guidance systems, has made them increasingly valuable and versatile. These robots are now being utilized in sectors such as industry, healthcare, education, agriculture, and even households to enhance efficiency in services and everyday tasks. With advancements in technology, the demand for mobile robots has risen due to their ability to perform a variety of functions, including transporting heavy objects, monitoring environments, and participating in search and rescue operations. Numerous studies have explored the significance of mobile robots, their diverse applications, and the challenges they encounter. This paper examines the existing literature, highlighting key issues faced by mobile robots, and presents an in-depth analysis of the sensors, devices, and sensor fusion techniques commonly employed to address challenges such as localization, estimation, and navigation. These methods are evaluated based on their relevance, strengths, and limitations, providing a foundation for future research, valuable insights into overcoming the current challenges faced by autonomous mobile robots, and guidance for the development of improved methodologies for their operation.
I. INTRODUCTION
Autonomous robots have seen widespread adoption across diverse sectors, such as healthcare, logistics, and industrial automation. These robots operate in unpredictable settings, making adaptability and multitasking critical to their performance. Tasks like object detection and tracking are essential in achieving efficient multitasking, especially when using AI-driven systems like HuskyLens. An Arduino Uno controller integrated with HuskyLens enables the robot to execute various operations, including recognizing, tracking, and interacting with objects. This review examines recent advancements in object detection and AI-driven multitasking frameworks, focusing on potential improvements in autonomous robot functionalities.
The rise of autonomous mobile robots (AMRs) has revolutionized various industries, offering innovative solutions to complex operational challenges. AMRs are designed to navigate dynamic and partially known environments with minimal human intervention, showcasing advanced capabilities in perception, navigation, and decision-making. These robots have been widely adopted in sectors such as healthcare, manufacturing, logistics, and education due to their ability to enhance efficiency, improve safety, and reduce reliance on human labor [1][2]. Recent advancements in artificial intelligence (AI), sensor technologies, and computational methods have further expanded the capabilities of AMRs, enabling them to perform diverse tasks, including object detection, tracking, and interaction [3][4]. By leveraging sensor fusion techniques, such as integrating data from cameras and inertial measurement units (IMUs), these robots achieve enhanced localization and environmental mapping, even in complex terrains [5][6].
The focus is on developing an autonomous robot integrating AI-based vision and mobility systems, equipped with functionalities such as face recognition, object classification, line tracking, and color recognition. Such robots have demonstrated transformative potential in sectors like healthcare, where they can efficiently guide patients and alleviate the burden on medical staff, and in industrial settings, where they streamline material handling and logistics [3][1]. Despite their promise, challenges such as sensor reliability, navigation in unpredictable environments, and real-time computational demands persist, necessitating further research and innovation in autonomous robotics [7][8]. By consolidating recent advancements and addressing key challenges, this review aims to contribute to the ongoing progress in AMR technologies, focusing on their applications, limitations, and potential improvements. This study emphasizes the importance of scalable and adaptive frameworks to ensure efficient and reliable performance in diverse scenarios [9][2].
AMRs distinguish themselves from traditional automated systems by their ability to perceive, analyze, and act autonomously in response to environmental changes. This autonomy is achieved through the integration of advanced hardware and software systems, including AI-based vision algorithms, sensor fusion technologies, and decision-making frameworks. For instance, camera-based vision systems combined with inertial measurement units (IMUs) enable accurate localization, object recognition, and environmental mapping. Such capabilities are critical for applications ranging from warehouse automation to autonomous navigation in healthcare facilities [5][6].
The practical implications of such systems are immense. In healthcare, autonomous robots equipped with vision and navigation capabilities can guide patients, deliver medical supplies, and assist in maintaining sterile environments, thereby reducing the workload on healthcare professionals [3][1]. In industrial domains, these robots excel in tasks like material handling, inventory management, and quality inspection, where precision and efficiency are paramount. Furthermore, in educational settings, they provide hands-on platforms for students to explore robotics, AI, and programming, fostering innovation and skill development.
Despite the impressive progress in AMR development, several challenges remain. Key limitations include ensuring robust performance in unpredictable environments, minimizing errors in sensor data processing, and achieving real-time computational efficiency under constrained hardware resources [7][8]. Furthermore, the adaptability of these robots to unstructured environments, such as those encountered in outdoor or disaster-response scenarios, is an area that demands further exploration. The integration of advanced AI techniques, such as reinforcement learning and neural network optimization, has shown promise in addressing some of these issues, but scalability and reliability continue to be active areas of research [9][2].
This study aims to consolidate existing advancements in AMR technologies and propose scalable frameworks for integrating AI-based vision systems and mobility mechanisms. By exploring the potential of autonomous robots across various applications, this review highlights their transformative impact on modern industries while addressing current limitations. The findings and insights presented in this paper are expected to guide researchers and practitioners toward the next generation of autonomous systems that are intelligent, efficient, and robust in real-world scenarios.
FIGURE 1. Applications of mobile robots.
Mobile robots have diverse applications across various fields, showcasing their adaptability and transformative impact on modern industries. Key areas where these robots are increasingly utilized include transportation, search and rescue, surveillance, research and education, customer support, and cleaning services.
In the transportation sector, mobile robots streamline logistics and material handling, ensuring efficiency and reducing human effort. In search and rescue operations, they play a critical role by navigating hazardous environments to locate victims and assess risks, thereby safeguarding human responders. Similarly, surveillance tasks in industries like security and defense benefit from the robots' ability to monitor and patrol areas autonomously, reducing human workload and increasing precision. Mobile robots have also emerged as valuable tools in research and education. They provide students and researchers with practical insights into robotics, artificial intelligence, and automation. In customer support, these robots enhance user experience in settings like retail stores, hotels, and airports by offering personalized assistance. Furthermore, their role in cleaning services has been transformative, with robots autonomously maintaining cleanliness in homes, offices, and public spaces.
The versatility of mobile robots demonstrates their potential to revolutionize a wide range of domains, enabling more efficient and safer operations. This review paper focuses on these applications while addressing the challenges and advancements in localization, navigation, and sensor fusion, which are critical for enhancing their capabilities in these diverse environments.
II. LITERATURE REVIEW
A. Vision-Based Robotics
Vision systems play a pivotal role in enhancing robotic capabilities, particularly in object detection, tracking, and classification.
Authors in [10] applied convolutional neural networks (CNNs) to achieve efficient object detection in real time, deploying the YOLO framework on low-power hardware. Their work demonstrated the feasibility of real-time object recognition with constrained resources. Similarly, [11] employed a hybrid approach, blending classical computer vision methods with lightweight deep learning models like MobileNet to achieve reliable face recognition and tracking while minimizing computational demands.
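As a concrete illustration of this class of detector (not the exact pipeline of [10] or [11]), the following Python sketch runs a lightweight YOLO model through OpenCV's DNN module; the weight, configuration, and class-list file names are placeholders, and the thresholds are illustrative.

```python
# Minimal sketch of real-time object detection with a lightweight YOLO model,
# assuming OpenCV is installed and pretrained "yolov4-tiny" files and a
# "coco.names" class list are available locally (all file names are placeholders).
import cv2

net = cv2.dnn.readNet("yolov4-tiny.weights", "yolov4-tiny.cfg")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

with open("coco.names") as f:
    class_names = [line.strip() for line in f]

cap = cv2.VideoCapture(0)          # on-board camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # One forward pass plus non-maximum suppression on the detections.
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.4, nmsThreshold=0.4)
    for cid, score, box in zip(class_ids, scores, boxes):
        x, y, w, h = box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"{class_names[int(cid)]}: {float(score):.2f}",
                    (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:        # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```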
For color and tag detection, [12] proposed a technique utilizing HSV (Hue, Saturation, Value) color space for consistent color recognition under fluctuating lighting conditions. Their study proved effective in tasks requiring high precision, such as industrial sorting and labeling.
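A minimal Python sketch of this HSV-based approach is shown below, assuming OpenCV is available; the red hue bounds and the input file name are illustrative placeholders rather than values from [12].

```python
# Minimal sketch of HSV-based color recognition; the hue bounds would be tuned
# for the actual lighting conditions, and "frame.jpg" is a placeholder input.
import cv2
import numpy as np

def detect_color(frame_bgr, lower_hsv, upper_hsv, min_area=500):
    """Return bounding boxes of regions whose pixels fall inside the given HSV range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)       # HSV separates hue from brightness
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)           # binary mask of matching pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]

# Example: detect a saturated red marker (hue wraps around 0 in OpenCV's 0-179 range,
# so a production version would also check the 170-179 band).
boxes = detect_color(cv2.imread("frame.jpg"),
                     lower_hsv=np.array([0, 120, 70]),
                     upper_hsv=np.array([10, 255, 255]))
print(boxes)
```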
B. Mobility and Navigation
Advanced mobility designs like mecanum wheels have significantly improved robot movement in confined spaces, enabling omnidirectional capabilities.
In [7], researchers evaluated mecanum wheels’ dynamics and concluded that they are particularly suitable for applications requiring precise movement, such as logistics and healthcare. The incorporation of PID controllers in the study ensured improved path-following accuracy and motion stability. Obstacle avoidance, a cornerstone of autonomous navigation, has also been explored extensively. Studies like [5] combined ultrasonic sensors and infrared modules with path-planning algorithms (e.g., A* and Dijkstra) to enable dynamic, collision-free navigation. These works underscore the value of sensor fusion in reducing interference and increasing the reliability of navigation systems.
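To make the control step concrete, the following Python sketch implements a basic discrete PID controller of the kind used for path following; the gains and the error signal are illustrative rather than values reported in [7].

```python
# Minimal discrete PID controller sketch for path following; gains and the
# error signal (e.g. lateral offset from a tracked line) are illustrative.
class PID:
    def __init__(self, kp, ki, kd, output_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.output_limit = output_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.output_limit, min(self.output_limit, u))  # clamp actuator command

# Usage: steer so the tracked line stays in the middle of the image.
pid = PID(kp=0.8, ki=0.05, kd=0.2)
error = 0.15                      # normalized offset of the line from image centre
steering = pid.update(error, dt=0.02)
print(f"steering command: {steering:+.3f}")
```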
C. Control Systems and Sensor Integration
Control architectures are critical for multitasking robots, enabling seamless coordination between sensors, actuators, and decision-making units. Microcontrollers like Arduino Uno and Raspberry Pi are frequently used due to their versatility and compatibility with various components.
The study in [13] introduced a sensor fusion framework using Kalman filters to integrate data from cameras, IMUs, and ultrasonic sensors, achieving better spatial awareness and motion accuracy. Furthermore, [9] compared communication protocols such as I2C, SPI, and UART, highlighting their suitability for distinct robotic functions. With the rise of edge AI, [8] demonstrated how TensorFlow Lite models could be implemented on low-power devices to perform real-time object detection and gesture recognition. This innovation reduced latency by processing sensory input locally, decreasing dependency on external computation.
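The predict-correct cycle at the heart of such fusion frameworks can be illustrated with a one-dimensional Kalman filter, sketched below in Python; the noise variances and sample readings are placeholders rather than parameters from [13].

```python
# Minimal 1-D Kalman filter sketch fusing a wheel-odometry prediction with noisy
# ultrasonic range measurements; the noise variances are illustrative placeholders.
class Kalman1D:
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.25):
        self.x = x0      # state estimate (e.g. distance to wall, metres)
        self.p = p0      # estimate variance
        self.q = q       # process noise (uncertainty in the motion model)
        self.r = r       # measurement noise (ultrasonic sensor variance)

    def predict(self, u):
        """Propagate the state with a control input (distance moved this step)."""
        self.x += u
        self.p += self.q

    def update(self, z):
        """Correct the prediction with a sensor reading z."""
        k = self.p / (self.p + self.r)          # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = Kalman1D(x0=2.0)
for moved, measured in [(-0.10, 1.93), (-0.10, 1.78), (-0.10, 1.71)]:
    kf.predict(moved)
    print(f"fused distance: {kf.update(measured):.3f} m")
```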
D. AI-Powered Multitasking
The adoption of AI has revolutionized multitasking in robotics, allowing systems to handle diverse operations simultaneously.
Reinforcement learning (RL) has proven effective for optimizing task scheduling and navigation. For instance, [14] proposed an RL-based solution for autonomous delivery robots, which could adapt to traffic changes while reducing delivery times. Another study, [4], developed a unified deep learning model capable of performing multiple tasks, such as object recognition, facial detection, and line tracking, within a single framework. This approach simplified system complexity and enhanced task efficiency.
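The following Python sketch illustrates, on a toy grid world, the tabular Q-learning update that underlies such RL-based schedulers and planners; the environment, rewards, and hyper-parameters are illustrative only and are not drawn from [14] or [4].

```python
# Minimal tabular Q-learning sketch on a toy 4x4 grid: the agent learns to reach
# the goal cell (bottom-right) while the Q-learning rule updates action values.
import random

n_states, n_actions = 16, 4                  # 4x4 grid; actions: up/down/left/right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Toy environment: deterministic moves on the grid, +1 reward at the goal cell."""
    row, col = divmod(state, 4)
    if action == 0: row = max(row - 1, 0)
    if action == 1: row = min(row + 1, 3)
    if action == 2: col = max(col - 1, 0)
    if action == 3: col = min(col + 1, 3)
    nxt = row * 4 + col
    return nxt, (1.0 if nxt == 15 else -0.01), nxt == 15

for _ in range(2000):                        # training episodes
    s, done = 0, False
    while not done:
        a = random.randrange(n_actions) if random.random() < epsilon else max(
            range(n_actions), key=lambda i: Q[s][i])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward the bootstrapped return.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print("greedy action from start:", max(range(n_actions), key=lambda i: Q[0][i]))
```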
Energy optimization in multitasking robots was addressed in [15], where researchers utilized dynamic voltage scaling to minimize power consumption during inactive periods without compromising responsiveness.
E. Integration with Microcontrollers for Multitasking
In terms of practical implications, HuskyLens-based robots have shown high responsiveness and adaptability in experimental setups. Robots equipped with HuskyLens effectively adapt to real-time changes in the environment, such as adjusting speeds during line tracking or following moving objects, as shown in several DIY implementations and research experiments. The high speed and accuracy of its recognition algorithms provide a robust foundation for further development in AI-based autonomous robotics.
Community builds such as the DIY Life project likewise show that HuskyLens is a valuable AI tool for robotics applications, particularly for projects requiring multitasking. Its integration into autonomous robots enhances navigation, object recognition, and tracking capabilities without the need for complex programming. This literature review demonstrates that HuskyLens's adaptability in Arduino-controlled robots supports a broad range of applications, from industrial to educational and healthcare settings, highlighting its potential in advancing autonomous robotic systems.
III. PROPOSED METHODOLOGY
This research aims to design and develop an autonomous robot that integrates AI and advanced sensors for efficient multitasking operations. The robot will utilize AI-powered functionalities such as face recognition, object detection, classification, and color identification for dynamic decision-making, and will feature a mobility system with wheels powered by DC motors for omnidirectional movement.
IV. EXISTING METHODOLOGY
A. Vision-Based Systems for Environmental Perception
Vision systems are a critical component of autonomous robots, enabling them to perceive and interpret their surroundings for tasks such as object detection, tracking, and classification. The HuskyLens AI vision module, widely used in robotics, integrates seven key functions: face recognition, object tracking, object classification, line tracking, color recognition, tag detection, and object recognition. Current research has employed lightweight AI models such as YOLO (You Only Look Once) and MobileNet for real-time object detection and classification due to their efficiency on low-power hardware [10], [11]. These models process image data from cameras and classify objects in the robot's environment with high accuracy, allowing precise decision-making during multitasking operations. For tasks like line tracking, infrared and camera-based systems combined with PID (Proportional-Integral-Derivative) controllers are commonly used, ensuring stable and accurate path-following behavior in dynamic environments [12].
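As an illustration of how a camera-based line tracker can supply the error signal consumed by a PID controller, the following Python sketch (assuming OpenCV, with a placeholder input image) computes the normalized offset of a dark line from the image centre.

```python
# Minimal sketch of extracting a line-tracking error from a camera frame; the
# resulting normalized offset would feed a PID steering loop such as the one
# sketched earlier. "frame.jpg" is a placeholder input.
import cv2

def line_offset(frame_bgr):
    """Return the normalized horizontal offset of a dark line in the lower image strip."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    strip = gray[-40:, :]                                   # look just ahead of the robot
    _, binary = cv2.threshold(strip, 80, 255, cv2.THRESH_BINARY_INV)  # dark line -> white
    m = cv2.moments(binary)
    if m["m00"] == 0:
        return None                                         # line lost
    cx = m["m10"] / m["m00"]                                # line centroid (pixels)
    width = binary.shape[1]
    return (cx - width / 2) / (width / 2)                   # -1 (far left) .. +1 (far right)

offset = line_offset(cv2.imread("frame.jpg"))
print("line offset:", offset)
```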
Color and tag recognition methods, including HSV (Hue, Saturation, Value) transformation, are also applied for identifying markers and distinguishing regions in robotic navigation [7]. These methodologies form the foundation for integrating AI-based vision systems into autonomous robots.
B. Mobility and Motion Control
Mobility systems play a pivotal role in enabling autonomous robots to navigate and adapt to dynamic environments. The use of mecanum wheels has been a notable innovation, allowing omnidirectional movement and enhanced maneuverability in confined spaces [5]. Robots equipped with these wheels are powered by DC motors controlled via motor drivers such as the L293D. Control algorithms like PID and trajectory optimization ensure smooth motion, even on uneven surfaces [13]. Obstacle avoidance is achieved through a combination of ultrasonic sensors and path-planning algorithms like A* and Dijkstra, which allow robots to dynamically adjust their routes based on real-time environmental feedback [9]. These mobility solutions ensure that robots can perform multitasking operations, such as simultaneous object tracking and navigation, in complex and cluttered environments.
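For reference, the standard inverse-kinematics relation for a mecanum platform, mapping a desired body velocity to four wheel speeds, is sketched below in Python; the geometry values and sign conventions are illustrative and depend on roller orientation and wiring of the actual robot.

```python
# Minimal sketch of mecanum-wheel inverse kinematics: mapping a desired body
# velocity (vx forward, vy sideways, wz rotation) to the four wheel speeds.
# Geometry values and sign conventions are illustrative placeholders.
def mecanum_wheel_speeds(vx, vy, wz, lx=0.10, ly=0.12, wheel_radius=0.04):
    """Return angular speeds (rad/s) for front-left, front-right, rear-left, rear-right."""
    k = lx + ly                     # half wheelbase length plus half track width
    return (
        (vx - vy - k * wz) / wheel_radius,   # front-left
        (vx + vy + k * wz) / wheel_radius,   # front-right
        (vx + vy - k * wz) / wheel_radius,   # rear-left
        (vx - vy + k * wz) / wheel_radius,   # rear-right
    )

# Pure sideways translation: front-left/rear-right spin one way,
# front-right/rear-left the other, which is the characteristic mecanum pattern.
print(mecanum_wheel_speeds(vx=0.0, vy=0.2, wz=0.0))
```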
C. Control Systems and Sensor Integration
Efficient control systems are essential for managing the robot’s hardware and processing sensor data. Arduino Uno, a versatile and widely used microcontroller, is often integrated with motor drivers and sensors to coordinate operations. Communication between components is facilitated using protocols like UART, I2C, and SPI, ensuring reliable and low-latency data exchange [8].
Sensor fusion techniques play a crucial role in enhancing the robot’s perception capabilities. By combining data from multiple sources, such as cameras, IMUs (Inertial Measurement Units), and ultrasonic sensors, the robot achieves accurate localization and navigation. Kalman filters and complementary filters are commonly used for real-time sensor data integration, reducing noise and improving decision-making accuracy [14].
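The complementary filter is the simpler of the two and is sketched below in Python, blending the integrated gyroscope rate with the accelerometer tilt angle; the blending factor and sample readings are illustrative.

```python
# Minimal complementary-filter sketch fusing gyroscope rate with accelerometer
# tilt to estimate pitch; alpha and the sample data are illustrative.
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Blend the integrated gyro angle (smooth but drifting) with the accelerometer
    angle (noisy but drift-free) to get a stable pitch estimate in degrees."""
    gyro_angle = angle + gyro_rate * dt                        # short-term: integrate gyro
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))   # long-term reference
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

pitch = 0.0
for gyro, ax, az in [(1.5, 0.02, 0.98), (1.2, 0.03, 0.97), (0.9, 0.05, 0.99)]:
    pitch = complementary_filter(pitch, gyro, ax, az, dt=0.01)
print(f"estimated pitch: {pitch:.2f} deg")
```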
Edge AI has further improved the efficiency of control systems. Frameworks such as TensorFlow Lite allow real-time inference of AI models directly on microcontrollers, reducing dependency on external computation [4].
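A minimal on-device inference loop with the TensorFlow Lite runtime is sketched below; the model file name is a placeholder and a dummy tensor stands in for a preprocessed camera frame.

```python
# Minimal on-device inference sketch with TensorFlow Lite, assuming the
# tflite-runtime package and a quantized classification model file
# ("model.tflite", a placeholder name) are present on the device.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Dummy input with the shape/dtype the model expects; a real robot would feed
# a preprocessed camera frame here.
frame = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], frame)
interpreter.invoke()                      # runs entirely on the local CPU
scores = interpreter.get_tensor(output_details["index"])
print("top class:", int(np.argmax(scores)))
```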
D. AI-Driven Multitasking and Decision-Making
Artificial intelligence methodologies enable autonomous robots to handle multiple tasks simultaneously, enhancing their efficiency in diverse applications. Machine learning models, particularly deep learning frameworks like CNNs (Convolutional Neural Networks), are utilized for tasks such as object recognition, classification, and environmental mapping [15].
Reinforcement learning (RL) has been applied to optimize task scheduling and path planning in robots. In [16], an RL-based framework allowed autonomous robots to learn optimal navigation paths in dynamic environments while minimizing energy consumption. Path optimization algorithms, such as A*, Dijkstra, and their heuristic variations, ensure efficient routing while avoiding obstacles.
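The routing step can be illustrated with a compact A* search over a 2-D occupancy grid using a Manhattan-distance heuristic, as in the Python sketch below; the grid, start, and goal cells are illustrative.

```python
# Minimal A* sketch on a 2-D occupancy grid (0 = free, 1 = obstacle) with a
# Manhattan-distance heuristic; the grid and endpoints are illustrative.
import heapq

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # admissible heuristic
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, start=(0, 0), goal=(3, 3)))
```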
V. RESULTS AND DISCUSSION
This review spans the domains of artificial intelligence, object detection, autonomous mobile robots, and deep learning. The findings highlight how sensor fusion and AI-based models improve accuracy in localization, navigation, and multitasking. The combination of multiple sensors with deep learning enables real-time decision-making, enhancing adaptability in dynamic environments. However, challenges such as energy efficiency and computational limitations remain, suggesting the need for scalable and efficient solutions in future developments.
This review has examined the advancements and applications of autonomous robots equipped with AI-driven capabilities for multitasking operations. The integration of advanced vision systems, such as the HuskyLens module, with microcontrollers like the Arduino Uno enables these robots to perform tasks such as object detection, tracking, line following, and obstacle avoidance with high efficiency. The use of mecanum wheels for omnidirectional mobility further enhances their versatility, making them suitable for dynamic and complex environments. The reviewed studies emphasize the importance of combining robust hardware with intelligent software to develop autonomous systems capable of addressing real-world challenges. While significant progress has been made, areas such as power optimization, algorithm refinement, and testing in diverse settings require further exploration. These improvements will help enhance the reliability and adaptability of autonomous robots for various applications, including surveillance, customer service, and automation. Overall, the integration of AI and robotics has demonstrated immense potential, and ongoing advancements are expected to further transform these systems into highly efficient and reliable tools for modern industries.
REFERENCES
[1] Z. H. Khan, A. Siddique, and C. W. Lee, "Robotics utilization for healthcare digitization in global COVID-19 management," International Journal of Environmental Research and Public Health, vol. 17, no. 11, p. 3819, May 2020.
[2] Z. Fan, Q. Qiu, and Z. Meng, "Implementation of a four-wheel drive agricultural mobile robot for crop/soil information collection on the open field," in Proceedings
[3] of the 2017 32nd Youth Academic Annual Conference of Chinese Association of Automation, YAC 2017, 2017.
[4] E. Apriaskar, F. Fahmizal, I. Cahyani, and A. Mayub, "Autonomous mobile robot based on behaviour-based robotics using V-REP simulator – Pioneer P3-DX robot," Jurnal Rekayasa Elektrika, vol. 16, no. 1, pp. 15-22,
[5] April 2020.
[6] L. Cheng and T. Huang, "Multi-tasking in AI robots for smart home applications," IEEE Internet of Things Journal, 2024, doi: 10.1109/IoT.2024.109027.
[7] I. Ballester, A. Fontán, J. Civera, K. H. Strobl, and R. Triebel, "DOT: Dynamic object tracking for visual SLAM," in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 11705-11711.
[8] "Vehicle-related distance estimation using customized YOLOv7," in Image and Vision Computing: 37th International Conference, IVCNZ 2022, Auckland, New Zealand, November 24-25, 2022, Revised Selected Papers, Springer, 2023, pp. 91-103.
[9] G. Chen, W. Dong, P. Peng, J. Alonso-Mora, and X. Zhu, "Continuous occupancy mapping in dynamic environments using particles," IEEE Transactions on Robotics, 2023.
[10] M. Lu, H. Chen, and P. Lu, "Perception and avoidance of multiple small fast moving objects for quadrotors with only low-cost RGBD camera," IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 11657-11664, 2022.
[11] J. Lin, H. Zhu, and J. Alonso-Mora, "Robust vision-based obstacle avoidance for micro aerial vehicles in dynamic environments," in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 2682-2688.
[12] Y. Wang, J. Ji, Q. Wang, C. Xu, and F. Gao, "Autonomous flights in dynamic environments with onboard vision," in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021, pp. 1966-1973.
[13] H. Chen and P. Lu, "Real-time identification and avoidance of simultaneous static and dynamic obstacles on point cloud for UAVs navigation," Robotics and Autonomous Systems, vol. 154, p. 104124, 2022.
[14] Z. Xu, X. Zhan, B. Chen, Y. Xiu, C. Yang, and K. Shimada, "A real-time dynamic obstacle tracking and mapping system for UAV navigation and collision avoidance with an RGB-D camera," in 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023, pp. 10645-10651.
[15] A. Z. Zhu, D. Thakur, T. Özaslan, B. Pfrommer, V. Kumar, and K. Daniilidis, "The multivehicle stereo event camera dataset: An event camera dataset for 3D perception," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2032-2039, 2018.
[16] Y. Qiu, C. Wang, W. Wang, M. Henein, and S. Scherer, "AirDOS: Dynamic SLAM benefits from articulated objects," in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 8047-8053.
[17] M. Taskiran, N. Kahraman, and C. E. Erdem, "Face recognition: Past, present, and future (a review)," Digital Signal Processing, p. 102809, 2020.
[18] Y. Kortli, M. Jridi, A. Al Falou, and M. Atri, "Face recognition systems: A survey," Sensors, vol. 20, no. 2, p. 342, 2020.
Copyright © 2025 Dr. Mrs. Archana V. Dehankar, Kunal Ghugare, Sahil Shinde, Prathamesh Rathod, Shruti Gadekar, Prajwal Chakole. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET67018
Publish Date : 2025-02-18
ISSN : 2321-9653
Publisher Name : IJRASET