In the field of intelligent transportation systems, autonomous vehicles are a prominent research topic because they have the potential to significantly reduce traffic and increase travel efficiency. One of the essential technologies for self-driving automobiles is scene classification, which serves as the foundation for these vehicles' decision-making. Deep learning-based approaches have shown promise in solving the scene classification problem in recent years. Nonetheless, there are a few issues with scene classification techniques that need more research, such as how to handle similarities and contrasts within the same category. This paper proposes an enhanced deep network-based scene classification technique to address these issues. The experimental results show that the accuracy of the proposed method is higher than that of state-of-the-art methods.
I. INTRODUCTION
Road safety is a paramount concern in our rapidly advancing technological world. The introduction of automated smart assistants in vehicles stands as a testament to this progression, offering promising potential for improving driver safety and navigation efficiency. This project, "Design and Implementation of an Automated Smart Assistant for Enhanced Vehicle Safety", aims to contribute to this trend by developing a comprehensive, fully automated smart assistant for vehicles. The proposed system integrates state-of-the-art features such as lane detection, blind spot monitoring, speed breaker detection, traffic signal recognition, road sign interpretation, and obstacle identification. These functionalities equip the driver with a holistic understanding of their surroundings, promoting safer, more informed decision-making on the roads. By seamlessly combining deep learning, computer vision, and sensor fusion technologies, this project strives to provide an advanced toolset for vehicular safety. In addition to advancing vehicle safety, the system also supports the global effort towards autonomous vehicles by providing a solid foundation of robust and reliable detection features. The following sections describe the development process, challenges, and outcomes of this project.
II. RELATED WORK
An Enhanced Deep Network-Based Approach to Scene Classification for Autonomous Vehicles
In the field of intelligent transportation systems, autonomous vehicles are a prominent research topic because they have the potential to significantly reduce traffic and increase travel efficiency. One of the essential technologies for self-driving automobiles is scene classification, which serves as the foundation for these vehicles' decision-making. Deep learning-based approaches have shown promise in solving the scene classification problem in recent years. Nonetheless, there are a few issues with scene classification techniques that need more research, such as how to handle similarities and contrasts within the same category. This paper proposes an enhanced deep network-based scene classification technique to address these issues.[1]
An IoT-Based Smart Outdoor Parking System
IoT is a vast area in which research and implementation are still ongoing. Traditional parking systems are outdated and cumbersome for urban cities, where difficulty in finding vacant slots can cause heavy traffic, minor collisions, and accidents. Even though many systems have been reported in the literature, most places are still organized under manual parking systems, and most frequently visited places in smart cities are provided only with organized outdoor parking.
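The slot-tracking core of such a parking system can be illustrated with a minimal sketch. The class and method names below are hypothetical, standing in for whatever backend the sensors report to; each slot sensor simply pushes its occupancy state and the system answers queries for vacant slots.

```python
# Minimal sketch of an IoT parking backend that tracks slot occupancy.
class ParkingLot:
    def __init__(self, n_slots):
        # All slots start out vacant.
        self.occupied = [False] * n_slots

    def update(self, slot, is_occupied):
        # Called whenever a slot sensor reports a state change.
        self.occupied[slot] = is_occupied

    def vacant_slots(self):
        # Indices of currently vacant slots, for display to drivers.
        return [i for i, o in enumerate(self.occupied) if not o]

lot = ParkingLot(4)
lot.update(0, True)
lot.update(2, True)
print(lot.vacant_slots())  # [1, 3]
```

In a deployed system the `update` calls would arrive over a protocol such as MQTT, but the occupancy bookkeeping stays this simple.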
Detection of Lane and Speed Breaker for Autonomous Vehicles Using a Machine Learning Algorithm
Vehicle-camera-based lane and speed breaker detection, together with drastic advances in modern technology, calls for transportation that is automated and self-driven. [2]
RGB Camera Failures and Their Effects in Autonomous Driving Applications
RGB cameras are among the most relevant sensors for autonomous driving applications. It is undeniable that failures of vehicle cameras may compromise the autonomous driving car. [4]
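A basic runtime safeguard against such camera failures is a frame-validity check. The sketch below is an assumed, simplified heuristic (not taken from the cited work): it flags frames that are almost entirely black or saturated, two common symptoms of a failing or blinded camera.

```python
import numpy as np

def frame_is_valid(frame, dark_thresh=10, bright_thresh=245):
    # A frame whose mean intensity is near 0 (blackout) or near 255
    # (saturation/overexposure) is treated as a suspected camera failure.
    mean = frame.mean()
    return dark_thresh < mean < bright_thresh

black = np.zeros((4, 4), dtype=np.uint8)        # simulated blackout frame
normal = np.full((4, 4), 128, dtype=np.uint8)   # simulated normal frame
print(frame_is_valid(black), frame_is_valid(normal))  # False True
```

A real system would add checks for frozen frames and partial occlusion, but even this mean-intensity test catches the most severe failure modes cheaply.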
Reinforcement Learning Framework for Video-Frame-Based Autonomous Car Following
Its application to emerging autonomous vehicles remains an unexplored research area. Aviz is designed to provide autonomous vehicles with convenient and safe driving by avoiding accidents caused by human errors.
On the other hand, a few issues pertaining to scene classification techniques, such as how to handle similarities and differences within the same category, still require more research. In this research, an improved deep network-based scene classification technique is proposed to address these issues. The proposed approach uses an enhanced Faster Region-based Convolutional Neural Network (Faster RCNN) to extract the features of scene-representative objects in order to obtain local features. To highlight local semantics relevant to driving scenarios, a new residual attention block is added to the Faster RCNN network. Furthermore, a modified Inception module is employed to extract global features, and a combined Leaky ReLU and ELU activation function is presented to mitigate the potential drawbacks of the standard ReLU activation.
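The combined activation can be sketched in a few lines. The exact blending used in [1] is not reproduced here; the version below assumes a simple weighted average of Leaky ReLU and ELU, which keeps a nonzero gradient for negative inputs (unlike plain ReLU).

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Small nonzero slope for negative inputs.
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Smooth exponential saturation for negative inputs.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def combined_activation(x, w=0.5):
    # Hypothetical blend: weighted mix of Leaky ReLU and ELU.
    # For x > 0 both branches equal x, so positives pass through unchanged.
    return w * leaky_relu(x) + (1.0 - w) * elu(x)

x = np.array([-2.0, 0.0, 3.0])
print(combined_activation(x))
```

For positive inputs the blend is the identity, while for negative inputs it interpolates between the linear Leaky ReLU tail and the saturating ELU tail.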
The appropriate planning of rest periods in light of the availability of parking spots at rest areas is a significant issue for haulage companies as well as traffic and road authorities. A case study shows how You Only Look Once (YOLO)v5 can be implemented to detect heavy goods vehicles at rest areas during winter to allow real-time prediction of parking-space occupancy. Snowy conditions and the polar night in winter naturally pose several challenges for image recognition, so thermal network cameras are used.
As these images typically contain a large number of occlusions and truncations of vehicles, transfer learning is applied to YOLOv5 to examine whether the front cabin and the rear are suitable features for heavy goods vehicle recognition. The results demonstrate that the trained algorithm can detect the front cabin of heavy goods vehicles with high confidence, while detecting the rear appears more difficult, especially when located far away from the camera.[3]
A deep learning-based object detection method locates a distant region in an image in real time. It focuses on distant objects from a vehicular front camera perspective, attempting to tackle one of the common issues in Advanced Driver Assistance Systems (ADAS) applications: recognizing smaller, more distant objects with the same confidence as bigger, closer ones. This paper presents an efficient multi-scale object detection network, named ConcentrateNet, to detect a vanishing point and concentrate on the near-distant region. [5]
Initially, the object detection model inference produces a larger receptive-field detection result and predicts a potential vanishing point location, i.e., the farthest region in the frame. Then, the image is cropped near the vanishing point location and processed with the object detection model in a second inference to obtain distant-object detection results. Finally, the two inference results are merged with a specific Non-Maximum Suppression (NMS) method.
The proposed network architecture can be used in most object detection models, and it is implemented in several state-of-the-art object detection models to verify feasibility. Compared with the original models using a higher-resolution input size, ConcentrateNet architecture models use a lower-resolution input size with less model complexity, achieving significant precision and recall improvements.[8]
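The merging step of the two inference passes rests on NMS: detections from the full frame and from the vanishing-point crop are pooled, and overlapping duplicates are suppressed in favour of the higher-scoring box. The sketch below is a standard greedy NMS, not ConcentrateNet's specific variant; box coordinates are assumed to be (x1, y1, x2, y2) in the same frame after the crop's detections have been mapped back.

```python
def iou(a, b):
    # Intersection-over-Union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy NMS: keep the highest-scoring box, drop overlapping rivals.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep

# Boxes 0 and 1 heavily overlap (e.g. the same car seen in both passes);
# box 2 is a separate distant object from the cropped second pass.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]
```

Merging the two passes is then just concatenating both box lists (with crop coordinates remapped) before a single NMS call.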
III. PROJECT OBJECTIVES
Improvement of Road Safety: To design a smart assistant that enhances road safety by leveraging artificial intelligence and computer vision technology to detect lanes, blind spots, speed breakers, traffic signals, road signs, and potential obstacles.
Real-Time Lane Detection: To enable real-time detection of lane markings that will help the driver stay within the correct lane, reducing risks associated with straying into the wrong lane.
Blind Spot Detection: To create a reliable blind spot detection system that alerts the driver of potential hazards they may not be able to see in their mirrors, improving the safety of lane changes and turns.
Speed Breaker Identification: To automatically identify and alert the driver of upcoming speed breakers, ensuring a smooth and safe ride while preventing potential vehicle damage.
Traffic Signal Recognition: To design an intelligent system that accurately recognizes traffic signals in real-time, aiding drivers in making safer and more informed decisions on the road.
Road Sign Detection: To incorporate a feature that detects and interprets various road signs, providing real-time information to the driver and facilitating adherence to traffic rules and regulations.
Obstacle Identification: To develop a mechanism that effectively identifies any potential obstacles, such as pedestrians or other vehicles, preventing accidents and ensuring safe navigation.
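The lane detection objective ultimately feeds a simple geometric decision: warn when the vehicle drifts out of the central safe band between the detected lane boundaries. The function below is an illustrative sketch with assumed names and a made-up safety margin, not the project's actual implementation; it takes the lane boundary x-positions and the vehicle's lateral position in the same coordinates.

```python
def lane_departure_warning(left_x, right_x, vehicle_x, margin=0.2):
    # Normalized lateral offset of the vehicle from the lane center.
    # Warn when the offset leaves the central band (0.5 - margin on
    # either side of center, as a fraction of lane width).
    lane_width = right_x - left_x
    center = (left_x + right_x) / 2.0
    offset = abs(vehicle_x - center) / lane_width
    return offset > (0.5 - margin)

print(lane_departure_warning(100, 300, 210))  # False: near lane center
print(lane_departure_warning(100, 300, 290))  # True: near right boundary
```

In the full system, `left_x` and `right_x` would come from the lane detection module per frame, and the warning would be debounced over several frames before alerting the driver.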
REFERENCES
[1] J. Ni, K. Shen, Y. Chen, W. Cao, and S. X. Yang, “An improved deep network-based scene classification method for self-driving cars,” IEEE Trans. Instrum. Meas., vol. 71, 2022.
[2] R. K. Mohapatra, K. Shaswat, and S. Kedia, “Offline handwritten signature verification using CNN inspired by inception V1 architecture,” in Proc. 5th Int. Conf. Image Inf. Process. (ICIIP), Solan, India, Nov. 2019, pp. 263–267.
[3] M. Kasper-Eulaers, N. Hahn, S. Berger, T. Sebulonsen, Ø. Myrland, and P. E. Kummervold, “Short communication: Detecting heavy goods vehicles in rest areas in winter conditions using YOLOv5,” Algorithms, vol. 14, no. 4, p. 114, Mar. 2021.
[4] C. Wang et al., “Pulmonary image classification based on inceptionv3 transfer learning model,” IEEE Access, vol. 7, pp. 146533–146541, 2019.
[5] F. Yu et al., “BDD100K: A diverse driving dataset for heterogeneous multitask learning,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Seattle, WA, USA, Jun. 2020, pp. 2633–2642.
[6] Y. Chen, W. Li, C. Sakaridis, D. Dai, and L. Van Gool, “Domain adaptive faster R-CNN for object detection in the wild,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Salt Lake City, UT, USA, Jun. 2018, pp. 3339–3348.
[7] C. Chen, Z. Zheng, X. Ding, Y. Huang, and Q. Dou, “Harmonizing transferability and discriminability for adapting object detectors,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Seattle, WA, USA, Jun. 2020, pp. 8866–8875.
[8] Y. Song, Z. Liu, J. Wang, R. Tang, G. Duan, and J. Tan, “Multiscale adversarial and weighted gradient domain adaptive network for data scarcity surface defect detection,” IEEE Trans. Instrum. Meas., vol. 70, pp. 1–10, 2021.