Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: V. Mahajan, R. Kulkarni, A. Kacherikar, S. Bhosekar, S. Kakad
DOI Link: https://doi.org/10.22214/ijraset.2023.56688
In response to the urgent need for automated vehicle crash detection and its potential to increase road safety while reducing crash severity, this study introduces an innovative approach. It focuses on using deep learning techniques for image recognition in videos to predict the likelihood of an accident. Specifically, a convolutional neural network (CNN) model was constructed using TensorFlow and Keras and trained on a diverse, carefully annotated dataset covering a wide range of road scenarios, including the presence of vehicles, pedestrians, and varying road conditions. To maximize performance, the model is optimized with the Adam optimizer and trained using a sparse categorical cross-entropy loss. Furthermore, ModelCheckpoint callbacks are used to preserve the best-performing model weights during training. The overall objective of this research is to provide an efficient and accurate real-time solution for collision detection and thereby make a significant contribution to improved traffic safety. This has the potential not only to prevent accidents but also to reduce their severity, improving the safety and efficiency of transportation and, ultimately, the overall well-being of society.
I. INTRODUCTION
The introduction of automated car crash detection systems represents a notable advancement in the pursuit of enhanced road safety and the reduction of accident severity. This study presents an innovative approach to address this pressing issue, with the ultimate objective of preventing accidents and minimizing their impact on individuals' lives and transportation system efficiency. In recent years, automated car crash detection systems have been increasingly adopted, indicating their potential to improve road safety. However, concerns related to their accuracy, efficiency, and overall effectiveness have hindered their widespread acceptance. This study endeavors to alleviate these concerns by proposing a novel approach that harnesses deep learning, focusing on image detection within video data.
The core of this methodology lies in the development of a Convolutional Neural Network (CNN) model, a deep learning framework well suited to image analysis. The model was implemented using TensorFlow and Keras and meticulously trained on a comprehensively annotated dataset that encompasses various road scenarios. These scenarios, informed by [1][2], include diverse elements such as different vehicle types, pedestrian presence, and varying road conditions. This diverse dataset ensured that the system was proficient in handling the multifaceted intricacies of real-world road environments.
The ultimate goal of this research project was to make a substantial contribution to the advancement of road safety.
By offering a proactive solution capable of preventing accidents and reducing their severity, the aim is to create an environment in which transportation is safer and more efficient. This study represents a promising avenue for leveraging cutting-edge technology to serve the greater good, emphasizing the significance of innovation in the realm of public safety and transportation.
II. RELATED WORK
The research paper [1] provides a proactive solution to address the serious problem of road accidents. This system works by immediately notifying the user's emergency contacts in case of an accident and providing the exact location of the accident. Vehicle sensors detect an accident and the system is activated to send SMS to designated emergency contacts. In addition, it is equipped with a reset button that stops the warning notifications if all passengers are confirmed safe. This innovative approach has the potential to reduce response time and save lives in the event of an incident.
Many research papers have contributed significantly to the understanding and development of vehicle crash detection from videos. The paper [2] generates a variety of synthetic crash videos, uses them to train a 3D CNN model, and then applies domain adaptation to transfer the model from synthetic videos to real videos. Their experiments demonstrated the effectiveness of this approach for real-world car crash detection, showing promising progress in this field.
In another paper published by IEEE in 2020 [3], an innovative convolutional neural network (CNN) framework is introduced. The framework is designed to use GPS-equipped probe vehicles as data sources to facilitate highly accurate detection of expressway traffic incidents, demonstrated on the Guangzhou Airport Expressway. The authors report superior performance compared with alternative methods on that expressway. This study highlights the potential of CNN-based systems in the field of traffic incident detection and contributes significantly to improving traffic safety and transportation system efficiency.
Research paper [4] presents a new approach to traffic accident detection. This research focuses not only on recognizing objects by class, but also on identifying object attributes that correspond to safe, dangerous, or crashed states. To achieve this, a new database was built and an innovative method called "Attention R-CNN" was proposed. The method consists of two parts: one recognizes objects by class, and the other estimates their attributes. An attention mechanism that captures contextual background information enables the recognition of object attributes. Extensive experiments on the newly developed database demonstrate the effectiveness of this approach, making an important contribution to the field of crash detection.
Another paper [5] highlights the critical problem of road accidents in India. The authors propose a system that uses live surveillance cameras installed on highways to quickly detect accidents, employing a convolutional neural network (CNN) to classify video frames as accident or non-accident. Given the alarming rate of accidents and fatalities in India, this research aims to provide a solution that can quickly detect accidents and report them to the authorities, especially in areas with limited surveillance. The main objective is to improve road safety and reduce the harmful effects of road accidents.
A research paper [6] highlights the increasing number of road accidents in India. The authors introduce a system that uses convolutional neural networks (CNNs) and transfer learning for accident detection, aiming for higher accuracy than accelerometer-based methods. The proposed system combines GPS and GSM hardware components installed in vehicles to quickly identify accidents and save lives. This study provides an advanced approach to road safety in India, where accidents and their consequences are a major concern.
Paper [7] presents a model based on machine learning and deep learning that includes clustering and classification techniques for accurate accident detection from car cameras. This method shows an impressive accuracy of 94.14% and holds great promise to improve road safety and control.
Paper [8] deals with the early detection and reporting of accidents in sparsely populated areas, or when the driver is alone and unable to send a hazard signal. The authors propose a deep-learning-based Internet of Vehicles (IoV) system that includes in-vehicle sensors and cameras, a cloud-based deep learning server, and a management platform. When an accident involving one or more vehicles is detected, the system uploads the accident data to the cloud, performs accident detection, and sends an emergency message.
The results show an accuracy of 96% for the detection of traffic accidents and an average time of 7 seconds for emergency alerts, showing the potential to improve road safety and medical assistance in critical situations.
Paper [9] addresses the increase in the frequency of road accidents that accompanies the growing number of vehicles on the road and highlights the need for effective methods of traffic anomaly detection. Existing approaches mainly focus on identifying anomalies directly from traffic flow data, but often lack comprehensive capabilities for traffic flow analysis. In this paper, three new traffic flow features are proposed for traffic representation and anomaly detection: traffic congestion, traffic volume, and traffic instability. The authors extract these features through residual analysis, quadratic discretization, and multiresolution wavelet analysis, and then apply them to traffic anomaly detection problems. Experimental results show that accident detection using the proposed features is more effective than using raw traffic data, making this a promising alternative for future applications and research in this area.
III. PROPOSED SYSTEM
A. System Overview
The proposed system for car accident detection is underpinned by a comprehensive dataset acquired from Kaggle, specifically labeled as "Accident Detection from CCTV Footage." Dataset selection was based on its relevance and richness. A rigorous preprocessing phase was employed to establish uniformity within the dataset. This included resizing all images to a standardized 250x250 pixel dimension and representing them in the RGB color space. In addition, data augmentation techniques were thoughtfully applied, introducing random rotations, flips, and brightness adjustments to impart variability that emulates real-world scenarios.
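As a rough illustration of this preprocessing stage (not the authors' exact code), the sketch below resizes frames to 250x250 RGB, scales pixel values, and applies random rotation, flip, and brightness augmentation with Keras preprocessing layers; the augmentation ranges are assumed values.

import tensorflow as tf

IMG_SIZE = (250, 250)  # standardized image dimension used in this work

# Hypothetical augmentation pipeline approximating the described transforms:
# random rotations, flips, and brightness adjustments.
augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),     # assumed rotation range
    tf.keras.layers.RandomBrightness(0.2),   # assumed brightness range
])

def preprocess(image, label):
    # Resize to 250x250, keep the RGB representation, and scale pixels to [0, 1].
    image = tf.image.resize(image, IMG_SIZE)
    image = tf.cast(image, tf.float32) / 255.0
    return image, label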
A judicious approach was used to partition the dataset into training and validation sets. This strategic partitioning facilitates model training on one segment of the data, while conducting an independent evaluation of unseen data. Considerable care was exercised to ensure that the validation set retained its representativeness within the overall dataset, safeguarding against overfitting and furnishing a dependable measure of the model's capacity for generalization.
This system also addresses the potential challenge of class imbalance in the dataset. To correct the bias and increase the accuracy of the model in crash detection, this system incorporates strategies such as class weighting methods and data balancing during the model training process.
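One hedged way to realize the class-weighting strategy mentioned above is to derive per-class weights from the training labels and pass them to Keras during fitting; the label counts below are placeholders, not the dataset's actual class distribution.

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical label array gathered from the training split
# (0 = Non-Accident, 1 = Accident).
train_labels = np.array([0] * 800 + [1] * 200)  # placeholder counts

weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(train_labels),
                               y=train_labels)
class_weight = dict(enumerate(weights))  # e.g. {0: 0.625, 1: 2.5}

# Later, during training:
# model.fit(train_ds, validation_data=val_ds,
#           class_weight=class_weight, epochs=30)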
At the heart of this work is the development of customized image recognition models using advanced deep learning techniques. The TensorFlow and Keras libraries serve as the main tools for creating this model. The model architecture follows a sequential paradigm and is characterized by a set of convolutional layers, each followed by a max-pooling layer that helps reduce the spatial dimensions. These layers extract relevant features from the input image, which are then passed to dense layers for classification.
Careful attention was given to the architectural design of the model. The system seeks to maintain an optimal balance between model complexity and computational efficiency, adapting to the specific requirements of vehicle crash detection without sacrificing high accuracy or real-time performance. Activation functions play an important role in shaping the model's behavior: a rectified linear unit (ReLU) activation is placed after each convolutional layer, introducing non-linearity and improving the model's ability to identify complex patterns. The final dense layer uses the softmax activation function to generate a probability distribution over the two main classes, accident and non-accident. The system also relies on data augmentation during training, which improves generalizability and reliability.
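A minimal Keras sketch of the sequential architecture described above is given below; the filter counts and number of layers are assumptions, but the pattern of convolution with ReLU, max pooling, and a final two-class softmax follows the text.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(250, 250, 3)),             # 250x250 RGB input
    layers.Conv2D(32, (3, 3), activation="relu"),  # ReLU after each convolution
    layers.MaxPooling2D((2, 2)),                   # reduce spatial dimensions
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),         # Accident / Non-Accident probabilities
])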
B. System Architecture
The meticulously crafted system architecture diagram provides a detailed and comprehensive visual representation of the primary data processing components that constitute the core of an accident detection system. This diagram plays an integral role in facilitating a profound understanding of the architectural intricacies of the system, effectively conveying the orchestration of data and processes within the system.
The Convolutional Neural Network (CNN) model, which serves as the linchpin for the functionality of the system, lies at the heart of the diagram. It underpins the critical processes of feature extraction, classification, and ultimately, accident detection. The diagram methodically illustrates the interconnectedness and dependencies of various elements within the system, underscoring the pivotal role of the CNN in these intricate processes.
The diagram meticulously delineates the systematic flow of the system workflow. Commencing with the data collection phase, the system undertakes rigorous gathering of accident and non-accident images and videos. Each data instance was diligently labeled, thereby establishing a foundation for subsequent model training. Following data acquisition, the system transitions into the preprocessing phase, meticulously executing tasks, such as image resizing and data normalization. This meticulous preparation ensured data uniformity and compatibility during the ensuing stages. The linchpin of the system resides in the training of the CNN model utilizing the meticulously prepared dataset. This phase represents a pivotal component that enables the model to accumulate knowledge and gain insights from the wealth of data.
Upon completion of the training phase, the system rigorously assesses the performance of the trained CNN model. This phase entails the deployment of a dedicated test dataset to gauge the predictive accuracy of the model, ensuring that it complies with stringent standards.
Concurrently, this system embarks on the development of a meticulously crafted web application. This application is designed with precision to empower users, granting them the capability to upload images and videos. By doing so, users can receive real-time predictions from the trained CNN model. The diagram visually portrays the intricate interactions involving the central processing unit, the accident detection web application, and external entities, highlighting the system's user-centric approach.
Within this web application, the accident detection module receives user-contributed images and videos, conducts data preprocessing, extracts relevant features, deploys the CNN model for feature classification, and subsequently delivers the classification results back to users. This meticulous user-focused procedure ensures swift and efficient data processing, and provides timely user feedback.
Moreover, the web application harnesses the CNN model to extract pertinent features from incoming images and videos. Subsequently, a machine learning classifier is judiciously applied to categorize these extracted features into accident or non-accident classifications, augmenting the precision and operational efficiency of the system.
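The paper does not name the web framework used; the Flask-based sketch below is purely illustrative of the upload, preprocess, predict, and respond flow described above, and the model path and class-name order are assumptions.

# Illustrative only: the paper does not specify its web framework or endpoint names.
import tensorflow as tf
from flask import Flask, request, jsonify

app = Flask(__name__)
model = tf.keras.models.load_model("best_model.keras")  # assumed checkpoint path
CLASS_NAMES = ["Accident", "Non Accident"]              # assumed label order

@app.route("/predict", methods=["POST"])
def predict():
    # Decode the uploaded image, resize to the model's 250x250 RGB input,
    # and return the predicted class with its probability.
    file = request.files["image"]
    img = tf.io.decode_image(file.read(), channels=3, expand_animations=False)
    img = tf.image.resize(img, (250, 250)) / 255.0
    probs = model.predict(tf.expand_dims(img, axis=0))[0]
    idx = int(tf.argmax(probs))
    return jsonify({"label": CLASS_NAMES[idx], "probability": float(probs[idx])})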
IV. IMPLEMENTATION
A. Hardware Requirements
B. Software Requirements
C. Modules of Implementation
In the realm of supervised machine learning, the implementation of algorithms requires adherence to a well-structured framework comprising the following essential modules:
1. Convolutional Layer
Convolutional layers are the principal components of a CNN and perform the core computations on the input data. A filter, or kernel, represented by a 2D array of weights, slides over the image and performs convolution to detect features, producing feature maps. Several parameters affect the size of the output and are set before training the neural network. These parameters are:
a. Number of filters
b. Stride
c. Zero-padding
After convolution, a rectified linear unit (ReLU) transformation introduces nonlinearity into the model.
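As a small illustration of how these parameters influence the output size, the sketch below applies a convolution with assumed values for the number of filters, stride, and zero-padding to a 250x250 RGB input.

import tensorflow as tf

x = tf.random.normal((1, 250, 250, 3))  # one 250x250 RGB image

# 16 filters, stride 2, "same" zero-padding: the output is 125x125 with 16 channels.
conv = tf.keras.layers.Conv2D(filters=16, kernel_size=3, strides=2,
                              padding="same", activation="relu")
print(conv(x).shape)  # (1, 125, 125, 16)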
2. Pooling Layer
Pooling layers, a key component of convolutional neural networks, play an important role in dimensionality reduction. Unlike convolutional layers, pooling layers contain no weights and instead apply an aggregation function to their inputs. There are two main types of pooling:
a. Maximum Pooling
b. Average Pooling
3. Fully Connected layer
The final part of the neural network, the fully connected layer, uses the features obtained from the previous layers to perform classification. While the convolutional and pooling layers use the rectified linear unit (ReLU) activation function, the fully connected output layer uses the Softmax activation function to classify the input data and generate probability scores between 0 and 1.
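To make the difference between the two pooling types concrete, this short sketch applies 2x2 max and average pooling to a toy 4x4 feature map; the numbers are illustrative only.

import tensorflow as tf

# Toy 4x4 single-channel feature map, shaped (batch, height, width, channels).
x = tf.constant([[1., 2., 3., 4.],
                 [5., 6., 7., 8.],
                 [9., 10., 11., 12.],
                 [13., 14., 15., 16.]])
x = tf.reshape(x, (1, 4, 4, 1))

max_out = tf.keras.layers.MaxPooling2D(pool_size=2)(x)      # keeps the largest value per 2x2 window
avg_out = tf.keras.layers.AveragePooling2D(pool_size=2)(x)  # averages each 2x2 window

print(tf.squeeze(max_out).numpy())  # [[ 6.  8.] [14. 16.]]
print(tf.squeeze(avg_out).numpy())  # [[ 3.5  5.5] [11.5 13.5]]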
V. DATASET
The "Accident Detection From CCTV Footage" dataset is a meticulously compiled collection of image frames sourced from publicly available YouTube videos, encompassing a diverse range of accident scenarios. It is thoughtfully structured into three main segments, "Train," "Test," and "Val," each housing "Accident" and "Non-Accident" subfolders, facilitating binary classification between accident and non-accident frames. This dataset is intended to support research on accident detection, video analysis, and machine learning. Furthermore, it includes annotations, specifically timestamps for accidents, to assist in the algorithm development and evaluation. Within this dataset, the images were systematically categorized based on the presence or absence of accidents, making a clear distinction between the accident and non-accident scenarios. This categorization is essential for identifying and differentiating accident-related content from non-accidental situations within a dataset. Using the available labels and features, a thorough analysis was performed to detect and classify images based on the presence or absence of accidents. This classification approach relies on establishing a statistical relationship between the features of the dataset and images under consideration. This meticulous methodology ensures the accurate identification of accident-related content, providing valuable insights for applications in safety, risk assessment, and incident analysis, with a primary focus on distinguishing between accident and non-accident scenarios.
VI. RESEARCH METHODOLOGY
The methodology used in this study follows a systematic approach to train and evaluate deep convolutional neural network (CNN) models specifically designed for accident detection and classification. The training process starts with precise parameter settings and uses TensorFlow, a popular deep learning framework.
The research dataset underwent meticulous organization into three distinct subsets: training, validation, and testing. This meticulous structuring ensured a comprehensive data preparation process for both model training and subsequent evaluation. To enhance reproducibility and maintain control over critical parameters such as image size, batch size, and color mode, the dataset was efficiently loaded using the tf.keras.preprocessing.image_dataset_from_directory command. Performance optimization is achieved through the adept application of functions, such as cache() and prefetch(buffer_size=AUTOTUNE).
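A hedged sketch of the loading pipeline described above is shown below; the directory paths follow the dataset's Train/Val split naming, while the batch size is an assumed value.

import tensorflow as tf

AUTOTUNE = tf.data.AUTOTUNE

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "data/train",            # assumed path to the "Train" split
    image_size=(250, 250),   # image size used in this work
    batch_size=32,           # assumed batch size
    color_mode="rgb",
    label_mode="int",        # integer labels for a sparse categorical loss
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "data/val", image_size=(250, 250), batch_size=32,
    color_mode="rgb", label_mode="int",
)

# Cache decoded images and prefetch batches to overlap I/O with training.
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)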
The core of the learning process is the construction of a deep CNN model with layers dedicated to batch normalization, convolution, and max pooling, followed by dense layers and, finally, a softmax output layer. The model is compiled using the Adam optimizer, with accuracy as the primary evaluation criterion.
The training process unfolds through the execution of model.fit(). This phase was accompanied by vigilant and continuous monitoring and validation against the validation dataset. The model weights are judiciously saved at moments of peak performance, facilitated by the application of checkpoints. The training phase spanned 30 epochs, during which the loss and accuracy metrics were consistently and rigorously evaluated.
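A minimal sketch of this training stage, assuming the sparse categorical cross-entropy loss stated in the abstract and an illustrative checkpoint filename, might look like the following.

import tensorflow as tf

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Save the model only when validation accuracy improves.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.keras",        # assumed checkpoint filename
    monitor="val_accuracy",
    save_best_only=True,
)

history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=30,                 # training spanned 30 epochs
                    callbacks=[checkpoint])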
Subsequent to the comprehensive training phase, the meticulously trained model underwent rigorous testing using a dedicated testing dataset. Predictions were systematically generated for a carefully selected subset of images using the model.predict() command. These predictions were thoroughly compared to the ground truth labels, resulting in a precise evaluation of the classification performance of the model.
At the same time, the research methodology includes obtaining comprehensive statistical information about the performance of the model. This is done using the classification_report() function, which calculates and presents the key classification metrics. These metrics include precision, recall, and F1 scores, reported separately for each class in the dataset. This comprehensive report provides a nuanced, class-specific perspective on the model's predictive capabilities.
Furthermore, the methodology integrates the use of the confusion_matrix() function to create a visually informative representation of model performance. Rendered with plt.imshow(), this visualization illustrates the confusion matrix as a heat map. This visual representation provides a clear understanding of the predictive competence of the model in real-world accident detection and classification scenarios.
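The sketch below follows the evaluation steps described above, assuming a test_ds object built the same way as the training split and the two class names from the dataset; it is illustrative rather than the authors' exact code.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report, confusion_matrix

# Collect ground-truth labels and model predictions over the test split.
y_true, y_pred = [], []
for images, labels in test_ds:
    probs = model.predict(images, verbose=0)
    y_true.extend(labels.numpy())
    y_pred.extend(np.argmax(probs, axis=1))

# Per-class precision, recall, and F1 scores (class-name order is assumed).
print(classification_report(y_true, y_pred, target_names=["Accident", "Non Accident"]))

# Confusion matrix rendered as a heat map.
cm = confusion_matrix(y_true, y_pred)
plt.imshow(cm, cmap="Blues")
plt.xlabel("Predicted label")
plt.ylabel("True label")
plt.colorbar()
plt.show()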
The delineated research methodology establishes a systematic, rigorous, and well-structured approach to model training, evaluation, and performance assessment. This methodology serves as a strong foundation for the reliability and integrity of the research outcomes. The results of this study, presented in the figures below, provide a tangible representation of the effectiveness of the outlined methodology.
This study resulted in the development of an active accident detection system using deep learning techniques and a convolutional neural network (CNN) model. The system shows great promise in improving road safety and reducing the impact of accidents, as evidenced by its 97 percent accuracy. The extensible system design balances compatibility and efficiency, and it manages the complexity of deep learning while favoring real-time processing. In addition, the system expands detection capabilities and reduces the severity of accidents, making it a valuable tool for improving vehicle safety and human well-being. As shown in the attached table, our system's accuracy (95%), recall (94%), and F1 score (94%) are strong evidence of its effectiveness. These results pave the way for the development of real-time crash detection systems that have the potential to significantly reduce the rate and severity of crashes on our roads. In summary, this study demonstrates a promising and highly efficient detection system, showing that deep learning has the potential to improve road safety and crash outcomes.
[1] A. Chaudhari, H. Agrawal, S. Poddar, K. Talele and M. Bansode, "Smart Accident Detection And Alert System," 2021 IEEE India Council International Subsections Conference (INDISCON), Nagpur, India, 2021, pp. 1-4, doi: 10.1109/INDISCON53343.2021.9582163.
[2] E. Batanina, I. E. I. Bekkouch, Y. Youssry, A. Khan, A. M. Khattak and M. Bortnikov, "Domain Adaptation for Car Accident Detection in Videos," 2019 Ninth International Conference on Image Processing Theory, Tools and Applications (IPTA), Istanbul, Turkey, 2019, pp. 1-6, doi: 10.1109/IPTA.2019.8936124.
[3] X. Liu, H. Cai, R. Zhong, W. Sun and J. Chen, "Learning Traffic as Images for Incident Detection Using Convolutional Neural Networks," in IEEE Access, vol. 8, pp. 7916-7924, 2020, doi: 10.1109/ACCESS.2020.2964644.
[4] T.-N. Le, S. Ono, A. Sugimoto and H. Kawasaki, "Attention R-CNN for Accident Detection," 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 2020, pp. 313-320, doi: 10.1109/IV47402.2020.9304730.
[5] S. Ghosh, S. J. Sunny and R. Roney, "Accident Detection Using Convolutional Neural Networks," 2019 International Conference on Data Science and Communication (IconDSC), Bangalore, India, 2019, pp. 1-6, doi: 10.1109/IconDSC.2019.8816881.
[6] P. Borisagar, Y. Agrawal and R. Parekh, "Efficient Vehicle Accident Detection System using Tensorflow and Transfer Learning," 2018 International Conference on Networking, Embedded and Wireless Systems (ICNEWS), Bangalore, India, 2018, pp. 1-6, doi: 10.1109/ICNEWS.2018.8903938.
[7] A. K. Agrawal et al., "Automatic Traffic Accident Detection System Using ResNet and SVM," 2020 Fifth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN), Bangalore, India, 2020, pp. 71-76, doi: 10.1109/ICRCICN50933.2020.9296156.
[8] W.-J. Chang, L.-B. Chen and K.-Y. Su, "DeepCrash: A Deep Learning-Based Internet of Vehicles System for Head-On and Single-Vehicle Accident Detection With Emergency Notification," in IEEE Access, vol. 7, pp. 148163-148175, 2019, doi: 10.1109/ACCESS.2019.2946468.
[9] L. Zhu, B. Wang, Y. Yan et al., "A novel traffic accident detection method with comprehensive traffic flow features extraction," Signal, Image and Video Processing, vol. 17, pp. 305-313, 2023, doi: 10.1007/s11760-022-02233-z.
Copyright © 2023 V. Mahajan, R. Kulkarni, A. Kacherikar, S. Bhosekar, S. Kakad. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET56688
Publish Date : 2023-11-16
ISSN : 2321-9653
Publisher Name : IJRASET