Storm damage assessment plays a pivotal role in disaster management, facilitating timely and efficient response efforts. Traditional methods often fall short due to their reliance on manual inspection and susceptibility to human error. With the advent of deep learning technologies, automated approaches have emerged that significantly enhance the accuracy and speed of damage detection. Among these, YOLOv8 (You Only Look Once, version 8) stands out as a cutting-edge object detection model, offering superior performance in real-time applications. This study proposes the use of YOLOv8 for storm damage assessment, leveraging its anchor-free detection mechanism, enhanced feature aggregation, and optimized loss functions. By integrating YOLOv8, the proposed approach aims to deliver high-precision damage mapping in both urban and rural environments, highlighting the potential of YOLOv8 to automate storm damage assessment and paving the way for more resilient disaster management strategies.
I. INTRODUCTION
In recent years, the frequency and intensity of storms have significantly increased, driven by climate change and other environmental factors. These extreme weather events result in widespread destruction, posing immense challenges to disaster management and recovery efforts. Accurate and timely storm damage assessment is critical for prioritizing rescue operations, allocating resources, and planning post-disaster reconstruction. Traditional methods, which rely heavily on manual inspections and basic remote sensing techniques, are often labour-intensive, time-consuming, and prone to errors.
Advancements in computer vision and deep learning have revolutionized the field of storm damage assessment, enabling automated, accurate, and scalable solutions. Computer vision techniques analyse visual data, such as satellite imagery and aerial photographs, to identify and quantify damage. When combined with deep learning—a subset of artificial intelligence that mimics human cognitive processes—these methods can process vast amounts of data and deliver actionable insights in real time.
Among the various deep learning approaches, object detection models have gained prominence for their ability to identify and classify damage in complex environments. YOLO (You Only Look Once) models, in particular, have been widely adopted for their speed and accuracy in real-time applications. The latest iteration, YOLOv8, introduces several enhancements, including an anchor-free detection mechanism, improved feature aggregation, and optimized loss functions, making it an ideal choice for storm damage assessment. This paper explores the potential application of YOLOv8 in storm damage assessment by leveraging high-resolution satellite data. The proposed approach aims to deliver precise damage mapping, aiding effective disaster response and recovery in data-poor regions and communities.
II. LITERATURE REVIEW
Several other deep learning models have been employed for storm damage assessment, including convolutional neural networks (CNNs), fully convolutional networks (FCNs), and transformer-based architectures.
Convolutional neural networks (CNNs) (Krizhevsky, Sutskever, & Hinton, 2012) are widely used for image classification and segmentation tasks; for storm damage assessment, they have been utilized to classify damage severity and identify affected areas from satellite and aerial imagery. Fully convolutional networks (FCNs) (Long, Shelhamer, & Darrell, 2015) are designed for pixel-wise segmentation, making them suitable for identifying specific damage zones in high-resolution images; they have been used to segment flood-affected areas and classify structural damage in post-storm scenarios. YOLO (You Only Look Once) models (Jocher et al., 2023) have been widely used for real-time object detection tasks due to their speed and accuracy. YOLOv8, the latest iteration, introduces advanced features such as anchor-free detection, enhanced feature aggregation, and improved loss functions, making it highly suitable for storm damage assessment.
III. METHODOLOGY
A. Data Collection
The satellite data used in this study was collected from Maxar's Open Data Program and comprises two datasets from Maxar's GEO-1 mission, acquired before and after the primary storm.
B. Data Preprocessing
The initial dataset consists of the pre-event and post-event images divided into 512x512 grid tiles for easier processing. The tiles were then annotated so that, once manual annotation is complete, the labelled images can be used to train the deep learning model to replicate the annotations without human supervision. We annotated 100 images using LabelMe and converted the annotations to .txt format using the labelme2yolo package, which we also used to split the data into training (80%) and testing (20%) sets and to create the .yaml dataset configuration file. The dataset comprises four classes: undamaged residential building, damaged residential building, undamaged commercial building, and damaged commercial building.
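As an illustration of the tiling step, the sketch below shows one possible way to cut a large pre- or post-event scene into 512x512 tiles; the file paths and the labelme2yolo invocation in the trailing comment are assumptions for illustration, not the exact commands used in this study.

from pathlib import Path
from PIL import Image

TILE = 512  # tile size used in this study

def tile_image(src_path: str, out_dir: str) -> None:
    """Cut a large satellite scene into non-overlapping 512x512 tiles."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    img = Image.open(src_path)
    width, height = img.size
    for top in range(0, height - TILE + 1, TILE):
        for left in range(0, width - TILE + 1, TILE):
            tile = img.crop((left, top, left + TILE, top + TILE))
            tile.save(out / f"{Path(src_path).stem}_{top}_{left}.png")

# Example with placeholder paths:
# tile_image("post_event_scene.tif", "tiles/post")
# The LabelMe annotations can then be converted and split, e.g.:
#   labelme2yolo --json_dir tiles/annotations --val_size 0.2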
C. Architecture
YOLOv8 is an advanced object detection model that adopts a one-stage detection methodology, comprising 225 layers, 11,137,535 parameters, 11,137,519 gradients, and 28.7 GFLOPs. The architecture of YOLOv8 is depicted in Fig. 1 and described below (a short sketch for reproducing this layer/parameter summary follows the figure):
Convolutional Layer: It learns low-level features from the input images
C2f Layer: Employed to acquire more intricate gradient flow information while ensuring computational efficiency
Spatial Pyramid Pooling Fast: Captures contextual information at multiple scales by applying pooling operations of different sizes on the feature maps to have a broader perception of the objects in the image which enhances its detection performance
Upsampling Layer: Increases the resolution of low-resolution feature maps using bilinear interpolation, enabling the detection of smaller objects and the capture of fine-grained details
Concatenation Layer: Combines feature maps from different scales to generate comprehensive representations of objects in the image
Detect: The detection head; in the Oriented Bounding Boxes (OBB) format, bounding boxes are designated by their four corner points, with coordinates normalized between 0 and 1, for object detection
Fig. 1 Architecture of YOLOv8
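For reference, the layer/parameter summary quoted above can be reproduced with the Ultralytics API; the use of the YOLOv8s checkpoint here is an assumption based on the roughly 11.1M-parameter count, not a detail stated in this paper.

from ultralytics import YOLO

model = YOLO("yolov8s.pt")   # assumed small variant (~11.1M parameters, ~28.7 GFLOPs)
model.info(verbose=True)     # prints layers, parameters, gradients, and GFLOPs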
D. Preliminaries
This research utilized the Python programming language in conjunction with a Google cloud-based integrated development environment (IDE).
The GitHub repositories for YOLOv8 were cloned to Google Drive.
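If the cloud IDE in question is Google Colab (an assumption; the environment is not named above), the Drive copy of the repository and the YOLOv8 dependencies can be made available as follows.

# Assumption: a Colab-style notebook environment with Google Drive access.
from google.colab import drive

drive.mount("/content/drive")   # exposes the Drive copy of the YOLOv8 repository
# The Ultralytics package implementing YOLOv8 can then be installed, e.g.:
#   pip install ultralytics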
E. Training
Training Parameters: The model was trained for 50 epochs with a batch size of 32.
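A minimal training sketch under these settings, assuming the Ultralytics Python API and the YOLOv8s checkpoint; the dataset file name is a placeholder for the .yaml file created during preprocessing.

from ultralytics import YOLO

model = YOLO("yolov8s.pt")      # assumed variant; consistent with the reported parameter count
results = model.train(
    data="dataset.yaml",        # placeholder for the .yaml produced by labelme2yolo
    epochs=50,                  # epochs used in this study
    batch=32,                   # batch size used in this study
    imgsz=512,                  # matches the 512x512 tiles
)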
F. Evaluation
Now that the model has been trained, it remains to evaluate it. For evaluation, we generated the following (a brief evaluation sketch follows this list):
Mean Average Precision (mAP) - plotted to visualize detection performance across the dataset
Confusion Matrix - to evaluate the model's accuracy and error rates for object detection on this dataset
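A corresponding evaluation sketch, again assuming the Ultralytics API; the weights path is a placeholder for wherever the trained model was saved.

from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")   # placeholder path to trained weights
metrics = model.val(data="dataset.yaml")            # validation on the held-out split

print(metrics.box.map50)   # overall mAP@0.5
print(metrics.box.maps)    # per-class mAP values
# A confusion matrix plot is also written to the validation output directory.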
IV. RESULTS AND ANALYSIS
A. Quantitative Analysis
Mean Average Precision: The model achieved an overall mAP@0.5 of 0.34, with the following per-class mAP@0.5 scores:
Undamaged Commercial Building: 0.29
Undamaged Residential Building: 0.69
Damaged Residential Building: 0.23
Damaged Commercial Building: 0.10
Fig. 2 Training and Validation Loss Performance Evaluation
B. Confusion Matrix
Fig. 3 illustrates the model's relatively strong performance in recognizing the undamaged residential building class.
Fig. 3 Confusion Matrix
C. Qualitative Analysis
Visual inspection of the detection outputs revealed:
Elevated values along the diagonal of the confusion matrix indicate good performance in recognizing undamaged residential buildings
The model struggles to detect commercial buildings; overall, detection accuracy for the remaining classes is improving
V. APPLICATION
The developed YOLOv8-based storm damage assessment model has practical applications in various domains:
1) Disaster Resource Management
Rapid damage assessment: Enables emergency response teams to quickly identify severely affected areas, prioritize rescue operations, and allocate resources efficiently
Real-time monitoring: Can be integrated into UAV systems for real-time damage mapping during ongoing storm events
2) Urban Planning and Infrastructure Resilience
Post-disaster reconstruction: Provides detailed damage maps to guide rebuilding efforts and improve the resilience of urban infrastructure
Risk mitigation: Assists city planners in identifying vulnerable areas and implementing preventive measures
3) Insurance and Financial Claims
Claims processing: Automates the assessment of property damage for faster and more accurate insurance claim settlements
Risk assessment: Helps insurers evaluate potential risks and set premiums accordingly
4) Environmental Research
Impact analysis: Facilitates studies on the environmental impact of storms, such as deforestation or soil erosion
Climate change studies: Provides data to analyse the increasing frequency and severity of storms due to climate change
VI. CONCLUSION
Due to insufficient mapping of infrastructure and ecosystems in vulnerable, data-poor regions, governments and institutions lack a clear and complete picture of the natural and material world around them. In the past, a range of algorithms such as Fast R-CNN and Faster R-CNN have been employed for object detection with Synthetic Aperture Radar (SAR) data. However, the effectiveness of these algorithms falls short in comparison to the newer, cutting-edge YOLO models. The YOLOv8-based storm damage assessment model demonstrates significant potential in addressing the challenges of rapid and accurate damage analysis following severe weather events, achieving a mean average precision (mAP@0.5) of 0.34. By leveraging state-of-the-art deep learning techniques, the model provides a scalable and efficient solution for identifying and categorizing storm-induced damage across diverse geographic regions and storm types.
REFERENCES
[1] J. M. Alruwaili et al., "Deep Learning-Based YOLO Models for the Detection of People with Disabilities."
[2] United Nations Office for Disaster Risk Reduction (UNDRR). (2020). Disaster Resilience in Urban Areas. Retrieved from https://www.undrr.org.
[3] National Aeronautics and Space Administration (NASA). (2022). UAV Applications in Disaster Management. Retrieved from https://www.nasa.gov
[4] FEMA. (2021). Post-Disaster Recovery Planning. Federal Emergency Management Agency. Retrieved from https://www.fema.gov
[5] UN-Habitat. (2020). Building Urban Resilience in the Face of Climate Change. Retrieved from https://unhabitat.org
[6] Storm Damage Assessment dataset, Roboflow Universe. https://universe.roboflow.com/ey-ddy5c/storm-damage-assessment
[7] J. Terven and D. Cordova-Esparza, ‘‘A comprehensive review of YOLO: From YOLOv1 and beyond,’’ 2023, arXiv:2304.00501.
[8] G. Jocher, A. Chaurasia, A. Stoken, J. Borovec, NanoCode012, Y. Kwon, K. Michael, T. Xie, and J. Fang, Ultralytics/YOLOv5: v7.0 - YOLOv5 SOTA Realtime Instance Segmentation. Zenodo, 2022.
[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems (NIPS), 2012.
[10] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2015.