Convolutional neural networks (CNNs) have yielded state-of-the-art performance in image classification and other computer vision tasks. Their application in fire detection systems can substantially improve detection accuracy, which will eventually minimize fire disasters and reduce their ecological and social ramifications. However, the major concern with CNN-based fire detection systems is their implementation in real-world surveillance networks, owing to their high memory and computational requirements for inference. In this paper, we propose an original, energy-friendly, and computationally efficient CNN architecture, inspired by the SqueezeNet architecture, for fire detection, localization, and semantic understanding of the scene of the fire. It uses smaller convolutional kernels and contains no dense, fully connected layers, which helps keep the computational requirements to a minimum. Despite its low computational needs, the experimental results demonstrate that our proposed solution achieves accuracies comparable to those of other, more complex models, mainly due to its increased depth. Moreover, this paper shows how a tradeoff can be reached between fire detection accuracy and efficiency by considering the specific characteristics of the problem of interest and the variety of fire data.
I. INTRODUCTION
The number of reported forest fires has increased yearly due to human causes and dry climates. To avoid terrible fire disasters, many detection techniques have been widely studied for practical application. Most traditional methods are based on sensors because of their low cost and simple installation [1]–[3]. These systems are not well suited for outdoor use, where the energy of the flame depends on the burning material and the burning process is affected by the environment, which can cause false alarms. Visual approaches based on image or video processing have been shown to be more reliable for detecting fire, since closed-circuit television (CCTV) surveillance systems are now available in many public places and can help capture fire scenes. To detect fire in colour-video scenes, various schemes have been studied, mainly focusing on combinations of static and dynamic characteristics of fire such as colour information, texture, and motion orientation. Colour-based detection methods depend mainly on chosen threshold values, which results in a high false alarm rate; this can be improved by extracting dynamic features of fire from the sequence of images captured in a video. However, such systems are still not practical for use at large scale and in hard-to-reach regions such as remote and wild forests, where the configuration and maintenance of the system are difficult tasks.
II. LITERATURE SURVEY
III. METHODOLOGY
Pre-processing: The aim of pre-processing is to improve the image data by suppressing unwanted distortions or enhancing image features that are important for further processing. Geometric transformations of images (e.g. rotation, scaling, translation) are also classified as pre-processing here, alongside operations such as image resizing, conversion to grayscale, and image augmentation.
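As an illustration, a minimal pre-processing sketch in Python with OpenCV is given below; the target size, rotation angle, and augmentation choices are assumptions made for illustration, not parameters taken from the proposed system.

```python
# Minimal pre-processing sketch (illustrative parameters only).
import cv2
import numpy as np

def preprocess(image_bgr, size=(224, 224)):
    """Resize, convert to grayscale, and normalize a single frame."""
    resized = cv2.resize(image_bgr, size)                 # geometric transformation: scaling
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)      # grayscale conversion
    return gray.astype(np.float32) / 255.0                # scale pixel values to [0, 1]

def augment(image_bgr):
    """Simple augmentation: horizontal flip and a small rotation."""
    flipped = cv2.flip(image_bgr, 1)                      # mirror along the vertical axis
    h, w = image_bgr.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), 10, 1.0)  # rotate by 10 degrees (assumed)
    rotated = cv2.warpAffine(image_bgr, rot, (w, h))
    return [flipped, rotated]
```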
Image Detection: This section covers the details of the proposed fire pixel classification algorithm; the figure shows the flow chart of the proposed algorithm. A rule-based colour model approach has been followed because of its simplicity and effectiveness. For this, the RGB and YCbCr colour spaces are chosen. Seven rules have been identified for classifying a pixel as fire: if a pixel satisfies all seven rules, it is assigned to the fire class.
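The seven rules themselves are not reproduced here, so the sketch below only illustrates how such a per-pixel rule-based test in RGB and YCbCr can be implemented; the specific conditions (red-channel dominance, chrominance ordering) and thresholds are assumptions drawn from commonly used colour-model rules, not the exact rules of this work.

```python
# Illustrative rule-based fire-pixel test in RGB and YCbCr (assumed rules).
import cv2
import numpy as np

def fire_pixel_mask(image_bgr):
    b, g, r = cv2.split(image_bgr.astype(np.float32))
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y, cr, cb = cv2.split(ycrcb)

    rules = (
        (r > g) & (g > b) &                   # fire regions tend to be red-dominant
        (r > r.mean()) &                      # brighter than the mean of the red channel
        (y > cb) & (cr > cb) &                # luminance/chrominance ordering for flames
        (y > y.mean()) & (cb < cb.mean())     # flame pixels are bright and low in Cb
    )
    return rules.astype(np.uint8) * 255       # binary mask: 255 = candidate fire pixel
```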
Feature Extraction: Feature extraction refers to the process of transforming raw data into numerical features that can be processed while preserving the information in the original data set. It yields better results than applying machine learning directly to the raw data.
Fire Detection: We take two sequential images from the video frames. After applying the two basic methods, edge detection and colour detection, we obtain the probable fire-pixel area. We then compare the RGB value of each corresponding pixel in frame 1 and frame 2; if the pixel values differ, the motion detector reports motion and the resulting output is given to the operator.
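A minimal sketch of this frame-differencing step is given below; the difference threshold and the way the colour/edge candidate mask is produced are assumptions for illustration.

```python
# Frame-differencing sketch: flag candidate fire pixels whose RGB value
# changes noticeably between two consecutive frames (threshold assumed).
import cv2
import numpy as np

def detect_fire_motion(frame1_bgr, frame2_bgr, candidate_mask, diff_threshold=20):
    diff = cv2.absdiff(frame1_bgr, frame2_bgr)          # per-channel absolute difference
    moved = np.any(diff > diff_threshold, axis=2)       # True where any channel changed
    fire_motion = moved & (candidate_mask > 0)          # restrict to candidate fire pixels
    return fire_motion.astype(np.uint8) * 255           # binary mask passed to the operator stage
```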
Flow of Execution: When fire is detected, the corresponding image is given to the module as input. Using multithreading, one thread sends the output message "Fire Detected... Fire Detected" to the owner together with a beep signal, while the other thread alerts the Fire-Extinguishing Department with the message "Emergency... Emergency" and the address (Shanti Niwas near JSPM College, Narhe, Pune).
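The sketch below illustrates this multithreaded alerting step; the notify functions simply print the messages and are placeholders for the actual SMS and beep delivery.

```python
# Two alert threads: one for the owner, one for the fire department.
import threading

OWNER_MESSAGE = "Fire Detected... Fire Detected"
DEPT_MESSAGE = "Emergency... Emergency and Address (Shanti Niwas near JSPM College, Narhe, Pune)"

def notify_owner():
    print(OWNER_MESSAGE)
    print("\a")  # terminal bell as a stand-in for the beep signal

def notify_fire_department():
    print(DEPT_MESSAGE)

def raise_alarm():
    t1 = threading.Thread(target=notify_owner)
    t2 = threading.Thread(target=notify_fire_department)
    t1.start()
    t2.start()
    t1.join()
    t2.join()
```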
IV. SYSTEM IMPLEMENTATION
A. Convolutional Neural Network (CNN)
In deep learning, a convolutional neural network (CNN) is a class of deep neural networks most commonly applied to analyzing visual imagery. When we think of a neural network we usually think of matrix multiplications, but that is not the case with a ConvNet: it uses a special technique called convolution.
It has four steps:
a. Conv2D
b. Max Pooling
c. Flatten
d. Fully connected Network
CNNs are used for image classification and recognition because of their high accuracy. A CNN follows a hierarchical model that builds up the network like a funnel and finally ends with a fully connected layer, where all the neurons are connected to each other and the output is produced.
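A minimal Keras sketch of the four steps listed above is shown below; the layer sizes, input resolution, and training configuration are illustrative assumptions rather than the exact architecture used here.

```python
# Small CNN covering the four steps: Conv2D, max pooling, flatten, fully connected.
from tensorflow.keras import layers, models

def build_fire_cnn(input_shape=(224, 224, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),   # a. Conv2D
        layers.MaxPooling2D((2, 2)),                    # b. Max pooling
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                               # c. Flatten
        layers.Dense(64, activation="relu"),            # d. Fully connected network
        layers.Dense(1, activation="sigmoid"),          # fire / no-fire output
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```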
V. FUTURE SCOPE
The project has been motivated by the desire for a system that can detect fires and take appropriate action without any human intervention.
Implementation on a satellite could detect accidental fires occurring in forests.
For further accuracy, neural networks can be used for decision making, and a GSM module can be implemented to send an SMS to the nearest fire station in case of a severe fire. Water sprinklers can also be incorporated. Through further research and analysis, the efficiency of the proposed fire detection system can be increased. The margin of false alarms can be reduced even further by developing algorithms that eliminate the detection of red-coloured cloth as fire. With proper analysis, a suitable location and height for camera installation can be decided in order to remove blind-spot areas.
VI. CONCLUSION
In summary, an aerial forest fire detection method has been examined on a large database of videos of forest fires under various scene conditions. To enhance the detection rate, the chromatic and motion features of forest fire are first extracted and then refined using rules to point out the fire area. Secondly, to overcome the challenge of heavy smoke that almost completely covers the fire, smoke is also extracted using our proposed algorithm. Our framework proves its robustness, with a high detection accuracy and a low false alarm rate, in practical aerial forest fire surveillance applications.
REFERENCES
[1] A. A. A. Alkhatib, "Smart and Low Cost Technique for Forest Fire Detection using Wireless Sensor Network," Int. J. Comput. Appl., vol. 81, no. 11, pp. 12–18, 2013.
[2] J. Zhang, W. Li, Z. Yin, S. Liu, and X. Guo, "Forest fire detection system based on wireless sensor network," 2009 4th IEEE Conf. Ind. Electron. Appl. ICIEA 2009, pp. 520–523, 2009.
[3] A. A. A. Alkhatib, “A review on forest fire detection techniques,” Int. J. Distrib. Sens. Netw., vol. 2014, no. March, 2014.
[4] P. Skorput, S. Mandzuka, and H. Vojvodic, “The use of Unmanned Aerial Vehicles for forest fire monitoring,” in 2016 International Symposium ELMAR, 2016, pp. 93–96.
[5] F. Afghah, A. Razi, J. Chakareski, and J. Ashdown, Wildfire Monitoring in Remote Areas using Autonomous Unmanned Aerial Vehicles. 2019.
[6] Hanh Dang-Ngoc and Hieu Nguyen-Trung, “Evaluation of Forest Fire Detection Model using Video captured by UAVs,” presented at the 2019 19th International Symposium on Communications and Information Technologies (ISCIT), 2019, pp. 513–518.
[7] C. Kao and S. Chang, "An Intelligent Real-Time Fire-Detection Method Based on Video Processing," IEEE 37th Annu. 2003 Int. Carnahan Conf. on Security Technol. 2003 Proc., 2003.
[8] C. E. Premal and S. S. Vinsley, “Image Processing Based Forest Fire Detection using YCbCr Colour Model,” Int. Conf. Circuit Power Comput. Technol. ICCPCT, vol. 2, pp. 87–95, 2014.
[9] C. Ha, U. Hwang, G. Jeon, J. Cho, and J. Jeong, “Vision-based fire detection algorithm using optical flow,” Proc. - 2012 6th Int. Conf. Complex Intell. Softw. Intensive Syst. CISIS 2012, pp. 526–530, 2012.
[10] K. Poobalan and S. Liew, "Fire Detection Algorithm Using Image Processing Techniques," Proc. 3rd Int. Conf. Artif. Intell. Comput. Sci., pp. 12–13, 2015.