Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Geethanjali P, Metun, Dr. M Rajeshwari
DOI Link: https://doi.org/10.22214/ijraset.2023.57847
Human-wildlife conflicts have escalated due to increased encroachment on natural habitats, necessitating advanced surveillance to mitigate potential threats and preserve biodiversity. Traditional monitoring methods are labor-intensive and inefficient when dealing with the volume and velocity of data generated by camera traps. This research introduces an automated wildlife detection system that uses a Convolutional Neural Network (CNN), MobileNet-SSD V2, to process images for real-time animal detection. The paper elaborates a comprehensive methodology, from dataset curation to model training and deployment, leveraging TensorFlow Lite for on-device inference. The approach builds a versatile CNN framework from labeled images drawn from benchmark datasets, annotated with LabelImg, and deploys it on a Raspberry Pi Model B using TensorFlow Lite. The proposed system is not only accurate but also conserves memory by employing an optimized CNN architecture that requires less computational storage. With regular updates using fresh camera-trap images, the system ensures continual improvement in species-recognition precision. It exhibits high accuracy, robustness in diverse environmental settings, and real-time alert mechanisms for rapid response. Extensive experiments show that the automated detection model achieves a high accuracy rate in clear conditions and maintains precision with considerable resilience in lower-quality image scenarios. These findings demonstrate the capabilities of CNNs in ecological surveillance and their potential to assist conservation efforts.
I. INTRODUCTION
Human-animal conflicts pose a significant problem, resulting in the loss of vast resources and endangering human lives, and their frequency has been rising. As humans encroach on forests to secure their livelihoods or claim land for agriculture, and as rapid industrialization expands urban areas, animals are pushed into nearby villages, where they damage vegetation in farmlands. Constant monitoring of these intrusion-prone areas is therefore essential to prevent the entry of such animals or other unwanted intrusions.
This paper proposes a novel solution to this problem: a method for wildlife detection using image processing and Convolutional Neural Networks (CNNs) on camera-trap images, integrated with a Raspberry Pi single-board computer and powered by a solar panel that continuously charges the battery, cutting the overall cost of the detection model. Sensor cameras are strategically installed on poles or trees within a specific area to form a fixed camera-trap network for wildlife monitoring. These camera traps are triggered to capture images whenever movement is detected.
This study is centered on the enhancement of wildlife monitoring through the deployment of camera traps that yield motion-triggered video sequences with GSM location tracking. Traditional surveillance techniques often suffer from low detection rates, particularly in videos where the distinction between the animal and its environment is minimal. Such methods also struggle with high rates of false positives, largely due to the dynamic nature of background elements. The proposed hardware and software architecture in this paper will provide an effective monitoring system, expediting the analysis of wildlife research data and facilitating resource management decisions. The creation of diverse automated tools to process the vast amount of images from camera traps is urgently needed. This includes tools for identifying animals, segmenting images, extracting relevant data, and tracking animal movements. This innovative approach aims to provide more effective means of mitigating conflicts between humans and animals, offering a promising avenue to address this global challenge.
II. LITERATURE REVIEW
The swift pace of industrial development often drives wildlife into adjacent rural areas, where they encroach on agricultural lands for sustenance. This interaction endangers both wildlife and human populations, frequently resulting in the depletion of resources and occasionally in the loss of life. These animals also wander onto roads, leading to vehicular collisions and property damage. One line of prior work hypothesized that the training sets of current models lack images of animals behind bars, with specific emphasis on pandas and deer; it collected targeted training data and created a unique dataset to explore methods for enhancing CNN performance. Integrating images from both caged and non-caged animal datasets improved the model, yet outcomes in real-time applications still fell short of theoretical performance, even though a model's real-time effectiveness should be consistent with its theoretical capabilities. Although there have been many attempts to use CNNs for animal detection, the efficiency of these models is often limited by factors such as the volume of training data and the duration of model training. [1-4]
One study proposes a new deep-CNN algorithm that focuses on improving the accuracy of object classification within its framework; optimal accuracy was observed with image sizes of 50x50 pixels and training over 100 epochs. [5] Another study details the creation of a dedicated dataset from the Google Open Images and COCO datasets and evaluates several advanced neural network architectures, including YOLOv3, RetinaNet R-50-FPN, Faster RCNN R-50-FPN, and Cascade RCNN R-50-FPN. [6] The YOLOv3 architecture showed the best performance metrics, achieving over 35 fps and an mAP of 0.78 across ten classes (0.92 for a combined class), while RetinaNet R-50-FPN reached a recognition speed of over 44 fps but with a 13 percent lower mAP than YOLOv3. [7]
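The mAP figures quoted above are computed from intersection-over-union (IoU) overlaps between predicted and ground-truth boxes. As background (this sketch is illustrative, not taken from the cited papers), IoU for axis-aligned boxes can be computed as:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the overlap rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    # Union = sum of areas minus the double-counted overlap.
    return inter / (area_a + area_b - inter)
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5; precision-recall curves over all detections then yield the mAP values being compared.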
Traditionally, farms have relied on electric fences to deter animals from entering, which may cause animals to behave abnormally. The proposed system, by contrast, can be deployed wherever it is needed, though the accuracy of the Convolutional Neural Network (CNN) model must be enhanced while reducing computational complexity. One novel approach addresses this by identifying rare animals in images using CNNs. [8-9] The method automatically extracts image features from the training set to develop a system capable of recognizing rare animals. Its key components include an image acquisition and preprocessing module, which processes images in real time to reduce noise and enhance recognition accuracy, and a module for locating target rare animals within images. [10] The localization module efficiently identifies the positions of rare animals, and the recognition algorithm further refines the network by adjusting the weight parameters in each layer based on the unique characteristics of rare-animal images. [11]
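As an illustration of the noise-reduction step such a preprocessing module performs, a 3x3 median filter is a common choice (assumed here for illustration rather than taken from the cited work) and can be sketched in pure Python:

```python
def median_filter_3x3(img):
    """Apply a 3x3 median filter to a grayscale image given as a list
    of rows, leaving the one-pixel border unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Gather the 3x3 neighborhood and take its median,
            # which suppresses isolated noisy pixels.
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]  # median of 9 values
    return out
```

Unlike a mean filter, the median discards outlier intensities entirely, so single "salt-and-pepper" pixels are removed without blurring edges, which helps downstream feature extraction.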
This paper proposes a system whose main focus is a wild-animal detection and real-time alert pipeline that prevents wild-animal intrusions and mitigates potential harm with high accuracy at optimal speed. The system identifies animal intrusions and promptly sends alert notifications to address the situation. It provides solutions to the problems identified in existing systems: detecting animal intrusions in real time, capturing and categorizing animal images using image processing, and alerting farm owners and forest officials about the intrusion. The proposed system capitalizes on solar power for sustainability, driving a Raspberry Pi via a battery module, and incorporates GSM for live animal tracking, achieving a synergy of eco-friendliness and technological advancement. This cost-effective solution stands out for its autonomy and low environmental impact, marking a significant step forward in remote ecological monitoring.
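The alerting behavior described above amounts to a threshold check on the detector's output before triggering the GSM notification. A minimal sketch of that decision rule (the species set and confidence threshold are illustrative assumptions, not values from the paper):

```python
# Species the deployment is assumed to alert on (illustrative set).
ALERT_SPECIES = {"lion", "tiger", "elephant"}

def should_alert(detections, threshold=0.6):
    """Return the detections that warrant an alert notification.

    `detections` is a list of (label, confidence) pairs, as produced
    by the object detector after post-processing. Only labels in
    ALERT_SPECIES with confidence at or above `threshold` qualify.
    """
    return [(label, conf) for label, conf in detections
            if label in ALERT_SPECIES and conf >= threshold]
```

In deployment, a non-empty result would be passed to the GSM module's send routine together with the camera trap's location.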
III. METHODOLOGY
The following is a detailed methodology for an autonomous wildlife detection system designed to identify lions, tigers, and elephants using advanced machine learning algorithms. The system is built around the MobileNet-SSD V2 architecture, utilizing two of its variants, MobileNet-SSD V2 COCO and MobileNet-SSD V2 320x320, for image processing and object detection.
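MobileNet-SSD V2 models exported to TensorFlow Lite conventionally emit four output tensors per image: normalized boxes, class indices, scores, and a detection count. A minimal post-processing sketch under that assumption (the label list here is illustrative, and box layout is taken to be [ymin, xmin, ymax, xmax]):

```python
def postprocess_ssd(boxes, classes, scores, count, labels, min_score=0.5):
    """Convert raw SSD-style detector outputs into readable results.

    boxes:   list of [ymin, xmin, ymax, xmax] in normalized coordinates
    classes: list of class indices into `labels`
    scores:  list of confidences in [0, 1]
    count:   number of valid detections at the head of each list
    Returns a list of (label, score, box) tuples above `min_score`.
    """
    results = []
    for i in range(int(count)):
        if scores[i] >= min_score:
            results.append((labels[int(classes[i])], scores[i], boxes[i]))
    return results
```

On the Raspberry Pi, the inputs to this function would come from the TensorFlow Lite interpreter's output tensors after invoking the model on a camera-trap frame.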
A. Hardware Setup
The system is mounted on a pole to ensure an expansive field of view as shown in Figure 1. This setup incorporates a photovoltaic module that harnesses solar energy, effectively channeling it to a storage unit that supplies power to the central processing unit, a Raspberry Pi. This self-reliant power system underlines the autonomous nature of the deployment. The visual representation illustrates the operational setup situated within the target observation zone. Affixed to a robust and immobile post, the surveillance camera, along with the motion detection sensor, communication module, alert system, and illumination component, operates cohesively.
The utilization of solar power is a deliberate choice, emphasizing sustainable energy use, which in turn energizes the Raspberry Pi through an intermediary battery module. An additional protective layer encases the Raspberry Pi circuitry and other integral elements, shielding them from environmental elements and potential physical disturbances. This protective measure ensures the continued integrity and functionality of the critical technological components within.
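As a sanity check when sizing the battery and panel for such a deployment, a back-of-the-envelope power budget can be scripted. All figures below are assumptions for illustration; the paper does not specify its power budget:

```python
def runtime_hours(battery_wh, load_w):
    """Hours a fully charged battery can run a constant load."""
    return battery_wh / load_w

def min_panel_watts(load_w, sun_hours, derating=0.7):
    """Panel wattage needed to replace a full day's consumption,
    given usable sun hours and a derating factor for charging
    and conversion losses."""
    daily_wh = load_w * 24
    return daily_wh / (sun_hours * derating)

# Example (hypothetical numbers): a Raspberry Pi drawing about 4 W
# from a 60 Wh battery, recharged over 5 usable sun hours per day.
# runtime_hours(60, 4) gives 15 hours on battery alone;
# min_panel_watts(4, 5) suggests roughly a 27 W panel.
```

A margin above the computed panel wattage would normally be added for cloudy days, which is one reason the battery module sits between the panel and the Raspberry Pi.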
V. ACKNOWLEDGMENT
We express our heartfelt gratitude to the Head of Department of Electronics and Telecommunication, BIT, Dr. M. Rajeshwari for her support and guidance during the course of this project. The authors would like to thank Shri. Devaraju V, DFO of Ramanagar Forest, Karnataka for the extended support for this research and the anonymous reviewers for their valuable comments that improved the manuscript significantly.
VI. CONCLUSION
This research successfully demonstrates the deployment of a Convolutional Neural Network (CNN) model for the accurate detection and classification of wildlife species, specifically lions, tigers, and elephants. The standalone model was placed in Ramanagar forest, Karnataka, India, and tested for elephant intrusion. Through rigorous training and validation, the model exhibited remarkable proficiency, with accuracy rates consistently exceeding 95 percent for clear backgrounds and lighting and above 85 percent for blurred images. The descending trends in classification, localization, and regularization losses indicate not only the model's capability to learn from the data but also its ability to generalize beyond the training set without overfitting, as reflected by the stabilization of the regularization loss. Incorporating a GPS module would facilitate precise localization of elephant, tiger, and lion herds, further contributing to timely preventive measures. Future work will focus on improving the resolution and quality of the input data, which is anticipated to further refine detection accuracy and expand the model's operational range. The scalability of the approach suggests that, with additional training, the model could be adapted to detect other species, broadening its utility in ecological studies and conservation strategies. To mitigate human-animal conflicts, a sound-emitting system installed at entry points could deter wild animals before they encroach on agricultural lands. Additionally, incorporating temperature and humidity sensors would enhance the durability and operational reliability of the device under various environmental conditions.
Lastly, the integration of night vision alongside high-resolution cameras would significantly improve nighttime detection capabilities, ensuring the round-the-clock effectiveness of the system. In conclusion, this study represents a significant step forward in the field of wildlife monitoring and conservation technology. The results underscore the viability of CNNs in addressing ecological challenges and highlight the transformative potential of artificial intelligence in fostering harmonious coexistence between human development and wildlife preservation.
[1] W. Xue, T. Jiang, and J. Shi, "Animal intrusion detection based on convolutional neural network," in 2017 17th International Symposium on Communications and Information Technologies (ISCIT), Cairns, QLD, Australia, 2017, pp. 1–5, doi: 10.1109/ISCIT.2017.8261234.
[2] D. Yudin, A. Sotnikov, and A. Krishtopik, "Detection of Big Animals on Images with Road Scenes using Deep Learning," in 2019 International Conference on Artificial Intelligence: Applications and Innovations (IC-AIAI), Belgrade, Serbia, 2019, pp. 100–103, doi: 10.1109/IC-AIAI48757.2019.00028.
[3] X. Hao, G. Yang, Q. Ye, and D. Lin, "Rare Animal Image Recognition Based on Convolutional Neural Networks," in 2019 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Suzhou, China, 2019, pp. 1–5, doi: 10.1109/CISP-BMEI48845.2019.8965748.
[4] N. Li, W. Kusakunniran, and S. Hotta, "Detection of Animal Behind Cages Using Convolutional Neural Network," in 2020 17th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2020, pp. 242–245, doi: 10.1109/ECTI-CON49241.2020.9158137.
[5] A. M. Roy, J. Bhaduri, T. Kumar, and K. Raj, "WilDect-YOLO: An Efficient and Robust Computer Vision-Based Accurate Object Localization Model for Automated Endangered Wildlife Detection," Ecological Informatics, vol. 75, July 2023, 101919.
[6] N. K. El Abbadi and E. M. T. A. Alsaadi, "An Automated Vertebrate Animals Classification Using Deep Convolution Neural Networks," in 2020 International Conference on Computer Science and Software Engineering (CSASE), Duhok, Iraq, 2020, pp. 72–77, doi: 10.1109/CSASE48920.2020.9142070.
[7] M. A. Kayumbek Rakhimov and A. Ibragimov, "Animal Habit Monitoring System on the Roadside to avoid animal collisions with Support Vector Machine Model," in 2020 International Conference on Information Science and Communications Technologies (ICISCT).
[8] Y.-C. Chiu, C.-Y. Tsai, M.-D. Ruan, G.-Y. Shen, and T.-T. Lee, "Mobilenet-SSDV2: An improved object detection model for embedded systems," in 2020 International Conference on System Science and Engineering (ICSSE), IEEE, pp. 1–5.
[9] R. Chandrakar, R. Raja, and A. Miri, "Animal detection based on deep convolutional neural networks with genetic segmentation," Multimedia Tools and Applications, vol. 81, pp. 42149–42162, 2022, doi: 10.1007/s11042-021-11290-4.
[10] A. Biglari and W. Tang, "A Vision-Based Cattle Recognition System Using TensorFlow for Livestock Water Intake Monitoring," IEEE Sensors Letters, vol. 6, no. 11, Nov. 2022, Art. no. 5501404, doi: 10.1109/LSENS.2022.3215699.
[11] P. Juang, H. Oki, Y. Wang, M. Martonosi, L. S. Peh, and D. Rubenstein, "Energy-efficient computing for wildlife tracking: design tradeoffs and early experiences with ZebraNet," in Proceedings of the 10th International Conference on Architectural Support for Programming Languages and Operating Systems, 2002, pp. 96–107.
[12] V. Dyo, S. A. Ellwood, D. W. Macdonald, A. Markham, C. Mascolo, S. Scellato, N. Trigoni, R. Wohlers, and K. Yousef, "Evolution and sustainability of a wildlife monitoring sensor network," in Proceedings of the 8th ACM Conference on Embedded Networked Sensor Systems (SenSys 2010), Zurich, Switzerland, November 2010, pp. 127–140.
[13] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, "Object detection with discriminatively trained part-based models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp. 1627–1645, Sep. 2010, doi: 10.1109/TPAMI.2009.167.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, vol. 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds. Red Hook, NY, USA: Curran Associates, 2012, pp. 1097–1105. [Online]. Available: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
[15] S. Li, L. Yang, J. Huang, X.-S. Hua, and L. Zhang, "Dynamic anchor feature selection for single-shot object detection," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Oct. 2019, pp. 6608–6617, doi: 10.1109/ICCV.2019.00671.
[16] K. Chen, J. Li, W. Lin, J. See, J. Wang, L. Duan, Z. Chen, C. He, and J. Zou, "Towards accurate one-stage object detection with AP-loss," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Long Beach, CA, USA, Jun. 2019, pp. 5114–5122, doi: 10.1109/CVPR.2019.00526.
Copyright © 2024 Geethanjali P, Metun , Dr. M Rajeshwari. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET57847
Publish Date : 2024-01-01
ISSN : 2321-9653
Publisher Name : IJRASET