Agricultural automation has been on the rise, using, among other technologies, Deep Neural Networks (DNNs) and IoT for the development and deployment of numerous controlling, monitoring, and tracking operations at a fine-grained level. In this rapidly evolving scenario, managing the relationship with elements external to the farming ecosystem, such as wildlife, is a relevant open issue.
One of the main concerns of present-day cultivators is protecting crops from attacks by wild animals. There are different traditional approaches to address this problem, which can be lethal (e.g., shooting, trapping) or non-lethal (e.g., scarecrows, chemical repellents, organic substances, mesh, or electric fences). Nonetheless, some of these traditional methods have polluting effects on the environment that harm both humans and ungulates, while others are very expensive, with high maintenance costs, limited reliability, and limited effectiveness.
In this project, we develop a system that combines AI computer vision, using a DCNN to detect and recognize animal species, with species-specific ultrasound emission (i.e., different for each species) to repel them. The edge computing device activates the camera and executes its DCNN software to determine the target; once an animal is detected, it sends a message to the Animal Repelling Module including the type of ultrasound to be generated for the specific species of animal.
I. INTRODUCTION
Agriculture has seen numerous revolutions, whether the domestication of animals and plants a few thousand years ago, the systematic use of crop rotations and other advancements in agricultural practice a few hundred years ago, or the "green revolution" with systematic breeding and the widespread use of man-made fertilizers and pesticides a few decades ago.
Agriculture is undergoing a fourth revolution, triggered by the exponentially increasing use of information and communication technology (ICT) in farming.
Autonomous robotic vehicles have been developed for farming purposes, such as mechanical weeding, application of fertilizer, or harvesting of fruits.
The development of unmanned aerial vehicles with autonomous flight control, together with the development of lightweight and powerful hyperspectral snapshot cameras that can be used to calculate the biomass development and fertilization status of crops, opens the field for sophisticated farm management advice.
Moreover, decision-tree models are now available that allow farmers to differentiate between plant diseases based on optical information. Virtual fence technologies allow cattle herd management based on remote-sensing signals and sensors or actuators attached to the livestock.
II. PROPOSED SYSTEM
The proposed system combines AI computer vision, based on a DCNN for detecting animal species, with species-specific ultrasound emission (i.e., different for each species) for repelling them. The work covers the design, deployment, and assessment of an intelligent smart-agriculture repelling and monitoring IoT system based on embedded edge AI, which detects and recognizes the different kinds of animals and generates ultrasonic signals tailored to each species.
This combined technology can help farmers and agronomists in their decision-making and management processes. Deep learning, in the form of Convolutional Neural Networks (CNNs), is used to accomplish the animal recognition. The architecture of the proposed system is shown in Figure 1.
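To make the overall flow concrete, the following is a minimal sketch of the edge-device loop, under stated assumptions: classify_animal, send_to_repeller, and the SPECIES_ULTRASOUND mapping are hypothetical placeholders, since the paper does not specify these interfaces.

```python
"""Minimal sketch of the edge-device loop: capture, classify, repel.
All helper names and the species-to-ultrasound mapping are hypothetical."""
import cv2  # OpenCV, which the paper states the system is built on

# Hypothetical mapping from recognized species to an ultrasound profile ID;
# the paper says only that the ultrasound differs per species.
SPECIES_ULTRASOUND = {"wild_boar": 1, "deer": 2, "badger": 3}

def classify_animal(frame):
    """Placeholder for the DCNN classifier; returns a species label or None."""
    return None  # stub: plug in the trained model here

def send_to_repeller(profile_id):
    """Placeholder for the message sent to the Animal Repelling Module."""
    print(f"repel: ultrasound profile {profile_id}")

def edge_loop(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        species = classify_animal(frame)
        if species in SPECIES_ULTRASOUND:
            send_to_repeller(SPECIES_ULTRASOUND[species])
    cap.release()
```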
III. ARCHITECTURE PROCESS
In this paper, there are four stages for training and predicting on the data from the database: the first stage is preprocessing, the second is segmentation, the third is feature extraction, and the final stage is classification.
A. Registration and Login Page
There is a login page for farmers and the admin, and a registration page for new users. The farmer dashboard shows pictures of the animals intruding on the farm, together with the time of each intrusion.
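A minimal sketch of such a dashboard is shown below, assuming Flask; the framework choice, the route, and the in-memory intrusion store are all hypothetical, since the paper does not specify them.

```python
"""Minimal dashboard sketch, assuming Flask (hypothetical; not specified
in the paper). Lists each captured intrusion image with its timestamp."""
from flask import Flask, render_template_string

app = Flask(__name__)

# Hypothetical store of intrusion events: (image file, time of intrusion).
INTRUSIONS = [
    ("captures/boar_0153.jpg", "2022-05-11 03:42"),
]

@app.route("/dashboard")
def dashboard():
    # List each captured animal picture together with its intrusion time.
    items = "".join(f"<li>{img} ({ts})</li>" for img, ts in INTRUSIONS)
    return render_template_string(f"<h1>Intrusions</h1><ul>{items}</ul>")

if __name__ == "__main__":
    app.run()
```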
B. Preprocessing
Animal image preprocessing comprises the steps taken to format images before they are used for model training and inference. The steps to be taken are:
Smoothing: the image is smoothed to remove unwanted noise using a Gaussian blur.
Binarization: image binarization is the process of taking a grayscale image and converting it to black-and-white, reducing the information contained within the image from 256 shades of gray to 2 (black and white), i.e., a binary image.
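A minimal preprocessing sketch with OpenCV implementing these two steps follows; the kernel size and the use of Otsu's threshold are assumptions, since the paper specifies only "Gaussian blur" and "binarization".

```python
"""Preprocessing sketch: Gaussian blur, then binarization.
Kernel size and Otsu thresholding are illustrative assumptions."""
import cv2

def preprocess(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # load as 8-bit grayscale
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)     # suppress unwanted noise
    # Reduce 256 gray levels to 2 (black/white); Otsu picks the threshold.
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```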
C. Segmentation
The region growing (RG) methodology and recent related work on region growing are described here. RG is a simple image segmentation method based on region seeds. It is also classified as a pixel-based image segmentation method, since it involves the selection of initial seed points. This approach to segmentation examines the neighboring pixels of the initial "seed points" and determines, based on certain conditions, whether those neighbors should be added to the region. In a typical region growing approach, the neighboring pixels are examined using only an intensity constraint: a threshold on the intensity value is set, and the neighboring pixels that satisfy this threshold are chosen for region growing.
A Region Proposal Network (RPN) is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals. It works on the feature map (the output of the CNN), and each feature (point) of this map is called an anchor point. For each anchor point, nine anchor boxes (combinations of different sizes and aspect ratios) are placed over the image. These anchor boxes are centered at the point in the image that corresponds to the anchor point of the feature map.
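As an illustration of the intensity-based criterion described above, the following is a minimal region growing sketch; the seed, the 4-connectivity, and the threshold value are illustrative assumptions, not values from the paper.

```python
"""Sketch of intensity-based region growing (4-connected neighbors).
Seed position and threshold are illustrative."""
import numpy as np

def region_grow(img, seed, thresh=10):
    """Grow a region from `seed` by adding 4-neighbors whose intensity
    lies within `thresh` of the seed pixel's intensity."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    seed_val = int(img[seed])
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if region[y, x]:
            continue
        region[y, x] = True  # accept this pixel into the region
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not region[ny, nx]
                    and abs(int(img[ny, nx]) - seed_val) <= thresh):
                stack.append((ny, nx))
    return region
```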
D. Feature Extraction
In the feature extraction process, the useful information or characteristics of the image are extracted in the form of statistical, shape, color, and texture features. The transformation of the input image into features is called feature extraction, and features are extracted using feature extraction techniques based on texture, boundary, spatial, edge, transform, color, and shape properties. Shape-based features are divided into boundary-based and region-based features. Boundary features, also called contour-based features, use the boundary portions; they include geometrical descriptors (perimeter, major axis, minor axis, boundary, eccentricity, and curvature), Fourier descriptors, and statistical descriptors (mean, variance, standard deviation, skewness, energy, and entropy). Region-based features are texture features such as the GLCM.
Gray Level Co-occurrence Matrix: the GLCM is a second-order statistical texture analysis method. It examines the spatial relationship among pixels and defines how frequently a combination of pixel values is present in an image in a given direction Θ and at a given distance d. Each image is quantized into 16 gray levels (0–15), and 4 GLCMs (M), one for each Θ = 0°, 45°, 90°, and 135° with d = 1, are obtained. From each GLCM, five features (Eqs. 13.30–13.34) are extracted, so there are 20 features for each image. Each feature is normalized to the range [0, 1] before being passed to the classifiers, and each classifier receives the same set of features.
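A sketch of this setup using scikit-image follows (16 gray levels, d = 1, four angles, five features per GLCM); the choice of the five texture properties below is an assumption, since the cited equations are not reproduced here.

```python
"""GLCM texture features: 16 gray levels, d = 1, four angles.
The five properties chosen are an assumption; 5 x 4 angles = 20 features."""
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_img):
    # Quantize the 8-bit image into 16 gray levels (0-15).
    quantized = (gray_img // 16).astype(np.uint8)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 deg
    glcm = graycomatrix(quantized, distances=[1], angles=angles,
                        levels=16, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy",
             "correlation"]
    # Concatenate 5 features x 4 angles into a 20-element vector per image.
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```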
E. Classification
DCNN algorithms are applied to automatically detect and reject incorrect animal images during the classification process, which guarantees proper training and thus the best possible performance. The CNN creates feature maps by convolving the vector-valued input with a bank of filters at a given layer. A non-linear rectified linear unit (ReLU) is then used to calculate the activations of the convolved feature maps. The new feature map obtained from the ReLU is regularized using local response normalization (LRN), and the output of the normalization is further processed with a spatial pooling strategy (max or average pooling). A dropout regularization scheme is also used to set some unused weights to zero; this most frequently takes place within the fully connected layers before the classification layer. Finally, a softmax activation function is used to classify image labels within the fully connected layer. The extracted features of the animal image are compared with those stored in the animal database, and the image is classified by animal type. If an animal is found, the corresponding repellent module is called to repel it through the generation of ultrasounds, which has recently been proven to be an alternative, effective method for protecting crops against wild animal attacks. In this phase, an email and SMS notification containing the captured image is sent to the user regarding the detected motion.
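A minimal Keras sketch of this layer pattern (conv, ReLU, LRN, max pooling, dropout, softmax) follows; the depths, kernel sizes, input size, and NUM_SPECIES are illustrative assumptions, since the paper does not give the exact architecture.

```python
"""Sketch of a small DCNN with the layer pattern described above.
All hyperparameters are illustrative, not the paper's actual values."""
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SPECIES = 10  # hypothetical number of animal classes

def build_model(input_shape=(224, 224, 3)):
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    # Local response normalization after the ReLU, as in the text.
    x = layers.Lambda(tf.nn.local_response_normalization)(x)
    x = layers.MaxPooling2D(2)(x)  # spatial pooling (max pooling)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.Lambda(tf.nn.local_response_normalization)(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.5)(x)  # dropout before the classification layer
    outputs = layers.Dense(NUM_SPECIES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```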
IV. RESULTS AND DISCUSSION
This module graphs the training and validation accuracy and loss for each epoch. During an epoch, the loss function is calculated across every data item, which guarantees a quantitative loss measure at the given epoch; plotting the curve across each iteration instead gives the loss only on a subset of the whole dataset.
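Assuming the model is trained with Keras, these plots can be produced from the History object returned by model.fit; the following is a sketch with illustrative names.

```python
"""Sketch of the per-epoch accuracy/loss plots, assuming a Keras History
object (`history`) from model.fit(...)."""
import matplotlib.pyplot as plt

def plot_history(history):
    epochs = range(1, len(history.history["loss"]) + 1)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    # Left panel: training vs. validation accuracy per epoch.
    ax1.plot(epochs, history.history["accuracy"], label="train")
    ax1.plot(epochs, history.history["val_accuracy"], label="validation")
    ax1.set(xlabel="epoch", ylabel="accuracy")
    ax1.legend()
    # Right panel: training vs. validation loss per epoch.
    ax2.plot(epochs, history.history["loss"], label="train")
    ax2.plot(epochs, history.history["val_loss"], label="validation")
    ax2.set(xlabel="epoch", ylabel="loss")
    ax2.legend()
    plt.tight_layout()
    plt.show()
```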
Trials with the dataset E2 show better accuracy results with respect to the unbalanced dataset E1, reaching accuracies of 80.6% Top-1 and 94.1% Top-5, respectively. The plots of training and testing accuracy of the joint CNN (Top-5) as a function of the number of epochs are depicted.
The first trial was conducted using the single-branch SVM, without taking into account the shape features. Another trial employed the proposed joint CNN according to the decision-making rules.
V. CONCLUSION
Agricultural farm security is a widely needed technology nowadays. To accomplish this, a vision-based system was proposed and implemented using Python and OpenCV, and an Animal Repellent System was developed to drive the animals away. The accomplishment of the application required the design and development of a complex system for intelligent animal repulsion, which integrates newly developed software components and makes it possible to recognize the presence and species of animals in real time, as well as to avoid crop damage caused by the animals.