This project focuses on the development of an image-processing-based system for detecting weeds in agricultural fields. The proposed system uses computer vision techniques to extract relevant features from images of the field and classify the presence of weeds. The images are first pre-processed to remove noise and enhance quality, then segmented to extract regions of interest. Features extracted from the segmented regions are used to train a classification model that identifies weeds in the field. The system is expected to help farmers identify weeds accurately and quickly, thereby reducing the amount of herbicide used and increasing crop yield.
I. INTRODUCTION
Weed detection is an important task in agriculture, as the presence of weeds can significantly reduce crop yield and quality. Traditionally, farmers have relied on manual methods to identify and remove weeds, which can be time-consuming and labour-intensive. In recent years, the use of image processing techniques has emerged as a promising solution for automating weed detection in agricultural fields. Image processing can provide fast and accurate identification of weeds, allowing farmers to take timely action to mitigate their impact.
The goal of this project is to develop an image-processing-based system for detecting weeds in agricultural fields; after a weed is detected, a robot sprays herbicide precisely on that weed. The proposed system utilizes computer vision techniques to extract relevant features from images of the field and classify the presence of weeds. The images are pre-processed to remove noise and enhance quality, then segmented to extract regions of interest. Features extracted from the segmented regions are used to train a classification model that identifies weeds in the field. The proposed system is expected to provide several benefits to farmers. First, it can help them identify weeds accurately and quickly, allowing timely action. Second, it can reduce the amount of herbicide used by employing a smart herbicide-spraying robot, which lowers the environmental impact. Finally, it can improve crop yield and quality by enabling farmers to detect and remove weeds before they have a chance to damage the crop.
II. LITERATURE REVIEW
| Author | Type | Crop | Training Setup | Dataset Size | Accuracy |
|--------|------|------|----------------|--------------|----------|
| Fawakherji et al., 2019 | Pixel-wise segmentation using CNN | Sunflower | NVIDIA GTX 1070 GPU | 500 images | 90% |
| Knoll et al., 2018 | Image-based convolutional neural network | Carrot | GTX Titan with 6 GB graphics memory | 500 images | 93% |
| McCool et al., 2017 | Image-based convolutional neural network | Image-based convolutional neural network | Not mentioned | 20 training and 40 testing images | 90.5% |
| Tang et al., 2017 | K-means feature learning combined with CNN | Soybean | Not mentioned | 820 images | 92.89% |
| Córdova-Cruzatty et al., 2017 | Image-based convolutional neural network | Maize | Core i7 2.7 GHz 8-core CPU with NVIDIA GTX 950M | 2835 maize and 880 weed images | 92.08% |
| Chavan et al., 2018 | AgroAVNET | 12 classes | Intel Xeon E5-2695, 64 GB RAM and NVIDIA TITAN Xp with 12 GB RAM | 5544 images | 93.64% |
III. METHODOLOGY
A. Data Acquisition and Augmentation
The dataset contains 1,300 images of sesame crops and different types of weeds, with labels for each image. Each image is a 512 × 512 colour image, and the labels are in You Only Look Once (YOLO) format. The dataset was split 70:20:10, i.e., 910 images for training, 260 images for validation, and 130 images for the final testing. Training was carried out using the Roboflow platform.
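As an illustration, the sketch below parses a YOLO-format label file of the kind described above; the file name, class list and helper function are hypothetical and not taken from the dataset itself.

```python
# Minimal sketch of reading YOLO-format labels (assumed layout: one
# "class x_center y_center width height" line per object, with values
# normalised to [0, 1] relative to the 512 x 512 image).
from pathlib import Path

CLASS_NAMES = ["crop", "weed"]  # assumed class list, for illustration only

def read_yolo_labels(label_path, img_w=512, img_h=512):
    """Return a list of (class_name, x_min, y_min, x_max, y_max) in pixels."""
    boxes = []
    for line in Path(label_path).read_text().splitlines():
        if not line.strip():
            continue
        cls, xc, yc, w, h = line.split()
        xc, yc = float(xc) * img_w, float(yc) * img_h
        w, h = float(w) * img_w, float(h) * img_h
        boxes.append((
            CLASS_NAMES[int(cls)],
            xc - w / 2, yc - h / 2,   # top-left corner
            xc + w / 2, yc + h / 2,   # bottom-right corner
        ))
    return boxes

# Example usage (hypothetical file name):
# print(read_yolo_labels("labels/sesame_0001.txt"))
```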
B. Weed Detection using YOLO Algorithm
The performance of YOLOv3 was evaluated based on the metrics used in the Pascal VOC Challenge, which are listed in Table 1. The first metric is Intersection over Union (IoU), which is the ratio between the area of overlap and the area of union of the bounding boxes of the prediction and the ground truth object.
Abbreviations used in Table 1: IoU, Intersection over Union; Ao, area of overlap; Au, area of union; R, recall; TP, true positive; FP, false positive; FN, false negative; P, precision; F1, F1 score; mAP, mean average precision.
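Using the symbols defined above, the conventional Pascal VOC style formulas for these metrics can be written as follows (these are the standard definitions, shown here for reference):

```latex
\mathrm{IoU} = \frac{A_o}{A_u}, \qquad
R = \frac{TP}{TP + FN}, \qquad
P = \frac{TP}{TP + FP}, \qquad
F1 = \frac{2PR}{P + R}, \qquad
\mathrm{AP} = \int_0^1 P(R)\, dR
```

mAP is the mean of the average precision (AP) over all classes; with a single weed class, mAP reduces to the area under the precision-recall curve.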
To calculate the other performance metrics, true positive, false positive and false negative detections must be determined first. A detection is counted as a True Positive (TP), i.e., a ground-truth object that was correctly identified, when its IoU is equal to or greater than 0.5; this threshold was deliberately set low to account for human error in drawing the ground-truth bounding boxes. False Positive (FP) detections are those with IoU values below 0.5. Finally, False Negative (FN) detections are ground-truth objects that were completely missed by the predictions or that were assigned low prediction confidences. After counting TP, FP and FN, the remaining performance metrics, namely recall, precision, F1 score and mean average precision, can be calculated. Recall is the sensitivity of the weed detection system: the proportion of true positive detections to the total number of ground-truth objects. Precision is the proportion of true positive detections to all positive detections. The F1 score quantifies overall detection performance by combining precision and recall. Finally, mean average precision (mAP) is the area under the precision-recall curve; it is an alternative to the F1 score for summarizing precision and recall, and it is often used during training to select which weights best fit the model.
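As a minimal sketch of this evaluation procedure, assuming axis-aligned boxes given as (x_min, y_min, x_max, y_max) tuples and a simple greedy matching strategy, the following computes IoU, counts TP, FP and FN at the 0.5 threshold, and derives precision, recall and F1:

```python
# Minimal sketch: IoU-based matching of predicted boxes to ground truth
# and computation of precision, recall and F1.

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    area_overlap = inter_w * inter_h
    area_union = ((ax2 - ax1) * (ay2 - ay1)
                  + (bx2 - bx1) * (by2 - by1)
                  - area_overlap)
    return area_overlap / area_union if area_union > 0 else 0.0

def evaluate(predictions, ground_truths, iou_threshold=0.5):
    """Greedily match predictions to ground truth; return precision, recall, F1."""
    matched = set()
    tp = 0
    for pred in predictions:
        best_iou, best_gt = 0.0, None
        for idx, gt in enumerate(ground_truths):
            if idx in matched:
                continue
            overlap = iou(pred, gt)
            if overlap > best_iou:
                best_iou, best_gt = overlap, idx
        if best_iou >= iou_threshold:
            tp += 1
            matched.add(best_gt)
    fp = len(predictions) - tp       # predictions that matched no ground-truth object
    fn = len(ground_truths) - tp     # ground-truth objects that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```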
C. Training YOLO
Training is the process in which the YOLOv3 algorithm fits the training dataset to a predictive model for identifying weeds in images. Training was performed in the Roboflow platform, and weights were generated every 100 iterations. YOLOv3 was trained with the loss function below to simultaneously predict whether weed objects were detected and how closely their bounding boxes matched the ground truth. The first and second terms of the loss function represent the localization loss of detected objects, i.e., the error in the predicted bounding box positions and sizes. The third term is the confidence loss for detected objects, which measures how likely it is that a bounding box contains an object of a specific class. The fourth term is the confidence loss when no object is detected, where a threshold reduces unnecessary detections of objects in the background. The last term is the classification loss, which is not applicable in this study because there is only one object class (weed). During training, the mAP and loss curves were monitored. Training was stopped early when the average loss no longer decreased appreciably over many iterations and the highest mAP had been reached; in this case, a maximum of 1,500 iterations was used.
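The loss function referred to above is not reproduced in this report. For reference, the standard YOLO sum-squared loss, whose five terms correspond to the localization, object confidence, no-object confidence and classification terms described in this paragraph, can be written as:

```latex
\begin{aligned}
\mathcal{L} ={}& \lambda_{\mathrm{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B}
      \mathbb{1}_{ij}^{\mathrm{obj}}
      \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right] \\
  &+ \lambda_{\mathrm{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B}
      \mathbb{1}_{ij}^{\mathrm{obj}}
      \left[ \left(\sqrt{w_i} - \sqrt{\hat{w}_i}\right)^2
           + \left(\sqrt{h_i} - \sqrt{\hat{h}_i}\right)^2 \right] \\
  &+ \sum_{i=0}^{S^2} \sum_{j=0}^{B}
      \mathbb{1}_{ij}^{\mathrm{obj}} \left(C_i - \hat{C}_i\right)^2
   + \lambda_{\mathrm{noobj}} \sum_{i=0}^{S^2} \sum_{j=0}^{B}
      \mathbb{1}_{ij}^{\mathrm{noobj}} \left(C_i - \hat{C}_i\right)^2 \\
  &+ \sum_{i=0}^{S^2} \mathbb{1}_{i}^{\mathrm{obj}}
      \sum_{c \in \mathrm{classes}} \left(p_i(c) - \hat{p}_i(c)\right)^2
\end{aligned}
```

Here S² is the number of grid cells, B the number of boxes predicted per cell, (x, y, w, h) the box coordinates, C the objectness confidence, p(c) the class probability, and the indicator terms select whether an object is (or is not) assigned to a given predictor; this is the generic formulation rather than the exact expression used in training.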
D. Validation & Testing
The purpose of validation is to evaluate the performance of the weights generated during training. Testing ensures that overfitting is minimized, i.e., that YOLO-WEED with the generated weights can also detect weeds in other datasets.
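As an illustration of how the generated weights can be applied to images outside the training set, the sketch below runs a Darknet YOLOv3 model through OpenCV's DNN module; the configuration and weight file names, the test image name and the thresholds are assumptions rather than values from this study.

```python
# Minimal sketch: running trained YOLOv3 (Darknet) weights on a test image
# with OpenCV's DNN module. File names and thresholds are illustrative.
import cv2
import numpy as np

CONF_THRESHOLD = 0.5   # minimum object confidence
NMS_THRESHOLD = 0.4    # non-maximum suppression overlap threshold

net = cv2.dnn.readNetFromDarknet("yolov3-weed.cfg", "yolov3-weed_final.weights")
layer_names = net.getLayerNames()
output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers().flatten()]

image = cv2.imread("test_sesame_field.jpg")
height, width = image.shape[:2]

# YOLO expects a normalised, fixed-size blob with channels swapped to RGB.
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(output_layers)

boxes, confidences = [], []
for output in outputs:
    for detection in output:
        confidence = float(detection[5:].max())
        if confidence < CONF_THRESHOLD:
            continue
        # Box centre and size are relative to the input image dimensions.
        cx, cy, w, h = detection[:4] * np.array([width, height, width, height])
        boxes.append([int(cx - w / 2), int(cy - h / 2), int(w), int(h)])
        confidences.append(confidence)

# Suppress overlapping detections of the same weed, then draw the survivors.
keep = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESHOLD, NMS_THRESHOLD)
for i in np.array(keep).flatten():
    x, y, w, h = boxes[i]
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("detections.jpg", image)
```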
Figure 4.1 shows weed and crop detection in a sesame field, with labelled bounding boxes of various sizes produced by the YOLO model. The custom architecture performs better on smaller objects than the regular YOLO architecture. However, a second figure shows weeds at a distance, and the model performed poorly on this specific image; it is not consistent in detecting smaller weeds across all images. Another figure shows weeds starting to overrun the sesame plant; there the model clearly struggles with detections so small that they are difficult to identify even with the naked eye. The model struggles with very small plants because of its directly connected network. The recall was higher than that of the other model, probably because of its high accuracy on medium and small weeds, which is an impressive result considering that YOLO is capable of real-time prediction.
IV. CONCLUSION
In this study, we developed a system for identifying invasive weeds in sesame fields from images. Tests conducted on a variety of models show that modern object detection models can detect weeds with high accuracy and high recall. We also identified some major shortcomings, such as the inability to find very small weeds. These findings allow us to conclude that real-time object detection algorithms are just as accurate at weed detection as their non-real-time counterparts.
REFERENCES
[1] Alpaydin, E., 2016. Machine Learning: The New AI. Cambridge: The MIT Press.
[2] Bah, M. D., Hafiane, A. & Canals, R., 2017. Weeds detection in UAV imagery using SLIC and the hough transform. 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1-6.
[3] Brown, J. et al., 2017. Polar Coordinate FarmBot Final Project Report, California Polytechnic State University: s.n.
[4] Chavan, T. R. & Nandedkar, A. V., 2018. AgroAVNET for crops and weeds classification: A step forward in automatic farming. Computers and Electronics in Agriculture, Issue 154, pp. 361-372.
[5] Córdova-Cruzatty, A. et al., 2017. Precise Weed and Maize Classification through Convolutional Neural Networks, Sangolquí, Ecuador: s.n.
[6] Daman, M., Aravind, R. & Kariyappa, B., 2015. Design and Development of Automatic Weed Detection and Smart Herbicide Sprayer Robot. IEEE Recent Advances in Intelligent Computational Systems (RAICS), pp. 257-261.
[7] Dankhara, F., Patel, K. & Doshi, N., 2019. Analysis of robust weed detection techniques based on the Internet of Things (IoT). Coimbra, Portugal, Elsevier B.V, pp. 696-701.
[8] De Baerdemaeker, J., 2013. Precision Agriculture Technology and Robotics for Good Agricultural Practices. IFAC Proceedings Volumes, 46(4), pp. 1-4.
[9] Deng, L. & Yu, D., 2014. Deep Learning: Methods and Applications. Foundations and Trends in Signal Processing, 7(4), pp. 197-387.
[10] Ertel, W., 2017. Introduction to Artificial Intelligence. Second ed. Weingarten, Germany: Springer.
[11] FarmBot Inc., 2018. FarmBot Inc. [Online] Available at: https://farm.bot/pages/series-a [Accessed 18 02 2020].
[12] Fawakherji, M., Youssef, A., Bloisi, D., Pretto, A. & Nardi, D., 2019. Crops and Weeds Classification for Precision Agriculture using context-independent Pixel-Wise Segmentation. s.l., s.n., pp. 146-155.
[13] Marinoudi, V., Sorensen, C., Pearson, S. & Bochtis, D., 2019. Robotics and labour in agriculture. A context consideration. Biosystems Engineering, pp. 111-121.
[14] Marzuki Mustafa, M., Hussain, A., Hawari Ghazali, K. & Riyadi, S., 2007. Implementation of Image Processing Technique in Real Time Vision System for Automatic Weeding Strategy. Bangi, Malaysia, IEEE.
[15] Niku, S. B., 2020. Introduction to Robotics: Analysis, Control, Applications. Third ed. California: Wiley.