Weeds are aggressive and compete with crops, garden plants, and lawns for light, water, nutrients, and space. In the agricultural sector, weed management usually consists of spraying herbicides. Most weeds grow quickly and can take over the fields they infest. A rapidly developing area of research today is artificial intelligence, specifically deep learning; one of its many applications is object recognition using computer vision. This work proposes a deep learning framework with image processing for the classification of various crops and weeds. A deep convolutional neural network (CNN) architecture is developed that improves classification accuracy over existing CNNs by increasing the number of deep layers.
I. INTRODUCTION
Invasive weed growth is difficult to control because weeds grow quickly and compete aggressively with crops. They can also introduce fungal pathogens, bacteria, and nematodes, which are difficult to control and reduce the grower's crop yield and quality. Killing weeds with herbicides and mechanical cultivation is expensive.
One of the most difficult challenges in agriculture is weed control. Weeds compete with crops for sunlight and water. Appropriate cultural practices, such as proper irrigation, fertilization, and mowing, can minimize infestations. However, herbicides can offer a highly effective weed control process.
Dyrmann and Christiansen [12] noted that the identification and recognition of weeds under natural field conditions at early growth stages remains a research subject with unresolved issues. Weed seedlings change appearance depending not only on growth stage but also on external factors such as wind, light, and nutrition, suggesting that optimal identification and recognition algorithms should be able to cope with such changes. Different weed classification strategies have different criteria for segmentation quality: if the purpose is to use shape-based features to decide the species to which a plant belongs, the whole plant must be included and sharp edges preserved in the segmented image.
Deep learning is currently one of the newest and most studied technologies. It is a tool used to build intelligent systems that mimic the human brain as closely as possible, and it has had a huge impact across many application areas.
Agriculture is the oldest and most important means of human survival. Population growth in recent years has led to greater demand for agricultural products. Automation is being introduced to meet this demand without depleting the ecological resources that agriculture uses (Mehta [4]). Agriculture is one field where automation has found solutions to some of its challenges, such as plant disease infestation, weed control, pesticide control, lack of drainage facilities, and lack of storage management (Ma et al. [5]).
The three main tasks that this paper aims to accomplish are:
1) Accurate classification using CNN for overlapping plants and weeds.
2) Reliable and robust operation for real-time classification.
3) A reduced misclassification rate.
The whole experiment is performed in Python using the Keras and TensorFlow libraries. The results can save significant time in deploying the algorithms, reduce the grower's costs, and increase yield.
II. LITERATURE SURVEY
Bah et al. [11] combined the Hough transform and simple linear iterative clustering (SLIC). In this technique, the focus is on identifying crop lines: whatever does not occur in a line, or differs from its neighbours, is considered a weed. First, the background is segmented and shadows are removed by applying morphological operations to obtain the skeleton of the crop lines. This method provides an accuracy of 90%.
For the classification of sunflowers and weeds, a UNet was trained on a dataset of 500 images for soil and vegetation classification. The background is then eliminated, and regions of interest (ROIs) are used in the subsequent classification to separate plants and weeds. The thirteen-layer CNN model achieves 90% accuracy.
Tang et al. [7] implemented image classification based on the k-means algorithm to distinguish soybean plants from weeds. Here, 820 images were used to classify soybean plants and different weed species; the full set gives an accuracy of 92.89%. Cordova et al. constructed four convolutional networks (sNet, LeNet, AlexNet, and cNet), which were tested on 3600 corn images captured with a Raspberry Pi 3 camera. Among the four networks, cNet achieved 92.08% accuracy.
Miloto et al. [9] developed a semantic-segmentation-based CNN for sugar beet and weed classification. This experiment was tested with 10,000 images and required 48 hours of training to achieve 94.74% accuracy. Chavan et al. developed a hybrid version of VGGNet and AlexNet. This model was tested on 5544 images with multiple plants and achieved 93.64% accuracy.
A support vector machine classifier on the OpenCV platform was implemented by Ambika et al. [2]. It processes the input image and computes geometric parameters (width, area, length, diameter, and perimeter). Adnan Farooq et al. [6] investigated patch-based weed detection using hyperspectral images; for this purpose, a CNN was evaluated and compared with the Histogram of Oriented Gradients (HOG).
Sarmad Hameed et al. [1] pointed out that as the world population increases, the demand for wheat also increases. To intensify growth in a wheat field, weeds and barren land must be identified so that weed growth can be reduced and wheat growth increased.

III. PROPOSED WORK
Images are acquired from the crop field, and an online dataset of weeds is also used. A high-resolution camera in RGB colour format is used for greater accuracy. Figure 1 displays the building blocks of the proposed work. All images are stored at their respective sizes in JPG format.
The weed images used here are corrupted by various factors such as poor resolution, noise, improper lighting variations, and background clutter. The given RGB images are converted to grayscale during the pre-processing step. Noise and unwanted background objects are suppressed using filtering techniques. Weed features are then analyzed and extracted using feature extraction techniques, and weeds and plants are grouped separately in the classification process. A convolutional neural network is used for feature extraction. Texture features such as energy and entropy, size and shape features, and red, green, and blue colour features are used to analyze the weeds. The classifiers are first trained, then validated, and finally tested on images of many different weeds. Fig. 2 shows the detailed version of the proposed work.
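As a minimal sketch of this pre-processing step (the paper does not name the exact filter, so a median filter is assumed here as one common choice), the grayscale conversion and noise suppression could look like:

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(rgb_image, filter_size=3):
    """Grayscale conversion and noise suppression for a weed image.

    Assumes an 8-bit RGB array of shape (H, W, 3); the median filter
    is an assumed choice for suppressing impulsive noise.
    """
    # Luminance-weighted grayscale conversion (ITU-R BT.601 weights).
    gray = (0.299 * rgb_image[..., 0]
            + 0.587 * rgb_image[..., 1]
            + 0.114 * rgb_image[..., 2])
    # Median filtering removes salt-and-pepper noise while preserving edges.
    denoised = median_filter(gray, size=filter_size)
    return denoised.astype(np.uint8)
```

The same routine can be applied to every image in the dataset before feature extraction.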
The CNN consists of an image input layer, 2D convolutional layers, rectified linear unit (ReLU) layers, 2D max-pooling layers, a dropout layer, a flattening layer, fully connected layers, a softmax layer, and an output layer. Table 1 shows the details of the dataset. The architecture of the proposed network, shown in Fig. 3, includes an input image layer, four 2D convolutional layers, six ReLU layers, four 2D max-pooling layers, three fully connected layers, a softmax layer, and a classification layer. The input layer decomposes the given image based on the specified window size. The convolutional layers produce filtered images, incorporating padding. The ReLU layers improve the overall training speed of the network. Downsampling is performed with the max-pooling layers. Fully connected layers with different numbers of neurons combine the extracted features, and the classification layer assigns the processed image to a class.
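The layer stack described above can be sketched in Keras. The filter counts, input size (128×128 grayscale), and dropout rate below are illustrative assumptions, since the paper does not report these values:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(128, 128, 1), num_classes=3):
    """Sketch of the proposed CNN: four conv/ReLU/max-pool blocks,
    dropout, flatten, and three fully connected layers ending in softmax."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Four Conv2D + ReLU + MaxPooling blocks progressively downsample.
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.5),
        layers.Flatten(),
        # Three fully connected layers; the last applies softmax
        # over the crop/weed classes.
        layers.Dense(256, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    return model
```

With `num_classes=3` the softmax output corresponds to the crop, weed 1, and weed 2 classes used in the experiments.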
IV. RESULTS
Standard hyperparameters were used when comparing results from the CNN: batch accumulation 5, base learning rate 0.03, batch size 2, an exponential learning-rate policy with gamma 0.95, and the Adadelta solver with 30 training epochs. The validated and tested results for all CNNs were arranged in binary confusion matrices, comprising true negatives (tn), true positives (tp), false negatives (fn), and false positives (fp). Table 2 shows the validation and test results.
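The weed precision and recall figures in Table 2 follow directly from the confusion-matrix counts; a minimal sketch of the computation:

```python
def precision_recall(tp, fp, fn):
    """Weed precision and recall from binary confusion-matrix counts."""
    precision = tp / (tp + fp)  # fraction of weed detections that are weeds
    recall = tp / (tp + fn)     # fraction of actual weeds that are detected
    return precision, recall
```

For example, the Dataset A counts (tp = 830, fp = 26, fn = 15) give a precision of about 97% and a recall of about 98%, in line with Table 2.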
TABLE 2
VALIDATION AND TESTING RESULTS

Parameters        Data set A   Data set B
True positives    830          16352
True negatives    291          1405
False positives   26           94.1
False negatives   15           2019
Weed precision    96.822%      99.351%
Weed recall       98.101%      89.125%
TABLE 3
SIMULATED IMAGE RESULTS

[Table 3 shows the simulated image results for the crop, weed 1, and weed 2 classes: the detection image, detection in overlapped images, and the normal image.]
The results show that a high true positive value means the input images containing the target weed are accurately identified, and a high true negative value means the crop plants were correctly identified. A false positive means an image without the target weed was incorrectly identified as containing weeds, and a false negative means a crop plant was considered a weed. For the online data, the crop plants are segmented separately from the weeds, as shown in Table 3. Figure 4 shows how the network evolves during the training phase: the upper part of the plot shows that the training accuracy improves as each mini-batch is introduced, while the lower part shows the corresponding decrease in the training loss.
V. CONCLUSION
Weed detection is a critical task for agricultural productivity, and it requires improved computational methods that allow faster response. The proposed method has higher accuracy than the existing methods. The experiment was conducted on a seed crop cultivated alongside several weeds. The results show 95% classification accuracy using convolutional neural networks with max-pooling layers, supported by a lower rate of misclassification between weeds and plants. Future work can focus on identifying weed species, which can be combined with this existing work.
REFERENCES
[1] Ambika N.K., Supriya P., "Detection of Vanilla Species by Employing Image Processing Approach", 8th International Conference on Advances in Computing and Communication (ICACC-2018), 2018.
[2] Wason, R., "Deep Learning: Evolution and Expansion", Cognitive Systems Research, Issue 52, pp. 701-708, 2018.
[3] Mehta, P., "Automation in Agriculture: Agribot the Next Generation Weed Detection and Herbicide Sprayer - A Review", Journal of Basic and Applied Engineering Research, 3(3), 2016, pp. 234-238.
[4] Jha, K., Doshi, A., Patel, P. & Shah, M., "A comprehensive review on automation in agriculture using artificial intelligence", Artificial Intelligence in Agriculture, 2019, pp. 1-12.
[5] Fawakherji, M., Youssef, A., Bloisi, D. D., Pretto, A. & Nardi, D., "Crops and Weeds Classification for Precision Agriculture using Context-Independent Pixel-Wise Segmentation", 2019, pp. 140-155.
[6] Tang, J. et al., "Weed identification based on K-means feature learning combined with convolutional network", Computers and Electronics in Agriculture, Issue 135, 2017, pp. 63-70.
[7] Cordova-Cruzatty, A., "Precise Weed and Maize Classification through Convolutional Neural Networks", Sangolqui, Ecuador, 2017.
[8] Miloto, A., Lottes, P. & Stachniss, C., "Real-Time Blob-Wise Sugar Beets vs Weeds Classification for Monitoring Fields Using Convolutional Neural Networks", International Conference on Unmanned Aerial Vehicles in Geomatics, Bonn, Germany, 2017.
[9] Chavan, T. R. & Nandedkar, A. V., "AgroAVNET for crops and weeds classification: A step forward in automatic farming", Computers and Electronics in Agriculture, Issue 154, 2018, pp. 361-372.
[10] Bah, M. D., Hafiane, A. & Canals, R., "Weeds detection in UAV imagery using SLIC and the Hough transform", 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), 2017, pp. 1-6.
[11] Bias Tejeda, A. J. & Castro Castro, R., "Algorithm of Weed Detection in Crops by Computational Vision", Cholula, Mexico, IEEE, 2019.
[12] Dyrmann, M. and Christiansen, P., "Automated Classification of Seedlings Using Computer Vision. Technical report", Aarhus University, 2014.
[13] Wang, G., Wang, S., "Differential fault analysis on PRESENT key schedule", 2010 International Conference on Computational Intelligence and Security, IEEE, 2010, pp. 362-366.
[14] Sharma, O., "Deep Challenges Associated with Deep Learning", 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), Faridabad, India, 2019, pp. 72-75.