Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Apurva Save, Aksham Gupta, Sarthak Pruthi, Divyanjana Nikam, Prof. Dr. Shilpa Paygude
DOI Link: https://doi.org/10.22214/ijraset.2022.40275
Plant disease diagnosis is the foundation of efficient and precise plant disease prevention in today's complex environment. With the growth of smart farming, plant disease identification has become digitised and data-driven, enabling advanced decision support, smart analysis, and planning. This work provides a deep learning-based model for detecting and recognising plant diseases that improves accuracy, generality, and training efficiency. The prevention and control of plant disease have always been widely discussed because plants are exposed to the external climate and are highly prone to disease. Accurate and rapid diagnosis plays a significant role in controlling plant disease, since protective measures are usually applied only after a correct diagnosis. Identifying plant diseases is the key to preventing losses in the yield and quantity of agricultural produce, and early detection of plant leaf disease is a significant need in a developing agricultural economy such as India. Without proper identification of the disease, control measures can be a waste of time and money and can lead to further plant losses. Our project proposes a deep learning-based model trained on a dataset containing images of healthy and diseased crop leaves. The model classifies leaf images into disease classes based on the pattern of defects, and the system successfully recognises various diseases found in the tomato crop.
I. INTRODUCTION
Plant disease can easily stifle growth and negatively affect yield. Worldwide, the resulting financial loss is estimated at up to $20 billion every year. The most difficult issue for researchers is dealing with the wide variety of field conditions. Traditional approaches rely on experts, experience, and manuals, but most of them are expensive, time-consuming, and labour-intensive, and diseases are difficult to recognise accurately with them. A fast and precise approach to identifying plant diseases is therefore critical for both agriculture and ecology. Disease control measures can be a waste of money and time if the disease is not properly identified, and misidentification can lead to further plant loss. Our project proposes a deep learning-based model trained on photos of healthy and diseased crop leaves from a dataset. The model achieves its goal by categorising leaf photos into disease classes based on defect patterns.
II. RELATED WORK
To detect leaf diseases, this study [1] used K-Medoids clustering and Random Forest classification. The leaf image is first pre-processed, and the clustering approach is then used to locate the affected area of the leaf; K-Medoids clustering and the Random Forest algorithm together detect the disease in the leaf. Random Forest classification is used to detect and categorise the type of disease, which is less efficient for image datasets. Deep neural networks can learn good features on their own in image problems, so a CNN is a better option.
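The clustering step in [1] can be illustrated with a minimal sketch, assuming colour-based pixel clustering with the KMedoids implementation from scikit-learn-extra; the cluster count, the "darkest cluster is the lesion" heuristic, and the file name are assumptions, not the authors' code.

```python
# Minimal sketch: locate a lesion-like region by clustering leaf pixels on colour.
# Assumes scikit-learn-extra, OpenCV, and a leaf photo "leaf.jpg" (hypothetical file).
import cv2
import numpy as np
from sklearn_extra.cluster import KMedoids

img = cv2.imread("leaf.jpg")                       # BGR image (assumed file name)
img = cv2.resize(img, (64, 64))                    # shrink to keep clustering cheap
pixels = img.reshape(-1, 3).astype(np.float32)     # one row per pixel

# Cluster pixels into 3 groups (background, healthy tissue, lesion) by colour.
kmedoids = KMedoids(n_clusters=3, random_state=0).fit(pixels)
labels = kmedoids.labels_.reshape(64, 64)

# Heuristic: treat the darkest medoid (lowest mean intensity) as the diseased region.
lesion_cluster = int(np.argmin(kmedoids.cluster_centers_.mean(axis=1)))
lesion_mask = (labels == lesion_cluster).astype(np.uint8) * 255
cv2.imwrite("lesion_mask.png", lesion_mask)
```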
This research work [2] focuses on two popular architectures, AlexNet and GoogLeNet. The authors investigated the performance of both architectures on the PlantVillage dataset, training the models from scratch in one case and fine-tuning models already trained on the ImageNet dataset via transfer learning in the other. A model trained from scratch achieves an accuracy of around 91.80% on images from the dataset, but only around 31% when tested on images from outside the dataset. In [3], the pre-trained deep learning models AlexNet and VGG16 are used to detect plant diseases, as well as healthy plants, from photographs taken directly in the field with mobile phones. Most previous models and studies used photographs of single leaf samples, whereas in this work the whole plant, or a portion of it, was used to build the dataset. When the models are evaluated on photos from real-world conditions, the accuracy of both AlexNet and VGG16 drops drastically, and in some cases diseases are misidentified due to factors such as the presence or absence of weeds, soil colour, illumination changes, and so on.
The datasets produced in [4] are used to identify healthy and diseased leaves with Random Forest. The generated datasets of diseased and healthy leaves are combined and the classifier is trained on them, using the Histogram of Oriented Gradients (HOG) to extract image features.
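A minimal sketch of the HOG-plus-Random-Forest approach described in [4], assuming scikit-image and scikit-learn; the image size, HOG parameters, and the random stand-in data are illustrative assumptions, not the paper's code.

```python
# Sketch: HOG feature extraction followed by Random Forest classification.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def hog_features(image):
    """Resize to a fixed size and compute a HOG descriptor."""
    image = resize(image, (128, 128))
    return hog(image, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

# Stand-in data: random grayscale "leaves"; replace with real images and labels.
rng = np.random.default_rng(0)
images = [rng.random((200, 200)) for _ in range(40)]
labels = ["healthy"] * 20 + ["diseased"] * 20

X = np.array([hog_features(im) for im in images])
y = np.array(labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```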
Because the Hu moments shape descriptor and Haralick features can only be computed on a single channel, the RGB image must be converted to grayscale before these features are extracted. In [5], the input image was first pre-processed by removing the background and applying erosion to remove noise. Texture features were extracted from the enhanced image using the Gray Level Co-occurrence Matrix (GLCM). A Support Vector Machine (SVM) classifier was trained with various kernel functions and its performance examined with N-fold cross-validation. Using the linear kernel function, the proposed system attained 99.83 percent accuracy; despite this high accuracy, the approach is still insufficient to reliably distinguish healthy from diseased leaves.
A dataset of 383 photos [6] taken with a digital camera was used. Otsu's image segmentation algorithm was applied to the dataset. Colour features were obtained from the RGB colour components, shape features with the regionprops function, and texture features with the GLCM; all extracted features were combined into a feature extraction module. Supervised learning was applied for classification by training a decision tree classifier. Despite their high accuracy, decision trees have several drawbacks, including over-fitting on noisy data and limited user control over the model.
The proposed work in [7] uses two cascaded classifiers: the first segments the leaf from the background using local statistical features, and a second classifier is then trained on luminance and hue from the HSV colour space so that it can recognise the disease and estimate its severity. The algorithm developed is general, as it can be applied to any disease. However, the cascaded classifiers depend on several conditions, for example that the leaf margins are distinguishable and the leaves are large enough for analysis, and the testing requires a controlled environment.
Siddharth Singh Chouhan [8] described a technique for recognising diseases that occur on the leaves of orchid plants. Images of orchid leaflets are acquired with digital cameras, and the algorithm combines several approaches, such as morphological processing, filtering, and boundary segmentation, to classify the images into two disease classes: solar burn and black leaf spot. However, the segmentation method proposed here can only recognise these two kinds of orchid leaf disease; for classifying other types of leaf disease on orchids, new or additional segmentation strategies need to be developed, because many combinations of processing techniques are required to find effective boundary segmentation methods.
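A minimal sketch of GLCM texture features with an SVM classifier, in the spirit of [5]; the GLCM distances and angles, the properties used, and the random stand-in data are assumptions, not the original paper's configuration.

```python
# Sketch: GLCM texture descriptors per grayscale leaf image, then a linear SVM
# evaluated with k-fold cross-validation. Stand-in random data replaces real images.
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(gray_uint8):
    """Compute a small GLCM descriptor (contrast, homogeneity, energy, correlation)."""
    glcm = graycomatrix(gray_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # stand-in "leaves"
labels = np.array([0] * 20 + [1] * 20)                            # healthy vs diseased

X = np.array([glcm_features(im) for im in images])
svm = SVC(kernel="linear")
print("5-fold accuracy:", cross_val_score(svm, X, labels, cv=5).mean())
```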
Bhumika S. Prajapati [9] presents a survey on cotton leaf disease detection and classification. It is hard for the naked eye to recognise precisely which kind of leaf disease is present on a plant leaf, so machine learning and image processing techniques can help identify cotton leaf diseases accurately. The images used for this task were acquired from a cotton field with a digital camera. The work describes only general approaches for detecting and classifying cotton leaf diseases, along with segmentation and background removal techniques.
Yusoff et al. [10] present a real-time edge detection procedure for identifying diseases on Hevea leaves, together with its hardware implementation. Three main diseases occur on Hevea leaves and are used for the image analysis: Bird's Eye Leaf Spot, Corynespora Leaf Spot, and Colletotrichum Leaf Disease. The diseases are identified using the Sobel edge detection algorithm, which is also implemented in MATLAB, and the results of the two implementations are compared.
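For reference, Sobel edge detection of the kind used in [10] can be reproduced in a few lines with OpenCV; the file name and the fixed threshold are assumptions.

```python
# Sketch: Sobel edge magnitude on a grayscale leaf image (cf. the approach in [10]).
import cv2
import numpy as np

gray = cv2.imread("hevea_leaf.jpg", cv2.IMREAD_GRAYSCALE)  # assumed file name
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)            # horizontal gradient
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)            # vertical gradient
magnitude = np.sqrt(gx ** 2 + gy ** 2)
edges = (magnitude > 100).astype(np.uint8) * 255           # simple fixed threshold
cv2.imwrite("hevea_edges.png", edges)
```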
III. SYSTEM DESIGN
The basic methodology of our project is as follows. The user uploads a leaf image with a uniform background through a website, which is integrated with our model using Flask. The image is then preprocessed, and the model itself extracts the features needed to classify the correct disease. Once the disease is classified, it is mapped to the appropriate remedy or fertilizer required to treat the problem and to prevent the disease in the future.
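A minimal sketch of how such a Flask endpoint could wrap the trained model, assuming a Keras model saved as `tomato_model.h5`, a hard-coded class list, and a small remedy dictionary; the route name, file names, class order, and remedy text are illustrative assumptions, not our exact deployment code.

```python
# Sketch: Flask endpoint that accepts a leaf image, runs the CNN, and returns
# the predicted class plus a suggested remedy. Names and paths are assumptions.
import numpy as np
import tensorflow as tf
from PIL import Image
from flask import Flask, request, jsonify

app = Flask(__name__)
model = tf.keras.models.load_model("tomato_model.h5")     # assumed saved model

CLASSES = ["Bacterial spot", "Early blight", "Late blight", "Septoria leaf spot",
           "Tomato mosaic virus", "Two-spotted spider mite", "Leaf Mold",
           "Target Spot", "Tomato yellow leaf curl virus", "Healthy"]
REMEDIES = {"Early blight": "Apply a copper-based fungicide.",     # illustrative only
            "Healthy": "No treatment needed."}

@app.route("/predict", methods=["POST"])
def predict():
    img = Image.open(request.files["file"].stream).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32)[None, ...] / 255.0      # scale to [0, 1]
    probs = model.predict(x)[0]
    label = CLASSES[int(np.argmax(probs))]
    return jsonify({"disease": label,
                    "remedy": REMEDIES.get(label, "See local agricultural guidance.")})

if __name__ == "__main__":
    app.run(debug=True)
```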
IV. IMPLEMENTATION
A. Dataset
The tomato leaf image collection was taken from the PlantVillage database, obtained via Kaggle, and categorised into ten classes (nine diseases plus healthy leaves). Each image contains a single leaf, giving a total of 10,000 training images across the categories. The diseases in the dataset are classified into the following categories:
Bacterial spot
Early blight
Late blight
Septoria leaf spot
Tomato mosaic virus
Two-spotted spider mite
Leaf Mold
Target Spot
Tomato yellow leaf curl virus
B. Data Preprocessing
We set up data generators to read photos from our source folders, convert them to float32 tensors, and feed them to our network: one generator for the training set and another for the validation set. Our generators yield batches of 224x224 images. Data entering a neural network should be normalised in some way to make it easier to process; in our case, we preprocess the dataset images by scaling the pixel values into the [0, 1] range (originally, pixel values range from 0 to 255).
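A minimal sketch of this preprocessing step with Keras' `ImageDataGenerator`, assuming the images are arranged in one sub-folder per class under `train/` and `val/` directories; the directory names and batch size are assumptions.

```python
# Sketch: rescale pixels to [0, 1] and stream 224x224 batches from class folders.
import tensorflow as tf

train_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255)
val_gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255)

train_data = train_gen.flow_from_directory(
    "train",                      # assumed folder: one sub-folder per class
    target_size=(224, 224),       # resize every image to 224x224
    batch_size=32,
    class_mode="categorical")     # one-hot labels for the 10 classes

val_data = val_gen.flow_from_directory(
    "val", target_size=(224, 224), batch_size=32, class_mode="categorical")
```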
C. Pre Trained Models
The GoogLeNet architecture consists of 22 layers (27 layers including pooling layers), a part of which is a total of 9 Inception modules. GoogLeNet is a convolutional neural network based on the Inception architecture. It uses Inception modules, which allow the network to choose between multiple convolutional filter sizes in each block. An Inception network stacks these modules on top of one another, with occasional max-pooling layers of stride 2 to halve the resolution of the feature grid.
Inception v3 is a convolutional neural network for image analysis and object recognition with the following notable feature: 5x5 convolutions are factorised into two 3x3 convolution operations to improve computational efficiency. Inception v3 is the third version in Google's series of Inception deep convolutional architectures. It was trained on 1,000 classes from the original ImageNet dataset, which contains more than one million training images; the TensorFlow version has 1,001 classes, the extra one being a background class not used in the original ImageNet. Inception v3 was trained for the ImageNet Large Scale Visual Recognition Challenge, where it was a first runner-up.
Xception is a convolutional neural network that is 71 layers deep. A pretrained version of the network, trained on more than a million images from the ImageNet database, can be loaded. The pretrained network can classify images into 1,000 object categories, such as keyboard, mouse, pencil, and many animals, and has therefore learned rich feature representations for a wide range of images. The network has an image input size of 299x299.
MobileNetV2 is a convolutional neural network architecture designed to perform well on mobile devices. It is based on an inverted residual structure in which the residual connections are between the bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Overall, the architecture of MobileNetV2 contains an initial fully convolutional layer with 32 filters, followed by 19 residual bottleneck layers.
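A minimal transfer learning sketch for one of these backbones (MobileNetV2 here), assuming ImageNet weights, frozen convolutional layers, and a new 10-class softmax head; the choice of head, optimizer, and epoch count are assumptions rather than the exact training configuration used in our experiments.

```python
# Sketch: MobileNetV2 backbone with ImageNet weights, frozen, plus a new 10-class head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                               # keep the pretrained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax")  # one output per tomato class
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# history = model.fit(train_data, validation_data=val_data, epochs=10)
```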
D. Experimentation
From the dataset used in this project we segregated the images into two sets: a training set and a testing set. For every architecture, we first train the model on the images in the training set and, after obtaining satisfactory results, we test it on the images in the testing set. We continue testing until we get the desired result, and the results are observed and recorded for comparison with the other architectures.
The same process is repeated for all the other architectures and the results are recorded. In addition, we trained our own model with three convolutional, three pooling, and two dense layers (see the sketch below). For all the pretrained models, as well as our own model, we performed data augmentation using ImageDataGenerator. All other parameters, such as the number of epochs, the optimizer, and the activation function, were kept the same so that all the models could be compared fairly.
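A minimal sketch of a custom CNN with three convolutional, three pooling, and two dense layers, matching the layer counts described above; the filter counts, kernel sizes, and dense width are assumptions, since the exact hyperparameters are not listed here.

```python
# Sketch: custom CNN with 3 convolutional, 3 max-pooling, and 2 dense layers.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(128, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),       # first dense layer
    tf.keras.layers.Dense(10, activation="softmax")      # second dense layer: 10 classes
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```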
V. RESULTS
As shown in the table above, the MobileNetV2 architecture provides the highest validation accuracy among the pretrained models, while our own model reaches a validation accuracy of 88.50%, higher than any of the pretrained models. Our model's training accuracy of 90.17% is, however, lower than that of the other models; the highest training accuracy, 96.75%, is obtained by MobileNetV2. Overall, MobileNetV2 gives the highest accuracy for our project, but our own model is also effective.
VI. FUTURE SCOPE
The system successfully identifies various diseases and is also capable of suggesting a fertilizer for the respective disease. The system can be made more robust by incorporating a larger image dataset with wider variation, for example more than one leaf in a single image. An app could also be developed for the project, which would make farmers' work easier: they could upload an image directly and be told the disease and the cure on the spot, reducing time and effort. The project is currently limited to a single crop, but in the future datasets for more crops, and even flowers, can be added so that it is helpful for every agricultural need. Newer models can also be tried over time, which may yield better accuracy and make the system even faster.
VII. CONCLUSION
Different deep learning approaches and models were explored and used in this project so that plant diseases can be detected and classified correctly through image processing of plant leaves. The procedure starts with collecting the images used for training, testing, and validation, continues with image preprocessing and augmentation, and ends with a comparison of the accuracy of different pretrained models. Our model detects and distinguishes between a healthy plant and different diseases and provides suitable remedies to cure the disease. This paper proposed and developed a system that uses plant leaf images to detect different types of disease in tomato crops and also provides appropriate fertilizer suggestions.
REFERENCES
[1] R. Indumathi, N. Saagari, V. Thejuswini and R. Swarnareka, "Leaf Disease Detection and Fertilizer Suggestion," 2019 IEEE International Conference on System, Computation, Automation and Networking (ICSCAN), Pondicherry, India, 2019, pp. 1-7, doi: 10.1109/ICSCAN.2019.8878781.
[2] S. P. Mohanty, D. P. Hughes and M. Salathé, "Using Deep Learning for Image-Based Plant Disease Detection," Frontiers in Plant Science, 2016, doi: 10.3389/fpls.2016.01419, ISSN 1664-462X.
[3] A. Krishnaswamy Rangarajan, P. Raja, R. Ashiwin and K. Mukesh, "Disease classification in Solanum melongena using deep learning," Spanish Journal of Agricultural Research, vol. 17, 0204, 2019, doi: 10.5424/sjar/2019173-14762.
[4] S. Ramesh et al., "Plant disease detection using machine learning," 2018 International Conference on Design Innovations for 3Cs Compute Communicate Control (ICDI3C), IEEE, 2018.
[5] H. Prajapati, J. Shah and V. Dabhi, "Detection and classification of rice plant diseases," Intelligent Decision Technologies, vol. 11, pp. 357-373, 2017, doi: 10.3233/IDT-170301.
[6] H. Sabrol and S. Kumar, "Tomato plant disease classification in digital images using classification tree," 2016, pp. 1242-1246, doi: 10.1109/ICCSP.2016.7754351.
[7] A. Parikh, M. S. Raval, C. Parmar and S. Chaudhary, "Disease Detection and Severity Estimation in Cotton Plant from Unconstrained Images," 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), 2016, pp. 594-601, doi: 10.1109/DSAA.2016.81.
[8] S. S. Chouhan, A. Kaul and U. P. Singh, "A deep learning approach for the classification of diseased plant leaf images," 2019 International Conference on Communication and Electronics Systems (ICCES), 2019, pp. 1168-1172, doi: 10.1109/ICCES45898.2019.9002201.
[9] B. S. Prajapati, V. K. Dabhi and H. B. Prajapati, "A survey on detection and classification of cotton leaf diseases," 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), 2016, pp. 2499-2506, doi: 10.1109/ICEEOT.2016.7755143.
[10] N. M. Yusoff, I. S. Abdul Halim, N. E. Abdullah and A. A. Ab. Rahim, "Real-time Hevea Leaves Diseases Identification using Sobel Edge Algorithm on FPGA: A Preliminary Study," 2018 9th IEEE Control and System Graduate Research Colloquium (ICSGRC), 2018, pp. 168-171, doi: 10.1109/ICSGRC.2018.8657603.
Copyright © 2022 Apurva Save, Aksham Gupta, Sarthak Pruthi, Divyanjana Nikam, Prof. Dr. Shilpa Paygude. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET40275
Publish Date : 2022-02-08
ISSN : 2321-9653
Publisher Name : IJRASET