IJRASET Journal for Research in Applied Science and Engineering Technology
Authors: Darshil Maru
DOI Link: https://doi.org/10.22214/ijraset.2022.48406
Agriculture plays one of the most significant roles in the growth and development of our nation's economy. The identification of diseases is the key to preventing losses in the yield and quantity of agricultural produce, and disease detection on plants is critical for sustainable agriculture. It is challenging to monitor plants manually, especially for people who are new to farming, and it requires an excessive amount of time. A correct prediction and detection of disease therefore reduces the use of fertilizer in the field, which helps protect the soil from impurities. In this paper we explain how we trained our model with a normal dataset and with an augmented dataset and achieved an accuracy greater than 95%.
I. INTRODUCTION
To contribute to the development of countries, knowledge of the agriculture sector is crucial. Agriculture is a one-of-a-kind source of wealth that sustains farmers. For a powerful country, the development of farming is a necessity and a requirement in the global market. The world's population is growing at an exponential rate, necessitating massive food production within the next 50 years. Information about different kinds of crops and the diseases occurring at each stage, together with its analysis at an early stage, plays a key and dynamic role in the agriculture sector. A farmer's main problem is the occurrence of assorted diseases on their crops. Disease classification and the analysis of illnesses are a crucial concern for agriculture's optimum food yield. Food safety is a huge issue due to the lack of infrastructure and technology, so crop disease classification and identification will be important considerations in the coming years. Detection and recognition of crop illnesses is an important study topic because it may make it possible to monitor huge fields of crops and detect disease symptoms as soon as they occur on plant leaves. As a result, finding a fast, efficient, inexpensive, and effective approach to identify instances of crop disease is quite important.
Transfer Learning: The study of transfer learning is motivated by the fact that people can intelligently apply knowledge learned previously to solve new problems faster or with better solutions. Transfer learning is a machine learning method in which we reuse a pre-trained model as the starting point for a model on a new task. To put it simply, a model trained on one task is repurposed for a second, related task as an optimization that permits rapid progress when modeling the second task. By applying transfer learning to a new task, one can achieve significantly higher performance than by training with only a small amount of data.
Maize belongs to the Poaceae monocot family and is the third most significant cereal crop in the world. Though maize is often not eaten directly, it is used to make several products such as corn starch, syrup and ethanol. Maize plant leaves suffer from a range of infections, and the three most prevalent maize leaf diseases are northern corn blight, common rust and grey leaf spot.
II. BACKGROUND AND RELATED WORK
The study of transfer learning is motivated by the fact that people can intelligently apply knowledge learned previously to solve new problems faster or with better solutions.
III. METHODOLOGY
A. Dataset
We have four types of corn leaf images, described below.
1. Northern Corn Blight
The fungus Exserohilum turcicum is responsible for northern corn blight disease. The most striking symptom of this disease is the large grey cigar-shaped lesions that appear on the leaf's surface. Moderate to cool temperatures and a comparatively high humidity level act as catalysts for this disease.
2. Common Rust
Common rust is another maize disease favoured by high humidity levels and cool temperatures. In this disease, a number of small tan spots develop on both surfaces of the leaf and, as a result, the photosynthesis of the leaf reduces drastically.
3. Gray Leaf Spot
Gray leaf spot, caused by the fungal pathogen Cercospora zeae-maydis, is one of the most significant yield-limiting foliar diseases found in the maize plant. Ever since it was first reported in the 1970s, it has posed a serious threat to maize production worldwide, with a large impact across Africa and the U.S. Corn Belt. The symptoms of this disease are characterized by linear (and rectangular) lesions on the lower surface of the leaf in the early stage, which later turn into rust-coloured spots.
B. Transfer Learning Model
1. VGG-16
VGG-16 model architecture: 13 convolutional layers, 2 fully connected layers and 1 softmax classifier. Karen Simonyan and Andrew Zisserman introduced the VGG-16 architecture in 2014 in their paper Very Deep Convolutional Networks for Large-Scale Image Recognition. They created a 16-layer network comprised of convolutional and fully connected layers, using only 3×3 convolutional layers stacked on top of each other for simplicity. The layer-by-layer structure is as follows, with a short sketch inspecting it given after the list.
a. The first and second convolutional layers are comprised of 64 feature kernel filters, and the size of each filter is 3×3. As the input image (an RGB image with depth 3) is passed through the first and second convolutional layers, the dimensions change to 224×224×64. The resulting output is then passed to a max pooling layer with a stride of 2.
b. The third and fourth convolutional layers have 128 feature kernel filters, and the size of each filter is 3×3. These two layers are followed by a max pooling layer with stride 2, and the resulting output is reduced to 56×56×128.
c. The fifth, sixth and seventh layers are convolutional layers with kernel size 3×3. All three use 256 feature maps. These layers are followed by a max pooling layer with stride 2.
d. The eighth to thirteenth layers are two sets of convolutional layers with kernel size 3×3. All of these convolutional layers have 512 kernel filters. Each set is followed by a max pooling layer with a stride of 2.
e. The fourteenth and fifteenth layers are fully connected hidden layers of 4096 units, followed by a softmax output layer (the sixteenth layer) of 1000 units.
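The layer layout listed above can be checked directly in a deep learning framework. The following is a minimal sketch, assuming a TensorFlow/Keras installation; it is an illustration, not the training code of this work.
```python
# Minimal sketch: inspect the VGG-16 layer layout with tf.keras (illustrative only)
from tensorflow.keras.applications import VGG16

# weights=None builds the architecture without downloading the ImageNet weights
model = VGG16(weights=None, input_shape=(224, 224, 3), classes=1000)
model.summary()  # lists the 13 convolutional layers, 5 max pooling layers and 3 dense layers
```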
2. VGG-19
VGG-19 model architecture: 16 convolutional layers, 2 fully connected layers and 1 softmax classifier. Karen Simonyan and Andrew Zisserman introduced the VGG-19 architecture in the same 2014 paper, Very Deep Convolutional Networks for Large-Scale Image Recognition. They created a 19-layer network comprised of convolutional and fully connected layers, again using only 3×3 convolutional layers stacked on top of each other for simplicity. Its structure is as follows, with a short sketch comparing the layer counts of the two models given after the list.
a. The first and second convolutional layers are comprised of 64 feature kernel filters, and the size of each filter is 3×3. As the input image (an RGB image with depth 3) is passed through the first and second convolutional layers, the dimensions change to 224×224×64. The resulting output is then passed to a max pooling layer with a stride of 2.
b. The third and fourth convolutional layers have 128 feature kernel filters, and the size of each filter is 3×3. These two layers are followed by a max pooling layer with stride 2, and the resulting output is reduced to 56×56×128.
c. The fifth, sixth, seventh and eighth layers are convolutional layers with kernel size 3×3. All four use 256 feature maps. These layers are followed by a max pooling layer with stride 2.
d. The ninth to sixteenth layers are two sets of convolutional layers with kernel size 3×3. All of these convolutional layers have 512 kernel filters. Each set is followed by a max pooling layer with a stride of 2.
e. The seventeenth and eighteenth layers are fully connected hidden layers of 4096 units, followed by a softmax output layer (the nineteenth layer) of 1000 units.
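As a quick check of the layer counts quoted for both architectures, they can be compared programmatically. A minimal sketch, again assuming tf.keras:
```python
# Minimal sketch: count convolutional and fully connected layers in VGG-16 and VGG-19
from tensorflow.keras.applications import VGG16, VGG19
from tensorflow.keras.layers import Conv2D, Dense

for name, builder in [("VGG-16", VGG16), ("VGG-19", VGG19)]:
    model = builder(weights=None)  # architecture only, no weight download
    n_conv = sum(isinstance(layer, Conv2D) for layer in model.layers)
    n_fc = sum(isinstance(layer, Dense) for layer in model.layers)
    print(f"{name}: {n_conv} convolutional layers, {n_fc} fully connected layers (including the softmax output)")
# Expected: VGG-16 -> 13 conv and 3 dense; VGG-19 -> 16 conv and 3 dense
```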
C. Transfer Learning by Leveraging Pretrained Models
ImageNet is a research project to develop a large database of annotated images, i.e. images and their labels. Pretrained models such as Inception V1, Inception V2, VGG-16 and VGG-19 have already been trained on ImageNet, which comprises disparate categories of images. These models were built from scratch and trained using powerful GPUs on millions of images spanning thousands of image categories. Because such a model has been trained on a huge dataset, it has learned a good representation of low-level features such as spatial structure, edges, rotation, lighting and shapes, and these features can be shared to enable knowledge transfer and act as a feature extractor for new images in different computer vision problems. These new images might be of completely different categories from the source dataset, but the pretrained model should still be able to extract relevant features from them based on the principles of transfer learning. In this paper we unleash the power of transfer learning by using the pretrained models VGG-16 and VGG-19 as effective feature extractors to classify four classes of corn leaf images even with fewer training images.
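To illustrate the feature-extractor idea, the sketch below pushes a small batch of images through a frozen VGG-16 base and collects the bottleneck features; the random placeholder batch and the preprocessing call are assumptions for illustration, not the exact pipeline used in this work.
```python
# Minimal sketch: a frozen VGG-16 base used as a feature extractor (assumed pipeline)
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the ImageNet weights fixed

# Placeholder batch of 8 RGB images; in practice these come from the corn leaf dataset
images = np.random.rand(8, 224, 224, 3) * 255.0
features = base.predict(preprocess_input(images))
print(features.shape)  # (8, 7, 7, 512) bottleneck features, fed to a new classifier head
```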
IV. ANALYSIS
As discussed earlier, we first train on our data without augmentation using VGG-16 and VGG-19. We then improve the accuracy using image augmentation techniques. Finally, we leverage the pretrained models VGG-16 and VGG-19, which are already trained on a huge dataset with a diverse range of categories, to extract features and classify images.
All the evaluation metrics are compared at a later stage.
A. Pretrained Transfer Learning Model as a Feature Extractor without Data Augmentation
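The code listing referred to in the next paragraph is not reproduced in this version of the article; a minimal sketch of the call it describes, assuming tf.keras, would be:
```python
# Minimal sketch of loading the VGG-16 base described below (tf.keras assumed)
from tensorflow.keras.applications import VGG16

vgg16_base = VGG16(weights="imagenet",       # fetch the weights trained on ImageNet
                   include_top=False,        # do not download the fully connected layers
                   input_shape=(224, 224, 3))
vgg16_base.trainable = False                 # use the convolutional base as a fixed feature extractor
```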
Above is the code to call the VGG-16 pretrained model. We need to include weights='imagenet' to fetch the VGG-16 model trained on the ImageNet dataset. It is important to set include_top=False to avoid downloading the fully connected layers of the pretrained model.
Below are the model metrics for VGG-16 – model accuracy and loss after fine-tuning the pretrained model without data augmentation.
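The corresponding VGG-19 listing is likewise not reproduced here; an analogous sketch, again assuming tf.keras:
```python
# Minimal sketch of loading the VGG-19 base described below (tf.keras assumed)
from tensorflow.keras.applications import VGG19

vgg19_base = VGG19(weights="imagenet",       # ImageNet-pretrained weights
                   include_top=False,        # exclude the fully connected layers
                   input_shape=(224, 224, 3))
vgg19_base.trainable = False                 # frozen convolutional base acting as a feature extractor
```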
Above is the code to call the VGG-19 pretrained model. We need to include weights='imagenet' to fetch the VGG-19 model trained on the ImageNet dataset. It is important to set include_top=False to avoid downloading the fully connected layers of the pretrained model.
Below are the model metrics for VGG-19 – model accuracy and loss after fine-tuning the pretrained model without data augmentation.
B. Pretrained Transfer Learning Model as a Feature Extractor with Data Augmentation
The performance of most ML models, and deep learning models in particular, depends on the quality, quantity and relevancy of training data. However, insufficient data is one of the most common challenges in implementing machine learning in the enterprise. This is because collecting such data can be costly and time-consuming in many cases.
Data augmentation is a process of artificially increasing the amount of data by generating new data points from existing data. This includes adding minor alterations to data or using machine learning models to generate new data points in the latent space of original data to amplify the dataset.
a. Rotating the image randomly by 50 degrees using the rotation_range parameter.
b. Translating the image randomly horizontally or vertically by a 0.2 factor of the image's width or height using the width_shift_range and the height_shift_range parameters.
c. Applying shear-based transformations randomly using the shear_range parameter with a value of 0.2, and zooming randomly by 0.2 using the zoom_range parameter (a sketch of this augmentation setup is given below).
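These settings match the parameter names of the Keras ImageDataGenerator API; a minimal sketch of the configuration described above (the directory layout in the usage comment is an assumption):
```python
# Minimal sketch of the augmentation settings listed above (Keras ImageDataGenerator)
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rotation_range=50,        # random rotation of up to 50 degrees
    width_shift_range=0.2,    # random horizontal shift as a fraction of the width
    height_shift_range=0.2,   # random vertical shift as a fraction of the height
    shear_range=0.2,          # random shear transformation
    zoom_range=0.2,           # random zoom
)
# Example usage (assumed directory layout, not taken from the paper):
# train_gen = train_datagen.flow_from_directory("corn_leaf/train",
#                                               target_size=(224, 224), batch_size=32,
#                                               class_mode="categorical")
```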
2. VGG-16 With Data Augmentation.
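A sketch of how such an augmented generator can be combined with the frozen VGG-16 base and a small classification head is given below; the head size, optimizer and epoch count are assumptions for illustration, not the exact training configuration of this work.
```python
# Minimal sketch: frozen VGG-16 base plus a new classification head trained on augmented data
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # only the new head is trained

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),    # assumed head size
    layers.Dropout(0.3),                     # assumed regularisation
    layers.Dense(4, activation="softmax"),   # four corn leaf classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_gen, validation_data=val_gen, epochs=20)  # generators built as sketched earlier
```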
Below are the model metrics for VGG-16 – model accuracy and loss after fine-tuning the pretrained model with data augmentation.
The table above shows the training and validation accuracy for the two models with and without data augmentation. The first model, VGG-16, gives a validation accuracy of 98% without data augmentation, but with data augmentation its validation accuracy rises to 99.25%. The second model, VGG-19, gives a validation accuracy of 97.55% without data augmentation, but with data augmentation it gives a validation accuracy of 98.75%.
V. CONCLUSION
The main goal of this research work was to train our model with and without data augmentation. We can conclude that, by using data augmentation with our model, we can enlarge our data and introduce new data points derived from the existing data, enabling the model to achieve better accuracy and predictions.
Copyright © 2022 Darshil Maru. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET48406
Publish Date : 2022-12-26
ISSN : 2321-9653
Publisher Name : IJRASET