Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Prof. Jitendra Gaikwad, Sahil Jadhav, Akshit Mahale
DOI Link: https://doi.org/10.22214/ijraset.2023.57182
The detrimental impact of bacterial and fungal diseases on cotton crop yields and profitability underscores the urgency for rapid and precise field diagnosis. This paper introduces CottonVision, a mobile application leveraging deep learning for real-time identification of cotton diseases from images. By enabling farmers to capture smartphone photos of leaves, the system employs a robust convolutional neural network, specifically an Inception-v3 model, trained on a dataset of over 2300 cotton crop images. The application classifies these images into four distinct categories: diseased cotton leaf, diseased cotton plant, fresh cotton leaf, and fresh cotton plant. Deployed on Android devices via TensorFlow Lite, the optimized model achieves 97% test accuracy. CottonVision furnishes farmers with instant diagnostic results crucial for early intervention. By facilitating prompt identification of emerging infections, the application aids in curtailing further spread and implementing timely control measures. Its user-friendly interface offers an accessible and practical solution, empowering growers with real-time decision support for efficient disease management in cotton crops.
I. INTRODUCTION
Cotton crops face significant challenges due to the pervasive threats posed by bacterial and fungal infections. The early visual identification of these pathogens stands as a critical hurdle, causing delays in response and amplifying economic losses across global agricultural landscapes [1, 2]. In response to these challenges, this paper introduces CottonVision, an innovative Android application engineered to revolutionize disease diagnosis in cotton farming.
CottonVision harnesses the convergence of cutting-edge computer vision methodologies and mobile technology to provide an immediate, on-field solution for diagnosing major cotton illnesses. The application operates seamlessly, utilizing smartphone-captured images and a sophisticated deep neural network architecture [3] to swiftly categorize various cotton plant components as either afflicted or healthy. By offering real-time visual alerts, CottonVision empowers farmers with timely information on emerging infections, enabling swift and informed decision-making crucial for disease mitigation.
The primary contributions of this research are multifaceted: a cotton disease classifier built on a fine-tuned Inception-v3 network trained on over 2300 images, an optimized TensorFlow Lite pipeline for on-device inference, and a farmer-facing Android application that delivers real-time diagnostic results in the field.
The culmination of technological advancements in CottonVision heralds a new era in cotton disease management. By bridging the gap between technological innovation and agricultural needs, this application stands as a beacon of efficient disease mitigation in cotton crops.
Its implementation not only ensures timely interventions but also safeguards crop health, thereby optimizing yield outcomes and setting a precedent for enhancing agricultural sustainability. This paper outlines the comprehensive capabilities of CottonVision, underscoring its pivotal role in transforming the landscape of disease management in cotton cultivation, ultimately paving the way for a more resilient and productive agricultural sector.
II. RELATED WORK
Earlier works have explored various machine learning techniques for plant disease recognition. Classical approaches include support vector machines (SVM), k-nearest neighbors (KNN), and random forests, relying on hand-crafted features such as lesion appearance, shape, and texture [4]-[6].
Recent focus has shifted to deep convolutional neural networks (CNNs), which automatically learn hierarchical discriminative features [7]-[13]. Mohanty et al. [7] developed an early five-layer CNN architecture and training framework for classifying 14 crop diseases. Lu et al. [8] used a similar CNN to diagnose fusarium wilt in cotton with 96.7% accuracy. Liu et al. [9] combined semantic segmentation with CNN feature extraction using ResNet-50 to identify cotton leaf spots.
Several studies have investigated the use of transfer learning with advanced CNNs pretrained on ImageNet. Ramcharan et al. [10] compared models like ResNet-50, VGG-16, Inception-v3, and NasNet for detecting cassava mosaic disease, finding NasNet produced superior performance. Ferentinos [11] comprehensively reviewed deep learning techniques for plant pathology, identifying CNNs as state-of-the-art. Durmus et al. [12] evaluated optimizations like quantization to compress models for mobile devices.
However, few works have focused on deploying high-accuracy architectures to farmer-ready apps. Our CottonVision system addresses this need via a specialized smartphone application utilizing TensorFlow Lite for on-device cotton disease diagnosis.
III. METHODOLOGY
A. Dataset
The cotton disease classifier employed in this study utilizes an extensive Kaggle image dataset spanning four distinct categories: diseased cotton leaf, diseased cotton plant, fresh cotton leaf, and fresh cotton plant [12]. This meticulously labelled dataset comprises over 2300 images, divided into an 80/20 split between training and test sets. To augment the diversity and variability of the dataset, augmentation techniques such as flipping, rotation, and zooming are employed to enrich the training data.
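The augmentations named above can be sketched with Keras's `ImageDataGenerator`; the exact parameter values (rotation range, zoom factor) are illustrative assumptions, not values reported in the paper:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Generator mirroring the flip/rotation/zoom augmentations described above;
# the specific parameter values here are our assumptions.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,    # scale pixel values to [0, 1]
    horizontal_flip=True,
    vertical_flip=True,
    rotation_range=30,    # degrees
    zoom_range=0.2,
)

# A random batch stands in for real cotton images of shape (N, 299, 299, 3)
batch = np.random.randint(0, 256, size=(8, 299, 299, 3)).astype("float32")
aug_batch = next(datagen.flow(batch, batch_size=8, shuffle=False))
print(aug_batch.shape)  # (8, 299, 299, 3)
```

In practice the generator would read the labelled Kaggle directories via `flow_from_directory`, producing augmented batches on the fly during training.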
B. Model Architecture
The foundation of the cotton disease recognition system lies in a robust convolutional neural network architecture. The model development utilizes the InceptionV3 architecture, renowned for its efficacy in feature extraction and classification tasks. Leveraging multi-scale convolutional filters, InceptionV3 excels in capturing intricate hierarchical visual patterns, ranging from pixel-level details to semantic understanding.
The model architecture commences by initializing the base network with pre-trained weights sourced from the ImageNet dataset. Subsequently, enhancements include the addition of a global average pooling layer for dimensionality reduction and a softmax output layer customized for the classification of four cotton health categories: diseased cotton leaf, diseased cotton plant, healthy cotton leaf, and healthy cotton plant.
Operational within the TensorFlow/Keras framework, the model is tailored to process RGB images with dimensions of 299x299 pixels. Pre-processing routines prepare the input images for traversal through the InceptionV3 layers, with the base network's parameters kept frozen during feature extraction. The resulting feature maps undergo pooling before being fed into a dense layer for prediction generation.
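The architecture described above can be sketched as follows; the helper name `build_cottonvision_model` is ours, and `weights=None` is passed only to skip the ImageNet weight download in this dry run (the actual pipeline initializes with `weights="imagenet"`):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

def build_cottonvision_model(weights="imagenet"):
    """Frozen InceptionV3 base + global average pooling + 4-way softmax head."""
    base = InceptionV3(weights=weights, include_top=False,
                       input_shape=(299, 299, 3))
    base.trainable = False  # keep pretrained features frozen
    x = layers.GlobalAveragePooling2D()(base.output)  # dimensionality reduction
    outputs = layers.Dense(4, activation="softmax")(x)  # four health categories
    return Model(inputs=base.input, outputs=outputs)

# weights=None only to avoid the ImageNet download in this illustration
model = build_cottonvision_model(weights=None)
print(model.output_shape)  # (None, 4)
```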
C. Optimization
The training methodology encompasses an end-to-end approach, utilizing the Adam optimizer with default parameters and categorical cross-entropy loss function. To mitigate overfitting and enhance generalization, a batch size of 32 images is employed along with on-the-fly data augmentation techniques, including shearing, zooming, and flipping operations. Training spans 20 epochs, incorporating an early stopping mechanism to halt training when validation loss plateaus, ensuring model convergence while averting overfitting risks.
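The training configuration above can be sketched as follows. A tiny stand-in model is used so the sketch runs in seconds; in the paper, the frozen InceptionV3 classifier from Section III-B is trained with this same optimizer, loss, batch size, and early-stopping setup, for up to 20 epochs:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import EarlyStopping

# Tiny stand-in model so this sketch runs quickly; the paper trains the
# frozen InceptionV3 classifier with the same configuration.
model = models.Sequential([
    layers.Input(shape=(299, 299, 3)),
    layers.GlobalAveragePooling2D(),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",                    # Adam, default parameters
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Halt training when validation loss plateaus; patience is our assumption
early_stop = EarlyStopping(monitor="val_loss", patience=3,
                           restore_best_weights=True)

# Random placeholder arrays stand in for the augmented image batches
x = np.random.rand(64, 299, 299, 3).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 4, size=64), 4)
history = model.fit(x, y, batch_size=32, epochs=2,
                    validation_split=0.2, callbacks=[early_stop], verbose=0)
```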
D. Deployment Pipeline
Upon successful model training, the focus shifts to integrating the trained model into a mobile-friendly format for on-device inference. The model is exported and transformed into TensorFlow Lite format, optimizing its efficiency for execution on mobile devices by quantizing 32-bit floating-point weights to 8-bits, reducing memory footprint.
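The conversion step can be sketched with TensorFlow Lite's converter; `tf.lite.Optimize.DEFAULT` applies dynamic-range quantization, storing the 32-bit float weights as 8-bit integers as described above. A small stand-in network is used here, and the output filename is our assumption:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Small stand-in network; the trained classifier is converted identically.
model = models.Sequential([
    layers.Input(shape=(299, 299, 3)),
    layers.GlobalAveragePooling2D(),
    layers.Dense(4, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Dynamic-range quantization: float32 weights stored as 8-bit integers
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()

# The resulting .tflite file is bundled into the Android app's assets
with open("cotton_model.tflite", "wb") as f:
    f.write(tflite_bytes)
```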
The integration extends to the development of an Android application coded in Java/XML. This application encapsulates fundamental functionalities crucial for real-time cotton disease diagnosis by end-user farmers.
These functionalities encompass capturing new photos with the device camera, selecting saved gallery images, running on-device model inference, and displaying the diagnostic prediction with its confidence score.
This deployment pipeline not only facilitates real-world cotton disease diagnosis but also empowers end-user farmers with an accessible and user-friendly tool for efficient and timely disease management in cotton crops.
IV. RESULTS & DISCUSSION
Evaluation conducted on a held-out test set showcases the remarkable performance of the classifier, achieving an impressive accuracy rate of 97% in categorizing the four distinct cotton health states solely from images. This accuracy nears the proficiency of specialized human-level visual diagnostic capabilities, especially within the targeted disease categories. Notably, errors are confined to a small fraction of challenging cases, often associated with highly imbalanced training data distribution among classes.
By optimizing the model for mobile devices through TensorFlow Lite integration, CottonVision successfully facilitates practical, real-time diagnosis directly in the field. The seamless integration into an Android application enhances accessibility, offering a user-friendly interface tailored for farmers.
The achieved 97% accuracy signifies a significant milestone in automated cotton disease diagnosis, closely approaching the performance of experts in visually discerning cotton health states. However, despite this accuracy, there exist avenues for further enhancement. The presence of errors, predominantly within challenging instances arising from imbalanced training data, underscores the need for continual refinement and augmentation of the dataset. Fig. 2 shows training loss versus validation loss, and Fig. 3 shows the accuracy comparison.
Expanding the dataset size and diversifying disease targets could substantially augment the model's robustness and generalization capabilities. A larger dataset encompassing a broader spectrum of disease manifestations and cotton health states would fortify the model against instances where class imbalances hinder classification accuracy. Furthermore, testing additional architectures beyond InceptionV3 might yield insights into alternative approaches that could potentially improve the model's performance and resilience to challenging cases.
The app workflow is seen in Figs. 4.1-4.3. Fig. 4.1 shows the front-page redirecting users to camera capture. Fig. 4.2 gives options to take new photos or select saved gallery images. Fig. 4.3 displays the classification output - either healthy or infected cotton along with predictive confidence scores. In short, the farmer first captures or chooses a plant image through the app interface. This is fed to the on-device machine learning model for inference. Finally, the diagnostic disease prediction result is visualized, allowing early identification of infections to help mitigate impacts and yield losses.
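The on-device inference step in this workflow, implemented in the app in Java, can be illustrated with the TensorFlow Lite interpreter API (shown here in Python). The stand-in model is untrained and converted on the fly; the real app loads the trained `.tflite` file shipped with the APK:

```python
import numpy as np
import tensorflow as tf

LABELS = ["diseased cotton leaf", "diseased cotton plant",
          "fresh cotton leaf", "fresh cotton plant"]

# Untrained stand-in model converted on the fly for illustration only
stand_in = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(299, 299, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(stand_in).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A random array stands in for the captured or gallery-selected photo
image = np.random.rand(1, 299, 299, 3).astype("float32")
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])[0]
print(LABELS[int(np.argmax(probs))], f"confidence={probs.max():.2f}")
```

The app displays the top label and its softmax confidence, exactly as in Fig. 4.3.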
While CottonVision stands as an efficient tool for real-time disease diagnosis in the field, the continuous pursuit of dataset augmentation, inclusion of more disease targets, and exploration of varied model architectures remain pivotal for advancing the robustness and versatility of the system. These measures are crucial to further elevate the accuracy and reliability of CottonVision, ensuring its efficacy in aiding farmers and bolstering agricultural sustainability.
V. ACKNOWLEDGMENT
We would like to express our sincere gratitude to our professor and project guide Dr. Jitendra Gaikwad for his invaluable support, encouragement, and insights. His vision and expertise in the fields of machine learning and agricultural technology were instrumental in completing this research.
We would also like to thank the management and staff of Vishwakarma Institute of Technology, Pune for providing the computational resources and facilities that enabled this work. Finally, we acknowledge the cotton farmers who contributed images to the open-source dataset that served as the foundation for training disease recognition models. Their efforts to document real-world field infections were crucial for developing an applicable technological solution.
This research would not have been possible without guidance from dedicated teachers, provision of infrastructure from supporting institutes, and data sharing by collaborative agricultural communities. We look forward to continuing to work together applying AI to solve meaningful problems for society.
CottonVision demonstrates an applied machine learning pipeline that empowers cotton farmers to visually diagnose major crop diseases using just a smartphone camera. A high-accuracy Inception-v3 model classifies images into four informative categories to reveal infections. Converted into a mobile app via TensorFlow Lite, this allows real-time in-field testing and alerts. Early detection of threats helps curb spread and reduce yield losses from disease. Next steps include localizing the app to more regions and languages to serve global cotton growers.
[1] A. Arya et al., “Deep learning for plant disease identification: a survey,” Proc. Int. Conf. Inventive Communication and Computational Technologies (ICICCT), pp. 1075-1080, 2020.
[2] C. Liakos et al., “Machine learning in agriculture: A review,” Sensors, vol. 18, no. 8, p. 2674, 2018.
[3] J. G. A. Barbedo, “Factors influencing the use of deep learning for plant disease recognition,” Biosyst. Eng., vol. 172, pp. 84-91, 2018.
[4] S. Phadikar, J. Sil and A. Kumar, “Rice disease identification using pattern recognition techniques,” Proc. Int. Conf. Comput. Intell. Comm. Networks, pp. 442-447, 2008.
[5] F. Liu et al., “Identification of apple leaf diseases based on deep convolutional neural networks,” Symmetry, vol. 10, no. 1, p. 11, 2018.
[6] S. M. Pouriyeh et al., “Deep learning-based state detection and classification for distortion and disease in tomato plants,” Comput. Electron. Agric., vol. 174, p. 105479, 2020.
[7] S. P. Mohanty, D. P. Hughes and M. Salathé, “Using deep learning for image-based plant disease detection,” Front. Plant Sci., vol. 7, p. 1419, 2016.
[8] J. Lu et al., “Detection of fusarium wilt of radish based on spectral analysis and deep learning,” Comput. Electron. Agric., vol. 164, p. 104891, 2019.
[9] X. Liu et al., “Cotton leaf spot disease identification with deep learning convolution neural networks,” Front. Plant Sci., vol. 11, p. 1107, 2020.
[10] J. Ramcharan et al., “Deep learning for image-based cassava disease detection,” Front. Plant Sci., vol. 8, p. 1852, 2017.
[11] K. P. Ferentinos, “Deep learning models for plant disease detection and diagnosis,” Comput. Electron. Agric., vol. 145, p. 105685, 2018.
[12] S. Durmus et al., “Determinants of model performance for plant disease detection: A case study using vineyard imagery,” Comput. Electron. Agric., vol. 178, p. 105826, 2020.
[13] J. G. A. Barbedo, “Factors influencing the use of deep learning for plant disease recognition,” Biosyst. Eng., vol. 172, pp. 84-91, 2018.
Copyright © 2023 Prof. Jitendra Gaikwad, Sahil Jadhav, Akshit Mahale. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET57182
Publish Date : 2023-11-29
ISSN : 2321-9653
Publisher Name : IJRASET