Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Shivaprasad T K, Sameeksha, Sathvik K S, Yashmitha Yashawantha P
DOI Link: https://doi.org/10.22214/ijraset.2023.51209
Diabetic Retinopathy (DR) results from damage to the blood vessels inside the delicate tissue at the back of the eye, the retina. Fundus images produced by fundus cameras are frequently imperfect: they are typically hazy and low in contrast. The risk of blindness in DR patients can be reduced with early detection and treatment. Detecting and documenting diabetic retinopathy is a difficult and error-prone procedure because capturing high-quality photographs with colour fundus photography is challenging. DR classes can be identified and categorised early using machine learning techniques and a variety of screening criteria, but the accuracy of these methods is often below average. This shortcoming can be remedied by applying a different technique, deep learning, which trains a model on labelled fundus images and learns discriminative features across the DR classes. This allows experts to build models that classify unseen images into the appropriate class or severity level with acceptable accuracy. This paper proposes a DenseNet201 model to classify fundus images into the correct severity class. We trained the model with 1914 images per class and used checkpoints to save the best weights automatically.
I. INTRODUCTION
According to the Malaysian Diabetic Retinopathy Screening Group, modern lifestyle, environmental, and societal factors are major contributors to diabetes, which is acknowledged as a significant global public health issue. The retina is a thin layer of tissue in the human eye that is responsible for vision: it converts incoming light into nerve signals, which are transmitted to the brain and interpreted as visual perception. Diabetes can damage the retina, leading to a condition called diabetic retinopathy (DR), where "retino" refers to the retina and "pathy" means disease.
Three stages of DR can be broadly distinguished: diabetes without retinopathy, non-proliferative DR, and proliferative DR. Diabetes without retinopathy refers to the earliest stage, in which changes are only visible under microscopic examination. Increased vascular or capillary permeability allows large molecules such as proteins and lipids to leak out of the vessels; the leaked fluid becomes trapped and can be identified with sophisticated eye testing. This leakage may also cause swelling, which appears as yellow and white deposits on the retina and macula, the so-called hard exudates. In the second stage of DR, microaneurysms and hard exudates can be found, which leads to ischemia, i.e., insufficient oxygen delivery to the retinal cells. In response, vascular endothelial growth factor drives the formation of new blood vessels in the retina in an attempt to counteract the ischemia. Proliferative DR, which causes hazy vision, is the end stage of DR. Once DR reaches this third stage, it becomes proliferative and extremely dangerous, and can lead to total and irreversible blindness.
Moreover, it has been demonstrated that very few ophthalmologists specialise in treating diabetes patients, so the need for automated DR detection systems grows along with the global population of diabetes patients. Ophthalmologists diagnose and monitor a range of eye diseases, including DR cases, using fundus images. However, fundus photographs of the retina taken with fundus cameras are frequently inadequate: they are typically hazy and have low contrast, and fundus images in DR cases are often blurry whether they are bright or dark. Hence, making an accurate diagnosis of DR is exceedingly challenging for a variety of reasons.
Several previous studies applied deep learning to fundus images to improve diagnostic reliability and reduce reliance on human experts. Such systems help the ophthalmologist performing the screening test to determine whether the captured fundus image shows signs of DR, so that the patient can be referred to a specialist for further diagnosis.
The most significant contribution of the current work is a transfer learning-based DenseNet201 model that classifies fundus images into the appropriate DR severity class, trained on a balanced dataset and compared against existing models such as InceptionResNetV2, MobileNet, and NASNetLarge.
II. RELATED WORK
Artificial neural networks (ANNs) have been used extensively for automated medical diagnosis. Among traditional ANN architectures, the feedforward multilayer perceptron (MLP) is the most well-known and most widely used. Earlier research has applied MLPs trained with the backpropagation (BP) method to domains such as prostate cancer, Alzheimer's disease, and colon cancer. The BP algorithm, in its simplest form, is one of the most commonly used MLP training methods, and subsequent work has tried to improve the accuracy of MLPs trained with LM algorithms at detecting the presence of diabetes.
ANN applications have had a considerable impact on how fundus images are interpreted, and many researchers have successfully applied ANNs to DR detection. A multi-layer perceptron (MLP) network was built for classification using 150 fundus images from two different hospitals, achieving 94.11% accuracy. Another study used MLP and Radial Basis Function (RBF) classifiers to obtain 95% sensitivity and 75% accuracy on 5,000 validated images from an 80,000-image fundus dataset. A classifier was then proposed for identifying hard exudates in fundus images, with the MLP detecting 95.9% of exudates compared to 85.7% for the RBF.
Elsias et al. proposed a method to automatically detect lesions associated with diabetic retinopathy, irrespective of the dataset, and to categorise the detected lesions using deep learning. In the first stage of the proposed strategy, diabetic retinopathy data are gathered from several datasets to create a data pool, and Faster RCNN is used to identify lesions and designate regions of interest. In the second stage, the obtained images are classified using transfer learning and attention techniques. The proposed method is based on a convolutional neural network that can be divided into three modules: encoder, attention, and decoder; this network is known as EAD-Net. EAD-Net takes fundus scans and performs normalisation and augmentation, automatic feature extraction, and pixel-by-pixel label prediction. The EAD-Net technique represents a breakthrough clinical DR diagnosis method, yielding excellent results for the segmentation of four different lesion types. Gharaibeh outlines a novel technique for examining fundus photographs for the presence of microaneurysms and haemorrhages; the authors used Gaussian interval type-2 fuzzy membership functions and subgroup optimisation to find lesions associated with diabetic retinopathy. The experimental results were produced with a MATLAB modelling programme using the DR2 and Messidor datasets.
III. METHODOLOGY
The suggested system goes through six stages: data collection, data pre-processing, building a transfer learning-based deep learning model, training, model evaluation, and fine-tuning. The stages are carried out in that order.
The initial stage involves gathering retinal images from freely accessible datasets such as Kaggle and EyePACS. The proposed work utilised 9573 fundus images in total. The number of images in the dataset for the healthy and diabetic retinopathy classes was then checked, and the dataset was divided into training and testing sets in an 80:20 ratio, as shown in Table I; a minimal sketch of this splitting step is given after the table below.
TABLE I
TRAINING AND TESTING DATASET DISTRIBUTION

| Type of Image    | Training data | Testing data |
|------------------|---------------|--------------|
| No DR            | 1532          | 383          |
| Mild DR          | 1532          | 383          |
| Moderate DR      | 1532          | 383          |
| Severe DR        | 1532          | 383          |
| Proliferative DR | 1531          | 382          |
| TOTAL            | 7659          | 1914         |
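The paper does not show how the 80:20 split was produced. The following is a minimal sketch, assuming one source folder per severity class; the folder names and output layout are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch of a stratified 80:20 train/test split, assuming one source
# folder per severity class (e.g. dataset/No_DR, dataset/Mild_DR, ...).
import os
import random
import shutil

random.seed(42)
classes = ["No_DR", "Mild_DR", "Moderate_DR", "Severe_DR", "Proliferative_DR"]

for cls in classes:
    images = os.listdir(os.path.join("dataset", cls))
    random.shuffle(images)
    split = int(0.8 * len(images))  # 80% of each class for training
    for subset, names in (("train", images[:split]), ("test", images[split:])):
        out_dir = os.path.join("data", subset, cls)
        os.makedirs(out_dir, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join("dataset", cls, name),
                        os.path.join(out_dir, name))
```

Splitting each class separately keeps the per-class proportions of Table I intact in both subsets.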
In the second phase, data pre-processing is performed. All images are resized to a consistent size suitable for input to the deep learning model, such as 224 × 224 pixels, and pixel values are normalised to the range 0-1. Data augmentation techniques such as rotation, horizontal flipping, and zooming are then applied to increase the dataset's effective size and improve the model's generalisation capability.

In the third phase, a transfer learning-based deep learning model is deployed. In our proposed system, we used the pre-trained DenseNet201 model. The model is trained on the training dataset over a predetermined number of epochs (for example, 50) or until its performance converges. The model's performance is monitored during training, early stopping is applied to prevent overfitting, and the best-performing weights are saved via checkpoints.

In the fourth phase, model evaluation is performed. The trained model is evaluated on the testing dataset to measure its performance on unseen data. Accuracy, precision, F1 score, and the Area Under the receiver operating characteristic Curve (AUC) are calculated as performance metrics. The proposed DenseNet201 model's performance was then compared with other existing models for DR detection such as InceptionResNetV2, MobileNet, and NASNetLarge.
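The paper gives no source code, so the listing below is a minimal sketch, assuming TensorFlow/Keras and the data/train and data/test directory layout from the earlier splitting sketch, of how phases two to four could be implemented; the classifier head, batch size, and augmentation ranges are illustrative assumptions.

```python
# Sketch of phases 2-4: pre-processing with augmentation, a DenseNet201
# transfer-learning model, checkpointed training with early stopping,
# and evaluation on the held-out test set.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE, NUM_CLASSES = (224, 224), 5

# Phase 2: resize to 224x224, rescale pixels to [0, 1], augment the training set.
train_gen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                               horizontal_flip=True, zoom_range=0.2)
test_gen = ImageDataGenerator(rescale=1.0 / 255)

train_data = train_gen.flow_from_directory("data/train", target_size=IMG_SIZE,
                                           batch_size=32, class_mode="categorical")
test_data = test_gen.flow_from_directory("data/test", target_size=IMG_SIZE,
                                         batch_size=32, class_mode="categorical",
                                         shuffle=False)

# Phase 3: DenseNet201 base (ImageNet weights, frozen) plus a small custom head.
base = DenseNet201(weights="imagenet", include_top=False,
                   input_shape=(*IMG_SIZE, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Checkpoint the best weights and stop early if validation loss stops improving.
callbacks = [
    ModelCheckpoint("best_densenet201.h5", monitor="val_loss", save_best_only=True),
    EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True),
]
model.fit(train_data, validation_data=test_data, epochs=50, callbacks=callbacks)

# Phase 4: evaluate on the unseen test set.
loss, acc = model.evaluate(test_data)
print(f"Test accuracy: {acc:.4f}")
```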
Upon completing phase five, if the model's performance is not satisfactory, some of the base model's layers can be unfrozen and fine-tuned along with the custom classification layers; the training and testing process described above is then repeated.
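As a rough illustration of this fine-tuning step, the sketch below continues the assumed Keras setup from the previous listing; the number of unfrozen layers, the lower learning rate, and the epoch count are assumptions, not values reported by the paper.

```python
# Sketch of optional fine-tuning: unfreeze the top of the DenseNet201 base and
# retrain it together with the classification head at a lower learning rate.
# Assumes `model`, `base`, `train_data`, `test_data`, and `callbacks` from the
# previous sketch; the 100-layer cutoff and 1e-5 learning rate are assumptions.
base.trainable = True
for layer in base.layers[:-100]:   # keep the earlier layers frozen
    layer.trainable = False

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_data, validation_data=test_data, epochs=20, callbacks=callbacks)
model.evaluate(test_data)
```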
IV. RESULTS
The table below (Table II) shows the classification performance of the DenseNet201 model for different DR stages.

TABLE II
DIABETIC RETINOPATHY CLASSIFICATION RESULTS OF THE TRAINED DENSENET201 MODEL

| Actual Stage | Predicted Stage | Prediction (%) |
|--------------|-----------------|----------------|
| 0            | 0               | 45.24          |
| 0            | 1               | 29.53          |
| 1            | 1               | 46.43          |
| 2            | 2               | 39.75          |
| 3            | 3               | 47.47          |
| 4            | 4               | 65.68          |
| 4            | 4               | 74.32          |
| 1            | 0               | 32.54          |
| 2            | 3               | 39.72          |
From the above table, it can be noted that the DenseNet201 model produced the highest performance for the No DR and proliferative DR cases (Stage 0 and Stage 4), while a few errors occurred for the other, intermediate stages, as shown in Table II.
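The "Predicted Stage" and "Prediction (%)" columns of Table II correspond to the arg-max class and its softmax probability for each test image. The sketch below shows one way such per-image predictions and the summary metrics could be computed, continuing the assumed Keras setup from the training sketch; the use of scikit-learn here is an assumption.

```python
# Sketch of per-image predictions (as in Table II) and summary metrics,
# assuming `model` and the unshuffled `test_data` from the training sketch.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

probs = model.predict(test_data)           # softmax probabilities, shape (N, 5)
pred_stage = probs.argmax(axis=1)          # predicted stage 0-4
confidence = probs.max(axis=1) * 100       # "Prediction (%)" column
actual_stage = test_data.classes           # ground-truth labels (generator unshuffled)

print("Accuracy:", accuracy_score(actual_stage, pred_stage))
print("AUC (one-vs-rest):", roc_auc_score(actual_stage, probs, multi_class="ovr"))
for a, p, c in zip(actual_stage[:5], pred_stage[:5], confidence[:5]):
    print(f"actual={a}  predicted={p}  prediction={c:.2f}%")
```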
V. CONCLUSION
Diabetic retinopathy (DR) can endanger patients' vision if it is found only after it is already advanced. To reduce the risk of blindness, DR must be identified and treated as soon as possible. Manual diagnosis becomes ineffective as DR symptoms worsen, so automatic DR diagnosis saves time, money, and effort. The current work has demonstrated that fundus images of the retinal portion of the eye affected by diabetic retinopathy can be classified into the correct severity class with high accuracy when using transfer learning with deep learning models. We employed a DenseNet201 model in our system, trained on the acquired dataset. Our suggested model achieved an accuracy of 73% in classifying the fundus images. An AUC (Area Under the receiver operating characteristic Curve) of 74% indicates fewer type II errors, which is particularly desirable in medical classification problems. The development of deep learning approaches with transfer learning has made high accuracy attainable for vision-related problems.
REFERENCES
[1] T. Nazir, M. Nawaz, and J. Rashid, "Detection of Diabetic Eye Disease from Retinal Images Using a Deep Learning based CenterNet Model," vol. 21, no. 16, 2021.
[2] C. Lam, D. Yi, M. Guo, and T. Lindsey, "Automated Detection of Diabetic Retinopathy Using Deep Learning," vol. 147, 2019.
[3] Shubham Joshi, B. Partibane, Wesam Atef Hatamleh, Hussam Tarazi, Chandra Shekhar Yadav, and Daniel Krah, "Glaucoma Detection Using Image Processing and Supervised Learning for Classification," March 2022.
[4] Silky Goel, Siddharth Gupta, Avnish Panwar, Sunil Kumar, Madhushi Verma, Sami Bourouis, and Mohammad Aman Ullah, "Deep Learning Approach for Stages of Severity Classification in Diabetic Retinopathy Using Color Fundus Retinal Images," November 2021.
[5] R. Gargeya and T. Leng, "Automated identification of diabetic retinopathy using deep learning," Ophthalmology, vol. 124, no. 7, pp. 962-969, 2017.
[6] Q. H. Nguyen, R. Muthuraman, L. Singh et al., "Diabetic retinopathy detection using deep learning," in Proceedings of the 4th International Conference on Machine Learning and Soft Computing, Haiphong City, Vietnam, January 2020.
[7] J. K. Andersen, W. K. Juel, J. Grauslund, and T. R. Savarimuthu, "Using fully convolutional networks for semantic segmentation of diabetic retinopathy lesions in retinal images," in Proceedings of the International Conferences on Modelling, Simulation and Identification, Alberta, Canada, July 2018.
[8] P. Porwal, S. Pachade, R. Kamble et al., "Indian diabetic retinopathy image dataset (IDRiD): a database for diabetic retinopathy screening research," Data, vol. 3, no. 3, p. 25, 2018.
[9] R. Sarki, K. Ahmed, H. Wang, and Y. Zhang, "Automated Detection of Mild and Multi-Class Diabetic Eye Diseases Using Deep Learning," vol. 8, no. 1, 2020.
[10] S. H. Khan, Z. Abbas, and S. D. Rizvi, "Classification of Diabetic Retinopathy Images Based on Customized CNN Architecture," June 2019.
Copyright © 2023 Shivaprasad T K, Sameeksha, Sathvik K S, Yashmitha Yashawantha P. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET51209
Publish Date : 2023-04-28
ISSN : 2321-9653
Publisher Name : IJRASET