IJRASET Journal for Research in Applied Science and Engineering Technology
Authors: Preetheka C V, Shyamala S, Subhiksha K, Karthikeyan B
DOI Link: https://doi.org/10.22214/ijraset.2022.43569
Diabetic Retinopathy (DR) is an eye condition caused by diabetes that can lead to vision loss or blindness. It has affected over 290 million people worldwide, including 69.2 million in India, and the number of affected individuals is expected to rise sharply in the coming years. Manual diagnosis of DR by ophthalmologists is time-consuming, costly, and prone to misdiagnosis, unlike computer-aided diagnosis systems. This paper focuses on classifying the severity of DR as Mild DR or Severe DR using a Convolutional Neural Network approach. The dataset used is publicly available on Kaggle. The raw input images are converted to grayscale and then processed by a trained convolutional neural network model, which extracts features from the fundus images. A standard report containing the patient details and the severity of DR, along with the affected percentage, is generated in PDF format.
I. INTRODUCTION
Diabetic Retinopathy (DR) is a complication of diabetes caused by high blood sugar levels, which damages the retinal nerves and inflames the blood vessels, eventually resulting in partial or complete loss of vision [1]. It cannot be cured, but it can be prevented by keeping blood glucose levels in check. Diabetic Retinopathy is detected by ophthalmologists using various eye tests, and it can be kept in check by monitoring blood glucose levels and by regular eye check-ups. Hence, the proposed system is a machine learning model designed to detect Diabetic Retinopathy effectively and reliably. The figure below compares a normal retina with a retina affected by diabetic retinopathy.
There are two main stages of Diabetic Retinopathy: non-proliferative DR (NPDR) and proliferative DR (PDR).
II. EXISTING WORK
Research has been carried out on methods for the classification of DR, with encouraging results. Gardner et al. used a small dataset of around 200 images, split each image into patches, and required clinicians to label the features of the patches before an SVM was applied. Using neural networks and pixel intensity values, they achieved sensitivity and specificity of 88.4% and 83.5%, respectively, for binary (yes/no) classification of DR.
Thanapong et al. developed a method for extracting blood vessels from retinal fundus images based on a fuzzy C-median clustering algorithm, proposing an automated way of detecting and extracting the blood vessels in retinal images [7]. The proposed algorithm is composed of three steps:
Matched filtering, Fuzzy C-Median (FCMED) clustering, Label filtering.
Matched filtering is used to enhance visualization of the blood vessels. Fuzzy C-Median (FCMED) clustering is used to keep the spatial structure of vascular tree segments. Label filtering is used to remove misclassified pixels. This algorithm has been evaluated in terms of specificity and sensitivity and performs well in analyzing anatomical structures in retinal images.
Acharya et al. created an automated method for identifying and classifying DR. Features extracted from the raw data using a higher-order spectra method, which capture changes in the shapes and contours of the images, are fed into an SVM classifier. This SVM-based method showed an average accuracy of 82%, sensitivity of 82%, and specificity of 88%.
III. PROPOSED WORK
In this paper, we propose the detection of Diabetic Retinopathy in retinal fundus images using a Convolutional Neural Network (CNN). Convolutional networks (ConvNets) have recently seen great success in large-scale image and video recognition. For the CNN, we deploy the VGG-16 architecture.
This architecture achieves 92.7% top-5 test accuracy on the ImageNet dataset, which contains 14 million images belonging to 1000 classes.
The process of detecting Diabetic Retinopathy is as follows:
A. Data Set
The dataset used for this paper is taken from the Kaggle APTOS competition. It is a large set of high-resolution retina images collected for the Asia Pacific Tele-Ophthalmology Society (APTOS) Symposium under a variety of imaging conditions. Each image is labelled with a 12-character code containing both letters and digits.
For example, the image labelled 017a165a0bb0.png is a random sample from the dataset. Special cameras and camera models are required to capture these images; fundus photography is used to capture the image of the damaged or healthy retina.
The data is unclean and noisy. There are 4088 training images with three class labels (No DR, Mild DR, Severe DR). The dataset consists of color photographs that vary in dimensions.
B. Pre-processing
In preprocessing, the raw image is converted into a grayscale image and resized. The RGB-to-grayscale conversion is done for ease of processing.
1. Resizing the Images: The purpose of resizing is to bring all images, which differ in size, to a single fixed two-dimensional size (height and width). Since the network expects fixed-size inputs, resizing is essential, and it also reduces processing time. The images were resized using Python's OpenCV (cv2) package, as sketched below.
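A minimal sketch of this preprocessing step is given below. It assumes OpenCV's Python bindings (cv2); the target size of 224 x 224 pixels is an assumption, since the paper does not state the exact dimensions used.

import cv2

IMG_SIZE = 224  # assumed target size; the paper does not state the exact dimensions

def preprocess_image(path, size=IMG_SIZE):
    # Read the fundus image (BGR), convert it to grayscale, and resize it
    # to a fixed height and width so every image matches the network input.
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (size, size))
    return resized

# Example usage with a sample file name from the dataset:
# img = preprocess_image("017a165a0bb0.png")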
C. Feature Extraction
We have used the Gray Level Co-occurrence Matrix (GLCM) method to extract the features of the images.
We consider the following features of the retinal images, derived from the GLCM matrix.
1. Contrast: Contrast measures the local variations in the grey levels of the pixels in the co-occurrence matrix.
2. Dissimilarity: Dissimilarity is a measure of local intensity variation, defined as the mean absolute difference between neighboring pairs of pixels. A larger value corresponds to a greater disparity in intensity values among neighboring pixels.
3. Homogeneity: Homogeneity measures how close the distribution of the GLCM elements is to the GLCM diagonal. Homogeneous images have few distinct grey-level values, which produces high co-occurrence probabilities and therefore high feature values; homogeneity equals 1 for a purely diagonal GLCM.
4. Angular Second Moment (ASM): The Angular Second Moment represents the uniformity of the distribution of grey levels in the image. The standard definitions of these four features appear as comments in the sketch at the end of this subsection.
Before feeding the images to the CNN, each image is converted to an array and its values are scaled to the range 0 to 1. We train the network for 30 epochs, use the Adam optimizer, and set the batch size to 32. Because only 120 images are available for training, we generate additional images from the existing dataset by passing parameters such as rotation range, shear range, zoom range, horizontal flip, width shift range, and height shift range to the image data generator, so that the model is trained with more images.
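A minimal sketch of the GLCM feature-extraction step described above, assuming scikit-image's graycomatrix and graycoprops functions (the paper does not name a library; scikit-image versions before 0.19 spell these greycomatrix and greycoprops). The comments give the standard GLCM definitions of the four features.

from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image, distances=(1,), angles=(0,)):
    # With P(i, j) the normalized GLCM entry, the standard definitions are:
    #   contrast      = sum_{i,j} P(i, j) * (i - j)^2
    #   dissimilarity = sum_{i,j} P(i, j) * |i - j|
    #   homogeneity   = sum_{i,j} P(i, j) / (1 + (i - j)^2)
    #   ASM           = sum_{i,j} P(i, j)^2
    glcm = graycomatrix(gray_image, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {
        "contrast": graycoprops(glcm, "contrast")[0, 0],
        "dissimilarity": graycoprops(glcm, "dissimilarity")[0, 0],
        "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
        "asm": graycoprops(glcm, "ASM")[0, 0],
    }

# Example usage on a preprocessed 8-bit grayscale fundus image:
# features = glcm_features(preprocess_image("017a165a0bb0.png"))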
D. CNN Classifier
Convolutional Neural Networks (CNN or ConvNet) are complex feed-forward neural networks. CNNs are used for image classification and recognition because of their high accuracy.
CNNs are particularly useful for finding patterns in images to recognize objects, faces, and scenes. They can also be quite effective for classifying non-image data such as audio, time series, and signal data. A sketch of the classifier and training setup used here follows.
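This is a minimal sketch assuming a Keras/TensorFlow implementation: the VGG-16 backbone, the three classes, the Adam optimizer, the batch size of 32, the 30 epochs, and the augmentation parameter names come from the text, while the classification head, the input size, the numeric augmentation values, and the train/ directory layout are assumptions. Since VGG-16 expects 3-channel input, the grayscale images are assumed to be replicated across three channels by the data generator.

from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 3                # No DR, Mild DR, Severe DR
INPUT_SHAPE = (224, 224, 3)    # assumed input size for the VGG-16 backbone

# VGG-16 backbone with a small classification head (the head is an assumption).
base = VGG16(weights="imagenet", include_top=False, input_shape=INPUT_SHAPE)
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Augmentation parameters listed in the paper; the numeric values are assumptions.
datagen = ImageDataGenerator(rescale=1.0 / 255,
                             rotation_range=20,
                             shear_range=0.2,
                             zoom_range=0.2,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

# "train/" with one sub-folder per class is a hypothetical directory layout.
train_gen = datagen.flow_from_directory("train",
                                        target_size=INPUT_SHAPE[:2],
                                        batch_size=32,
                                        class_mode="categorical")
model.fit(train_gen, epochs=30)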
IV. RESULTS AND DISCUSSION
In this project, we have observed and classified the retinal fundus images as No DR, Mild DR, or Severe DR depending upon the affected percentage.
The model accuracy and loss are shown in Fig 8 and Fig 9.
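As noted in the abstract, a standard PDF report with the patient details, the predicted severity, and the affected percentage is generated. A minimal sketch of such a report is given below; it assumes the fpdf library, and the patient fields and the mapping from class probabilities to an affected percentage are hypothetical, since the paper does not specify them.

from fpdf import FPDF  # assumed library; the paper does not name one

LABELS = ["No DR", "Mild DR", "Severe DR"]

def generate_report(patient_name, patient_id, probabilities, path="dr_report.pdf"):
    # Pick the most probable class and report its probability as the
    # affected percentage (a simplifying assumption for illustration).
    severity = LABELS[probabilities.index(max(probabilities))]
    affected_pct = 100 * max(probabilities)

    pdf = FPDF()
    pdf.add_page()
    pdf.set_font("Helvetica", size=12)
    pdf.cell(0, 10, "Diabetic Retinopathy Screening Report", ln=True)
    pdf.cell(0, 10, f"Patient name: {patient_name}", ln=True)
    pdf.cell(0, 10, f"Patient ID: {patient_id}", ln=True)
    pdf.cell(0, 10, f"Predicted severity: {severity}", ln=True)
    pdf.cell(0, 10, f"Affected percentage: {affected_pct:.1f}%", ln=True)
    pdf.output(path)

# Example: generate_report("Jane Doe", "P-001", [0.10, 0.75, 0.15])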
V. CONCLUSION
Diabetes affects a large proportion of the population and is the main cause of DR. With proper treatment, its symptoms can be minimized. Therefore, a model must be developed that can detect DR without expert guidance. CNN is a deep learning algorithm that works effectively and accurately for image classification. In this project, we have also examined the accuracy and loss on the training and validation images. We implemented the algorithm by modifying parameters such as the number of convolution and pooling layers and the optimizer to achieve better efficiency. As future work, we would train the model on more datasets so that it can identify and classify diabetic retinopathy more reliably, which would help physicians.
REFERENCES
[1] APTOS Blindness Detection Dataset, available at https://www.kaggle.com/c/aptos2019-blindness-detection/data
[2] M. P. Paing, S. Choomchuay and M. D. Rapeeporn Yodprom, "Detection of lesions and classification of diabetic retinopathy using fundus images," 9th Biomedical Engineering International Conference (BMEiCON), Luang Prabang, 2016, doi: 10.1109/BMEiCON.2016.7859642.
[3] R. D. Badgujar and P. J. Deore, "Region Growing Based Segmentation Using Forstner Corner Detection Theory for Accurate Microaneurysms Detection in Retinal Fundus Images," 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 2018, pp. 1-5, doi: 10.1109/ICCUBEA.2018.8697671.
[4] R. Afrin and P. C. Shill, "Automatic Lesions Detection and Classification of Diabetic Retinopathy Using Fuzzy Logic," International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), 2019, doi: 10.1109/icrest.2019.8644123.
[5] P. Mohanaiah, P. Sathyanarayana and L. GuruKumar, "Image Texture Feature Extraction Using GLCM Approach," International Journal of Scientific and Research Publications, vol. 3, issue 5, May 2013, ISSN 2250-3153.
[6] Y. Sun and D. Zhang, "Diagnosis and Analysis of Diabetic Retinopathy based on Electronic Health Records," IEEE Access, vol. 7, pp. 86115-86120, 2019.
[7] A. B. Mansoor, Z. Khan, A. Khan and S. A. Khan, "Enhancement of exudates for the diagnosis of diabetic retinopathy using Fuzzy Morphology," 2008 IEEE International Multitopic Conference, Karachi, 2008, pp. 128-131, doi: 10.1109/INMIC.2008.4777722.
Copyright © 2022 Preetheka C V, Shyamala S, Subhiksha K, Karthikeyan B. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET43569
Publish Date : 2022-05-30
ISSN : 2321-9653
Publisher Name : IJRASET