Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Arooj Hakeem, Monika Mehra, Ravinder Pal Singh
DOI Link: https://doi.org/10.22214/ijraset.2022.47407
Diabetic retinopathy (DR) is a disease that often goes unnoticed by patients; it is an underlying condition that causes eye-related disorders through cumulative damage to the small retinal blood vessels. Both eyes can be affected, leading to partial or complete vision loss. DR is linked with uncontrolled blood sugar levels. To prevent initial damage and permanent blindness, it must be detected early. Automated diagnostic systems assist ophthalmologists in the early detection and diagnosis of severe eye complications. The proposed research achieved improvements in accuracy, loss and detection speed. A Convolutional Neural Network (CNN) combined with the Inception Version 3 (V3) model yields an accuracy of 99.35% with a loss of 0.02, and the proposed model reaches this accuracy with only 10 training epochs. The model was trained and evaluated on the Diabetic Retinopathy Detection 2015 and APTOS 2019 Blindness Detection datasets, both obtained from Kaggle, in order to develop a reliable approach for identifying different grades of diabetic retinopathy.
I. INTRODUCTION
The eyes, the organs of sight, are among the most important organs of the body and have several components. The retina senses light and creates electrical impulses that the brain processes into visual information. Numerous eye diseases result in vision loss, but one of the most common is diabetic retinopathy (DR), which is related to diabetes. Every person with diabetes is at risk of developing DR; roughly one in three individuals with diabetes has DR to some extent [1]. Biomedical imaging is a powerful way to obtain a visual representation of the internal organs of the body for clinical purposes or for the study of anatomy and physiology. The main cause of DR is an abnormal increase of the blood glucose level, which damages the vessel endothelium and increases retinal vessel permeability. The progression of DR can result in retinal detachment. DR patients are often unaware of any symptoms until visual impairment develops, by which time treatment is less effective. Using laser photocoagulation, the earlier stages of DR can be treated, which may prevent vision loss. Diabetic patients are therefore advised to undergo regular eye check-ups to screen for DR. Abnormalities associated with retinal diabetic disease include diabetic macular edema, age-related macular degeneration, cataract, conjunctivitis, and glaucoma. Most people affected by DR do not visit an eye-care professional until the condition has progressed to severe non-proliferative DR (NPDR) or proliferative DR (PDR). Moreover, the accepted procedures for identifying DR rely on ophthalmologists for detection and diagnosis, which is time-consuming and costly; it has therefore become critical to develop efficient deep learning (DL)-based methods. Severe NPDR is an advanced condition characterised by several distinct features [2], and there is roughly a 50% chance that severe NPDR will progress to PDR within a year. PDR is the most advanced stage, in which a lack of oxygen in the retina causes the growth of new, fragile blood vessels in the retina and in the vitreous, the gelatinous fluid that fills the back of the eye [3]. The stages of DR are thus mild NPDR, moderate NPDR, severe NPDR, and PDR.
A. Diabetic Retinopathy
Diabetic retinopathy is a term applied to the effects of diabetes on the eye, or more specifically on the specialized neural tissue of the eye, the retina. DR is a highly specific vascular complication of both type 1 and type 2 diabetes, with prevalence strongly related to the duration of diabetes. DR is the most frequent cause of new cases of blindness among adults aged 20-74 years. In addition to the duration of diabetes, other factors that increase the risk of, or are associated with, retinopathy include chronic hyperglycemia, nephropathy and hypertension [4]. The disease is usually asymptomatic in its early stages and, as a consequence, diabetics often do not have their eyes examined on a regular basis. However, once DR has been detected in the retina, ocular examinations by an eye-care specialist require more frequent monitoring and visits. DR affects nearly half of the population with diabetes [5]. The global prevalence of diabetes has been continually increasing, and current projections estimate that 438 million adults will be affected by 2030.
B. Diabetic Retinopathy Grading and Classification
Accurately grading diabetic retinopathy can be a significant challenge for an untrained person. The medical community has established a standardized classification based on four severity stages (Wilkinson et al., 2003), determined by the type and number of lesions (such as microaneurysms, hemorrhages and exudates) present in the retina: class 0 refers to no apparent retinopathy, class 1 to mild non-proliferative diabetic retinopathy (NPDR), class 2 to moderate NPDR, class 3 to severe NPDR, and class 4 to proliferative DR. Any of these stages can present with few or no symptoms; therefore, periodic dilated eye examinations are crucial for detecting the disease and studying its evolution. Furthermore, diabetic macular edema can develop at any of these stages due to damaged and leaky blood vessels, affecting the patient's quality of vision. The disease levels used in this work follow Wilkinson et al. (2003). Several factors contribute to the development and progression of DR.
C. Diabetic Retinopathy Detection
A reliable automatic detection method is essential, since it is vital to categorize the severity of DR and prevent its progression. Most earlier DR research relied on feature extraction using machine learning approaches; however, the difficulty of manual feature extraction led researchers to shift to deep learning. Data mining, image processing, machine learning, and deep learning are just a few of the computer-assisted technologies that have emerged from medical research. Deep learning in particular has gained popularity in recent years in fields including sentiment analysis, handwriting recognition, stock market prediction, and medical image analysis. A CNN combined with the Inception V3 model produces strong results for image classification. People with diabetes may develop diabetic retinopathy when high blood sugar levels damage the blood vessels in the retina. These blood vessels can swell and leak, or they can close, stopping blood from passing through; sometimes abnormal new blood vessels grow on the retina.
D. Convolutional Neural Networks
Classification of DR involves weighting various features and locating them in the image [6], which is tedious for clinicians. Once trained, computers can produce classifications much more quickly and consistently, giving them the potential to aid clinicians in real time. The efficacy of automated grading for DR has been an active area of research in computer imaging, with encouraging conclusions [7] [8].
Remarkable work has been done in detecting the features of DR using automated methods such as support vector machines and k-NN classifiers [9]. The majority of classification techniques perform two-class classification: DR or no DR.
Convolutional Neural Networks (CNNs), a branch of deep learning, have a splendid record in image analysis and interpretation, including medical imaging. Network architectures designed to work with image data were already being built in the 1970s [10] and surpassed other approaches on challenging tasks such as handwritten character recognition [11]. However, it was not until several breakthroughs in neural networks, such as the introduction of dropout [12] and rectified linear units [13], together with the increase in computing power provided by graphics processing units (GPUs), that they became feasible for more complex image recognition problems.
Presently, large CNNs are used to successfully tackle highly complex image recognition tasks with many object classes to an impressive standard. CNNs are used in many current state-of-the-art image classification tasks such as the annual ImageNet and COCO challenges [14] [15].
Two main problems exist within automated grading, and particularly within CNNs. The first is achieving a desirable trade-off between sensitivity (patients correctly identified as having DR) and specificity (patients correctly identified as not having DR). The second is overfitting: skewed datasets cause the network to over-fit to the class most prominent in the dataset, and large datasets are often heavily skewed.
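For concreteness, the short sketch below shows one common way to compute sensitivity and specificity from a binary DR/no-DR confusion matrix and to derive class weights that counteract a skewed dataset; the dummy labels and the "balanced" weighting scheme are illustrative assumptions, not the procedure used in this paper.

```python
# Illustrative sketch (not the authors' code): sensitivity/specificity for a
# binary DR vs. no-DR split, plus class weights to reduce over-fitting to the
# dominant class in a skewed dataset. Labels below are dummy values.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.utils.class_weight import compute_class_weight

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # 1 = DR, 0 = no DR (dummy labels)
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])   # dummy model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # patients correctly identified as having DR
specificity = tn / (tn + fp)   # patients correctly identified as not having DR
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")

# "Balanced" class weights give the minority class more influence during
# training, one common way to counteract a skewed dataset.
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y_true)
print(dict(enumerate(weights)))
```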
E. Inception Version 3
The Inception-v3 network, together with a CNN classification head, is used in the proposed research. The architecture consists of five convolutional layers, two max-pooling layers, 11 inception modules, one average-pooling layer and one fully connected (FC) layer, which produces an image-wise categorization. The Inception-v3 network groups similar sparse nodes into a dense structure to increase both the depth and width of the network while keeping the computation efficient. Inception V3 is a deep learning model based on convolutional neural networks that is used to classify images; it is an improved version of the original Inception V1 model, introduced as GoogLeNet in 2014.
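As a rough illustration of how such a network can be assembled, the sketch below builds an Inception V3 backbone with a small classification head in Keras. The 250x250 input size, the dense-layer width, the dropout rate and the use of ImageNet weights are assumptions for illustration, not the exact configuration used in this work.

```python
# A minimal sketch of an Inception V3-based classifier in Keras, assuming three
# output classes (No DR, Moderate, Severe) and 250x250 RGB inputs.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(250, 250, 3))
base.trainable = False  # use the pretrained convolutional base as a feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),  # one output unit per severity class
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```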
II. LITERATURE REVIEW
There has been a significant amount of research on this topic, so it is important to gather, interpret, classify and summarize the findings of earlier investigations. Pertinent keywords were developed to find relevant material for a thorough review of convolutional neural network-based diabetic retinopathy diagnosis systems. The search was limited to articles published in respected journals and conferences. After a selection criterion, analysis and design were defined, the search yielded 25 pertinent research publications, which were analysed in depth from several angles. Although recent improvements in diabetic retinopathy detection technologies are encouraging, there is still room for improvement in the current diagnostic methods.
Chutatape et al. (1997) [16], presented at the 19th IEEE Conference on Engineering in Medicine and Biology Society, detected exudates using thresholding and region growing. Fundus images were captured with a non-mydriatic fundus camera and digitised with a flatbed scanner. The work reports various studies on the characteristics of exudates, blood vessels, and microaneurysms, three hallmarks of diabetic retinopathy. These characteristics allow DR stages to be categorised as healthy, mildly non-proliferative, moderately non-proliferative, severely non-proliferative, and proliferative. The stages are categorised using Support Vector Machine, Random Forest, and Naive Bayes classifiers; Random Forest performed best, with accuracy, sensitivity, and specificity of 76.5%, 77.2%, and 93.3% respectively.
Hsu et al. (2000) [17], presented at the IEEE conference in the paper "An Effective Approach to Detect Lesions in Color Retinal Images", extract colour features and then use a feed-forward neural network to identify retinal lesions. Lesions are tentatively recognised using an MDD classifier based on statistical pattern-recognition techniques. An efficient pre-processing step, a brightness adjustment approach, is proposed to address the non-uniform lighting in retinal pictures; this ensures that dim lesion patches scattered over a darker background are not ignored or treated as background. A local window feature is then employed to confirm the classification outcome. With this approach they could classify retinal pictures that were actually normal with an accuracy of 70%, significantly reducing the number of retinal images that must be manually evaluated by medical specialists each year.
Kavitha and Shenbaga (2005) [18] proposed median filtering and morphological operations for blood vessel detection. They remove bright regions thought to be the optic disc or exudates using multilevel thresholding, identify the optic disc as the point where the blood vessels converge, and classify the remaining bright areas as exudates. The approach did not handle low-contrast images well. Using support vector machine (SVM) and naive Bayes classifiers, they presented a number of experiments on feature selection and exudate categorization. They first fitted the naive Bayes model to a training set made up of 15 features taken from each of 115,867 positive examples of exudate pixels and an equal number of negative examples. They then applied feature selection to the naive Bayes model, repeatedly eliminating features from the classifier until classification performance stopped improving. To find the optimal SVM, they started with the best feature set from the naive Bayes classifier and repeatedly added back previously removed features. A grid search was used to find, for each combination of features, the optimal setting of the hyperparameters: the tolerance for training errors and the radial basis function width. The best feature sets from both classifiers were then used to compare the best naive Bayes and SVM classifiers against a reference nearest-neighbour (NN) classifier.
Sopharak et al. (2011) [19], in the paper titled "Automatic microaneurysm detection from non-dilated diabetic retinopathy retinal images using mathematical morphology methods", detect microaneurysms (MAs) from retinal images using morphological operators. Mathematical morphology is used for pre-processing, and a shade-correction approach is employed to extract the blood vessels. The paper investigates a set of optimally tuned morphological operators for microaneurysm detection on non-dilated-pupil, low-contrast retinal images. The detected microaneurysms are validated against ophthalmologists' hand-drawn ground truth. The resulting sensitivity, specificity, precision and accuracy were 81.61%, 99.99%, 63.76% and 99.98%, respectively.
Kaizu et al. (2020) [20], in the paper titled "Microaneurysm imaging using multiple en face OCT angiography image averaging: morphology and visualization", showed that while OCT detects retinal vascular abnormalities, fundus images capture the interior of the retina, namely the optic disc, macula and blood vessels, and leakage of the retinal vasculature is found via fluorescein angiography (FA). There are two approaches to capturing fundus pictures; one is to dilate the pupil with tropicamide (eye drops) and then take what is known as a mydriatic fundus image. In 31 eyes from 25 individuals, 415 microaneurysms were counted and examined. Microaneurysms were detected in 144 (34.7%), 227 (54.7%), 285 (68.7%), and 306 (73.7%) of cases using a single image and 3, 5, and 10 averaged OCTA images, respectively. With more image averaging, the ability to detect microaneurysms improved greatly. There was a significant correlation between microaneurysm morphology and visibility under the image-averaging process for four morphologies, in particular the focal bulge types (P < 0.01), but no correlation between microaneurysm detection with OCTA and retinal thickness, FA leakage, indocyanine green angiogram detection, or the number of averaged images. In DR, multiple image averaging is helpful for enhancing OCTA's ability to detect microaneurysms, particularly focal bulge-type microaneurysms.
Daniel S.W. Ting et al. (2019) [21], in the paper titled "Deep learning in estimating prevalence and systemic risk factors for diabetic retinopathy", note that detecting DR from fundus images involves several steps, such as pre-processing, image segmentation, analysis, and grading of the image according to the disease's severity. To ascertain the prevalence of and systemic cardiovascular risk factors for DR on fundus pictures of patients with diabetes, the study employed a deep learning system (DLS) as the grading method rather than human assessors. This cross-sectional study included 18,912 patients (n = 93,293 images) and was multi-site (8 datasets from Singapore, the USA, Hong Kong, China, and Australia) and multi-ethnic (5 races). The time required for DR assessment by the DLS was compared with that of 17 human assessors (10 retinal specialists/ophthalmologists and 7 professional graders). For any DR, referable DR, and vision-threatening DR (VTDR), the prevalence estimates from the DLS and the human assessors were equivalent (human assessors: 15.9%, 6.5%, and 4.1%; DLS: 16.1%, 6.4%, and 3.7%).

For any DR, referable DR, and VTDR, both evaluation techniques revealed similar risk variables (with equal AUCs), such as younger age, longer diabetes duration, higher HbA1c, and higher systolic blood pressure (p > 0.05). The DLS evaluated all 93,293 fundus images in about one month, compared with two years for the human assessors. In conclusion, a DLS can accurately estimate the prevalence of and systemic risk factors for DR in a multi-ethnic community in far less time than human assessors, underlining the potential application of AI to future epidemiological or therapeutic trials requiring DR grading worldwide.
Nagpal et al. (2021) [22], in the paper titled "Recent advancement for diagnosing diabetic retinopathy", emphasize that early diagnosis and treatment of DR can help the patient reduce the risk of vision loss. The primary goal of the review is to inform readers of the research done so far on the automatic detection and grading of DR. Numerous lesion types, including MAs, haemorrhages (HEM), and exudates, are examined, along with the various methods for their identification reported in the literature. Analyses of all clinical indicators are provided to make the benefits and drawbacks of each approach clear. The authors then discuss the gaps in the literature and the future of DR research. The review is expected to be useful for scientists studying medical images.
Shefali Yadav and Prashant Awasthi (2022) [23], in the paper titled "Diabetic Retinopathy Detection Using Deep Learning and Inception-V3 Model", consider a deep learning methodology, specifically a densely connected convolutional network based on Inception-V3, applied for the early detection of diabetic retinopathy. According to the severity levels, it divides the fundus images into two categories: No DR and DR. The datasets considered are Diabetic Retinopathy Detection 2015 and APTOS 2019 Blindness Detection, both obtained from Kaggle. Their model achieved an accuracy of 88.1%. The primary goal of that effort was to create a reliable method for automatically detecting DR.
III. PROBLEM FORMULATION AND METHODOLOGY
The main objective of this work is to build a stable system for the detection of diabetic retinopathy. The work employs a deep learning methodology using a CNN and Inception V3 to detect diabetic retinopathy by severity level (No DR, Moderate and Severe). Several processes were carried out before feeding the images to the network. Models were trained in this work, and improved accuracies were then obtained during testing.
A. Research Gap
B. Objectives
C. Methodology
The major goal of this study is to develop a reliable, noise-tolerant method for diagnosing diabetic retinopathy. This study employs deep learning technology to recognize varying degrees of diabetic retinopathy severity (No DR, Moderate and Severe). Before the pictures were uploaded to the network, several procedures were finished.
4. Determining the path of the dataset: Once the dataset has been gathered and saved, it must be imported into the program for further processing. In some situations the dataset must be processed before the model is trained on it. In this case the train-test split method is used, where randomly chosen images are used for both training and testing.
5. The photographs in the dataset need to be pre-processed in order to convert them to a standard format, because the raw images are quite noisy: some are out of focus, over-exposed, unevenly lit, or have a dark background. The preliminary process involves completing the following tasks (an illustrative code sketch of these steps is given after this list):
6. Removing the black border: The black border surrounding the pictures has been removed because it does not offer any information to the fundus image and is therefore superfluous.
7. Remove the black corner: The fundus image is round, so even after the black border was removed, there were some dark corners left. In this step, the dark edges of the image are removed.
8. Image resizing: The images have been shrunk to 120*120 pixels (width*height).
9. Adding the Gaussian Blur: By setting the kernel size, the pictures are given a Gaussian blur. Gaussian noise can be reduced with the use of this method.
10. Data augmentation: Data augmentation is an integral part of deep learning, which requires large amounts of data; since it is not always feasible to collect thousands or millions of images, data augmentation comes to the rescue.
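The sketch below illustrates how the border-cropping, resizing and Gaussian-blur steps above (items 6-9) might be implemented with OpenCV; the intensity threshold, the 120x120 target size and the kernel size are assumptions for illustration rather than the exact values used in this work.

```python
# A hedged sketch of the pre-processing steps: crop the black border/corners,
# resize, and apply a Gaussian blur. Threshold and kernel size are assumed.
import cv2
import numpy as np

def preprocess_fundus(path, size=120, blur_kernel=(5, 5)):
    img = cv2.imread(path)                       # read the fundus image (BGR)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    mask = gray > 10                             # pixels that are not (near-)black
    if mask.any():
        rows, cols = np.where(mask)
        img = img[rows.min():rows.max() + 1,     # crop away the black border
                  cols.min():cols.max() + 1]
    img = cv2.resize(img, (size, size))          # image resizing (width x height)
    img = cv2.GaussianBlur(img, blur_kernel, 0)  # reduce Gaussian noise
    return img
```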
A. Operations in Data Augmentation
The most commonly used operations are geometric transforms such as horizontal and vertical flips, random rotations, zooms, and shifts of the training images.
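A sketch of such operations using Keras preprocessing layers is given below; the specific transforms and their ranges are illustrative assumptions, since the exact augmentation settings are not listed here.

```python
# Illustrative only: common augmentation operations expressed as Keras layers.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.1),            # random rotation up to +/- 10% of a full turn
    layers.RandomZoom(0.1),                # random zoom in/out by up to 10%
    layers.RandomTranslation(0.05, 0.05),  # small random height/width shifts
])

# Applied on the fly during training, e.g. images = augment(images, training=True)
```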
5. Visualizing the Designed Model: The Inception Version (V3) model is designed and visualized in order to get a clear idea of all the layers incorporated in the model design.
6. Training the Model: The model now has to be trained after being designed. Multiple epochs are used to feed the dataset into the layers, which results in the creation and application of an appropriate learning mechanism.
7. Plotting the Graph: The pattern of loss and accuracy during training and validation is plotted against the epochs. The training process is stopped and the model is saved once the training and validation parameters (training loss, training accuracy, validation loss, and validation accuracy) no longer improve as the number of epochs increases (an illustrative sketch of this training-and-plotting step follows this list).
8. Plot Confusion Matrix: The saved model is now tested, and the results obtained are plotted in the form of a confusion matrix.
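The sketch below illustrates steps 6 and 7 above, assuming a compiled Keras model (as in the earlier Inception V3 sketch) and training/validation datasets named train_ds and val_ds; the early-stopping settings are assumptions, while the 10 epochs mirror the experiments reported later.

```python
# A hedged sketch of training the model and plotting the training/validation
# accuracy and loss curves; `model`, `train_ds` and `val_ds` are assumed to be
# defined as in the other sketches.
import matplotlib.pyplot as plt
from tensorflow.keras.callbacks import EarlyStopping

stop = EarlyStopping(monitor="val_loss", patience=3,
                     restore_best_weights=True)  # stop when validation stops improving

history = model.fit(train_ds, validation_data=val_ds,
                    epochs=10, callbacks=[stop])

# Plot training/validation accuracy and loss against epochs.
for metric in ("accuracy", "loss"):
    plt.figure()
    plt.plot(history.history[metric], label=f"training {metric}")
    plt.plot(history.history[f"val_{metric}"], label=f"validation {metric}")
    plt.xlabel("epoch")
    plt.ylabel(metric)
    plt.legend()
plt.show()

model.save("dr_inceptionv3.h5")  # save the trained model for later testing
```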
IV. IMPLEMENTATION AND RESULTS
Deep learning with CNNs and the Inception V3 architecture are the techniques used here for classifying the images. Keras is used to build the CNN model, with TensorFlow providing the backend operations; Keras is also used for design, training, and testing. Matplotlib is used to plot the graphs, NumPy to carry out the mathematical calculations and handle the image arrays, and Scikit-Learn to plot the confusion matrix shown in the results section.
A. Data Pre-processing and Classifications
Pre-processing refers to the transformations applied to the data before it is fed to the algorithm; it is a method for turning unclean data into clean data sets. In other words, data acquired from various sources is raw and not directly suitable for analysis. In this instance of data pre-processing the image width and height are kept at w = h = 250. The data set consists of images of three classes: Healthy, Moderate and Severe.
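As an illustration of how such a three-class image set can be loaded at 250x250, the sketch below uses Keras' image_dataset_from_directory; the directory layout and batch size are hypothetical assumptions.

```python
# A minimal sketch, assuming the pre-processed images are stored in one folder
# per class (Healthy/, Moderate/, Severe/); the path is hypothetical.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/fundus",               # assumed layout: data/fundus/<class>/<image>.png
    labels="inferred",
    label_mode="categorical",    # one-hot labels for the 3 classes
    image_size=(250, 250),       # w = h = 250 as described above
    batch_size=32,
)
print(train_ds.class_names)      # e.g. ['Healthy', 'Moderate', 'Severe']
```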
B. Data Split To Train, Validation And Test
The train-test split method is used: the train_test_split() method splits the data into train and test sets.
First, the data is divided into features (X) and labels (y). The data frame is then divided into X_train, X_test, y_train and y_test. The X_train and y_train sets are used for training and fitting the model, while the X_test and y_test sets are used to check whether the model predicts the right outputs/labels. The sizes of the train and test sets can be set explicitly; it is advisable to keep the train set larger than the test set.
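A minimal sketch of this split with scikit-learn's train_test_split is shown below; the placeholder arrays, the 80/20 ratio, the random seed and the stratification option are assumptions, not values reported in this work.

```python
# A hedged sketch of the train/test split; X and y below are dummy stand-ins
# for the real image array and class labels.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 250, 250, 3).astype("float32")  # placeholder image array
y = np.random.randint(0, 3, size=100)                    # placeholder labels (0, 1, 2)

X_train, X_test, y_train, y_test = train_test_split(
    X, y,
    test_size=0.2,       # keep the train set larger than the test set
    random_state=42,     # reproducible split
    stratify=y,          # preserve class proportions in both sets
)
print(X_train.shape, X_test.shape)
```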
C. Confusion Matrix
Next is the “plot confusion matrix” method. It plots the confusion matrix on the basis of outputs predicted by our model.
An N x N matrix called a confusion matrix is used to assess the effectiveness of a classification model, where N is the total number of target classes. In the matrix, the actual target values are compared with those predicted by the machine learning model.
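The sketch below shows one way to plot such a matrix with scikit-learn's ConfusionMatrixDisplay, assuming the trained model, the test arrays and one-hot encoded test labels from the earlier sketches; the class names match the three classes used in this work.

```python
# Plotting the confusion matrix from the model's predictions; `model`, `X_test`
# and one-hot `y_test` are assumed to exist as in the earlier sketches.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

y_prob = model.predict(X_test)       # class probabilities for each test image
y_pred = np.argmax(y_prob, axis=1)   # predicted class index per image
y_true = np.argmax(y_test, axis=1)   # true class index (from one-hot labels)

ConfusionMatrixDisplay.from_predictions(
    y_true, y_pred, display_labels=["Healthy", "Moderate", "Severe"])
plt.show()
```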
Figure 4.3a represents the graphs for training and validation recall for 10 epochs and 3 classes. X-axis represents the epochs and Y-axis represents recall percentage. The blue line represents the training recall and the orange line represents the validation recall. It is clear from the graph that the training and validation recall increases as the epochs increase.
Figure 4.3(b) represents the graphs for training and validation loss for 10 epochs and 3 classes. The X-axis represents the epochs and the Y-axis represents the loss value. The blue line represents the training loss and the orange line represents the validation loss. It is clear from the graph that the training and validation loss decrease as the epochs increase; however, the final validation loss was 0.66.
The fluctuation is caused by the use of a very small batch size: with small batches the model effectively trusts every small portion of the data points. With a larger batch size such effects would be reduced, but training would also be slower and require more memory.
The number of ophthalmologists worldwide may not be able to meet regulatory screening demands because of the exponential growth in the predicted diabetes mellitus population and, consequently, the number of people at risk of DR. A safe, clinically useful automated detection algorithm is an effective solution to this problem. It was found that the majority of New Fundus Algorithm entries, including those using sophisticated ML models, continue to operate under the maximisation-of-likelihood principle; as a result, they can only identify the disease once it has progressed significantly. The New Fundus Algorithm entries that employ ML algorithms (including deep learning algorithms) rely only on brightness to identify the optic disc and exudates in the retina, based on the widely held assumption that the optic disc must be the brightest region of the retina. As a result, the optic disc cannot even be correctly detected by the majority of systems, which limits their capacity for early detection, the key to preventing vision loss.

Additionally, most New Fundus Algorithm entries, including those intended to distinguish diseased from healthy tissue, can only work with images of a fixed and very low resolution; even algorithms intended to pinpoint exudates are typically designed to operate on manually pre-processed photos measuring 160 by 240 pixels. The widespread practice of reducing photographs to fixed small pixel sizes makes early detection nearly impossible, because the diseased tissues (such as microaneurysms) are frequently too small to be seen in such low-resolution images in the early stages of DR. Many of the New Fundus Algorithm submissions that use pure CNNs have switched to RGP-CNNs (such as ResNet50), since programming packages for these networks are readily available. Because of this mere adaptation of existing, general-purpose structures, the ML models used by the majority of entries are not tailored specifically to the task of automated DR detection.
[1] T. Y. Wong and C. Sabanayagam, "Strategies to tackle the global burden of diabetic retinopathy: from epidemiology to artificial intelligence," Ophthalmologica.
[2] R. F. Mansour, "Evolutionary computing enriched computer-aided diagnosis system for diabetic retinopathy: a survey," IEEE Rev. Biomed. Eng.
[3] R. Shalini and S. Sasikala, "A survey on detection of diabetic retinopathy," in 2nd Int. Conf. on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC).
[4] C. Agurto, S. Murillo, V. Murray, et al., "Detection and phenotyping of retinal disease using AM-FM processing for feature extraction," 42nd IEEE Asilomar Conference on Signals, Systems and Computers, 2008.
[5] V. Murray, C. Agurto, S. Barriga, M. S. Pattichis and P. Soliz, "Real-time diabetic retinopathy patient screening using multiscale AM-FM methods," IEEE International Conference on Image Processing (ICIP), 2012.
[6] "Grading diabetic retinopathy from stereoscopic color fundus photographs - an extension of the modified Airlie House classification: ETDRS report number 10," Ophthalmology 1991;98(5):786-806.
[7] Philip, S., Fleming, A.D., Goatman, K.A., Fonseca, S., McNamee, P., Scotland, G.S., et al., "The efficacy of automated disease/no disease grading for diabetic retinopathy in a systematic screening programme," Br J Ophthalmol 2007;91(11):1512-1517.
[8] Fleming, A.D., Philip, S., Goatman, K.A., Prescott, G.J., Sharp, P.F., Olson, J.A., "The evidence for automated grading in diabetic retinopathy screening," Current Diabetes Reviews 2011;7:246-252.
[9] Mookiah, M.R.K., Acharya, U.R., Chua, C.K., Lim, C.M., Ng, E., Laude, A., "Computer-aided diagnosis of diabetic retinopathy: a review," Comput Biol Med 2013;43(12):2136-2155.
[10] Fukushima, K., "Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biol Cybern 1980;36(4):193-202.
[11] Le Cun, Y., Boser, B., Denker, J.S., Howard, R.E., Habbard, W., Jackel, L.D., et al., in Advances in Neural Information Processing Systems 2, ISBN 1-55860-100-7, 1990, pp. 396-404.
[12] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R., "Dropout: a simple way to prevent neural networks from overfitting," J Mach Learn Res 2014;15(1):1929-1958.
[13] Nair, V., Hinton, G.E., "Rectified linear units improve restricted Boltzmann machines," in Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010, pp. 807-814.
[14] Ioffe, S., Szegedy, C., "Batch normalization: accelerating deep network training by reducing internal covariate shift," arXiv:1502.03167, 2015.
[15] He, K., Zhang, X., Ren, S., Sun, J., "Deep residual learning for image recognition," arXiv:1512.03385, 2015.
[16] Liu, Z., Chutatape, O., Krishna, S.M., presented at the 19th IEEE Conference on Engineering in Medicine and Biology Society, Chicago, USA, Oct 30-Nov 2, 1997.
[17] Wang, H., Hsu, W., Goh, K.G., Lee, M.L., "An effective approach to detect lesions in color retinal images," presented at the IEEE Conference on Computer Vision and Pattern Recognition, South Carolina, USA, June 13-15, 2000.
[18] Kavitha, D., Shenbaga, S.D., presented at the 2nd ICISIP Conference on Intelligent Sensing and Information Processing, Madras, India, Jan 4-7, 2005.
[19] Sopharak, A., Uyyanonvara, B., Barman, S., "Automatic microaneurysm detection from non-dilated diabetic retinopathy retinal images using mathematical morphology methods," IAENG International Journal of Computer Science 2011;38(3):295-301.
[20] Kaizu, Y., Nakao, S., Wada, I., et al., "Microaneurysm imaging using multiple en face OCT angiography image averaging: morphology and visualization," Ophthalmol Retina 2020;4:175-186.
[21] Daniel S.W. Ting, et al., "Deep learning in estimating prevalence and systemic risk factors for diabetic retinopathy: a multi-ethnic study," NPJ Digit. Med., 2019.
[22] Nagpal, D., Panda, S.N., Gupta, N., "Recent advancement for diagnosing diabetic retinopathy," J. Comput. Theor. Nanosci. 2021;17(11):5096-5104.
[23] Shefali Yadav, Prashant Awasthi, "Diabetic Retinopathy Detection Using Deep Learning and Inception-V3 Model," 2022, e-ISSN: 2582-5208.
Copyright © 2022 Arooj Hakeem, Monika Mehra, Ravinder Pal Singh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET47407
Publish Date : 2022-11-10
ISSN : 2321-9653
Publisher Name : IJRASET