IJRASET Journal for Research in Applied Science and Engineering Technology
Authors: Nadipelli Niranjan, Enagandula Akshitha, Daramalla Shruthi, Dr. Vijaya Saradhi
DOI Link: https://doi.org/10.22214/ijraset.2023.51807
Melanoma is one of the most hazardous types of skin cancer: it spreads quickly and accounts for the majority of skin cancer fatalities because, if left untreated, it is considerably more likely to migrate to other regions of the body. Melanomas that are not found in their early stages cannot be treated effectively, so treatment relies heavily on early detection. Melanoma is also difficult to detect, since it frequently looks benign and is misdiagnosed as such. Previous attempts to detect melanoma with neural networks using the ABCD rule worked best with small datasets and achieved low accuracy. The cloud technique requires a significant amount of time to train on the dataset's images, and the ensemble approach performs no image preprocessing, so its final results fall short of expectations. An upgraded encoder-decoder network is proposed, with separate encoder and decoder sub-networks connected by a number of skip paths. For the ISIC and PH2 datasets, image preprocessing strategies are proposed that help achieve high sensitivity and high specificity in lesion border segmentation, giving greater accuracy than existing models.
I. INTRODUCTION
Melanocytes, which contain pigment, give rise to melanoma, a malignant tumour.
Among all skin malignancies, it has the fastest-rising death rate. Sun exposure subjects people to UV light, which is the main and most significant external cause of melanoma. According to projections by the American Cancer Society, the US will see 96,480 new cases of melanoma in 2019 and an estimated 7,230 deaths from the disease. The most deadly type of skin malignancy, cutaneous melanoma, causes 90% of deaths from skin cancer; 90% of cutaneous tumour-related deaths are attributed to melanomas, according to Garbe et al. They also investigated incidence rates, which are approximately 25 new cases of melanoma per 100,000 people in Europe, 30 per 100,000 in the USA, and about 60 per 100,000 in Australia, where the highest incidence rate is recorded. Nevertheless, if discovered early enough, melanoma can be treated with prompt excision.
Even with the help of seasoned dermatologists, the diagnosis of melanoma from skin lesions can be imprecise and time-consuming. The approaches in use include visual inspection, clinical screening, dermoscopic analysis, biopsy, and histological investigation of skin lesions. The difficulty is a result of the complex visual properties of skin lesions, including their variability in size, form, and border fuzziness, their low contrast with the surrounding skin, and the presence of noise elements such as skin hair, lubricants, bubbles, and air. For the detection and diagnosis of melanoma, the creation of an effective Computer Aided Diagnosis (CAD) system is necessary. As a result, melanoma diagnoses will increase, and early identification will increase the likelihood of effective treatment and lower the disease's fatality rate.
II. LITERATURE SURVEY
This section reviews the background research on the early identification of melanoma. Numerous techniques have been developed to find melanoma. In the ABCD rule, asymmetry means that the two halves of a lesion do not match, whereas they do match in a symmetric lesion; this can assist in distinguishing benign from malignant skin lesions. Benign skin lesions are not harmful, while malignant ones are cancerous and harmful. This rule has long been applied by many hand-crafted methods for the analysis of skin lesion images for melanoma detection. These hand-crafted methods are limited by the noise present on the skin lesion and by the low contrast and irregular border features of skin lesions. They lack deep supervision, which leads to a loss of detailed information during training and makes it difficult to analyse the complex visual characteristics of the skin lesion.
Codella et al. proposed a system that combines recent developments in deep learning with established machine learning approaches to create ensembles of methods capable of segmenting skin lesions for melanoma detection. Even though these methods have achieved great success, several challenges remain in the skin cancer segmentation task due to the complex nature of skin lesion images, which are characterised by fuzzy borders, low contrast between lesions and the background, and variability in size.
In 2014, Simonyan noted how the deeper Visual Geometry Group (VGG) model, based on learning from a larger number of image descriptors used as inputs (such as colour, symmetry, and contour), can ensure better efficacy in melanoma identification, and this architecture can further improve melanoma detection. VGG networks can also be adapted to the problem in question, depending on the blocks and filters used. The most popular variants are VGG-11, VGG-16, and VGG-19, which have 8, 13, and 16 convolutional layers respectively.
In the framework of the Internet of Medical Things (IoMT), the first architectures composed of Edge, Fog, and Cloud resources appeared in 2017. These designs facilitate anticipatory learning. The majority of management and analysis techniques for IoMT data found in the literature are cloud-based. By decentralising computing power for machine and deep learning techniques onto network nodes, used as microdata-centre mesh network nodes, it is possible to improve individual user data security, the resolution of exchanged medical images, data archiving, and diagnosis response times.
III. EXISTING SYSTEM
Some of the current models can only process grayscale images; they cannot process colour images. The existing models can distinguish between images with lesions and images without lesions, but the training time, i.e. the processing time for images, is very long. The models function well on small datasets, but their precision is not up to standard.
A. Advantages
B. Disadvantages
IV. PROPOSED SYSTEM
Skin cancer detection is a difficult task for two main reasons: (i) the consequences of inaccurate detection and (ii) the requirement for exceptional accuracy in detection. Effective soft computing and machine learning techniques can be used to address the accuracy issue. Although many methodologies have been shown to produce generally accurate results, there is still significant room for improvement. To determine whether an image contains melanoma or not, it is necessary to create a model that can distinguish between melanoma and benign lesions.
Different deep learning models, such as VGG19, ResNet50, and ResNet101, are utilised to train the model for image categorisation. These are highly effective techniques for processing images. To classify images into benign and malignant conditions, it is important to develop the model that yields the most reliable results.
V. DESIGN
It is important to detect melanoma early because it is a deadly form of skin cancer, unlike benign lesions, which are not as dangerous. This will help people and doctors diagnose melanoma accurately. The project's goal is to distinguish melanoma from non-melanoma lesions, which is challenging because melanoma often resembles a benign lesion.
Our model mainly consists of four Phases:
A. Feature Extraction
Identifying the difference between a skin lesion and healthy skin is a common challenge in the identification of melanoma. The photos contain artefacts and noise, such as air bubbles and hair. For improved segmentation and lesion analysis, we must therefore eliminate all unnecessary pixels and enhance the images. To achieve proper feature extraction and segmentation of the lesion area, which results in a high degree of diagnostic accuracy, preprocessing techniques are applied.
B. Image Pre-processing
Numerous phases are proposed in the pre-processing method for image detection; the proposed hair-removal method consists of the steps listed below. First, the ABCDE melanoma rule is applied as soon as the image is input. ABCDE stands for Asymmetry, Border, Colour, Diameter, and Evolving; clinicians look for these features of skin damage when identifying and categorising melanomas.
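As a concrete illustration of the hair-removal step, the following is a minimal sketch of one common approach (black-hat morphological filtering followed by inpainting), assuming OpenCV is available; the kernel size, threshold value, and file name are illustrative choices, not values taken from this paper.

```python
import cv2

def remove_hair(image_bgr):
    """Suppress hair artefacts in a dermoscopic image before segmentation."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Black-hat filtering highlights thin dark structures such as hairs.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Threshold the hair response into a binary mask and inpaint those pixels.
    _, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(image_bgr, mask, 3, cv2.INPAINT_TELEA)

clean = remove_hair(cv2.imread("lesion.jpg"))  # "lesion.jpg" is a placeholder path
```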
C. SoftMax Classification
In neural network models that forecast a multinomial probability distribution, the SoftMax function serves as the activation function in the output layer. In other words, SoftMax is employed as the activation function for multi-class problems in which an input must be assigned to one of more than two class labels. In multispectral/HSI classification, pixel-wise classification is frequently used to assign each pixel vector to a particular category by exploiting the spectral properties of both the individual pixel and its neighbours in the local area.
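For reference, the SoftMax function itself is straightforward to compute; the short sketch below shows it in plain NumPy (the example scores are made up, and in practice the deep learning framework supplies this layer).

```python
import numpy as np

def softmax(logits):
    """Convert a vector of class scores into a probability distribution."""
    z = logits - np.max(logits)      # subtract the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

# Example: two class scores (benign vs. melanoma) from a network's final layer.
print(softmax(np.array([1.2, 3.4])))   # -> approximately [0.10, 0.90]
```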
D. Lesion Classification
The Lesion Classifier identifies and classifies the resulting image as either melanoma or non-melanoma, based on the training and testing results.
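A minimal sketch of this classification step is shown below, assuming a trained Keras model and a pre-processed image file; the 224 × 224 input size, the [0, 1] scaling, and the class ordering are assumptions for illustration rather than details given in the paper.

```python
import numpy as np
from tensorflow.keras.preprocessing import image

def classify_lesion(model, img_path, class_names=("benign", "melanoma")):
    """Run a trained classifier on a single pre-processed lesion image."""
    img = image.load_img(img_path, target_size=(224, 224))   # resize to the model input
    x = image.img_to_array(img) / 255.0                      # scale pixels to [0, 1]
    probs = model.predict(np.expand_dims(x, axis=0))[0]      # add the batch dimension
    return class_names[int(np.argmax(probs))], float(np.max(probs))
```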
VI. IMPLEMENTATION
In this project we compare the accuracy of different deep learning models. Training of every model begins with the same basic pre-processing of the data (the data is loaded only once, so that all models are compared on the same input). After that we evaluated the performance of the models: once a model was fitted to our requirements, we plotted graphs showing its accuracy and loss.
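A sketch of a typical loading and pre-processing pipeline of this kind is given below, assuming TensorFlow/Keras and a directory of images split into benign and malignant folders; the directory layout, 80/20 split, batch size, and [0, 1] rescaling are illustrative assumptions, not the paper's exact settings.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)   # input resolution expected by VGG-19 / ResNet variants
BATCH = 32

# Hypothetical layout: data/train/{benign,malignant}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH,
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH,
    validation_split=0.2, subset="validation", seed=42)

# Scale pixel values to [0, 1]; the same datasets are reused for every model.
normalize = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (normalize(x), y))
val_ds = val_ds.map(lambda x, y: (normalize(x), y))
```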
A. Visual Geometry Group (VGG-19)
The VGG-19 CNN architecture is reported to analyse large image datasets such as ImageNet with high accuracy. The VGG-19 model has about 143 million parameters and was trained on 1.2 million general object images from 1,000 object categories in the ImageNet dataset. It contains 19 trainable layers (convolutional and fully connected), along with max pooling and dropout layers.
In our project we obtained an accuracy of 80.33%.
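The sketch below shows one plausible way to reuse VGG-19 for this task with Keras transfer learning, reusing the datasets from the pre-processing sketch above; the frozen base, the pooling/dropout head, the optimiser, and the epoch count are assumptions for illustration, not the authors' exact configuration.

```python
import tensorflow as tf

# Load VGG-19 pre-trained on ImageNet, without its 1000-class top layers.
base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False   # keep the pre-trained convolutional features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(2, activation="softmax"),   # benign vs. melanoma
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(train_ds, validation_data=val_ds, epochs=10)
```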
B. Residual Neural Network (ResNet-50)
This design introduced the idea of residual blocks to address the vanishing/exploding gradient problem. The network uses a method known as skip connections: a skip connection bypasses some layers in between and links a layer's activations to later layers. Stacking these residual blocks creates a ResNet. ResNet-50 is a 50-layer deep convolutional neural network consisting of 48 convolutional layers, one max pooling layer, and one average pooling layer. A residual neural network is a kind of artificial neural network built by stacking residual blocks. A pre-trained version of the network, trained on more than a million images from the ImageNet database, is loaded; this network can categorise images into 1,000 different object classes.
In our project we obtained an accuracy of 85.29%.
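To make the skip-connection idea concrete, the following is a simplified sketch of an identity residual block in Keras; actual ResNet-50 blocks are bottleneck blocks with batch normalisation, so this illustrates the principle rather than the exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Identity residual block: the input skips past two conv layers and is added back."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([shortcut, y])       # the skip connection
    return layers.Activation("relu")(y)

# Example: apply one block to a 56x56 feature map with 64 channels.
inputs = tf.keras.Input(shape=(56, 56, 64))
outputs = residual_block(inputs, 64)
block = tf.keras.Model(inputs, outputs)
```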
C. Residual Neural Network (ResNet-101)
ResNet-101 is a deeper variant of the 50-layer ResNet: a convolutional neural network with 101 layers. A version of the network pre-trained on more than a million images from the ImageNet database is available. The pre-trained network can classify images into 1,000 object categories, including various animals, a keyboard, a mouse, and a pencil, and has therefore learned rich feature representations for a wide range of images. The network accepts images with a resolution of 224 × 224.
In this project we obtained an accuracy of 85.83%.
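A sketch of loading the pre-trained ResNet-101 with a two-class head and evaluating it on the validation set follows; as before, the frozen base and the classification head are illustrative assumptions, and the evaluation reuses the val_ds dataset from the earlier sketch after the model has been trained with model.fit.

```python
import tensorflow as tf

# ResNet-101 pre-trained on ImageNet, reused here as a feature extractor.
base = tf.keras.applications.ResNet101(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),   # melanoma vs. benign
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# After training with model.fit(...), report validation accuracy as a percentage.
loss, acc = model.evaluate(val_ds)
print(f"validation accuracy: {acc * 100:.2f}%")
```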
The project detects melanoma using deep learning techniques, with deep convolutional neural networks trained in three different ways. In the first method, the model is trained with VGG-19, which has 19 weight layers and provides good accuracy (80%), but it suffers from the vanishing-gradient problem and is prone to overfitting. The second method trains the data with the ResNet-50 model, which fixes the vanishing-gradient problem and provides an accuracy of 85.2%. The third method uses ResNet-101, a deeper variant of ResNet-50 that is easier to train than a plain deep convolutional neural network; it also fixes the issue of accuracy decay, producing 85.8% accuracy.
[1] National Cancer Institute, PDQ Melanoma Treatment, PDQ Adult Treatment Editorial Board, Bethesda, MD, USA, Nov. 4, 2019. Accessed: Dec. 9, 2019. [Online]. Available: https://www.cancer.gov/types/skin/hp/melanoma-treatment-pdf
[2] American Cancer Society, Cancer Statistics Center, 2019. [Online]. Available: https://cancerstatisticscenter.cancer.org
[3] C. Garbe, K. Peris, A. Hauschild, P. Saiag, M. Middleton, L. Bastholt, J.-J. Grob, J. Malvehy, J. Newton-Bishop, A. J. Stratigos, H. Pehamberger, and A. M. Eggermont, "Diagnosis and treatment of melanoma. European consensus-based interdisciplinary guideline—Update 2016," Eur. J. Cancer, vol. 63, pp. 201–217, Aug. 2016.
[4] J. A. Curtin, K. Busam, D. Pinkel, and B. C. Bastian, "Somatic activation of KIT in distinct subtypes of melanoma," J. Clin. Oncol., vol. 24, no. 26, pp. 4340–4346, Sep. 2006.
[5] H. Tsao, M. B. Atkins, and A. J. Sober, "Management of cutaneous melanoma," New England J. Med., vol. 351, no. 10, 2004, Art. no. 998e1012.
[6] M. E. Celebi, H. A. Kingravi, B. Uddin, H. Iyatomi, Y. A. Aslandogan, W. V. Stoecker, and R. H. Moss, "A methodological approach to the classification of dermoscopy images," Computerized Med. Imag. Graph., vol. 31, no. 6, pp. 362–373, Sep. 2007.
[7] G. Capdehourat, A. Corez, A. Bazzano, R. Alonso, and P. Musé, "Toward a combined tool to assist dermatologists in melanoma detection from dermoscopic images of pigmented skin lesions," Pattern Recognit. Lett., vol. 32, no. 16, pp. 2187–2196, Dec. 2011.
[8] Q. Abbas, M. Celebi, C. Serrano, I. F. N. García, and G. Ma, "Pattern classification of dermoscopy images: A perceptually uniform model," Pattern Recognit., vol. 46, no. 1, pp. 86–97, Jan. 2013.
[9] A. Adegun and S. Viriri, "An enhanced deep learning framework for skin lesions segmentation," in Proc. Int. Conf. Comput. Collective Intell. Cham, Switzerland: Springer, 2019.
Copyright © 2023 Nadipelli Niranjan, Enagandula Akshitha, Daramalla Shruthi, Dr. Vijaya Saradhi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET51807
Publish Date : 2023-05-08
ISSN : 2321-9653
Publisher Name : IJRASET