IJRASET Journal for Research in Applied Science and Engineering Technology
Authors: Atharva Kulkarni, Chirag Mahajan, Tejas Hasabnis
DOI Link: https://doi.org/10.22214/ijraset.2022.46516
Compared with inspection by the unaided eye, dermatoscopy is a widely used diagnostic technique that improves the identification of benign and malignant pigmented skin lesions. Dermatoscopic images are also a good source of training data for artificial neural networks that automatically recognise pigmented skin lesions. In earlier work, an artificial neural network trained on dermatoscopic images successfully distinguished melanocytic nevi from melanoma, the deadliest kind of skin cancer. Although the results were encouraging, that study, like most prior investigations, suffered from a small sample size and a lack of dermatoscopic images of lesions other than melanoma or nevi. Recent improvements in graphics-card power and machine-learning methods have raised hopes for automated diagnostic systems that can quickly identify all types of pigmented skin lesions without the assistance of a human specialist. Training neural-network-based diagnostic algorithms requires a large number of annotated images, yet the supply of high-quality dermatoscopic images with reliable diagnoses is small or restricted to a few disease classes.
I. INTRODUCTION
The most prevalent human malignancy, skin cancer, is primarily diagnosed visually, beginning with an initial clinical screening and potentially followed by dermoscopic analysis, a biopsy, and histopathological examination. Automated classification of skin lesions from images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general, highly variable tasks across many fine-grained object categories. Here, we show how a single CNN can be trained end-to-end from images, using only pixels and disease labels as inputs, to classify skin lesions. We train the CNN on a dataset of 129,450 clinical images, two orders of magnitude larger than previous datasets, consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images in two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses and malignant melanomas versus benign nevi. The first case represents the most common cancers, while the second represents the deadliest skin cancer. The CNN achieves performance on par with all tested experts on both tasks, demonstrating that artificial intelligence can classify skin cancer with a level of competence comparable to dermatologists. Deep neural networks deployed on mobile devices may allow dermatologists to reach patients outside of the clinic. With 6.3 billion smartphone subscriptions projected by 2023, such systems could provide widespread, low-cost access to vital diagnostic care.
II. LITERATURE SURVEY
In India, 5.4 million new cases of skin cancer are reported annually, and one in ten Indians will be diagnosed with skin cancer in their lifetime. Melanomas account for less than 1% of all skin cancers in India, yet they cause over 10,000 deaths each year and account for 75% of all skin-cancer-related deaths. Early detection is critical, as the estimated 5-year survival rate for melanoma drops from over 99% when detected in its earliest stages to about 14% when detected in its latest stages. We developed a computational method that may help people and medical professionals track skin lesions and detect cancer earlier. By developing a novel disease taxonomy and a disease-partitioning algorithm that maps individual diseases into training classes, we are able to build a deep learning system for automated dermatology.
Due to a lack of data and a focus on standardized tasks such as dermoscopy and histological image classification, previous work in dermatological computer-aided classification has lacked the generalization capability of medical practitioners. Histological images are obtained via invasive biopsy and microscopy, while dermoscopy images are acquired with a specialized instrument; both modalities yield highly standardized images. Photographic images (such as those taken with smartphones) vary in magnification, perspective, and illumination, which makes classification considerably more difficult. We address this difficulty with a data-driven approach: 1.41 million pre-training and training images make classification robust to photographic variability. Many earlier approaches required extensive pre-processing, lesion segmentation, and extraction of domain-specific visual features before classification.
Our approach, in contrast, requires no hand-crafted features; it is trained end-to-end directly from raw pixels and image labels, using a single network for both photographic and dermoscopic images. Existing work typically relies on small datasets, often fewer than a thousand skin-lesion images, and consequently does not generalize well to new images. Using a new dataset of 129,450 clinical images, including 3,374 dermoscopy images, we demonstrate generalizable classification.
III. PROPOSED SYSTEM
CNNs are neural networks with a particular architecture that have proven especially effective in fields such as image recognition and classification. CNNs are used in robots and self-driving cars because research has shown that they can recognise faces, objects, and traffic signs more accurately than humans. In essence, a CNN has two parts: the hidden layers, in which features are extracted, and the fully connected layers at the end of the pipeline, which perform the actual classification. CNNs learn the relationship between the input objects and the class labels. Recent research has demonstrated that deep learning algorithms, driven by computational advances and very large datasets, outperform humans in visual tasks such as playing Atari games, playing Go, and object recognition. In this article, we present a CNN that matches the performance of dermatologists at three key diagnostic tasks: melanoma classification, melanoma classification from dermoscopy, and carcinoma classification. We restrict our comparison to image-based classification methods.
The 10,015 dermatoscopic images of the HAM10000 training set were collected over a period of 20 years at the Department of Dermatology of the Medical University of Vienna, Austria, and the skin cancer practice of Cliff Rosendahl in Queensland, Australia. The Australian site stored images and metadata in PowerPoint files and Excel spreadsheets. The Austrian site began collecting images before the era of digital cameras and stored images and metadata in different formats over time.
IV. DESIGN AND IMPLEMENTATION
The CNN model used here to classify skin cancer has a total of 16 layers. The images in the dataset have a resolution of 75x100 pixels with three colour channels. The image is first passed through two convolution layers, each with 16 filters of kernel size 3x3. After these first two convolutions, a parameter-free max-pooling layer is applied. This pattern is repeated with four more convolution layers and max-pooling layers, using different numbers of filters, kernel sizes, and activation functions. After the convolutional layers, the resulting feature maps are flattened and passed through a series of fully connected layers. Finally, because this is a classification task, the outputs should be probabilities, i.e. values between 0 and 1; the sigmoid function, which squashes its input into this range, is used as the activation of the last layer, and the outputs are then normalized so that the probabilities sum to one.
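The paper does not include its training code, so the following is only a minimal sketch of a network matching the description above, written with the TensorFlow.js layers API used elsewhere in this system. The filter counts after the first stage, the width of the hidden dense layer, the optimizer, and the use of a single softmax in place of the sigmoid-plus-normalization step are assumptions.

```typescript
import * as tf from '@tensorflow/tfjs';

// Hypothetical reconstruction of the described CNN for 75x100 RGB lesion images.
function buildLesionClassifier(): tf.Sequential {
  const model = tf.sequential();

  // Two 3x3 convolutions with 16 filters each, as described in the text.
  model.add(tf.layers.conv2d({
    inputShape: [75, 100, 3],
    filters: 16, kernelSize: 3, padding: 'same', activation: 'relu',
  }));
  model.add(tf.layers.conv2d({ filters: 16, kernelSize: 3, padding: 'same', activation: 'relu' }));
  model.add(tf.layers.maxPooling2d({ poolSize: 2 })); // parameter-free down-sampling

  // Four further convolution/max-pooling stages; the filter counts here are assumptions.
  for (const filters of [32, 32, 64, 64]) {
    model.add(tf.layers.conv2d({ filters, kernelSize: 3, padding: 'same', activation: 'relu' }));
    model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
  }

  // Flatten the feature maps and classify with fully connected layers.
  model.add(tf.layers.flatten());
  model.add(tf.layers.dense({ units: 64, activation: 'relu' })); // hidden width assumed
  // Softmax produces seven class probabilities that sum to one, equivalent in effect
  // to the sigmoid-plus-normalization step described above.
  model.add(tf.layers.dense({ units: 7, activation: 'softmax' }));

  model.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy'] });
  return model;
}
```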
The system classifies lesions into seven main categories of dermatological disorders:
A. Nv (Melanocytic Nevi)
These are benign neoplasms of melanocytes that appear in a multitude of variants, all of which are included in our series. The variants can differ significantly from a dermatoscopic point of view.
B. Mel (Melanoma)
Melanoma is a malignant neoplasm derived from melanocytes that may appear in several variants. If excised at an early stage, it can be cured by surgical removal. Melanoma can be invasive or non-invasive (in situ). All melanoma subtypes were included, including those in situ, but non-pigmented, subungual, ophthalmic, and mucosal subtypes were excluded.
C. Bkl (Benign Keratosis)
Benign keratoses comprise seborrheic keratoses (also known as "senile warts"), solar lentigo, which can be regarded as a flat variant of seborrheic keratosis, and lichen-planus-like keratoses (LPLK), which correspond to a seborrheic keratosis or a solar lentigo with inflammation and regression. Although the three subgroups look different dermatoscopically, we grouped them together because they are biologically similar. Lichen-planus-like keratoses are especially challenging dermoscopically, because they can show morphologic features mimicking melanoma and are frequently biopsied or excised for diagnostic reasons.
D. Bcc (Basal cell carcinoma)
Basal cell carcinoma is a common variant of epithelial skin cancer that rarely metastasizes but grows destructively if left untreated. It appears in different morphologic variants (flat, nodular, pigmented, cystic, etc.), all of which are included in this collection.
E. Akiec (Actinic Keratoses)
Actinic keratoses (also known as solar keratoses) and intraepithelial carcinoma (also known as Bowen's disease) are common non-invasive variants of squamous cell carcinoma that can be treated locally without surgery. Both are induced by UV light, except for some cases of Bowen's disease that are caused by human papilloma virus infection rather than UV radiation, and the surrounding skin is therefore typically marked by significant sun damage.
F. Vasc (Vascular skin lesions)
The dataset includes a variety of vascular skin lesions, including cherry angiomas, angiokeratomas, and pyogenic granulomas. This category also includes haemorrhage.
G. Df (Dermatofibroma)
Dermatofibroma is a benign skin lesion regarded either as a benign proliferation or as an inflammatory reaction to minimal trauma. It is frequently brown and dermoscopically shows a central zone of fibrosis.
The system is set up so that both desktop and mobile users can access it. Simple HTML, CSS, and JavaScript were used to build the user interface. The model is cached in memory after the first visit to the website to improve performance and reduce load times. Because systems like this deal with people's private lives, privacy is of the highest importance. To give users maximum confidentiality, no user-uploaded picture is ever transferred to an external server; all submitted images stay on the device itself, since the model runs on the device. Images are analysed efficiently in the browser using TensorFlow.js.
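As an illustration only, the sketch below shows how such on-device inference could look with TensorFlow.js; the model URL, the preprocessing steps, and the class ordering are assumptions, not the authors' published code.

```typescript
import * as tf from '@tensorflow/tfjs';

// Assumed class order for the seven HAM10000 categories.
const CLASSES = ['nv', 'mel', 'bkl', 'bcc', 'akiec', 'vasc', 'df'];

let cachedModel: tf.LayersModel | null = null;

// Classifies a lesion photo entirely in the browser; no pixels leave the device.
async function classifyLesion(img: HTMLImageElement): Promise<{ label: string; prob: number }> {
  // Load the model once and keep it in memory, as described above.
  cachedModel = cachedModel ?? await tf.loadLayersModel('/model/model.json'); // hypothetical URL

  const probs = tf.tidy(() => {
    const pixels = tf.browser.fromPixels(img);                  // read the image locally
    const input = tf.image.resizeBilinear(pixels, [75, 100])    // match the training resolution
      .toFloat()
      .div(255)
      .expandDims(0);
    return (cachedModel!.predict(input) as tf.Tensor).squeeze();
  });

  const values = Array.from(await probs.data());
  probs.dispose();
  const best = values.indexOf(Math.max(...values));
  return { label: CLASSES[best], prob: values[best] };
}
```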
V. RESULTS
We examined the internal features the CNN learned. Each point in this learned representation is a skin-lesion image projected from the 2,048-dimensional output of the CNN's last hidden layer into two dimensions, and points belonging to the same clinical class form clusters. The assignment is not exact, however, and objects can be assigned to the wrong class. To evaluate a classifier, the true class of each object must be known; the objects can then be divided into four groups according to their true and predicted classes: true positives, false positives, true negatives, and false negatives.
The cardinalities of these subsets can then be used to compute statistical measures for the classifier. Although accuracy is a popular and widely used measure, its usefulness depends on how uniformly the classes in the dataset are distributed.
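As a small illustration of this evaluation step (not the authors' code), the helper below tallies a confusion matrix and the accuracy from arrays of true and predicted class indices; the function name and signature are assumptions.

```typescript
// Illustrative evaluation helper: builds a confusion matrix and computes accuracy
// from true and predicted class indices (0..numClasses-1).
function evaluate(yTrue: number[], yPred: number[], numClasses = 7) {
  const confusion = Array.from({ length: numClasses }, () => new Array<number>(numClasses).fill(0));
  let correct = 0;
  yTrue.forEach((trueClass, i) => {
    confusion[trueClass][yPred[i]] += 1;      // rows: true class, columns: predicted class
    if (trueClass === yPred[i]) correct += 1;
  });
  // Accuracy = correct predictions / all predictions. With imbalanced classes
  // (HAM10000 is dominated by nevi) it should be read alongside per-class counts.
  return { accuracy: correct / yTrue.length, confusion };
}
```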
Here we show that deep learning in dermatology is beneficial both for common skin conditions and for specific malignancies. The classification accuracy achieved was 81%.
VI. ACKNOWLEDGEMENT
We would like to thank everyone who assisted us during the development of this project and helped bring it to a successful conclusion. We would also like to express our gratitude to the creators of the websites, programs, and other resources that served as inspiration or references for the development of this system. We hope that this research will accomplish its goals and that the system will one day be adopted by physicians and hospitals to identify dermatological problems.
Copyright © 2022 Atharva Kulkarni, Chirag Mahajan, Tejas Hasabnis. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET46516
Publish Date : 2022-08-29
ISSN : 2321-9653
Publisher Name : IJRASET