Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Jithu K S, Saandra Baasuri, Sathwik R, Prof. Simi M S
DOI Link: https://doi.org/10.22214/ijraset.2023.54169
Skin Cancer Detection using VGG-16 in CNN
Abstract: Skin cancer is one of the most common and potentially life-threatening forms of cancer, and early detection plays a crucial role in improving patient outcomes. In recent years, Convolutional Neural Networks (CNNs) have shown remarkable success in various computer vision tasks, including medical image analysis. This paper presents a study on the application of the VGG-16 architecture, a widely used CNN, for the detection of skin cancer. The proposed method involves training a VGG-16 model on a large dataset of dermoscopic images of skin lesions. The dataset comprises both malignant (cancerous) and benign (non-cancerous) cases, providing a diverse range of examples for the network to learn from. Through supervised learning, the VGG-16 model is trained to recognize patterns and features indicative of skin cancer in these images. To evaluate the effectiveness of the trained model, extensive testing is performed on a separate set of dermoscopic images. The model's performance is assessed in terms of accuracy, sensitivity, specificity, and other metrics commonly used in medical diagnosis. The results of these evaluations provide insight into the VGG-16 model's ability to detect skin cancer accurately and reliably. Furthermore, the paper discusses potential challenges and limitations of the proposed approach, such as imbalanced datasets, variations in lighting conditions, and the generalization of the model to different populations. It also highlights the importance of proper validation and clinical trials to ensure the model's suitability for real-world application. The findings of this study demonstrate the potential of the VGG-16 CNN architecture for skin cancer detection.
The paper concludes by emphasizing the significance of further research and development in deep learning and medical imaging to improve early detection and treatment outcomes for skin cancer patients.
I. INTRODUCTION
Skin cancer is a significant public health concern, with millions of cases diagnosed each year worldwide. Early detection plays a crucial role in improving patient outcomes and survival rates. Traditional methods of diagnosing skin cancer heavily rely on visual examination by dermatologists, which can be subjective and time-consuming. To address these challenges, machine learning (ML) techniques have emerged as powerful tools in the field of dermatology, offering automated and accurate detection of skin cancer.
Machine learning algorithms excel at analyzing large datasets and identifying patterns that may not be apparent to the human eye. By training these algorithms on vast collections of dermatological images, researchers have been able to develop models capable of differentiating between benign and malignant skin lesions with impressive accuracy. The integration of ML technology into skin cancer detection has the potential to revolutionize the field, enabling earlier diagnosis, reducing unnecessary biopsies, and ultimately saving lives.
In this paper, we delve into the exciting realm of skin cancer detection using ML. We explore the various ML techniques employed, ranging from classical methods such as support vector machines and random forests to advanced deep learning architectures like convolutional neural networks (CNNs). By leveraging these algorithms, ML models can learn from a diverse range of features extracted from images, including color, texture, and shape, enabling them to identify subtle patterns indicative of skin cancer.
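As a concrete illustration of the hand-crafted features mentioned above (color, texture, shape), the sketch below computes a coarse color histogram of the kind a classical classifier such as an SVM or random forest might consume. This is an illustrative example written for this article, not code from the studies discussed:

```python
# Quantize RGB pixels into bins_per_channel bins per channel and
# return a normalized histogram (a simple color feature vector).

def color_histogram(pixels, bins_per_channel=2):
    """pixels: list of (r, g, b) tuples with values in 0..255."""
    n_bins = bins_per_channel ** 3
    hist = [0.0] * n_bins
    step = 256 // bins_per_channel

    def bin_of(v):
        # Guard against overflow when bins_per_channel does not divide 256.
        return min(v // step, bins_per_channel - 1)

    for r, g, b in pixels:
        idx = (bin_of(r) * bins_per_channel + bin_of(g)) * bins_per_channel + bin_of(b)
        hist[idx] += 1
    total = len(pixels) or 1
    return [h / total for h in hist]

# A tiny 2x2 "image": two dark pixels and two bright pixels fall into
# the darkest and brightest of the 8 bins, half the mass in each.
feat = color_histogram([(10, 10, 10), (20, 20, 20), (240, 240, 240), (250, 250, 250)])
print(feat[0], feat[7])  # 0.5 0.5
```

In practice such histograms would be concatenated with texture and shape descriptors before being fed to the classifier; deep CNNs like VGG-16 instead learn these features directly from the pixels.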
We also discuss the challenges encountered in developing ML-based skin cancer detection systems, including the need for large and diverse datasets, issues related to data quality and bias, and the importance of interpretability in clinical settings. Additionally, we examine the potential limitations and ethical considerations associated with implementing ML in skin cancer diagnosis, such as the risk of over-reliance on automated systems and the need for human expertise in the decision-making process.
The benefits of ML-based skin cancer detection are far-reaching. Improved accuracy and efficiency in identifying skin cancer lesions can assist healthcare professionals in making informed decisions, facilitating early intervention and personalized treatment plans. Furthermore, ML algorithms can be integrated into mobile applications and wearable devices, empowering individuals to perform self-assessments and seek medical advice promptly.
The application of machine learning techniques in skin cancer detection holds tremendous promise. As research and technology continue to advance, ML models have the potential to enhance dermatologists' diagnostic capabilities, promote early detection, and positively impact patient outcomes. However, it is vital to ensure responsible and ethical deployment, combining the power of AI with human expertise to achieve optimal results in the fight against skin cancer.
II. EXISTING SYSTEM
A. The "Whole Body Photography" Procedure
As part of their research, Konstantin Korotkov and colleagues described approaches to the problem of matching skin lesions. The "whole body photography" procedure involves taking clinical photographs of patients in various positions in an institutional setting. Clinical specialists use these photographs in total body skin examination to document the current condition of a patient's skin and to study the progression of a variety of cutaneous conditions, aiding the early detection of melanoma, a potentially fatal form of skin cancer. Skin lesions are represented in the images by circles fitted within maximally stable extremal regions, and these findings are recorded once a lesion has been identified.
B. Skin Lesion Image Processing using CNN
One study utilized two deep learning algorithms, the Lesion Feature Network (LFN) and the Lesion Indexing Network (LIN), in a pipeline of three stages: a) segmentation of the lesions, b) extraction of dermoscopic features from the lesion, and c) classification of the lesion. The authors propose a deep learning system made up of two fully convolutional residual networks (FCRNs), which produce the segmentation and classification results. To refine the coarse classification results, a lesion index calculation unit (LICU) is constructed using a distance heat map. A straightforward CNN is presented as a solution to the problem of extracting dermoscopic features. To assess the proposed deep learning framework, the authors employed the ISIC 2017 dataset; their experiments show that the proposed LIN outperforms existing machine learning algorithms for lesion segmentation and classification.
C. Melanoma Detection and Segmentation Approach using YOLOv4 Object Detector
Saleh Albahli and colleagues published a melanoma detection and segmentation approach. Morphological procedures applied to dermoscopic photos allow the removal of artifacts such as hairs, gel bubbles, and clinical marks, as well as the sharpening of image regions. They used a YOLOv4 object detector, modified for melanoma detection, to discover infected regions by differentiating between strongly correlated infected and non-infected regions. Using the ISIC2018 and ISIC2016 datasets, the results of the presented system were analyzed and compared with those of other melanoma identification and segmentation techniques.

Lisheng Wei et al. describe a model that includes a feature discriminating network and a lesion classification network. It incorporates a lightweight skin malignancy recognition model with feature discrimination based on a fine-grained classification standard. They devised a discriminating model to recognize lesions in dermoscopy images.

Another technique offers a fully automatic method for segmenting skin lesions using a trained 19-layer deep CNN, and therefore does not rely on prior knowledge of the data. The authors devised a series of strategies to ensure effective and efficient learning despite having very little training data to work with. When cross entropy is used as the loss function for image segmentation, which is standard practice, there is a severe imbalance between the numbers of foreground and background pixels. As a result, an original loss function based on the Jaccard distance was designed to remove the need for sample re-weighting.
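The Jaccard-distance loss mentioned above can be sketched as follows. This is a minimal pure-Python illustration written for this article; the original is defined over network output probabilities and ground-truth segmentation masks, here reduced to flat lists of per-pixel values:

```python
# Soft Jaccard distance: 1 - intersection / union, with a small
# smoothing term to avoid division by zero on empty masks.

def jaccard_distance_loss(y_true, y_pred, smooth=1e-6):
    """y_true: 0/1 ground-truth labels per pixel; y_pred: predicted probabilities."""
    intersection = sum(t * p for t, p in zip(y_true, y_pred))
    union = sum(y_true) + sum(y_pred) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)

# Perfect overlap -> loss ~ 0; disjoint prediction -> loss ~ 1.
print(round(jaccard_distance_loss([1, 1, 0, 0], [1.0, 1.0, 0.0, 0.0]), 4))  # 0.0
print(round(jaccard_distance_loss([1, 1, 0, 0], [0.0, 0.0, 1.0, 1.0]), 4))  # 1.0
```

Unlike per-pixel cross entropy, this loss depends only on the overlap between the predicted and true foreground, so a large background does not dominate the gradient, which is why it suits the class imbalance described above.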
A mathematical approach on a pre-trained model using deep learning. After further research into the mathematics underlying classification, the authors found that employing deep learning models was the most sophisticated method for obtaining the desired outcomes. They experimented with many different mathematical models, both with and without the application of learning algorithms, but concluded that the depth and quality of activation made available by pre-trained models did not measure up. Consequently, they merged their mathematical expertise and developed a model known as a Dense Convolution Network, which offered an accuracy of more than 86.6 percent.
The following are some of the system's drawbacks:
III. OBJECTIVES
Skin cancer is a significant global health concern, and early detection plays a vital role in its successful treatment. Advances in computer vision, particularly convolutional neural networks (CNNs), have shown promise in automated skin cancer detection. This article aims to outline the objectives of using the VGG-16 architecture for skin cancer detection. The precise objectives associated with this analysis are as follows:
The objectives of using the VGG-16 architecture for CNN in skin cancer detection include improving accuracy, robust feature extraction, handling heterogeneous lesion characteristics, improving generalization, ensuring interpretability and explainability, performing comparative evaluations, and optimizing for real-time deployment. Achieving these objectives can contribute to more effective and accessible skin cancer diagnosis, potentially saving lives through early detection and intervention.
IV. PROPOSED SYSTEM
The proposed system aims to leverage the power of deep learning to aid dermatologists in detecting skin cancer. CNNs have shown exceptional performance in image recognition tasks, making them an ideal choice for analyzing skin lesion images. VGG-16, a deep CNN architecture, is chosen for its strong representation learning capabilities and the availability of pre-trained weights. The following steps are involved in the proposed system:
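To illustrate why pre-trained weights matter, the sketch below counts the trainable parameters of VGG-16, assuming the standard "configuration D" of Simonyan and Zisserman (13 convolutional layers plus 3 fully connected layers; this layer listing is from the published architecture, not from this paper). Roughly 138 million parameters are far too many to train from scratch on a modest dermoscopy dataset, which is what motivates transfer learning:

```python
# VGG-16 configuration D: numbers are 3x3-conv output channels,
# "M" marks a 2x2 max-pool that halves the spatial size.
CONV_CFG = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
            512, 512, 512, "M", 512, 512, 512, "M"]

def vgg16_param_count(in_channels=3, image_size=224, n_classes=1000):
    params, size = 0, image_size
    for v in CONV_CFG:
        if v == "M":
            size //= 2                       # pooling halves the feature map
        else:
            params += 3 * 3 * in_channels * v + v  # 3x3 kernel weights + biases
            in_channels = v
    flat = in_channels * size * size         # 512 * 7 * 7 = 25088 after 5 pools
    for out in (4096, 4096, n_classes):      # the three fully connected layers
        params += flat * out + out
        flat = out
    return params

print(vgg16_param_count())  # 138357544
```

In the proposed system, the convolutional stack would be initialized from weights pre-trained on a large natural-image corpus, and only the classification head (and optionally the later convolutional blocks) would be fine-tuned on the dermoscopic images.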
Overall, the proposed system is expected to achieve high accuracy in detecting skin cancer, enabling early diagnosis and intervention. The utilization of VGG-16 as the CNN architecture provides a strong foundation for feature extraction and classification. The fine-tuning process enhances the model's capability to differentiate between benign and malignant lesions, making it a valuable tool for dermatologists.
VIII. FUTURE SCOPE
One of the primary future scopes for using VGG-16 in skin cancer detection is to enhance the model's performance by leveraging larger and more diverse datasets. Skin cancer datasets with annotated images are essential for training CNNs effectively. Currently, there are several publicly available datasets, such as the International Skin Imaging Collaboration (ISIC) dataset, which contains a significant number of skin lesion images. By combining these datasets and creating more comprehensive and diverse datasets, it will be possible to train VGG-16 to detect a broader range of skin cancer types and improve its overall accuracy.
Another exciting future direction is the integration of multi-modal information to enhance skin cancer detection. In addition to visual information, other data modalities such as patient demographics, clinical history, and dermoscopic features can provide valuable context for accurate diagnosis. By combining VGG-16's visual analysis capabilities with other data sources, a more comprehensive and robust skin cancer detection system can be developed. This integration may involve hybrid models that fuse deep learning techniques with traditional machine learning algorithms, or attention mechanisms that focus on relevant features.

Furthermore, the future scope of using VGG-16 for skin cancer detection lies in developing explainable AI methods to provide transparency and interpretability. CNNs are often referred to as "black box" models because they lack transparency in their decision-making process, which is a concern in critical applications like medical diagnosis. By incorporating interpretability techniques, such as attention maps or saliency maps, researchers can identify the regions of an image that contribute most to the classification decision. This information can help clinicians understand and trust the model's predictions, leading to better acceptance and adoption of AI systems in clinical practice.

The deployment of skin cancer detection systems based on VGG-16 can also be extended to mobile and telemedicine applications. With the increasing penetration of smartphones and improved connectivity, mobile applications that utilize deep learning models can empower individuals to perform preliminary skin cancer screenings at home. By leveraging the computational capabilities of modern smartphones, the VGG-16 model can be deployed directly on mobile devices, providing real-time feedback and encouraging early detection.
Skin cancer is a major public health concern, with the incidence of this potentially deadly disease increasing globally. Early detection plays a crucial role in improving patient outcomes, and recent advancements in deep learning techniques have shown promise in automating the skin cancer detection process. In this study, we utilized the VGG-16 convolutional neural network (CNN) architecture to develop a robust and accurate skin cancer detection model.

The VGG-16 model is renowned for its deep architecture, consisting of 16 layers with a large number of trainable parameters. We leveraged its convolutional layers to extract meaningful features from dermoscopic images, which are commonly used in skin cancer diagnosis. By applying a combination of convolutional, pooling, and fully connected layers, the VGG-16 model was able to learn complex patterns and representations, enabling accurate classification of skin lesions into malignant or benign categories.

Our study utilized a comprehensive dataset comprising thousands of dermoscopic images, including both malignant and benign lesions. The dataset was meticulously curated and annotated by expert dermatologists, ensuring high-quality training and evaluation data. We employed a standard training procedure, consisting of data preprocessing, augmentation, and splitting into training, validation, and testing sets. The model was trained using the backpropagation algorithm and optimized using the Adam optimizer, which enabled efficient convergence and minimized the risk of overfitting.

The performance of the skin cancer detection model was evaluated using various metrics, including accuracy, precision, recall, and F1 score. The results demonstrated that our VGG-16 CNN model achieved outstanding performance, with an overall accuracy exceeding 90%. Furthermore, the model exhibited high precision and recall rates, indicating low rates of false positives and false negatives, respectively.
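The evaluation metrics named above can be sketched as follows; the function and the toy labels are illustrative, not taken from the study:

```python
# Accuracy, precision, recall, and F1 from binary labels
# (1 = malignant, 0 = benign), computed via the confusion counts.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0   # few false positives -> high
    recall = tp / (tp + fn) if tp + fn else 0.0      # few false negatives -> high
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Toy example: 3 malignant and 5 benign cases, one miss and one false alarm.
acc, prec, rec, f1 = binary_metrics([1, 1, 1, 0, 0, 0, 0, 0],
                                    [1, 1, 0, 0, 0, 0, 0, 1])
print(acc, round(prec, 3), round(rec, 3))  # 0.75 0.667 0.667
```

In a screening setting recall (sensitivity) is usually the critical number, since a false negative means a missed cancer, which is why it is reported alongside overall accuracy.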
These results indicate the potential of our model to assist dermatologists in the accurate and timely diagnosis of skin cancer, ultimately improving patient outcomes. In addition to its high accuracy, our skin cancer detection model also showcased robustness and generalizability. We conducted extensive cross-validation experiments, which involved splitting the dataset into folds and training the model on all but one fold while evaluating it on the held-out fold. The consistently high performance across different folds demonstrated the model's ability to generalize well to unseen data, suggesting its suitability for real-world clinical applications.

Despite the significant achievements of our skin cancer detection model, several areas warrant further investigation and improvement. Firstly, the availability of larger and more diverse datasets can enhance the model's ability to generalize to various skin types, lesions, and demographics. Additionally, exploring ensemble techniques and incorporating other state-of-the-art CNN architectures could potentially yield even higher accuracy and robustness.

In conclusion, our study demonstrated the effectiveness of the VGG-16 CNN architecture in detecting skin cancer from dermoscopic images. The model exhibited high accuracy, precision, and recall rates, showcasing its potential to assist dermatologists in diagnosing skin cancer accurately and efficiently. The results underscore the promising role of deep learning techniques in automating the detection process, thereby enabling early intervention and improving patient outcomes. Further research and advancements in this field hold the potential to revolutionize skin cancer diagnosis and ultimately save lives.
Copyright © 2023 Jithu K S, Saandra Baasuri, Sathwik R, Prof. Simi M S. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET54169
Publish Date : 2023-06-18
ISSN : 2321-9653
Publisher Name : IJRASET