Skin diseases are prevalent among humans, affecting millions worldwide. These conditions, often harboring hidden risks, not only erode self-confidence and induce psychological distress but also elevate the likelihood of developing skin cancer. Because images of affected skin often lack sufficient visual resolution, experts and high-level instruments are required to diagnose skin diseases. The proposed framework employs machine learning techniques, namely a CNN architecture and three predefined models: EfficientNet, ResNet, and Inception.
I. INTRODUCTION
In recent years, skin diseases have garnered significant attention due to their sudden emergence and the heightened complexities that elevate risks to life. Many of these dermatological conditions are highly contagious and necessitate early intervention to prevent further spread. The primary cause of most conditions is prolonged exposure to excessive ultraviolet (UV) radiation. While benign types are relatively less perilous and can be effectively treated, malignant melanoma represents the most lethal form of skin abnormality. Consequently, researchers are exploring the application of deep learning models to classify skin diseases based on images of affected areas.
On the other hand, skin cancer can be fatal, and its diagnosis is complicated by its slow, often latent onset. Skin diseases are among the most common complaints of people around the world. Basal cell carcinoma (BCC), melanoma, and squamous cell carcinoma (SCC) are common forms of skin cancer. The current incidence of skin cancer exceeds that of new cases of lung cancer and breast cancer. Some skin conditions can take a long time to treat because symptoms may develop for months before being detected.
As a result, computer-aided disease diagnosis emerged, which can provide results with greater accuracy in a shorter period of time than human analysis using laboratory procedures. Deep learning is the most widely used technology in skin disease prediction. This study demonstrates a robust mechanism for identifying skin diseases using a monitoring process that reduces diagnostic costs.
Classification using deep learning algorithms has shown promise in accurately identifying various skin conditions based on visual cues [3]. Additionally, studies have explored the integration of other modalities such as dermoscopy and multispectral imaging to improve diagnostic accuracy [4]. Some efforts were made to develop smartphone applications and web-based tools for remote diagnosis and consultation, aiming to enhance accessibility to dermatological expertise, particularly in underserved regions [5]. Skin disease detection methodologies underscore a shift towards more efficient, accessible, and accurate diagnostic approaches, with the potential to significantly impact clinical practice and patient care.
II. LITERATURE SURVEY
Some researchers have proposed techniques based on image processing to detect skin diseases; here we briefly review some of the techniques reported in the literature. In [1], a system is proposed to diagnose skin diseases using color images without medical intervention. The system consists of two stages: the first detects infected skin areas using color image processing techniques, k-means clustering, and color gradients to identify diseased regions, and the second classifies the type of disease using artificial neural networks.
The system was tested on six types of skin diseases with an average accuracy of 95.99% for the first stage and 94.016% for the second stage. In the method of [2], image feature extraction is the first step in detecting skin diseases; the more features extracted from the image, the higher the accuracy of the system. The authors of [2] applied this method to nine types of skin diseases with an accuracy of up to 90%.
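As a rough illustration of the first stage described in [1], the following is a generic sketch of k-means color clustering for isolating a candidate lesion region. It is not the authors' exact pipeline; the function name and the heuristic of selecting the darkest cluster are illustrative assumptions.

```python
# Generic sketch: k-means colour clustering to isolate a candidate lesion region.
import numpy as np
import cv2
from sklearn.cluster import KMeans

def segment_lesion(image_bgr, n_clusters=3):
    """Cluster pixel colours and return a binary mask of the darkest cluster,
    which in many lesion images corresponds to the diseased region."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    # Pick the cluster with the lowest mean intensity as the candidate lesion.
    means = [pixels[labels == k].mean() for k in range(n_clusters)]
    lesion_cluster = int(np.argmin(means))
    mask = (labels == lesion_cluster).reshape(image_bgr.shape[:2]).astype(np.uint8) * 255
    return mask

# Example usage (file name is hypothetical):
# img = cv2.imread("skin_sample.jpg")
# mask = segment_lesion(img)
```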
Melanoma is a type of skin cancer that can lead to death if not diagnosed and treated early. The authors of [3] focused on researching different segmentation techniques that can be applied to detect malignant tumors using image processing. The segmentation process delineates the boundaries of the infected regions so that further features can be extracted.
The work of [4] proposed the development of a melanoma diagnostic tool for dark skin using a dedicated algorithmic database consisting of images from multiple melanoma resources. Similarly, [5] discussed the classification of skin diseases such as melanoma, basal cell carcinoma (BCC), moles, and seborrheic keratosis (SK) using the support vector machine (SVM) technique, which provides the best accuracy among a variety of other techniques. On the other hand, the spread of chronic skin diseases in different regions can have serious consequences. Therefore, [6] proposed a computer system capable of automatically detecting eczema and determining its severity.
The system consists of three stages: the first performs effective segmentation through skin detection, the second extracts a set of features, namely color, texture, contour, and phase, and the third determines the severity of eczema using a support vector machine (SVM). The role of computer vision is to extract features from images, while machine learning is used to detect skin diseases. The system was tested on six types of skin diseases with 95% accuracy.
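A minimal sketch of this feature-then-SVM idea, as used in [5] and [6], is shown below. The color histogram feature here is a simplified stand-in for the color/texture/contour features those works describe, and the variable names are illustrative.

```python
# Sketch: extract a simple colour-histogram feature and classify it with an SVM.
import numpy as np
import cv2
from sklearn.svm import SVC

def colour_histogram(image_bgr, bins=8):
    """Flattened 3-D colour histogram used as a simple image feature vector."""
    hist = cv2.calcHist([image_bgr], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

# Hypothetical training data: lists of images and their disease labels.
# features = np.array([colour_histogram(img) for img in training_images])
# classifier = SVC(kernel="rbf", C=1.0)
# classifier.fit(features, training_labels)
# prediction = classifier.predict([colour_histogram(test_image)])
```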
III. METHODOLOGY
A. Technologies used
Python, a high-level programming language known for its interpreted nature, offers an object-oriented approach that facilitates clear and logical code for projects of varying scales. Dynamically typed and garbage-collected, Python supports functional and structured programming, along with built-in functions for operations such as filtering, mapping, and reduction. It serves as the foundation for the machine learning algorithms and libraries used here and provides data structures such as lists, dictionaries, sets, and generators. Python code is platform-independent, enabling execution across different environments such as Anaconda.

Anaconda, a freely available, open-source distribution tailored for scientific computing, streamlines package management and deployment. Developed and maintained by Anaconda, Inc., it bundles the Python and R programming languages for applications such as large-scale data processing, data science, and predictive analytics. A more compact variant, Miniconda, includes only the essential components: conda, Python, and the requisite packages.

Jupyter Notebook, also known as IPython Notebook, is an interactive web-based computational tool for creating documents that combine code, text, and visualizations. Using the ".ipynb" extension, it is open-source software that supports interactive computation, integrates with multiple languages, and offers flexibility in configuration and deployment across systems.
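A small illustration of the built-in filtering, mapping, and reduction operations mentioned above; the prediction scores used here are purely hypothetical.

```python
# Built-in filtering, mapping, and reduction on a list of example scores.
from functools import reduce

scores = [0.91, 0.42, 0.78, 0.15]

confident = list(filter(lambda s: s > 0.5, scores))           # filtering
percentages = list(map(lambda s: round(s * 100, 1), scores))  # mapping
total = reduce(lambda a, b: a + b, scores)                    # reduction

print(confident, percentages, total)
```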
B. VGG-19
The model is based on the VGG-19 architecture, a well-known deep learning model recognized for being simple and effective in image classification. VGG-19 consists of 19 weight layers, including 16 convolutional layers and 3 fully connected layers. Its initial layers extract low-level features such as edges, textures, and basic shapes through convolutional and pooling operations.
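As an illustration, the following is a minimal sketch of adapting a VGG-19 backbone for skin-image classification, assuming a TensorFlow/Keras environment. The input size, classification head, and the number of disease classes (NUM_CLASSES) are illustrative assumptions, not values fixed by this work.

```python
# Sketch: VGG-19 backbone with a small classification head.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19

NUM_CLASSES = 6  # hypothetical number of skin-disease classes

backbone = VGG19(weights="imagenet", include_top=False, input_shape=(256, 256, 3))
backbone.trainable = False  # use the convolutional layers as a fixed feature extractor

x = layers.GlobalAveragePooling2D()(backbone.output)
x = layers.Dense(256, activation="relu")(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(backbone.input, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```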
C. Experimental Details
Datasets: In 2019, the APTOS dataset was unveiled on the Kaggle website as part of a public competition focused on skin cancer detection. The primary objective was to leverage the fundus images to categorize disease severity by generating probabilities for each image falling into one of five severity clusters, including No, Mild, Moderate, and Severe. The samples were collected by the Aravind Skin Hospital in India, and the dataset comprised around 13,000 images; however, access was limited to the ground-truth labels of only 3,662 images.
Data Preprocessing: The uninformative black areas on the sides of each image were first trimmed, and a circular crop was then applied to obtain a centered retinal image. Moreover, a filtering technique was used to enhance the clarity of visual bio-markers, described by the following equations: X' = α·X + β·X0 + γ (1), where X0 = G(σx) * X (2). Here, X represents the input data, G(σx) denotes a 2D Gaussian kernel with a standard deviation of σx = 15 in the x-direction, and * signifies the convolution operation. The values of α, β, and γ were empirically set to 5, -4, and 70, respectively. Subsequently, each image was normalized to the range [0, 1], resized to (256×256) using bilinear interpolation, and cast to 32-bit floating point. The resultant X' is the output of the pre-processing step and serves as input to the model.
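A sketch of this pre-processing as it could be realized with OpenCV and NumPy is given below; the circular crop is simplified to a centered circular mask, and the function name is illustrative.

```python
# Sketch of the pre-processing described by Eqs. (1)-(2).
import cv2
import numpy as np

ALPHA, BETA, GAMMA, SIGMA_X = 5, -4, 70, 15  # empirically chosen values from the text

def preprocess_image(image_bgr, size=256):
    # X0 = G(sigma_x) * X : Gaussian-blurred copy of the input (Eq. 2)
    blurred = cv2.GaussianBlur(image_bgr, (0, 0), sigmaX=SIGMA_X)
    # X' = alpha*X + beta*X0 + gamma (Eq. 1)
    enhanced = cv2.addWeighted(image_bgr, ALPHA, blurred, BETA, GAMMA)

    # Simplified centred circular crop to suppress the uninformative corners.
    h, w = enhanced.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.circle(mask, (w // 2, h // 2), min(h, w) // 2, 255, -1)
    enhanced = cv2.bitwise_and(enhanced, enhanced, mask=mask)

    # Resize with bilinear interpolation, cast to float32, normalise to [0, 1].
    resized = cv2.resize(enhanced, (size, size), interpolation=cv2.INTER_LINEAR)
    return resized.astype(np.float32) / 255.0
```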
Padding: Padding is a technique commonly employed to increase the height and width of the output. Its primary purpose is often to align the output dimensions with those of the input. By incorporating additional filler pixels along the edges of the input image, we effectively augment the image's overall size. Typically, these extra pixels are assigned a value of zero.
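A small illustration of zero padding, assuming NumPy: one pixel of zeros is added on every edge, so a 3×3 convolution would preserve the 4×4 input size.

```python
# Zero padding: add a one-pixel border of zeros around a 4x4 image.
import numpy as np

image = np.arange(16).reshape(4, 4)
padded = np.pad(image, pad_width=1, mode="constant", constant_values=0)
print(padded.shape)  # (6, 6)
```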
Pooling: Developers may choose to incorporate a pooling layer, also referred to as a down-sampling layer, following several convolutional layers. Within this category, numerous alternatives exist, with the most prevalent being max pooling. This method employs a filter with a consistent stride (typically of size 2×2) applied across the input volume, outputting the maximum value within each subregion.
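A small illustration of 2×2 max pooling with stride 2, assuming NumPy: each non-overlapping 2×2 block is replaced by its maximum value.

```python
# 2x2 max pooling with stride 2 on a 4x4 feature map.
import numpy as np

feature_map = np.array([[1, 3, 2, 4],
                        [5, 6, 1, 2],
                        [7, 2, 9, 0],
                        [1, 8, 3, 4]])

pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # [[6 4]
               #  [8 9]]
```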
Ensembled Model: The proposed solution combines three different algorithms, all of which are convolutional neural network architectures from the deep learning family. The algorithms are EfficientNet, ResNet, and Inception, which are merged into one ensemble model, as sketched below.
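One way such an ensemble can be built is by averaging the softmax outputs of the three backbones, assuming a TensorFlow/Keras environment. The specific variants (EfficientNetB0, ResNet50, InceptionV3), the input size, and NUM_CLASSES are illustrative assumptions, not values fixed by this work.

```python
# Sketch: ensemble of three pre-trained backbones by averaging their predictions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import EfficientNetB0, ResNet50, InceptionV3

NUM_CLASSES = 6  # hypothetical number of skin-disease classes
INPUT_SHAPE = (256, 256, 3)

def build_branch(backbone_cls, inputs):
    """Attach a softmax head to a pre-trained backbone; per-backbone
    preprocess_input is omitted here for brevity."""
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=INPUT_SHAPE, pooling="avg")
    features = backbone(inputs)
    return layers.Dense(NUM_CLASSES, activation="softmax")(features)

inputs = layers.Input(shape=INPUT_SHAPE)
branch_outputs = [build_branch(cls, inputs)
                  for cls in (EfficientNetB0, ResNet50, InceptionV3)]
outputs = layers.Average()(branch_outputs)  # average the three probability vectors

ensemble = Model(inputs, outputs)
ensemble.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                 metrics=["accuracy"])
```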
D. Use case Diagram
IV. CONCLUSION
In summary, our framework combines computer vision and machine learning techniques, yielding promising results in disease detection. It has the potential to assist individuals globally and facilitate productive work. Utilizing freely available tools, the system can be deployed at no cost. The developed application is lightweight and suitable for machines with limited system specifications, featuring a user-friendly interface for ease of use. The integration of deep learning and machine learning algorithms has been carried out successfully. Additionally, we have designed a doctor-patient interaction portal for skin disease prediction and appointment booking, incorporating a user-friendly chatbot application. An ensemble model comprising ResNet, Inception, and EfficientNet was used to train the system, with doctor, patient, and chatbot modules integrated into a web application. The convolutional neural network-based system was implemented for classifying diseases in input skin images; images of various shapes and sizes depicting skin diseases are classified using the trained model. The proposed system achieved 90% accuracy in classifying the different types of skin diseases.
REFERENCES
[1] Shamsul Arifin, M., Golam Kibria, M., Firoze, A., Ashraful Amini, M., & Yan, H. (2012). Dermatological diagnosis using color skin images. 2012 International Conference on Machine Learning and Cybernetics. doi:10.1109/icmlc.2012.6359626
[2] Jana, E., Subban, R., & Saraswathi, S. (2017). Research on skin cancer cell detection using image processing. 2017 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). doi:10.1109/iccic.2017.8524554
[3] Mhaske, H. R., & Phalke, D. A. (2013). Melanoma skin cancer detection and classification based on supervised and unsupervised learning. 2013 International Conference on Circuits, Controls and Communications (CCUBE). doi:10.1109/ccube.2013.6718539
[4] Alfed, Khelifi, F., Bouridane, A., & Seker, H. (2015). Pigment network-based skin cancer detection. 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). doi:10.1109/embc.2015.7320056
[5] Lau, H. T., & Al-Jumaily, A. (2009). Automatically early detection of skin cancer: Study based on neural network classification. 2009 International Conference of Soft Computing and Pattern Recognition (SoCPaR). doi:10.1109/socpar.2009.80
[6] Dubal, P., Bhatt, S., Joglekar, C., & Patil, S. (2017). Skin cancer detection and classification. 2017 6th International Conference on Electrical Engineering and Informatics (ICEEI). doi:10.1109/iceei.2017.8312419