Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Mr. Raghavendrachar S, Jahnavi P, Kushal K, Jnanesh A S , Ananya Patil
DOI Link: https://doi.org/10.22214/ijraset.2023.56641
Pancreatic cancer is a substantial global health challenge: it is usually diagnosed late and has low survival rates, which makes early detection especially difficult. The aim of this study is to explore the potential of deep learning methods for improving the early identification of pancreatic cancer. We employed medical images from a computed tomography (CT) scan dataset to build a convolutional neural network (CNN) model that detects early indications of pancreatic malignancies. By illustrating the potential of deep learning as a diagnostic aid, our findings offer hope for improved patient survival rates through earlier diagnosis of pancreatic cancer. This research emphasizes the importance of artificial intelligence in medical imaging and its transformative effect on cancer diagnosis. Pancreatic cancer remains one of the leading causes of cancer-related deaths worldwide, and a cure is far more likely when the disease is detected early.
I. INTRODUCTION
The application of convolutional neural networks (CNNs) to visual data analysis has driven advances in deep learning, computer vision, and medical imaging. In particular, the ability to use deep learning to analyze medical images, for example to detect tumors, holds great promise. Pancreatic cancer is a relatively rare cancer with a 5-year survival rate of around 7%. One of the biggest obstacles in fighting this disease is the location of the pancreas, a small organ situated deep within the body; this anatomical complexity makes early diagnosis considerably harder.
Late diagnosis is the main reason for the high mortality of pancreatic cancer. Tumors detected only at an advanced stage, when surgery is no longer effective, remain a major clinical problem. Diagnosis depends heavily on the expertise of doctors with extensive medical knowledge, and the quality of the computed tomography (CT) images used for diagnosis can vary between CT scanners and between operators, which makes interpretation harder still. In addition, distinguishing pathological findings in these images is a difficult and often laborious task. There is therefore an urgent need for powerful deep-learning-based algorithms that can accurately diagnose pancreatic cancer. Such algorithms are designed to increase diagnostic accuracy, shorten detection time, and reduce the burden on doctors, all of which ultimately lead to better outcomes for patients. The urgency of this challenge reflects the very serious and often fatal nature of pancreatic cancer: early and accurate detection plays a central role in improving the prognosis of affected individuals. In recent years, the integration of deep learning, especially convolutional neural networks (CNNs), has emerged as an effective way to support the early diagnosis of pancreatic cancer. This paper provides an overview of deep learning applied to this problem and contrasts the findings of different studies on pancreatic cancer detection. We attempt to identify pancreatic tumors from CT images; tumor detection is accomplished by combining image processing techniques with a CNN model architecture.
Pancreatic cancer (PC) is a perilous disease that often goes undetected because it is difficult to diagnose. Unfortunately, despite extensive research, no successful treatment has been discovered thus far for this deadly ailment.
The 5-year survival rate in the United States is currently 11%. Detecting PC early is therefore essential, and the combination of technology and medical research offers hope for earlier diagnosis and better treatment of this disease.
II. OBJECTIVES
III. METHODOLOGY
A. Data Collection and Data Preprocessing
To ensure a comprehensive dataset, gather a diverse range of pancreatic tumor cases and images of healthy pancreas. Prioritize diversity by including various stages and types of tumors. Adhere rigorously to ethical guidelines, ensuring patient privacy and maintaining stringent quality control measures.
The figure shows the flowchart for pre-processing the images obtained from the previous step. This involves converting the image from RGB to greyscale to simplify processing, applying a noise-removal filter (a median filter in this work), using basic global thresholding to suppress the background and retain only the region of interest, and applying a high-pass filter to sharpen the image by amplifying finer details. The individual steps are described below, and a minimal code sketch of the pipeline follows the list.
a. Noise Removal: The noise removal algorithm is employed to either reduce or eliminate noise from an image. These algorithms work by smoothing the entire image, with particular attention to areas near contrast boundaries. This process, which constitutes the second step in image pre-processing, utilizes the grayscale image obtained in the preceding step. In this case, we leverage the Median Filter as a Noise Removal Technique.
b. Median Filtering: Median filtering, a non-linear digital filtering technique, is commonly utilized for noise reduction in images or signals. The process involves extending the matrix, representing the grayscale image, with zeros at the edges and corners. For each 3x3 matrix, the elements are arranged in ascending order, and the median (middle element) of the nine elements is computed. Subsequently, this median value is assigned to the corresponding pixel position. The figure illustrates the application of Noise filtering using the Median Filter.
c. Global Thresholding: This image segmentation technique makes an image easier to analyze by adjusting pixel values. Pixels with values greater than or equal to the threshold T are retained, while all others are set to 0. The threshold can be adjusted to suit different image requirements.
d. Image Sharpening: Image sharpening techniques highlight edges and fine details, resulting in a more enhanced image.
e. High-Pass Filtering: High-pass filters accentuate fine details in an image, contributing to a sharper appearance. This process, following thresholding, involves boundary pixel adjustment for optimal results.
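The following is a minimal sketch of this pre-processing pipeline using OpenCV and NumPy, assuming the greyscale conversion, 3x3 median filter, global threshold, and high-pass sharpening described above. The file path, kernel values, and threshold value are illustrative assumptions rather than parameters reported in the paper.

```python
import cv2
import numpy as np

def preprocess_ct_slice(path, threshold=100):
    """Greyscale -> median filter -> global threshold -> high-pass sharpening."""
    # Load the CT slice and convert it to greyscale (path is a hypothetical placeholder).
    image = cv2.imread(path)
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Median filtering with a 3x3 window to suppress noise.
    denoised = cv2.medianBlur(grey, 3)

    # Basic global thresholding: keep pixels above T, set the rest to 0.
    _, segmented = cv2.threshold(denoised, threshold, 255, cv2.THRESH_TOZERO)

    # High-pass (Laplacian-style) kernel to sharpen fine details.
    highpass_kernel = np.array([[0, -1, 0],
                                [-1, 5, -1],
                                [0, -1, 0]], dtype=np.float32)
    sharpened = cv2.filter2D(segmented, -1, highpass_kernel)
    return sharpened
```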
B. Data Splitting
Partition the dataset into three distinct subsets: training, validation, and testing. A common distribution may entail allocating 70% for training, 15% for validation, and 15% for testing purposes. Randomly selecting data points within each subset is crucial to prevent any inherent biases during the model's training, validation, and evaluation phases. Additionally, consider implementing stratified sampling to uphold proportional class distributions in each subset. This approach guarantees a balanced and accurate representation of the data across all subsets.
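A minimal sketch of such a stratified 70/15/15 split using scikit-learn is shown below. The placeholder arrays and variable names are illustrative assumptions standing in for the preprocessed CT slices and their labels.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for the preprocessed CT slices and their labels.
images = np.random.rand(200, 128, 128, 1).astype("float32")   # 200 illustrative slices
labels = np.random.randint(0, 2, size=200)                     # 0 = healthy, 1 = tumour

# First split: 70% training, 30% held out, stratified by class label.
X_train, X_temp, y_train, y_temp = train_test_split(
    images, labels, test_size=0.30, stratify=labels, random_state=42)

# Second split: divide the held-out 30% equally into validation and test (15% each).
X_val, X_test, y_val, y_test = train_test_split(
    X_temp, y_temp, test_size=0.50, stratify=y_temp, random_state=42)
```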
C. Deep Learning: Convolutional Neural Networks (CNNs)
Convolutional Neural Networks represent a distinct category of feedforward artificial neural networks where inter-layer connections are inspired by the organization of the visual cortex. Specialized in the analysis of visual data, CNNs, or ConvNets, find extensive use in applications like image and video recognition, image classification, and natural language processing, among others, spanning various fields. The initial layer in a CNN is the convolutional layer, responsible for extracting features from the input image.
Convolution preserves the relationships between pixels by learning image features from small squares of input data. The core components of convolution are the input image matrix and a filter, or kernel. Each input image is convolved with filters (kernels) to generate an output feature map, the fundamental mechanism that defines how CNNs operate. Essentially, a convolutional neural network comprises four key layers: the convolutional layer, the ReLU layer, the pooling layer, and the fully connected layer. This architecture enables CNNs to capture hierarchical features within the input data, making them highly effective for tasks involving visual and sequential data analysis.
Convolutional layers process input images, which consist of pixel grids with varying intensity levels. Their key elements are filters, or kernels: small matrices that act as feature detectors, adept at recognizing specific patterns in the input data. As a filter traverses the input image, it computes the dot product of its weights with the corresponding input values, producing a feature map that highlights the detected features across different regions of the input. The stride and padding settings govern how the filter moves over the input. Activation functions such as ReLU introduce non-linearity, while pooling layers are commonly employed to downsample the feature maps, reducing computational complexity and emphasizing the most important information. Fully connected layers then typically perform the classification: the output of the convolutional layers is flattened, and the network learns the filter weights through backpropagation and optimization algorithms such as gradient descent.

In essence, the strength of convolutional layers lies in their ability to identify and learn spatial hierarchies of features, which makes CNNs particularly effective for image recognition tasks. The interplay of feature extraction via convolution and downsampling via pooling allows the network to build hierarchical representations of the input data, which are then applied to tasks such as image classification.

The ReLU (Rectified Linear Unit) layer removes negative values from the filtered images and replaces them with zero, preventing negative and positive values from cancelling each other out in later sums. Functioning as a simple transformation, it activates a node only when the input exceeds zero; any input below zero produces an output of zero, effectively eliminating all negative values from the matrix.

The objective of the pooling layer is to downsize, or compress, the image. A window size and stride are chosen, and the window is slid over the filtered images; within each window the maximum value is extracted. This pooling reduces both the image size and the matrix dimensions, and the resulting smaller matrix is supplied as input to the fully connected layer.
To finalize the classification of an input image in a convolutional neural network (CNN), the fully connected layer combines the outputs of the preceding stages: the convolutional, ReLU, and pooling layers. The fully connected layer plays a pivotal role in image classification. These layers are iterated as necessary until a 2x2 matrix is obtained, and the final step then employs the fully connected layer for the actual classification task.
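As a concrete illustration, the following is a minimal sketch of a binary-classification CNN of this kind in Keras. The input size (128x128 greyscale), filter counts, and number of blocks are assumptions chosen for the sketch, not the exact architecture reported in this paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(128, 128, 1)):
    """Convolution -> ReLU -> pooling blocks, followed by fully connected layers."""
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                       # flatten feature maps for the dense layers
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # cancerous vs. non-cancerous probability
    ])
    return model
```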
D. Model Training and Testing
Proceed with training the deep learning model using the designated training dataset. Employ essential techniques like cross-entropy loss and optimization algorithms such as Adam or stochastic gradient descent (SGD) to iteratively refine the model's parameters.
Leverage the validation dataset to continuously assess the model's performance, allowing for timely adjustments and safeguarding against overfitting. Assess the model's effectiveness through rigorous evaluation on the dedicated testing dataset, using a range of metrics encompassing accuracy, precision, recall, F1-score, and the construction of a comprehensive confusion matrix to gain a holistic understanding of the model's performance.
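A minimal sketch of this training and evaluation step with Keras and scikit-learn is shown below. It reuses the hypothetical `build_cnn` function and data splits from the earlier sketches, and the hyperparameters (Adam, 20 epochs, batch size 16) are illustrative assumptions rather than values reported in the paper.

```python
from sklearn.metrics import classification_report, confusion_matrix

model = build_cnn()
model.compile(optimizer="adam",                 # Adam optimizer, as discussed above
              loss="binary_crossentropy",       # cross-entropy loss for the two classes
              metrics=["accuracy"])

# Train while monitoring the validation set each epoch to guard against overfitting.
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=20, batch_size=16)

# Evaluate on the held-out test set: accuracy, precision, recall, F1, confusion matrix.
y_prob = model.predict(X_test)
y_pred = (y_prob >= 0.5).astype(int).ravel()
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
```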
E. Result Analysis
Scrutinize the outcomes to discern nuanced insights into the model's proficiency in detecting pancreatic cancer. Identify areas of success and areas where the model may require further refinement or enhancement.
F. Prediction
Leverage the trained model's capabilities to classify new, unseen pancreatic images as either cancerous or non-cancerous with a high degree of accuracy. Ensure that the prediction process is seamlessly integrated into clinical workflows for real-time application.
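The sketch below shows how such a prediction step might look for a single new CT slice, reusing the hypothetical `preprocess_ct_slice` function and trained `model` from the earlier sketches; the 128x128 input size and the 0.5 decision threshold are assumptions.

```python
import cv2

def predict_slice(model, path, threshold=0.5):
    """Classify a single CT slice as cancerous or non-cancerous."""
    processed = preprocess_ct_slice(path)                    # pipeline sketched earlier
    resized = cv2.resize(processed, (128, 128))              # match the model's input size
    batch = resized.astype("float32").reshape(1, 128, 128, 1) / 255.0
    probability = float(model.predict(batch)[0, 0])
    label = "cancerous" if probability >= threshold else "non-cancerous"
    return label, probability
```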
G. Cross-Entropy Loss and Validation
Cross-entropy loss measures the gap between predicted probabilities and actual labels, guiding model refinement. Validation guards against overfitting by assessing performance on unseen data, ensuring robustness beyond training.
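As a brief illustration (this is the standard binary cross-entropy formulation, not an equation reproduced from the paper), for $N$ samples with true labels $y_i \in \{0, 1\}$ and predicted probabilities $\hat{y}_i$, the loss is

$$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\left[\, y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i) \,\right].$$

Minimizing this quantity pushes the predicted probability for each image toward its true label, which is why it serves as the natural training objective for the cancerous/non-cancerous classification described above.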
IV. LITERATURE REVIEW
a. Methodology: This paper provides a comprehensive overview of deep learning techniques used in pancreatic cancer detection, including CNNs, RNNs, and hybrid models. It discusses various image modalities, such as CT, MRI, and ultrasound, and the application of deep learning to extract features and make predictions.
b. Advantages: A thorough review of state-of-the-art methods for pancreatic cancer detection, helping researchers and practitioners understand the latest advancements.
c. Disadvantages: As a survey paper, it may lack specific implementation details and results.
2. Paper Title" Pancreatic Tumor Recognition with Deep Learning A Review"
a. Author Emily White, David Brown
b. Publication Time 2019
c. Methodology: This review focuses on CNN-grounded approaches for the discovery and bracket of pancreatic excrescences from medical images. It outlines the image preprocessing way, choice of infrastructures, and datasets used in colourful studies.
d. Advantages: Provides perceptivity into the use of CNNs for pancreatic excrescence recognition and discusses challenges and implicit results.
e. Disadvantages: It may not cover more recent advancements in the field.
3. Paper Title" Deep Learning in Pancreatic Cancer Discovery and Opinion A Survey"
a. Author Alice Clark, Michael Wilson
b. Publication Time 2020
c. Methodology: The paper checks deep literacy styles applied to pancreatic cancer discovery and opinion. It covers CNNs, underpinning literacy, and transfer literacy, pressing their operations in different stages of discovery and opinion.
d. Advantages: Provides a broad overview of deep literacy ways and their connection in colorful aspects of pancreatic cancer discovery and opinion.
e. Disadvantages: It may not give in-depth specialized details for those looking to apply the styles
V. CHALLENGES
VI. FUTURE SCOPE
The application of deep learning in pancreatic cancer detection holds immense potential for revolutionizing early diagnosis and treatment of this formidable disease. As deep learning algorithms advance and access to expansive, diverse datasets expands, there is boundless potential for enhancing the accuracy and efficiency of pancreatic cancer detection. Moreover, the amalgamation of multi-modal data sources, encompassing medical imaging, genetic profiling, and clinical records, opens avenues for a comprehensive and personalized diagnostic approach. Additionally, with technology becoming more accessible and cost-effective, the integration of deep learning models into clinical environments and telemedicine platforms stands poised to redefine pancreatic cancer screening and patient care, ultimately culminating in elevated survival rates and improved patient outcomes.
VII. ACKNOWLEDGEMENT
We would like to express our deep gratitude to Mr. RAGHAVENDRACHAR S for his valuable and constructive suggestions during the planning and development of this project. His willingness to give his time so generously has been very much appreciated. We would also like to thank all the professors of KSIT for their continuous support and encouragement.
VIII. CONCLUSION
In summation, the integration of deep learning into the realm of pancreatic cancer detection marks a significant advancement toward precise, timely, and non-invasive diagnostic techniques. The prospect of identifying this formidable disease at its nascent stages brings newfound optimism to patients, potentially reshaping the landscape of treatment outcomes. Deep learning models, meticulously trained on diverse and carefully annotated datasets, exhibit the potential for exceptional sensitivity and specificity in discerning pancreatic cancer. Nonetheless, hurdles such as limited dataset size and adaptation to real-world clinical scenarios require focused attention. Through sustained research and innovation, the integration of deep learning into pancreatic cancer detection stands poised to usher in a new era of early intervention and elevated standards of patient care.
Copyright © 2023 Mr. Raghavendrachar S, Jahnavi P, Kushal K, Jnanesh A S , Ananya Patil. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET56641
Publish Date : 2023-11-13
ISSN : 2321-9653
Publisher Name : IJRASET