Our work aims to develop a system for breast cancer detection using image processing techniques. It addresses the limitations of mammography, such as the discomfort of the procedure, by introducing thermal image analysis. For mammogram analysis, morphology techniques are applied to enhance the images before feature extraction. The preprocessed images are then used to classify mammograms into three categories: Normal Fatty Breast, Abnormal Fatty Breast, and Glandular Breast. In the case of thermal images, the system analyses the distribution of heat around the breast; variations in heat levels are used to identify potential cancerous areas. Additionally, the project includes a user-friendly Graphical User Interface (GUI) to enhance accessibility for users. The proposed system lets the user choose among three architectures, namely CNN, VGG16, and DenseNet121, with which the result is obtained.
I. INTRODUCTION
Due to variations in tissue density, the breast can be categorized into two primary types: adipose-rich breasts and glandular breasts. An adipose-rich breast contains a higher proportion of fat tissue than fibroglandular tissue, whereas a glandular breast has a greater concentration of fibroglandular tissue than fat tissue. Breast cancer arises from the rapid, uncontrolled proliferation and alteration of breast tissue, leading to the formation of abnormal masses or lumps of excessive tissue growth. These anomalous growths are referred to as tumors, which can be either benign or malignant. Breast cancer stands as one of the most prevalent malignancies affecting women and is a leading cause of mortality among middle-aged women. According to the 2011 World Health Statistics of the Global Health Observatory (GHO), breast cancer exhibits the highest fatality rate (11.81%) among females on a global scale, with an incidence rate of 15.80%. Early detection plays a pivotal role in ensuring the efficacy of breast cancer treatment. Presently, two imaging methodologies, namely thermography and mammography, are employed to detect abnormal masses in the breast.
II. LITERATURE SURVEY
The study combines mammography and deep learning, using a convolutional neural network (CNN) to classify mammogram images into benign and malignant tumors. It employs image pre-processing and achieves an impressive training accuracy of over 99%. The method demonstrates high prediction accuracy and excels in precision, recall, and F1 score metrics. This innovative approach has the potential to revolutionize breast cancer detection, improving patient outcomes [3].

An innovative deep learning approach, using DenseNet-201 CNNs, enhances breast cancer detection in histopathology images. It aims to differentiate between Benign and Malignant conditions, achieving an impressive 95.58% classification accuracy, with precision and recall metrics at 0.90 and 0.99, and an F1-score of 0.89. Leveraging the BreakHis dataset, this method offers reliable results, advancing early breast cancer diagnosis and improving medical interventions [4].

Breast cancer diagnosis, critical for one in eight women globally, benefits from modern medical imaging and machine learning integration. These systems surpass manual methods in detecting cancerous cells, reducing labor intensity and human errors. Strategies like transfer learning and fine-tuning optimize pre-trained CNNs, making them efficient with limited data. Deep learning, particularly suited for medical imaging, achieves an impressive accuracy of up to 88.86%, promising enhanced breast cancer detection and improved healthcare outcomes [5].

Breast cancer is a significant health concern, especially in developing countries, with traditional detection methods being time-consuming and error-prone. This study introduces an automated breast cancer detection approach using Gaussian Blur and Detail Enhanced filters for pre-processing and a Convolutional Neural Network (CNN) classifier.
The model achieves promising accuracy rates of 87.49%, 88.46%, and 88.10% for different cases, highlighting its robustness and potential to enhance efficiency and accuracy in breast cancer detection compared to conventional methods [6].
This research paper addresses the importance of early breast cancer detection using histopathological images. It employs Convolutional Neural Networks (CNNs) on the BreakHis datasets at 400× and 100× resolutions. DenseNet 121 excels with 98% accuracy for the first dataset, while ResNet 34 dominates with an impressive 99.6% for the second dataset. Interestingly, the study suggests that excessive magnification negatively impacts results, favouring ResNet 34 without extensive magnification. These findings underscore the value of advanced machine learning, especially CNNs, in accurately classifying breast tumor types from histopathological images [7].
III. PROPOSED WORK
The BCD CNN approach employs a convolutional neural network to categorize mammography images as either benign or malignant, aiming to assist experts in quicker and more accurate breast cancer diagnosis. It begins with pre-processing to adapt visual data for computer analysis and optimize the CNN classifier's performance. Using transformed images as training data, the CNN creates a predictive model that can identify cancer-indicative features. This method seeks to empower medical professionals with a tool to streamline breast cancer identification and classification, potentially improving the speed and accuracy of medical interventions.
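The pre-processing step described above can be sketched minimally in Python/NumPy. This is a hedged illustration, not the paper's implementation: the `preprocess_mammogram` helper and the toy scan are hypothetical, and a real pipeline would also resize, denoise, and morphologically enhance the image.

```python
import numpy as np

def preprocess_mammogram(image):
    """Min-max contrast stretch of a grayscale scan to the [0, 1] range.

    A hypothetical stand-in for the pre-processing stage that adapts
    visual data for the CNN classifier.
    """
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi > lo:
        return (img - lo) / (hi - lo)  # stretch intensities to [0, 1]
    return np.zeros_like(img)          # flat image: nothing to stretch

# Toy 3x3 "scan" with raw intensities between 50 and 130
scan = np.array([[50, 90, 130],
                 [60, 100, 120],
                 [70, 80, 110]])
out = preprocess_mammogram(scan)
print(out.min(), out.max())  # 0.0 1.0
```

After normalization, every training image occupies the same intensity range, which tends to stabilize CNN training.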
IV. METHODOLOGY
A. Deep Learning
Deep learning is a subset of machine learning that focuses on neural networks with multiple layers, known as deep neural networks. These networks are designed to automatically learn hierarchical representations of data, making them exceptionally powerful for complex tasks like image and speech recognition. Deep learning models can autonomously extract intricate features from raw data, eliminating the need for manual feature engineering. At the core of deep learning are artificial neural networks with numerous hidden layers, which enable the network to capture and understand intricate relationships within the data. This hierarchical learning approach allows deep learning models to excel in tasks ranging from natural language processing and recommendation systems to autonomous driving and medical diagnosis. One of the key strengths of deep learning is its ability to continuously improve its performance as more data becomes available. This adaptability, combined with the increasing computational power, has driven significant advancements in various fields. Deep learning has become a cornerstone of artificial intelligence, enabling machines to perform tasks that were once considered highly complex, and it continues to shape the future of technology and innovation.
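The layered structure described above can be illustrated with a minimal forward pass through a stack of fully connected layers. This NumPy sketch is illustrative only; the layer sizes and random weights are placeholders, not the paper's model.

```python
import numpy as np

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(0.0, x)

def forward(x, weights):
    """Forward pass through a stack of fully connected layers.

    Each hidden layer applies an affine transform followed by ReLU,
    so deeper layers see increasingly abstract re-representations of x.
    """
    for W, b in weights[:-1]:
        x = relu(x @ W + b)
    W, b = weights[-1]
    return x @ W + b  # final layer left linear (class scores)

rng = np.random.default_rng(0)
dims = [4, 8, 8, 3]  # input -> two hidden layers -> 3 output scores
weights = [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
           for a, b in zip(dims[:-1], dims[1:])]
logits = forward(rng.standard_normal(4), weights)
print(logits.shape)  # (3,)
```

Training would adjust `weights` by gradient descent; the point here is only the hierarchical composition of layers.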
B. Convolutional Neural Network (CNN)
Convolutional Neural Networks (CNNs) are a pivotal innovation in the realm of deep learning, particularly tailored for image and spatial data analysis. These neural networks have revolutionized computer vision tasks by excelling in the recognition of intricate patterns and features within images. They are built on a hierarchical structure of interconnected layers, which enables them to process visual data in a manner inspired by the human visual system. The key characteristic of CNNs is their use of convolutional layers, which apply filters to input data. These filters scan the input image, focusing on small regions at a time, and extract relevant information. This process of feature extraction allows CNNs to identify essential patterns like edges, textures, and shapes within the data, enabling them to make sense of complex visual information. CNNs find wide-ranging applications in fields such as image classification, object detection, facial recognition, and even medical image analysis. Their versatility and effectiveness have made them the go-to choice for tasks involving visual data. Their deep learning capabilities enable them to learn and adapt to different datasets, making them a crucial tool in modern artificial intelligence and computer vision research.
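The filter-scanning operation described above can be made concrete with a small NumPy sketch. The `conv2d` helper and the hand-made vertical-edge filter below are illustrative (a learned CNN discovers such filters from data rather than being given them).

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over every
    position and take the dot product with the patch underneath."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge filter responds where intensity jumps left-to-right:
# the kind of low-level feature an early convolutional layer learns.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge = np.array([[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]], dtype=float)
print(conv2d(image, edge))  # every 3x3 patch straddles the edge: all 3.0
```

Stacking many such filters, with pooling and nonlinearities between them, is what lets a CNN progress from edges to textures to whole-object features.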
C. Visual Geometry Group (VGG16)
VGG16 is a widely recognized and influential convolutional neural network (CNN) architecture in the field of deep learning and computer vision. Developed by the Visual Geometry Group at the University of Oxford, VGG16 is celebrated for its simplicity and effectiveness. It is characterized by its deep architecture, consisting of 16 weight layers, including 13 convolutional layers and 3 fully connected layers. This deep structure enables VGG16 to learn and represent intricate features within images, making it highly proficient in image classification tasks. VGG16 achieved significant success in the ImageNet Large Scale Visual Recognition Challenge, showcasing its ability to classify objects within images with remarkable accuracy. Due to its straightforward design and powerful performance, VGG16 remains a valuable benchmark and a foundation for many subsequent CNN architectures, contributing significantly to the advancement of computer vision applications.
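The 16 weight layers mentioned above can be tallied from VGG16's standard published configuration. The sketch below lists only the weight-carrying layers; max-pooling and activation layers carry no weights and are omitted.

```python
# VGG16's weight layers: 13 3x3 convolutions grouped into five blocks
# (with max-pooling between blocks), followed by 3 fully connected layers.
conv_channels = [64, 64,           # block 1
                 128, 128,         # block 2
                 256, 256, 256,    # block 3
                 512, 512, 512,    # block 4
                 512, 512, 512]    # block 5
fc_units = [4096, 4096, 1000]      # classifier head (1000 ImageNet classes)

print(len(conv_channels) + len(fc_units))  # 16 weight layers
```

The uniform use of small 3x3 filters throughout, rather than larger kernels, is the design choice VGG16 is best known for.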
V. ACKNOWLEDGEMENT
I would like to express my sincere thanks and indebtedness to my esteemed institution, The National Institute of Engineering, Mysuru, which has provided me with an opportunity to fulfil my desire and reach my goal. I would like to sincerely thank my guide, Smt. K R Sumana, Assistant Professor, for guiding and monitoring me throughout with constant encouragement.
VI. CONCLUSION
In conclusion, deep learning, with its prominent technique of Convolutional Neural Networks (CNNs), has emerged as a powerful tool in the realm of image classification. Its applications span diverse industries, with medical imaging being a notable beneficiary. By leveraging large and varied datasets, deep learning algorithms are able to achieve remarkable classification accuracy without relying on specialized human expertise.
The focus of our research was on utilizing CNNs to classify breast cancer tumors in mammogram images. Through a comparative analysis of two CNN models, we aimed to identify the most effective approach for this specific task. The results were promising, as our trained models achieved an accuracy rate of up to 93% in detecting breast cancer. This study exemplifies the potential of deep learning to revolutionize medical diagnostics and decision-making processes. The accurate and efficient identification of breast cancer through mammogram images showcases the ability of CNNs to enhance early detection and patient care. As deep learning continues to advance, it holds the promise of transforming various industries, contributing to more accurate predictions and improved outcomes across a wide range of applications.
REFERENCES
[1] C. C. Aggarwal, Neural Networks and Deep Learning, Springer, 2018.
[2] N. Buduma, N. Buduma, and J. Papa, Fundamentals of Deep Learning, O'Reilly Media, 2022.
[3] R. Angane, G. Bhogale, S. Lanjekar, A. Gholkar and R. Chaudhari, Breast Cancer Analysis using Convolutional Neural Network, 2022 International Conference on Breakthrough in Heuristics And Reciprocation of Advanced Technologies (BHARAT), Visakhapatnam, India, 2022, pp. 133-137, doi: 10.1109/BHARAT53139.2022.00037.
[4] G. Wadhwa and A. Kaur, A Deep CNN Technique for Detection of Breast Cancer Using Histopathology Images, 2020 Advanced Computing and Communication Technologies for High Performance Applications (ACCTHPA), Cochin, India, 2020, pp. 179-185, doi: 10.1109/ACCTHPA49271.2020.9213192.
[5] S. V and G. Vadivu, Breast Cancer Detection Mammogram Images using Convolution Neural Network, 2023 International Conference on Artificial Intelligence and Knowledge Discovery in Concurrent Engineering (ICECONF), Chennai, India, 2023, pp. 1-5, doi: 10.1109/ICECONF57129.2023.10083530.
[6] M. A. Alahe and M. Maniruzzaman, Detection and Diagnosis of Breast Cancer Using Deep Learning, 2021 IEEE Region 10 Symposium (TENSYMP), Jeju, Korea, Republic of, 2021, pp. 1-7, doi: 10.1109/TENSYMP52854.2021.9550975.
[7] M. M. Askar, A. A. Salama, H. M. Elkamchouchi and A. M. Al-Fahar, Breast Cancer Classification Using Various CNN Models, 2023 International Telecommunications Conference (ITC-Egypt), Alexandria, Egypt, 2023, pp. 91-95, doi: 10.1109/ITC-Egypt58155.2023.10206336.