Automatic recognition of key chest X-ray (CXR) findings is crucial for helping radiologists with clinical workflow tasks such as time-sensitive triage, pneumothorax case screening, and the detection of unanticipated findings. Deep learning (DL) models have become a promising prediction technique with near-human accuracy, but they usually suffer from a lack of explainability. Automated image segmentation and feature analysis allow medical professionals to diagnose and treat illnesses more precisely. However, automatic segmentation of medical images is difficult because diverse equipment produces images of varying quality. According to one study, 15% of 104 patients with pleural effusion died within 30 days. In this paper, we propose a model for the automatic diagnosis of 14 different diseases from chest radiographs using machine learning algorithms. Chest X-rays offer a non-invasive (potentially bedside) method for tracking the course of illness, and a severity score prediction model for COVID-19 pneumonia on chest radiography is also presented in this study.
I. INTRODUCTION
In many therapeutic applications, automated identification of significant abnormalities such as pneumothorax (PTX), pneumonia (PNA), or pulmonary edema (PE) on chest radiographs (CXR) is an extensively explored area. The convolutional neural network (CNN) is one of the most widely used deep neural networks for image classification. By using joint localization and classification algorithms that perform localization explicitly, without requiring local annotation to guide image-level classification, this work addresses two current drawbacks: the lack of interpretability and the requirement for costly local annotation. The lungs, the key organs of the respiratory system, exchange oxygen from the air with carbon dioxide from the blood; they are therefore crucial parts of the human respiratory system. Lung injury directly affects the respiratory system and has the potential to be fatal. Pleural effusion alone accounts for 2.7% of other respiratory disorders in Indonesia, with an estimated global frequency of more than 3,000 cases per million people annually.
As the first countries explore strategies for easing stay-at-home measures [Wilson and Molson, 2020], deaths from COVID-19 continue to rise [O'Grady et al., 2020]. Under the growing pressure of the pandemic on health systems around the world, many doctors have turned to new strategies and techniques. Chest X-ray (CXR) provides a non-invasive (potentially bedside) tool for monitoring disease progression [Yoon et al., 2020; Ng et al., 2020]. In early March 2020, Chinese hospitals used artificial intelligence (AI)-assisted computed tomography (CT) image analysis to screen for COVID-19 cases and simplify diagnosis [Jin et al., 2020]. Since then, many teams have launched AI initiatives to help triage COVID-19 patients (i.e., discharge, general admission, or ICU care) and allocate hospital resources (e.g., from non-invasive to invasive ventilation) [Strickland, 2020]. Although these latter tools use clinical data, there is still a lack of practically deployable CXR-based prediction models.
II. RELATED WORK
This section gives a brief explanation of the algorithms utilised in this work, namely the CNN and its components, as well as a review of other studies that employed comparable algorithms for similar goals, such as classifying chest radiographs.
A. Classifying Chest X-ray Images on CNN
Machine learning techniques, particularly deep learning, have recently gained popularity in the medical field, especially for the identification of anomalies in X-ray images.
One study, for example, compared the accuracy of practising radiologists with that of a CNN for the diagnosis of 14 illnesses on chest radiographs (including edema, pleural effusion, pneumonia, and others), reporting reliable accuracy across the 14 diseases.
B. Convolutional Neural Network
A convolutional neural network (CNN) is one form of artificial neural network (ANN), sometimes simply referred to as a neural network (NN). It is a machine learning method that draws inspiration from the organisation of the human brain (also known as deep learning because it often includes numerous layers of neurons). CNNs and standard NNs are quite similar; in contrast to a standard NN, however, a CNN has a specific initial layer called the convolution layer that extracts features, particularly image characteristics. As a result, tasks such as image classification, object detection, and image segmentation are frequently carried out using CNNs. This algorithm excels at the kind of unstructured data encountered in computer vision, such as images. CNNs are generally divided into convolution layers, pooling layers, and fully connected layers.
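As an illustration of this layered structure, a minimal sketch written with the Keras API is shown below; the layer sizes and the 14-label sigmoid output are illustrative assumptions rather than the exact configuration used in this work.

# Minimal illustrative CNN: convolution -> pooling -> fully connected layers.
# Layer sizes and the 14-label output are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu",
                  input_shape=(224, 224, 3)),   # convolution layer extracts image features
    layers.MaxPooling2D((2, 2)),                # pooling layer downsamples the feature maps
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                           # flatten feature maps for the dense layers
    layers.Dense(128, activation="relu"),       # fully connected layer
    layers.Dense(14, activation="sigmoid"),     # one output per possible disease label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])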
III. METHODOLOGY
This section comprises several steps. We first provide a brief description of the pre-processing methods applied to the data in this work, before moving on to the CNN architecture and the suggested meta-parameters. Finally, we discuss model implementation strategies, model interpretation methods, and performance measures.
A. Dataset
This study used ChestX-ray, the largest chest radiography dataset, with a total of 112,120 images from 30,805 patients with a variety of advanced lung diseases, obtained via Kaggle. Based on the kind of disease and the health condition, this dataset is classified into 14 groups. Images were resized to 1024 by 1024 pixels, saved in PNG format, and labelled with one or more tags (overlapping with other diagnoses); the labels were extracted from radiologist reports by means of natural language processing methods.
The investigation used 81,176 data samples from X-ray scans. The data provide X-ray images of 14 different lung diseases, organised into 14 distinct classes with 1,250 images each.
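A possible way to prepare such multi-label data is sketched below; the CSV file name, the column names, and the target image size are hypothetical, since the exact pipeline is not detailed here.

# Hedged sketch of multi-label data preparation for the chest X-ray images.
# The file name "labels.csv" and the columns "image" and "findings" are hypothetical.
import pandas as pd
import tensorflow as tf
from sklearn.preprocessing import MultiLabelBinarizer

df = pd.read_csv("labels.csv")                          # one row per image; findings separated by "|"
df["findings"] = df["findings"].str.split("|")          # images may carry overlapping diagnoses
mlb = MultiLabelBinarizer()
labels = mlb.fit_transform(df["findings"]).astype("float32")   # 14-dimensional binary label vectors

def load_image(path):
    img = tf.io.read_file(path)
    img = tf.image.decode_png(img, channels=3)          # the dataset images are stored as PNG
    img = tf.image.resize(img, (224, 224))              # downscale from 1024x1024 to the model input size
    return img

ds = tf.data.Dataset.from_tensor_slices((df["image"].tolist(), labels))
ds = ds.map(lambda path, y: (load_image(path), y)).batch(32)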
B. Proposed CNN Architecture
This study used the VGG19 CNN model developed by the Visual Geometry Group. VGG19 is a deep CNN model with 19 weight layers (16 convolutional layers and 3 fully connected layers).
The VGG19 model employed in this work was initialised with parameters learned on the ImageNet dataset (i.e., a pre-trained model). The top layer of the model architecture (the fully connected layer) was modified to fit the dataset used here.
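One common way to realise this transfer-learning setup is sketched below; freezing the convolutional base and the size of the new dense layer are assumptions rather than details reported in this work.

# Sketch: VGG19 initialised with ImageNet weights, with the top layers replaced for 14 labels.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                  # keep the pre-trained convolutional features fixed (assumption)

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),               # new fully connected layer adapted to this dataset
    layers.Dense(14, activation="sigmoid"),             # one output per disease class
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])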
IV. PROPOSED & COMPARATIVE ARCHITECTURES
A. Image Network
We tested various architectures initialised with pre-trained weights from ImageNet. The architectures evaluated are listed below:
VGG 19
The name VGG 19 comprises two parts: VGG stands for Visual Geometry Group, while 19 refers to the deep architecture formed by 19 layers. It is a pre-trained architecture that can classify 1,000 different objects. VGG 19 accepts RGB input at 224 by 224 resolution. The pre-processing that an input image undergoes in VGG 19 is the subtraction of the mean RGB value from every pixel. The VGG deep network consists of 16 convolutional layers along with three fully connected layers, ending with a softmax layer. The width of the convolution layers starts at 64 channels and increases by a factor of 2 until it reaches 512 channels. The first two fully connected layers have 4,096 channels each, while the last one performs the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) classification and therefore has a thousand channels.
The architecture also uses ReLU to introduce non-linearity, which decreases the computational time and improves accuracy.
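In the Keras implementation of VGG19, this mean-RGB subtraction is provided by the model's preprocessing utility, as in the short sketch below (the image file name is hypothetical).

# VGG19 input pre-processing: preprocess_input subtracts the ImageNet channel means
# (and reorders the colour channels), matching the mean-RGB subtraction described above.
import numpy as np
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg19 import preprocess_input

img = image.load_img("chest_xray.png", target_size=(224, 224))   # hypothetical file name
x = image.img_to_array(img)                 # array of shape (224, 224, 3)
x = np.expand_dims(x, axis=0)               # add a batch dimension
x = preprocess_input(x)                     # mean subtraction as expected by VGG19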
B. Model Comparison
We ran various tests on the training set, training the suggested technique and the benchmark models, and afterwards validated the models with the validation set.
1. Validation Loss
While evaluating, we observed that different architectures behave differently when trained over multiple epochs. The validation loss of VGG 19 was the lowest, decreasing from 3 to 0.8, while the validation losses of the other two algorithms did not improve much and remained almost constant over the entire training. ResNet 50 improved drastically from 6.3 to 2 but could not improve further.
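A training loop of the kind used to produce such loss curves might track validation loss after each epoch and stop once it stalls; in the sketch below, the patience value is an assumption, and model, train_ds, and val_ds are taken to be the objects from the earlier sketches.

# Sketch: training while monitoring validation loss per epoch.
# `model`, `train_ds`, and `val_ds` are assumed to be defined as in the earlier sketches.
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)   # stop when validation loss stops improving (assumed setting)

history = model.fit(train_ds, validation_data=val_ds,
                    epochs=20, callbacks=[early_stop])

print(history.history["val_loss"])          # per-epoch validation loss used to compare architectures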
2. Validation Accuracy
The accuracy metric portrays the model's efficiency and performance. VGG19 achieved the best validation accuracy among all of the architectures evaluated.
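A comparison of this kind could be carried out as in the sketch below, where each candidate backbone is wrapped with the same 14-label head and evaluated on the validation set; the helper function and the fixed epoch count are hypothetical choices, and train_ds and val_ds are assumed splits of the dataset pipeline sketched earlier.

# Sketch: comparing candidate backbones by validation accuracy.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19, ResNet50, InceptionV3

def build_model(backbone_cls):
    # Wrap an ImageNet-pretrained backbone with a 14-label classification head.
    base = backbone_cls(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(14, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

results = {}
for name, backbone_cls in [("VGG19", VGG19), ("ResNet50", ResNet50), ("InceptionV3", InceptionV3)]:
    model = build_model(backbone_cls)
    model.fit(train_ds, validation_data=val_ds, epochs=10, verbose=0)   # train_ds/val_ds assumed from earlier
    _, val_acc = model.evaluate(val_ds, verbose=0)
    results[name] = val_acc

print(results)   # identify which architecture reaches the highest validation accuracy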
V. CONCLUSION
In summary, correctly identifying the diagnosis of multiple diseases and identifying particular symptoms of disease requires integrated modifications of the deep CNN model structure and fine-tuning of the model with an optimization technique throughout the training process. To evaluate the suggested approach, we used a number of deep CNN models (VGG16, VGG19, Inception V3, ResNet34, ResNet50, ResNet101) with various module layouts and layer counts. Our results suggest that the overall performance of deep CNN models keeps improving as improvement stages are superimposed; among the three phases, structural modifications produce the largest increase in prediction accuracy for single CNN models.
The proposed ensemble model can also deliver good results for the objective of disease localization by precisely presenting an attention map that highlights lung regions suspected of containing disease. Qualitative and quantitative results demonstrate that our technique outperforms the other algorithms compared in terms of performance.
REFERENCES
[1] S. A. Nasution, “Skrining Makroskopis Cairan Pleura dari Efusi Pleura di Unit Laboratorium Patologi Anatomi Rumah Sakit Umum Pendidikan Haji Adam Malik Medan,” J. AnLabMed, vol. 1, no. 1, Dec. 2019.
[2] I. Puspita, T. Umiana Soleha, and G. Berta, “Penyebab Efusi Pleura di Kota Metro pada tahun 2015,” J. Agromed Unila, vol. 4, p. 25, 2017.
[3] J. T Puchalski, “Mortality of Hospitalized Patients with Pleural Effusions,” J. Pulm. Respir. Med., vol. 04, no. 03, 2014, doi: 10.4172/2161-105x.1000184.
[4] P. Rajpurkar et al., “CheXNet: Radiologist Level Pneumonia Detection on Chest X-Rays with Deep Learning,” Nov. 2017.
[5] P. Rajpurkar et al., “Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists,” PLoS Med., vol. 15, no. 11, Nov. 2018, doi: 10.1371/journal.pmed.1002686.
[6] X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, “ChestX-ray8: Hospital scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases,” in Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, May 2017, pp. 3462–3471, doi: 10.1109/CVPR.2017.369.
[7] H. Wang and Y. Xia, “ChestNet: A Deep Neural Network for Classification of Thoracic Diseases on Chest Radiography,” 2018.
[8] Ž. Knok, K. Pap, and M. Hrnčić, “Implementation of intelligent model for pneumonia detection,” Teh. Glas., vol. 13, no. 4, pp. 315–322, 2019, doi: 10.31803/tg-20191023102807.
[9] I. B. L. M. Suta, R. S. Hartati, and Y. Divayana, “Diagnosa Tumor Otak Berdasarkan Citra MRI (Magnetic Resonance Imaging),” Maj. Ilm. Teknol. Elektro, vol. 18, no. 2, Jun. 2019, doi: 10.24843/mite.2019.v18i02.p01.
[10] L. Devnath, S. Luo, P. Summons, and D. Wang, “Tuberculosis (TB) Classification in Chest Radiographs using Deep Convolutional Neural Networks,” Int. J. Adv. Sci. Eng. Technol., no. 3, 2018, ISSN 2321–9009.
[11] R. H. Abiyev and M. K. S. Ma’aitah, “Deep Convolutional Neural Networks for Chest Diseases Detection,” J. Healthc. Eng., vol. 2018, 2018, doi: 10.1155/2018/4168538.
[12] L. A. Andika, H. Pratiwi, and S. S. Handajani, “Klasifikasi Penyakit Pneumonia Menggunakan Metode Convolutional Neural Network Dengan Optimasi Adaptive Momentum,” Indones. J. Stat. Its Appl., vol. 3, no. 3, pp. 331–340, 2019.
[13] O. Stephen, M. Sain, U. J. Maduh, and D. U. Jeong, “An Efficient Deep Learning Approach to Pneumonia Classification in Healthcare,” J. Healthc. Eng., vol. 2019, 2019, doi: 10.1155/2019/4180949.
[14] R. Rokhana et al., “Convolutional Neural Network untuk Pendeteksian Patah Tulang Femur pada Citra Ultrasonik B-Mode,” JNTETI, vol. 8, no. 1, 2019.