International Journal for Research in Applied Science and Engineering Technology (IJRASET)
Authors: Ashwini Bhalerao, Dipti Kale, Saylee Hadke, Piyush Banmare, Ritik Awachat, Prof. Mr. Punesh Tembhare
DOI Link: https://doi.org/10.22214/ijraset.2022.41762
Wireless capsule endoscopy (WCE) is an important and widely used diagnostic procedure. It produces a large number of images during the capsule's journey through the patient's digestive tract, and these usually require automatic analysis. Bleeding is one of the most notable abnormalities, and the automatic detection of hemorrhage is an active research topic. We have developed a computer-based framework that utilizes the latest advances in deep learning to support the entire screening process, from video upload, through automatic lesion detection, to reporting. More precisely, our method handles multiple video uploads at the same time, automatically scans them to identify key video frames with potential lesions (for subsequent analysis by endoscopists), and offers physicians a means of linking detected lesions with previously detected ones, or with current images and scientific information from relevant documents, to reach a more accurate final conclusion. Automated deep learning (AutoDL) is considered useful for the on-site development of advanced deep learning models. An endoscopist with a certain level of clinical expertise but no AI training can at least benefit from AutoDL support.
I. INTRODUCTION
Over the years, medical endoscopy has proven to be an important technique for diagnostic imaging of several parts of the body as well as for minimally invasive surgery on the abdomen, joints, and other regions. The word "endoscopy" comes from the Greek and refers to the process of "looking inside" the human body in a minimally invasive way. This is achieved by inserting a medical device called an endoscope into a hollow organ or body cavity. Depending on the region of the body, insertion is performed through a natural body opening (e.g., to examine the esophagus or intestines) or through a small incision that serves as an artificial entry. For surgical procedures (e.g., gall bladder removal), additional incisions are needed to introduce further surgical tools. Compared to open surgery, this still causes very little trauma, which is one of the most important benefits of minimally invasive surgery (also known as buttonhole or keyhole surgery). Many medical procedures have been improved by incorporating endoscopy; some were even enabled by this technology from the beginning.
Artificial intelligence (AI) is an exciting new technology that combines machine learning with neural networks to improve current techniques or create new ones.
Potential AI applications have been introduced to help fight colorectal cancer (CRC). These include the ways in which AI will affect colorectal cancer care, as well as new methods of large-scale information gathering such as GeoAI, digital epidemiology, and real-time data collection.
At the same time, current diagnostic tools such as CT/MRI, endoscopy, genetic testing, and pathological examination have also benefited greatly from the use of deep learning. The potential of AI in relation to the treatment of colorectal cancer shows great promise in exploratory and translational oncology, promising better and more personalized treatments for those in need. AI is also a means by which physicians can improve the processes of collecting data for epidemiological purposes.
Automated deep learning (AutoDL) techniques allow faster searches for suitable neural architectures and hyperparameters without writing complex, highly specialized code. Such "no-code" software or platforms can be used without special AI expertise and can easily be applied to medical problems with straightforward classification tasks. However, the performance of AI models built by trained data scientists and of AutoDL models built by healthcare researchers in the field of gastrointestinal endoscopy has not been directly compared.
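As a hedged illustration of the AutoDL workflow described above, the sketch below uses the open-source AutoKeras library as a stand-in for such a platform (the text does not prescribe a specific tool); `x_train`, `y_train`, and `x_test` are assumed NumPy arrays of labeled endoscopic images and class labels.

```python
# Illustrative sketch only: AutoKeras stands in for the AutoDL platforms
# discussed above; the paper does not name a specific tool.
import autokeras as ak

# x_train, y_train, x_test are assumed NumPy arrays of labeled endoscopic
# images (e.g., shape (n, 224, 224, 3)) and integer class labels.
clf = ak.ImageClassifier(max_trials=10)   # try up to 10 candidate models
clf.fit(x_train, y_train, epochs=20)      # architecture + hyperparameter search
predictions = clf.predict(x_test)         # predicted class labels
best_model = clf.export_model()           # export the best model as a Keras model
```

The appeal for clinicians is that the architecture search and hyperparameter tuning happen inside the `fit` call, with no model code to write by hand.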
In addition, there are few data on human interaction with AI. For example, the response of endoscopists (i.e., acceptance, hesitation, or rejection) to a diagnosis made by an AI model remains unknown. This study aims to create AutoDL models that differentiate the invasion depth of gastric neoplasms using endoscopic images and to compare the diagnostic performance of the AutoDL models with that of previously developed CNN models. Additionally, the interaction of endoscopists with AI using a newly developed model was tested.
II. RELATED WORK
Gastric cancer (GC) is one of the most common malignant tumors, arising from the gastric mucosa. According to international statistics, GC is the second leading cause of death from cancer, and the number of patients with GC is increasing due to changes in eating habits and longer life expectancy [1,2].
GC can be treated successfully if detected early. Therefore, early detection and treatment of stomach cancer are important. Radiography and endoscopy are used to diagnose GC. During endoscopy, the specialist inserts an endoscope through the patient's mouth or nose and looks directly at the mucous membranes of the digestive tract to detect any abnormalities.
The sensitivity of GC detection by endoscopy is high, and if lesions are found during the examination, tissue samples can be collected and treatment can be given promptly [3].
However, the specialists who perform the procedure must detect abnormalities while operating the endoscope, which makes the diagnostic process demanding and complicated. This results in varying diagnostic accuracy, with some studies reporting lesions missed in 22.2% of cases [4]. If doctors could use the results of computerized image analysis to find abnormalities during the examination, they could solve some of these problems and detect GC early. Deep learning, an artificial intelligence technology, has recently been confirmed to have high potential for image recognition in several studies conducted in the medical field [5-10]. Therefore, we focused on automated GC detection in endoscopic images using a computer-aided diagnostic method based on deep learning.
There have been many deep learning studies on GC diagnosis using endoscopic images, including classification studies between GC and healthy subjects and automatic detection of GC regions. Shichijo et al. investigated the prediction of Helicobacter pylori infection using a convolutional neural network (CNN) and reported 88.9% sensitivity and 87.4% specificity [11]. Li et al. developed a method for differentiating between GC and normal tissue using narrow-band imaging (NBI) [12]. They used Inception-v3 as the CNN classification model and achieved 91.18% sensitivity and 90.64% specificity. Zhang et al. developed a method for classifying precancerous conditions (polyp, ulcer, and erosion) using a CNN and obtained 88.9% classification accuracy [13]. Hirasawa et al. applied the single-shot multibox detector (SSD), an object detection model, to automatically detect early-stage GC [14].
The detection sensitivity was 92.2% and the positive predictive value was 30.6%. Sakai et al. also created a GC detection method that separates GC regions from normal regions using micro-patches of endoscopic images [15]. The detection sensitivity and specificity of the method were 80.0% and 94.8%, respectively.
We previously proposed a method to detect the presence of early GC and extract its invasive regions using Mask R-CNN, which can perform both object detection and segmentation [16]. We showed that the sensitivity of automatic detection of early GC was 96.0% and that the segmentation accuracy was 71%. Although the method had sufficient detection sensitivity, the average number of false positives (FPs) was 0.10 per image (3.0 per patient).
The Mask R-CNN used in that study was a model pretrained for the detection of objects in general photographic images. It captures the clear contours of objects in an image, and therefore identifies lesions with well-defined shapes that cause surface irregularities. On the other hand, most early GC lesions, in which only the surface of the gastric mucosa is cancerous, were not properly detected by that model because their contours are not clear.
A. Objectives
Some endoscopic analysis systems have already been developed, but a more effective and efficient one can be built to help doctors and patients save time, which ultimately helps patients get treated sooner, since tumors can be cancerous and worsen with time. This can be achieved by building a software system that analyzes the images rapidly, applying deep learning together with an advanced and effective scheduler. An automatic report is then generated instantly if any suspected tumor is found.
III. PROPOSED SYSTEM
A. Data Augmentation
Rotation and flip invariance can be assumed because the endoscope may capture images of the inside of the stomach from various angles. Therefore, additional images may be created by rotating or flipping each of the collected images. In this study, to ensure stable deep learning performance, we prepared rotated and flipped versions of the original images for data augmentation and used them for training. Using our in-house software, we generated images by setting the rotation pitch to 6° for images of GC and 10° for images of healthy subjects, so that the numbers of images of GC and healthy subjects were equal.
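A minimal sketch of this augmentation step is shown below, using Pillow; the directory layout, PNG extension, and output file naming are assumptions, not a description of the authors' in-house software.

```python
# Illustrative sketch of the rotation/flip augmentation described above.
# Paths and naming conventions are assumptions for the example.
from pathlib import Path
from PIL import Image

def augment(src_dir: str, dst_dir: str, pitch_deg: int) -> None:
    """Rotate each image in pitch_deg steps and also save a horizontal flip."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        img = Image.open(path)
        for angle in range(0, 360, pitch_deg):
            rotated = img.rotate(angle, resample=Image.BILINEAR)
            rotated.save(out / f"{path.stem}_r{angle}.png")
            # Horizontal flip of each rotated image doubles the set.
            rotated.transpose(Image.FLIP_LEFT_RIGHT).save(out / f"{path.stem}_r{angle}_f.png")

# Per the text: 6-degree pitch for GC images, 10-degree pitch for healthy subjects,
# so the two classes end up with similar counts.
augment("gc_images", "gc_augmented", 6)
augment("healthy_images", "healthy_augmented", 10)
```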
B. Initial Detection
We first extracted initial GC candidate regions from the endoscopic images. For this step we used U-Net, a semantic segmentation method first proposed in 2015 as a means of extracting cell regions from microscopy images and widely used within and beyond medical imaging ever since. The network structure is shown in Figure 2. U-Net consists of five convolution and pooling layers (the encoder), followed by five decoding (upsampling) layers (the decoder). When an image is provided to the input layer, the encoder in the first half extracts image features. The decoder in the second half then produces a segmentation label image based on the extracted features. Additionally, the encoder and decoder layers are connected to each other by skip connections, and high-resolution information from the encoder is transmitted directly to the corresponding decoder layer on the other side, thereby enhancing the sharpness of the label image. U-Net thus outputs the locations of the initial early-GC candidates as a segmentation image. As for the U-Net parameters, the Dice coefficient is used as the loss function (the definition of the Dice index is given in Section 2.6), with the Adam algorithm as the optimizer, 0.0001 as the learning rate, 100 as the number of training epochs, and 8 as the batch size.
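A minimal Keras sketch of this training configuration is given below; `build_unet`, the input size, and the array names are assumptions, while the Dice loss, Adam optimizer, learning rate of 0.0001, 100 epochs, and batch size of 8 follow the text.

```python
# Sketch of the U-Net training setup described above; build_unet() stands in
# for the five-level encoder/decoder and is assumed, not the authors' code.
import tensorflow as tf
from tensorflow.keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    # Dice index: 2|A ∩ B| / (|A| + |B|), smoothed to avoid division by zero.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    return 1.0 - dice_coef(y_true, y_pred)

model = build_unet(input_shape=(256, 256, 3))  # hypothetical U-Net builder
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss=dice_loss, metrics=[dice_coef])
# train_images / train_masks are assumed NumPy arrays of images and binary masks.
model.fit(train_images, train_masks, batch_size=8, epochs=100,
          validation_data=(val_images, val_masks))
```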
C. Box Classification
The detected candidate regions included many over-detected regions (FPs). These FPs can be recognized and removed using an approach different from the U-Net used for segmentation. In the box classification part of the proposed U-Net R-CNN, FPs are eliminated from the candidate regions by another CNN, as shown in the figure below.
First, the input image was provided to U-Net, and the output label image of U-Net was automatically binarized by Otsu's method [20], followed by a labeling process to pick up the individual candidate regions. The bounding box of each candidate region was then cropped and fed into a CNN that classifies the candidate as GC or false positive. Finally, the regions classified as GC by the CNN were used as the final detection results. For this classification CNN we evaluated VGG-16, Inception-v3, ResNet50, and DenseNet121, 169, and 201; because the images were taken at multiple angles, the same region often appeared in many images. We then chose the best model by comparing them. These CNN models were pretrained on a large image dataset with a much larger number of training samples than our original dataset.
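The candidate-extraction step can be sketched with OpenCV as follows; the function and variable names are illustrative, and the U-Net output is assumed to be an 8-bit grayscale label image.

```python
# Sketch of the candidate-extraction step: Otsu binarization of the U-Net
# label image, connected-component labeling, and bounding-box cropping.
import cv2
import numpy as np

def extract_candidate_boxes(label_image: np.ndarray, input_image: np.ndarray):
    """label_image: U-Net output as an 8-bit single-channel probability map."""
    # Otsu's method picks the binarization threshold automatically.
    _, binary = cv2.threshold(label_image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Connected-component labeling separates individual candidate regions.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    crops = []
    for i in range(1, num):  # label 0 is the background
        x, y, w, h, area = stats[i]
        crops.append(input_image[y:y + h, x:x + w])  # bounding-box crop for the CNN
    return crops
```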
To separate GCs from FPs, we replaced the fully connected layers of the original CNN models with three layers of 1024, 256, and 2 units. CNNs are widely used for image analysis. Convolutional layers convolve their inputs and pass the result to the next layer, a design inspired by the response of neurons in the human visual cortex to particular stimuli. A well-trained CNN captures a hierarchy of information for image classification, such as edges, textures, object parts, and object structure.
A typical CNN architecture consists of a series of convolutional layers and pooling layers, followed by fully connected layers. The main purpose of the convolutional layers is to extract learnable features, such as local patterns in the images. The filter parameters, called convolution kernels, are trained, and the convolution operation takes two inputs, an image matrix and a kernel. By learning successive filters, visual features can be extracted effectively, similar to processing in the visual cortex. Using a filter bank, where each filter is a square matrix sliding over the input image, the convolution is computed: the pixel values under the moving window are weighted by the filter and summed, and the many filters of a convolutional layer produce multiple activation maps. Convolution, the core component of a CNN, is critical to the success of image processing operations.
Pooling (subsampling) layers are used to effectively reduce the size of the activation maps. They also preserve the shape of objects and the location of the semantic elements found in the image. Pooling therefore makes the convolutional layers less susceptible to small shifts or distortions of objects. In most cases, max pooling is used, and it is common to insert a pooling layer between successive convolutional layers of a CNN architecture. After several convolution and pooling layers, the high-level reasoning of the neural network occurs in the fully connected layers, where all feature responses from the whole image are combined to produce the final result.
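As a hedged sketch of such a modified classifier, the snippet below takes VGG-16 (one of the evaluated backbones), here assumed to be pretrained on ImageNet, removes its fully connected top, and attaches the three new layers of 1024, 256, and 2 units described above; the 224×224 input size and training settings are assumptions.

```python
# Sketch of the box-classification CNN: a pretrained backbone with its fully
# connected layers replaced by three new layers (1024, 256, 2 units).
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))  # size assumed

x = layers.Flatten()(base.output)
x = layers.Dense(1024, activation="relu")(x)
x = layers.Dense(256, activation="relu")(x)
outputs = layers.Dense(2, activation="softmax")(x)  # GC vs. false positive

model = models.Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The same head replacement applies to the other backbones (Inception-v3, ResNet50, DenseNet121/169/201), which is what makes a side-by-side comparison of the candidate models straightforward.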
IV. FUTURE WORK
We expect that the proposed framework will lessen the burden on clinicians, saving them time and manual labor while improving the accuracy of bleeding detection in CE images. Moreover, as it reduces the required manual labor and thus the induced cost, the overall cost of CE will also be much lower. As a result, a larger population can afford this health technology. For future work, we plan to collect further clinical data in order to carry out more extensive evaluation experiments. Similarly, leveraging our framework and these advances, we plan to develop a complete system for computer-aided automatic bleeding detection in capsule endoscopy videos and have clinicians assess it in practice.
With the rapid development of AI technology, medical professionals, including endoscopists, need to gain technical knowledge to understand AI capabilities and how AI may change endoscopy workflows in the near future. We believe that these ML-based analysis tools will eventually be adopted in diagnostic practice. However, this prediction does not mean the wholesale replacement of medical doctors. This "special change" is not a substitute for, but a strong complement to, irreplaceable and invaluable human expertise; therefore, it can improve overall efficiency. In this study, we developed a deep learning model that can accurately determine the presence of GC and its invasive regions using endoscopic images. As the deep learning model, we proposed the novel U-Net R-CNN, which combines U-Net segmentation with CNN classification of the segmented image regions to eliminate FPs. In experiments using endoscopic images of early-stage GC and healthy subjects, the proposed method showed higher detection performance than previous techniques. These results show that our approach is effective for the automatic detection of early GC in endoscopy.
REFERENCES
[1] Fitzmaurice, C.; Akinyemiju, T.F.; Al Lami, F.H.; Alam, T.; Alizadeh-Navaei, R.; Allen, C.; Alsharif, U.; Alvis-Guzman, N.; Amini, E.; Anderson, B.O.; et al. Global, regional, and national cancer incidence, mortality, years of life lost, years lived with disability, and disability-adjusted life-years for 29 cancer groups, 1990 to 2016: A systematic analysis for the Global Burden of Disease study. JAMA Oncol. 2018, 4, 1553–1568.
[2] Karger Publishers [Internet]. GLOBOCAN 2012: Estimated Cancer Incidence, Mortality, and Prevalence Worldwide in 2012. Available online: http://globocan.iarc.fr/Pages/fact_sheets_cancer.aspx (accessed on 31 October 2021).
[3] Tashiro, A.; Sano, M.; Kinameri, K.; Fujita, K.; Takeuchi, Y. Comparing mass screening techniques for gastric cancer in Japan. World J. Gastroenterol. 2006, 12, 4873–4874.
[4] Toyoizumi, H.; Kaise, M.; Arakawa, H.; Yonezawa, J.; Yoshida, Y.; Kato, M.; Yoshimura, N.; Goda, K.; Tajiri, H. Ultrathin endoscopy versus high-resolution endoscopy for diagnosing superficial gastric neoplasia. Gastrointest. Endosc. 2009, 70, 240–245.
[5] Teramoto, A.; Tsukamoto, T.; Yamada, A.; Kiriyama, Y.; Imaizumi, K.; Saito, K.; Fujita, H. Deep learning approach to classification of lung cytological images: Two-step training using actual and synthesized images by progressive growing of generative adversarial networks. PLoS ONE 2020, 15, e0229951.
[6] Yan, K.; Cai, J.; Zheng, Y.; Harrison, A.P.; Jin, D.; Tang, Y.B.; Tang, Y.X.; Huang, L.; Xiao, J.; Lu, L. Learning from multiple datasets with heterogeneous and partial labels for universal lesion detection in CT. arXiv 2020, arXiv:2009.02577.
[7] Sahiner, B.; Pezeshk, A.; Hadjiiski, L.M.; Wang, X.; Drukker, K.; Cha, K.H.; Summers, R.M.; Giger, M.L. Deep learning in medical imaging and radiation therapy. Med. Phys. 2019, 46, e1–e36.
[8] Toda, R.; Teramoto, A.; Tsujimoto, M.; Toyama, H.; Imaizumi, K.; Saito, K.; Fujita, H. Synthetic CT image generation of shape-controlled lung cancer using semi-conditional InfoGAN and its applicability for type classification. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 241–251.
[9] Tsujimoto, M.; Teramoto, A.; Dosho, M.; Tanahashi, S.; Fukushima, A.; Ota, S.; Inui, Y.; Matsukiyo, R.; Obama, Y.; Toyama, H. Automated classification of increased uptake regions in bone SPECT/CT images using three-dimensional deep convolutional neural network. Nucl. Med. Commun. 2021, 42, 877–883.
[10] Teramoto, A.; Fujita, H.; Yamamuro, O.; Tamaki, T. Automated detection of pulmonary nodules in PET/CT images: Ensemble false-positive reduction using a convolutional neural network technique. Med. Phys. 2016, 43, 2821–2827.
[11] Shichijo, S.; Endo, Y.; Aoyama, K.; Takeuchi, Y.; Ozawa, T.; Takiyama, H.; Matsuo, K.; Fujishiro, M.; Ishihara, S.; Ishihara, R.; et al. Application of convolutional neural networks for evaluating Helicobacter pylori infection status on the basis of endoscopic images. Scand. J. Gastroenterol. 2019, 54, 158–163.
[12] Li, L.; Chen, Y.; Shen, Z.; Zhang, X.; Sang, J.; Ding, Y.; Yang, X.; Li, J.; Chen, M.; Jin, C.; et al. Convolutional neural network for the diagnosis of early gastric cancer based on magnifying narrow band imaging. Gastric Cancer 2020, 23, 126–132.
[13] Zhang, X.; Hu, W.; Chen, F.; Liu, J.; Yang, Y.; Wang, L.; Duan, H.; Si, J. Gastric precancerous diseases classification using CNN with a concise model. PLoS ONE 2017, 12, e0185508.
[14] Hirasawa, T.; Aoyama, K.; Tanimoto, T.; Ishihara, S.; Shichijo, S.; Ozawa, T.; Ohnishi, T.; Fujishiro, M.; Matsuo, K.; Fujisaki, J.; et al. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer 2018, 21, 653–660.
[15] Sakai, Y.; Takemoto, S.; Hori, K.; Nishimura, M.; Ikematsu, H.; Yano, T.; Yokota, H. Automatic detection of early gastric cancer in endoscopic images using a transferring convolutional neural network. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 4138–4141.
[16] Shibata, T.; Teramoto, A.; Yamada, H.; Ohmiya, N.; Saito, K.; Fujita, H. Automated detection and segmentation of early gastric cancer from endoscopic images using Mask R-CNN. Appl. Sci. 2020, 10, 3842.
Copyright © 2022 Ashwini Bhalerao, Dipti Kale, Saylee Hadke, Piyush Banmare, Ritik Awachat, Prof. Mr. Punesh Tembhare. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET41762
Publish Date : 2022-04-23
ISSN : 2321-9653
Publisher Name : IJRASET