Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Saurabh Shete, Avinash Marbhal
DOI Link: https://doi.org/10.22214/ijraset.2022.47883
Hepatocellular carcinoma (HCC) is the most common type of primary liver cancer. It occurs most often in people with chronic liver diseases, such as cirrhosis caused by hepatitis B or hepatitis C infection. Early detection and accurate predictive analysis are of great importance to public health and to improved life expectancy. With the advent of computation, well-defined publicly available datasets can be leveraged for an accurate and temporally efficient understanding of HCC. Preliminary work on these data samples leverages classical machine learning algorithms; the state of the art, however, is heavily skewed towards deep neural networks. To improve on the existing approaches, this paper leverages Gaussian Dropout, a variant of standard dropout, for its remedial action on overfitting and related qualities. The pipeline is also tested with Adadelta, to assess the applicability of these additions to a standard feed-forward network. The experiments and the methodologies appended to the baseline network are thoroughly assessed and validated using accepted metrics on an iteratively imputed dataset over multiple train-test data distributions.
I. INTRODUCTION
Hepatocellular carcinoma (HCC) is the most common primary liver malignancy and is a leading cause of cancer-related death worldwide [1]. The numerous risk factors for developing HCC include alcoholic cirrhosis, infection with either hepatitis C or hepatitis B, Wilson disease, hemochromatosis, and substances that cause hepatocellular cancer such as aflatoxin [1]. Hepatocellular carcinoma accounts for more than 90 percent of primary liver cancers and is the sixth most frequently diagnosed malignancy.
Clinicians evaluate the course of treatment for each patient based on evidence-based medicine, which may or may not apply to a specific patient due to the biological variability that exists across individuals. Over a long period, and particularly in the case of hepatocellular carcinoma, several research efforts have been developing methodologies for assisting clinicians in decision-making [1].
These methodologies make use of computational techniques to extract information from medical data. With the current advancements in the field of machine intelligence, many tasks can be computed autonomously and with suitable accuracy; these technologies have shown superlative results in many fields such as remote sensing [16], cyber forensics [20], and the related field of bioinformatics [5]. It can be intuitively said that machine learning methods can anticipate the risk of HCC development with high accuracy [17].
Such methodologies can improve medical decision-making by contributing less tedious yet precise and viable early predictions of fibrosis and liver disease. By utilizing artificial intelligence and statistical analysis to predict and recognize patterns in large datasets, machine learning methods can be used to anticipate hepatic diseases [26].
By thoroughly assessing the recent literature, this paper strives to empirically analyze novel functional additions to a standard feed-forward neural network, or Multi-Layered Perceptron (MLP) [5]. These experiments showcase a novel analysis for estimating an important subdomain of HCC predictive computation: survivability prognoses [17]. The paper mainly focuses on testing the applicability and societal implications of two deep learning methodologies, Gaussian Dropout [10] and Adadelta [27] [19]; they are thoroughly compared against the currently frequent variants and methodologies in bioinformatics and other associated domains. The other methodologies tested for validating these intuitions comprise the standard dropout [18] methods and the Adam optimization technique as depicted in [2]. The remainder of this paper is divided into four sections: the related work or literature survey, the methodology, the empirical analysis, and the inferred conclusion.
II. RELATED WORK
This section offers a condensed description of the available recent and relevant literature for a better understanding of the role of predictive analysis in hepatocellular carcinoma.
The research conducted at Taipei Medical University [15] proposed a study to construct a deep learning model that would use the trend and severity of each medical event from the electronic health record to reliably predict which patients would be diagnosed with HCC in one year by using convolutional neural networks.
The article [24] presented a comparative study between regression analysis and various machine learning methodologies on a dataset collected from 442 different patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006.
The article [3] relied on the Inception-V3 technology for predictions based on histopathology images. The paper [6] offered an in-depth review of traditional machine learning algorithms and various deep learning algorithms for HCC, further motivating this paper and the use of artificial neural networks, the paper also explained the utilities of various algorithms like Fuzzy Support Vector Machines and CART.
The article [11] also leveraged the paradigm of deep learning with an emphasis on Mask R-CNN for HCC and achieved a sensitivity of 84.8% with 4.80 false positives per CT scan on the test set. The paper [8] leveraged a dataset for 4423 CHC patients and several machine learning techniques like regression and decision trees to build HCC classification models for predicting HCC presence, and achieved an overall accuracy between 93.2% and 95.6%.
The article [22] also proposed the domain of machine learning with a study on clustering algorithms and SMOTE for HCC, and further justified the research present in the scope of this paper. Other relevant works concerning HCC include a critical review of machine learning and predictive algorithms for estimating the therapeutic outcome of hepatocellular carcinoma [29], and the paper [23] which worked towards diagnostic assistance.
The research presented here leverages Adadelta, the use of which is heavily inspired by the paper [27]. Adadelta dynamically adapts over time using only first-order information and has minimal computational overhead beyond vanilla stochastic gradient descent; its constituent methodologies offer a strong intuition for developing a superlative predictive neural architecture.
The use of Adadelta is also justified in the paper [19] which showed significant utilities for Pantograph and Catenary Comprehensive Monitor Status Prediction. This research also leverages Gaussian dropout as a regularization layer which has also depicted utility in the recent literature [12].
III. METHODOLOGY
The scope of this paper contains novel experiments on HCC survivability predictions by enhanced or appended feed-forward neural networks. As depicted in the preceding sections, the primary experiments revolve around the use of dropout and its Gaussian-gate variant, along with a comparative analysis of the available optimizers, to obtain the best possible combination and the highest relative utility. This section is further divided into the dataset and MLP subsections: the former explains the attributes and the data distribution, and the latter concentrates on the underlying functionalities of the neural architectural strategies.
A. Dataset
The dataset used in this study is publicly available, has been used in the paper [17], and is available from the source [4]. In its totality, the dataset contains detailed information on a total of 49 usable attributes for 165 patients. The features were chosen following the EASL-EORTC Clinical Practice Guidelines, which represent the current state of the art in the management of HCC. This was done in collaboration with a group of clinicians from CHUC's Service of Internal Medicine A. The guidelines were developed by the EORTC (European Organisation for Research and Treatment of Cancer) and EASL (European Association for the Study of the Liver) [17].
This dataset contains the clinical characteristics that are thought to be the most important to the decision-making process that physicians go through when selecting the most appropriate therapy options and forecasting their results for each patient [17].
There are mainly two types of attributes, qualitative and quantitative; the former can be further bifurcated into dichotomous and ordinal scale types, and for the latter there exists only one subtype, the ratio category. Detailed descriptions of the qualitative dichotomous attributes are given in table 1. The ordinal variables are ‘Performance status’, ‘Encephalopathy’, and ‘Ascites’ with ranges ‘0,1,2,3,4’, ‘1,2,3’, and ‘1,2,3’ and percentage missingness of 0, 0.61, and 1.21 respectively [17].
Table 1: The prognostic factors or features which have a range of 0/1 and their corresponding percentage missing values. The information about the other data format for the quantitative type and the ratio subtype has been elaborated thoroughly in table 2. The overall missing data can be understood as a 10.22% portion of the whole dataset, and only 4.85% of all patients have a complete set of information [17].
Table 2: The quantitative attributes with their related ranges and the corresponding missing value percentages. The value to be predicted is the survivability of the patient, expressed as a binary variable. As this work is based on a one-year prediction horizon for effective survivability, the dataset contains 102 cases that indicate positive survivability and 63 that indicate the opposite [17]. To remedy the missing values and to assess the applicability of the tested methodologies, the gaps were filled using the Iterative Imputation method. The technique, as used in [7] and available in the library [9], is based on the MICE functionality. MICE fills in missing values in the attributes of a dataset by employing a divide-and-conquer strategy, or, more simply expressed, by concentrating on one attribute at a time. Once the emphasis has been put on one field, MICE utilizes all the other variables, or a logically selected subset of them, to forecast the missing data in that variable [7]. The forecast depends on a regression model, the form of which is determined by the nature of the focal variable; typically this corresponds to a choice between the linear and the logistic regression paradigms.
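The chained-equations procedure described above can be sketched with scikit-learn's IterativeImputer, which implements this MICE-style, one-column-at-a-time regression imputation. The toy matrix below is illustrative only and is not drawn from the HCC dataset itself.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy matrix standing in for the 165 x 49 HCC feature matrix,
# with np.nan marking the missing entries.
X = np.array([
    [1.0, 62.0, np.nan],
    [0.0, np.nan, 3.4],
    [1.0, 55.0, 2.9],
    [np.nan, 71.0, 3.1],
])

# One attribute at a time, each column's gaps are predicted from the
# other columns via a regression model, over several refinement rounds.
imputer = IterativeImputer(max_iter=10, random_state=0)
X_imputed = imputer.fit_transform(X)
```

The resulting matrix has the same shape as the input with every missing cell replaced by a model-based estimate, which is the "synthetic rendition" of the dataset used in the experiments.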
B. MLP
The backbone network, as depicted in the aforementioned sections, contains three ReLU-activated hidden layers, an input layer with 49 neurons, and a SoftMax-activated output layer with two neurons to facilitate the use of cross-entropy, resulting in a perceivable probabilistic distribution over the available data [5]. The same architecture is examined with either the standard dropout or the Gaussian dropout, under both the Adadelta and Adam approaches. Schematic diagrams of the architecture are given in figure 1.
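A minimal Keras sketch of this backbone follows. The hidden-layer widths (64/32/16) are assumptions for illustration, since the exact layer sizes are not restated here; the 49-neuron input and two-neuron SoftMax output match the description above.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_backbone(dropout_factory=None):
    """Three ReLU hidden layers between a 49-feature input and a
    two-neuron SoftMax output; dropout_factory optionally inserts a
    regularization layer after each hidden layer."""
    model = keras.Sequential()
    model.add(keras.Input(shape=(49,)))
    for width in (64, 32, 16):  # assumed widths, for illustration only
        model.add(layers.Dense(width, activation="relu"))
        if dropout_factory is not None:
            model.add(dropout_factory())
    # Two SoftMax neurons yield a probability over the two survival classes.
    model.add(layers.Dense(2, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

plain = build_backbone()
with_gd = build_backbone(lambda: layers.GaussianDropout(0.2))
```

The same factory pattern covers every tested permutation: passing `lambda: layers.Dropout(0.2)` gives the standard-dropout variant, and swapping the optimizer string gives the Adadelta counterparts.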
C. Dropout
Usually, machine learning is leveraged to make predictions about outcomes based on a given collection of characteristics. Therefore, anything that brings higher generalization to the performance of a model is considered a positive step forward in this endeavor. At each iteration of the training phase, the dropout method aims to prevent a model from overfitting by arbitrarily setting the outgoing edges of hidden neurons to zero [25]. By arbitrarily setting the output of a specific neuron to zero, the paradigm can make it easier for a model to generalize its findings [25]. When an output is set to 0, the loss function becomes more sensitive to the activity of neighboring neurons, which alters the way the weights are adjusted during backpropagation.
Mathematically, either the retained nodes are scaled by multiplying by 1/p at the training phase and the weights are left unmodified at the testing phase (where p refers to the retention probability of the standard or Bernoulli dropout), or each node is retained with probability p at training time and the weights are scaled by a factor of p at the testing phase [25]. In Gaussian dropout, a Gaussian gate takes the role of the Bernoulli gate; dropout may then be seen as multiplying each node by a Gaussian random variable with the same mean and variance as the Bernoulli gate, i.e. mean p and variance p(1-p) [25]. Among such gates, the Gaussian distribution produces the most entropy, whereas the Bernoulli distribution produces the least, and according to the findings of the paper [25], a higher entropy produces better results. The expected magnitude of the observed activation does not change when using an implementation centered around Gaussian dropout.
As a result, unlike with conventional dropout, weight scaling is not necessary during the testing phase. All of the nodes in the network are retained throughout each iteration of this technique for each training instance [25]; the execution time is therefore shorter, since the slowdown that standard dropout introduces during backpropagation is avoided [14]. To implement dropout, which requires a retention probability, the same values are used for each tested architecture in both the normal dropout layer and the Gaussian variant of the dropout layer. These values are accessible via the Keras suite [21].
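The practical difference between the two gates can be seen directly in the Keras layers used in this work: both take the same rate argument (the 0.2 below is an assumed value for demonstration), and both act as identity maps at inference time, so no weight rescaling is needed when the model is served.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

x = tf.ones((1, 8))
rate = 0.2  # assumed drop rate, shared by both layers as in the paper

bernoulli = layers.Dropout(rate)          # zeroes a random ~20% of units
gaussian = layers.GaussianDropout(rate)   # multiplicative Gaussian noise, mean 1

# Training mode: the gates perturb the activations.
y_b = bernoulli(x, training=True)
y_g = gaussian(x, training=True)

# Inference mode: both layers pass activations through unchanged.
assert np.allclose(bernoulli(x, training=False).numpy(), 1.0)
assert np.allclose(gaussian(x, training=False).numpy(), 1.0)
```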
D. Optimization
The study primarily tests two optimization strategies with an emphasis on Adadelta, whose applicability is thoroughly validated against the Adam optimization approach [28]. Adadelta is a stochastic optimization methodology in the gradient descent family. It uses an adaptive learning rate per dimension to address two disadvantages: the continual decay of learning rates during training and the manual selection of a global learning rate [27]. Utilizing an adjustable learning rate per dimension eliminates both limitations. Adadelta can be seen as an extension of Adagrad, and its key advantage is that it adjusts the learning rate based on a moving window of gradient updates, as opposed to accumulating all prior gradients; this lets it respond to recent gradients more precisely [27]. Due to this, Adadelta is able to continue learning even after many updates have been made. In the original conception of Adadelta, selecting an initial learning rate is unnecessary.

The Adam approach is based on adaptive moment estimation; it adjusts the learning rate for each weight in the neural network based on estimates of the first and second moments of the gradient, which allows the network to train more effectively. When dealing with problems that involve a large amount of data or many parameters, the strategy is quite effective [13]. It requires only first-order gradients, which results in a smaller memory requirement, making it a more efficient algorithm overall. The hyperparameters in Adam have intuitive meanings, so less tuning is required. A significant limitation of Adam is that it does not always converge to the optimum solution the way the SGD optimizer does. The underlying calculation is based on the average of the first moment, similar to RMSProp, and the method makes use of an average of the second moments of the gradients rather than adapting the learning rates directly [13].
This approach computes an exponential moving average of the gradients and of the squared gradients, with specific parameters controlling the decay rates of these moving averages [13]. The entire algorithm can be perceived as a combination of the ‘gradient descent with momentum’ algorithm and the ‘RMSProp’ algorithm.
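Instantiating the two optimizers under comparison is a one-liner each in Keras; the hyperparameter values shown are the library defaults, not values reported by this paper. A single hand-rolled update on a toy quadratic loss shows an optimizer step moving a weight downhill.

```python
import tensorflow as tf
from tensorflow import keras

# The two tested optimizers, at their Keras default settings.
adadelta = keras.optimizers.Adadelta(rho=0.95, epsilon=1e-7)
adam = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)

# One update step on loss = w^2, starting from w = 1.
var = tf.Variable(1.0)
with tf.GradientTape() as tape:
    loss = var ** 2
grad = tape.gradient(loss, var)
adam.apply_gradients([(grad, var)])  # w moves toward the minimum at 0
```

Note that Adadelta's `rho` is the decay constant of its moving window of squared gradient updates, which is what distinguishes it from Adagrad's unbounded accumulation.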
IV. RESULT AND DISCUSSION
All the mentioned approaches are implemented using the Keras suite, on identical test-train splits, to ensure an unbiased analysis. Each approach is tested on two different test-train splits, 15% and 25%; this measure is taken to generate a deeper view of the models' functionality. The metrics chosen for assessing and comparing the models are Precision, Recall, F1-Score, and percentage accuracy [16]; a detailed comparison is given in table 3. The weighted variants of the metrics were calculated, except for the percentage accuracy. The total number of epochs and the related environment and hyperparameters are kept identical for a thorough, unbiased empirical understanding.
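The evaluation protocol above can be sketched as follows, using synthetic stand-in data with the dataset's shape (165 patients, 49 features) and a placeholder prediction; the weighted metric averaging matches the description in the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(165, 49))    # stands in for the 165 x 49 feature matrix
y = rng.integers(0, 2, size=165)  # stands in for the binary survivability label

for test_size in (0.15, 0.25):    # the two tested test-train distributions
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, random_state=42, stratify=y)
    y_pred = np.zeros_like(y_te)  # placeholder for model.predict(...)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_te, y_pred, average="weighted", zero_division=0)
    acc = accuracy_score(y_te, y_pred)
```

Fixing the `random_state` is what guarantees that every architecture permutation sees identical splits, as the text requires for an unbiased comparison.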
TABLE III

| Model | Precision | Recall | F1-Score | Percentage Accuracy |
|---|---|---|---|---|
| MLP + ADM, 15 | 0.69 | 0.64 | 0.66 | 0.64 |
| MLP + SD + ADM, 15 | 0.85 | 0.64 | 0.70 | 0.64 |
| MLP + GD + ADM, 15 | 0.83 | 0.68 | 0.72 | 0.68 |
| MLP + ADL, 15 | 0.65 | 0.64 | 0.64 | 0.64 |
| MLP + SD + ADL, 15 | 0.92 | 0.72 | 0.77 | 0.72 |
| MLP + GD + ADL, 15 | 0.77 | 0.56 | 0.64 | 0.56 |
| MLP + ADM, 25 | 0.77 | 0.69 | 0.71 | 0.69 |
| MLP + SD + ADM, 25 | 0.77 | 0.64 | 0.68 | 0.64 |
| MLP + GD + ADM, 25 | 0.92 | 0.60 | 0.71 | 0.60 |
| MLP + ADL, 25 | 0.90 | 0.40 | 0.54 | 0.40 |
| MLP + SD + ADL, 25 | 0.57 | 0.57 | 0.57 | 0.57 |
| MLP + GD + ADL, 25 | 0.47 | 0.45 | 0.45 | 0.45 |
Table 3: Empirical results for each approach. Here GD depicts the use of Gaussian dropout and SD is the standard counterpart; ADL depicts the use of Adadelta and ADM depicts the Adam technique. The terms 15 and 25 pertain to the test-train distribution used. The base network is denoted as MLP.
From the obtained results of the data distribution depicting a 25% test-train split the best performing model was the standard network, which was trained via Adam, where an increase of 53.33% was observed from the Adadelta variant. The model with the highest precision value was obtained as the MLP + GD + ADM, where a 19.48% increase was obtained from the most accurate architecture for the 25% split. For the 15% test-train distribution the best performing model was the MLP + SD + ADL, which does support the intuition of using Adadelta and also promotes experimentation with the algorithm. If direct comparisons are considered for use cases or the relative utility of the dropout approaches, the standard variant outperformed the Gaussian variant in 75% of scenarios, and for Adam and Adadelta, a 66.67% advantage was observed for a similar comparison. The temporal aspects of the conducted experiments are further elaborated in Table 4.
Table 4: Execution times for the predict functions for a 50% test-train split for 10 loops and 15 runs of the model predict functionality as available in Keras [21].
It can be inferred that the use of Gaussian dropout is beneficial especially for temporal efficiency, as the variant depicted a 2.95% and 2.73% decrease in the prediction-time tests relative to the baseline MLP and the MLP with standard dropout, respectively. To check the convergence of the different approaches, and to compare the ADL and ADM models, tests were conducted using the Early Stopping criteria available in the Keras suite [21] on the different model permutations. The maximum possible number of epochs was kept at 50, and the relative number of epochs required to converge was calculated. This indicates the general trend and offers plausible information for predicting the general range of required epochs. It does not, however, account for predictive potential, but only for the possible constrained limit on training a model in a minimum-compute scenario.
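The convergence test described above can be configured in Keras as sketched below; the patience value is an assumption, since the paper only fixes the 50-epoch cap, and the commented `fit` call assumes hypothetical `X_train`/`y_train` arrays.

```python
from tensorflow import keras

# Cap training at 50 epochs and stop once validation loss plateaus;
# a patience of 5 epochs is an assumed value for illustration.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

# history = model.fit(X_train, y_train, validation_split=0.2,
#                     epochs=50, callbacks=[early_stop])
# epochs_run = len(history.history["val_loss"])  # epochs needed to converge
```

Counting the entries of `history.history` after training is what yields the relative number of epochs each permutation needed within the 50-epoch limit.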
V. CONCLUSION
This paper aimed to explain the applicability and relevance of experimentation with Gaussian Dropout and Adadelta for the task of accurate predictions of hepatocellular carcinoma survivability. A publicly available dataset was leveraged by imputing the missing values, which resulted in a synthetic rendition of the same; this step provided a feasible database for estimating the relative utility of the aforementioned approaches. After thorough tests of the possible neural architectural permutations for both resultant accuracy and temporal efficiency, the use of Gaussian Dropout can be justified and recommended for related tasks where a trade-off between computation and accuracy is essential. The experiments on Adadelta can also be considered relevant and can be leveraged in further experiments concerning predictive analysis.
[1] Hepatocellular carcinoma patient’s survival prediction using oversampling and machine learning techniques. In: 2021 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST). IEEE (1 2021). https://doi.org/10.1109/icrest51555.2021.9331108
[2] B, C.L., F, P.G.: Adam and the ants: On the influence of the optimization algorithm on the detectability of DNN watermarks. Entropy (Basel, Switzerland) 22 (Dec 2020). https://doi.org/10.3390/e22121379
[3] Coudray, N., Moreira, A.L., Sakellaropoulos, T., Fenyö, D., Razavian, N., Tsirigos, A.: Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning (10 2017). https://doi.org/10.1101/197574
[4] Dua, D., Graff, C.: UCI machine learning repository (2019), http://archive.ics.uci.edu/ml
[5] Gajjar, P., Shah, P., Vegada, A., Savalia, J.: Triplet loss for chromosome classification. Journal of Innovative Image Processing 4(1), 1–15 (2 2022). https://doi.org/10.36548/jiip.2022.1.001
[6] Gogi, V.J., Vijayalakshmi, M.: Review of machine learning methods for the survey on HCC scenario and prediction strategy. In: 2020 4th International Conference on Trends in Electronics and Informatics (ICOEI). IEEE (6 2020). https://doi.org/10.1109/icoei48184.2020.9142968
[7] Hallam, A., Mukherjee, D., Chassagne, R.: Multiple imputation via chained equations for elastic well log imputation and prediction (4 2022). https://doi.org/10.31223/x57k6q
[8] Hashem, S., ElHefnawi, M., Habashy, S., El-Adawy, M., Esmat, G., Elakel, W., Abdelazziz, A.O., Nabeel, M.M., Abdelmaksoud, A.H., Elbaz, T.M., Shousha, H.I.: Machine learning prediction models for diagnosing hepatocellular carcinoma with HCV related chronic liver disease. Computer Methods and Programs in Biomedicine 196, 105551 (11 2020). https://doi.org/10.1016/j.cmpb.2020.105551
[9] Jurczyk, T.: Clustering with scikit-learn in Python. Programming Historian (10) (9 2021). https://doi.org/10.46430/phen0094
[10] Karthik, R., Menaka, R., Kathiresan, G., Anirudh, M., Nagharjun, M.: Gaussian dropout based stacked ensemble CNN for classification of breast tumor in ultrasound images. IRBM (10 2021). https://doi.org/10.1016/j.irbm.2021.10.002
[11] Kim, D.W., Lee, G., Kim, S.Y., Ahn, G., Lee, J.G., Lee, S.S., Kim, K.W., Park, S.H., Lee, Y.J., Kim, N.: Deep learning–based algorithm to detect primary hepatic malignancy in multiphase CT of patients at high risk for HCC. European Radiology 31(9), 7047–7057 (3 2021). https://doi.org/10.1007/s00330-021-07803-2
[12] Kingma, D.P., Salimans, T., Welling, M.: Variational dropout and the local reparameterization trick (06 2015), http://arxiv.org/abs/1506.02557v2
[13] Landro, N., Gallo, I., Grassa, R.L.: Mixing ADAM and SGD: a combined optimization method (11 2020), http://arxiv.org/abs/2011.08042v1
[14] Li, Z., Gong, B., Yang, T.: Improved dropout for shallow and deep learning (02 2016), http://arxiv.org/abs/1602.02220v2
[15] Liang, C.W., Yang, H.C., Islam, M.M., Nguyen, P.A.A., Feng, Y.T., Hou, Z.Y., Huang, C.W., Poly, T.N., Li, Y.C.J.: Predicting hepatocellular carcinoma with minimal features from electronic health records: Development of a deep learning model (preprint) (5 2020). https://doi.org/10.2196/preprints.19812
[16] Mehta, N., Shah, P., Gajjar, P.: Oil spill detection over ocean surface using deep learning: a comparative study. Marine Systems & Ocean Technology 16(3-4), 213–220 (11 2021). https://doi.org/10.1007/s40868-021-00109-4
[17] Santos, M.S., Abreu, P.H., García-Laencina, P.J., Simão, A., Carvalho, A.: A new cluster-based oversampling method for improving survival prediction of hepatocellular carcinoma patients. Journal of Biomedical Informatics 58 (Dec 2015). https://doi.org/10.1016/j.jbi.2015.09.012
[18] Neill, J.O., Bollegala, D.: Analysing dropout and compounding errors in neural language models (11 2018), http://arxiv.org/abs/1811.00998v1
[19] Qu, Z., Yuan, S., Chi, R., Chang, L., Zhao, L.: Genetic optimization method of pantograph and catenary comprehensive monitor status prediction model based on Adadelta deep neural network. IEEE Access 7, 23210–23221 (2019). https://doi.org/10.1109/access.2019.2899074
[20] Rajendiran, K., Kannan, K., Yu, Y.: Applications of machine learning in cyber forensics. In: Advances in Digital Crime, Forensics, and Cyber Terrorism, pp. 29–46. IGI Global (2021). https://doi.org/10.4018/978-1-7998-4900-1.ch002
[21] Reiser, P., Eberhard, A., Friederich, P.: Implementing graph neural networks with tensorflow-keras (03 2021), http://arxiv.org/abs/2103.04318v1
[22] Santos, M.S., Abreu, P.H., García-Laencina, P.J., Simão, A., Carvalho, A.: A new cluster-based oversampling method for improving survival prediction of hepatocellular carcinoma patients. Journal of Biomedical Informatics 58, 49–59 (12 2015). https://doi.org/10.1016/j.jbi.2015.09.012
[23] Sato, M., Morimoto, K., Kajihara, S., Tateishi, R., Shiina, S., Koike, K., Yatomi, Y.: Machine-learning approach for the development of a novel predictive model for the diagnosis of hepatocellular carcinoma. Scientific Reports 9(1) (5 2019). https://doi.org/10.1038/s41598-019-44022-8
[24] Singal, A.G., Mukherjee, A., Elmunzer, J.B., Higgins, P.D.R., Lok, A.S., Zhu, J., Marrero, J.A., Waljee, A.K.: Machine learning algorithms outperform conventional regression models in predicting development of hepatocellular carcinoma. American Journal of Gastroenterology 108(11), 1723–1730 (11 2013). https://doi.org/10.1038/ajg.2013.332
[25] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15 (2014)
[26] Xu, K., Zhao, Z., Gu, J., Zeng, Z., Ying, C.W., Choon, L.K., Hua, T.C., Chow, P.K.: Multi-instance multi-label learning for gene mutation prediction in hepatocellular carcinoma. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (05 2020). https://doi.org/10.1109/EMBC44109.2020.9175293
[27] Zeiler, M.D.: ADADELTA: an adaptive learning rate method (12 2012), http://arxiv.org/abs/1212.5701v1
[28] Zhu, Z., Sun, H., Zhang, C.: Effectiveness of optimization algorithms in deep image classification (10 2021), http://arxiv.org/abs/2110.01598v1
[29] Zou, Z.M., Chang, D.H., Liu, H., Xiao, Y.D.: Current updates in machine learning in the prediction of therapeutic outcome of hepatocellular carcinoma: what should we know? Insights into Imaging 12(1) (3 2021). https://doi.org/10.1186/s13244-021-00977-9
Copyright © 2022 Saurabh Shete, Avinash Marbhal. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET47883
Publish Date : 2022-12-05
ISSN : 2321-9653
Publisher Name : IJRASET