
Comparative analysis of DCNN- and HFCNN-based computerized detection of liver cancer

Abstract

Liver cancer detection is critically important in the discipline of biomedical image testing and diagnosis. Researchers have explored numerous machine learning (ML) techniques and deep learning (DL) approaches aimed at the automated recognition of liver disease by analysing computed tomography (CT) images. This study compares two frameworks, the Deep Convolutional Neural Network (DCNN) and Hierarchical Fusion Convolutional Neural Networks (HFCNN), to assess their effectiveness in liver cancer segmentation. The contribution includes enhancing the edges and textures of CT images through filtering to achieve precise liver segmentation. Additionally, an existing DL framework was employed for liver cancer detection and segmentation. The strengths of this paper include a clear emphasis on the criticality of liver cancer detection in biomedical imaging and diagnostics. It also highlights the challenges associated with CT image detection and segmentation and provides a comprehensive summary of recent literature. However, certain difficulties arise during the detection process in CT images due to overlapping structures, such as bile ducts and blood vessels, as well as image noise, textural changes, size and location variations, and inherent heterogeneity. These factors may lead to segmentation errors and, subsequently, divergent analyses. This research compares two advanced methodologies, DCNN and HFCNN, for liver cancer detection. The evaluation of DCNN and HFCNN in liver cancer detection is conducted using multiple performance metrics, including precision, F1-score, recall, and accuracy. This comprehensive assessment provides a detailed evaluation of these models’ effectiveness compared to other state-of-the-art methods in identifying liver cancer.


Introduction

As stated by the World Health Organization (WHO), liver cancer is among the leading causes of cancer-related fatalities globally, primarily categorized into two types: Hepatocellular Carcinoma (HCC), which originates from hepatocytes, and intrahepatic cholangiocarcinoma, which arises in the bile ducts [1]. The liver, the largest organ in the human body, plays a crucial role in purifying blood, metabolizing drugs, and producing proteins essential for blood coagulation [2]. Risk factors for liver cancer include chronic viral hepatitis, cirrhosis, exposure to aflatoxins, obesity, type 2 diabetes, excessive alcohol consumption, and genetic conditions [3]. Symptoms often include weight loss, abdominal pain, jaundice, fatigue, and an enlarged liver [4]. Diagnosis typically involves imaging tests such as CT, Magnetic Resonance Imaging (MRI), or ultrasound, blood tests, and a liver biopsy for confirmation [5]. Liver cancer treatment varies by cancer stage and may include surgical resection, liver transplantation, and therapies such as ablation, embolization, or chemotherapy. Early detection significantly improves outcomes [6]. Prevention involves managing risk factors like hepatitis B vaccination, avoiding excessive alcohol consumption, and maintaining a healthy lifestyle [7]. Ultrasound imaging is a non-invasive technique that allows real-time visualization of the liver, as shown in Fig. 1 below, making it particularly useful for detecting liver tumors.

Fig. 1

CT scan image of Liver cancer [4]

In recent years, imaging techniques have greatly enhanced liver cancer detection. CT, MRI, and ultrasound provide detailed liver images, aiding tumor identification. Emerging methods, such as texture analysis, 3D reconstruction, and DL algorithms, have enabled automated detection and segmentation of liver tumors [8]. Despite the advantages of these techniques, challenges persist, including lesion variability, complex liver anatomy, and limited annotated datasets for training segmentation algorithms [9]. The next section discusses various studies on liver cancer segmentation, highlighting challenges like lesion variability, motion artifacts, and interobserver variability, which complicate the development of automated methods [10]. Although traditional liver segmentation methods do not require significant resources [11,12,13,14,15,16], advanced segmentation techniques, such as DL models, demand high computational resources, adding complexity to real-time clinical applications [17]. The HFCNN method, which combines convolutional neural networks (CNNs) with hierarchical feature fusion, offers a promising solution to these challenges. By capturing both local and global features in medical images, HFCNN enhances segmentation and classification accuracy, eliminating the need for manual feature engineering. This approach adapts to various imaging modalities and clinical scenarios, making it highly effective in medical image analysis [18].

The use of DL models for computerized liver cancer detection offers several advantages, including reduced diagnostic time, improved consistency, and the ability to process large datasets. However, challenges remain, such as the variability in tumor appearance, image quality, and the need for extensive annotated datasets for training these models. Despite these hurdles, ongoing research and development in this area holds great promise for transforming liver cancer detection and providing more efficient tools for clinicians in the fight against liver cancer. Researchers continually work to improve the proficiency and accuracy of liver lesion segmentation methods in order to improve patient care and outcomes. This work aims to eliminate the issues that lead to poor performance of computerized detection. The foremost research contributions are summarized below:

  • Analysis of CT Image enhancement of liver through advanced filtering process for edge, texture, and contrast augmentation.

  • Computerized liver cancer segmentation and detection through DCNN and HFCNN frameworks.

  • Comparative assessment of DCNN and HFCNN technologies.

The remainder of this paper is structured as follows: Sect. 2 provides a detailed review of other state-of-the-art processes. Section 3 covers the datasets and the implementation of the DCNN and HFCNN methodologies. Section 4 presents the implementation outcomes, with discussion and comparisons of the segmentation and detection methods. Section 5 delivers the conclusions and possible future improvements of the work.

Literature survey

Manual segmentation and identification are laborious for radiologists, particularly when dealing with 3D CT scans containing numerous lesions. The radiologist must meticulously review and delineate these lesions, which can be labor-intensive, potentially leading to delays in diagnosis and treatment planning. Among various segmentation methods, some operate in a fully automated manner, while others require individual user input and are considered semi-automatic. Automatic 3D affine-invariant shape parameterization: this method segments the liver automatically by continuously sampling the 3D surface for diagnostic comparison within the spatial parameters, operating without direct user input [19]. Multistage automatic segmentation: this fully automated approach employs a multistage process, sequentially segmenting the liver, tumors, and hepatic vessels by determining an optimal threshold at each stage [20]. Semi-automatic liver segmentation: this method starts with an approximate liver model and then refines the segmentation by applying a Laplacian mesh optimization approach; user interaction is involved in the initial modeling phase. These segmentation techniques demonstrate the variety of approaches available for liver lesion detection, ranging from fully automated methods that require no user input to semi-automatic techniques that involve some user interaction in the process [21].

Zhang et al. [22] presented an innovative approach for image dehazing using a multi-level fusion and attention-guided CNN. The paper addresses the challenge of removing haze from images, which is a common issue in remote sensing, surveillance, and autonomous driving. The authors propose a CNN architecture that integrates multi-level fusion techniques and attention mechanisms to enhance the quality of dehazed images. The multi-level fusion enables the model to combine low-level and high-level features, while the attention mechanism allows the network to focus on the most relevant regions of the image, improving overall performance. This method has demonstrated improvements in both objective image quality metrics and subjective visual results, particularly in hazy conditions, making it a significant contribution to the field of image processing. The authors of [23] proposed a category-consistent deep network for vehicle logo recognition, a critical task in intelligent transportation systems (ITS) and automated vehicle technologies. The study aims to enhance the recognition accuracy of vehicle logos, which often face challenges such as variations in illumination, orientation, and occlusion. The authors introduce a category-consistent DL framework that incorporates both category consistency and deep feature learning. By leveraging this approach, the model learns to recognize vehicle logos more effectively by associating visual features with category-specific constraints, significantly improving recognition accuracy. The proposed method outperforms existing techniques in both accuracy and robustness across multiple datasets, highlighting its potential for real-world ITS applications.

Chen et al. [24] focused on solving complex optimization problems using a novel algorithm called the many-objective population extremal optimization (MPOEO) algorithm. Unlike traditional optimization algorithms that address single- or bi-objective problems, the MPOEO algorithm is designed for problems with many objectives, which are often encountered in real-world engineering and scientific problems. The authors introduce an adaptive hybrid mutation operation that enhances the algorithm’s ability to explore and exploit the solution space more efficiently. This hybrid mutation operation combines both global and local search strategies, allowing for better performance in terms of convergence and diversity. The study highlights the versatility of the MPOEO algorithm in tackling complex many-objective problems and provides insights into the effectiveness of hybrid mutation strategies in evolutionary optimization. The authors of [25] address the issue of recurrent spontaneous abortion (RSA) prediction using an innovative evolutionary ML approach. RSA is a critical condition that affects many women, and its early prediction can significantly improve outcomes by facilitating early intervention. The authors propose an ML model based on a joint self-adaptive slime mould algorithm, which is designed to improve prediction accuracy by adapting to the changing characteristics of the data. This evolutionary algorithm integrates multiple strategies, including self-adaptive learning and optimization techniques, to better handle the complexities and uncertainties inherent in medical data. By applying this method to predict RSA, the authors demonstrate its potential to improve predictive accuracy over traditional models, thereby contributing to the advancement of personalized medicine in obstetrics.

Transfer learning [26, 27] has become a widely used technique for generating image representations, particularly in the field of medical image analysis. This method involves leveraging pre-trained models that have been developed on large-scale datasets for general image recognition tasks, and then fine-tuning them on smaller, domain-specific datasets. The advantage of transfer learning lies in its ability to exploit the learned features of a pre-trained model, reducing the need for large amounts of annotated data in specialized areas such as medical imaging. In medical image analysis, obtaining sufficient labeled data can be a significant challenge due to the high cost and expertise required for annotation. Transfer learning helps overcome this obstacle by using a pre-trained model (often trained on general datasets like ImageNet) as a starting point, which has already learned useful low-level features (such as edges, textures, and shapes) that are relevant across various image domains. These learned features can then be fine-tuned on medical image datasets (e.g., MRI, CT, or X-ray images) to adapt the model to the specific characteristics and nuances of medical images, enhancing its performance on tasks like segmentation, classification, or detection.

In the context of medical imaging, transfer learning has been particularly effective for tasks such as tumor detection, organ segmentation, and disease classification. It has been used to analyze various types of medical images, such as those from CT scans, MRI scans, and X-ray images, to automatically detect and classify conditions like liver cancer, lung diseases, and brain tumors. By reducing the amount of data required for training, transfer learning significantly lowers computational costs and speeds up the model development process. Furthermore, the use of pre-trained networks, especially CNNs, has enabled significant improvements in accuracy and efficiency, making transfer learning an invaluable tool in medical image analysis.
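To make the transfer-learning recipe concrete, the following is a minimal, self-contained NumPy sketch, not the models used in the studies above: a frozen random "backbone" projection stands in for pretrained convolutional features, and only a small task-specific head is trained on a toy binary "lesion vs. normal" dataset. All names, dimensions, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen "backbone": a fixed projection standing in for pretrained
# features (in a real pipeline, weights learned on e.g. ImageNet).
W_backbone = rng.normal(size=(64, 16))

def features(x):
    # Frozen feature extractor: W_backbone is never updated below.
    return np.tanh(x @ W_backbone)

# Task-specific head, the only part trained on the small dataset.
W_head = np.zeros(16)

def train_head(X, y, lr=0.1, epochs=200):
    global W_head
    F = features(X)                               # backbone outputs, fixed
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-F @ W_head))     # sigmoid prediction
        W_head -= lr * F.T @ (p - y) / len(y)     # logistic-loss gradient
    return W_head

# Toy binary "lesion vs. normal" data: two shifted Gaussian clusters.
X = np.vstack([rng.normal(-1, 1, (50, 64)), rng.normal(1, 1, (50, 64))])
y = np.repeat([0.0, 1.0], 50)
train_head(X, y)
acc = float(np.mean(((features(X) @ W_head) > 0) == y.astype(bool)))
```

In practice the frozen projection would be replaced by a CNN backbone with pretrained weights, its final layer swapped for the new head and fine-tuned on the medical image dataset.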

In addition to the well-established liver segmentation methods, there are dedicated techniques for segmenting vessels, bile ducts, and tumors. Not only the segmentation of liver vessels but also their labeling is difficult, and automated methods are needed. Vessel segmentation, for instance, presents unique challenges, as vessels are often obscured or distorted by the limited resolution of image acquisition and by imaging artifacts [28]. Similarly, automated labeling of liver vessels (portal and hepatic veins) is challenging [29]. Various techniques with high noise resistance and fast processing speeds have been developed to address these challenges. These techniques include transforms like the Contourlet, Wavelet, Curvelet, and Ridgelet, which have applications in medical image segmentation, especially in the context of vessels and other fine structures within medical images [30]. These specialized methods enhance the accuracy and efficiency of segmenting these intricate anatomical features. A hybrid densely connected UNet was designed by Li et al. for hepatocellular carcinoma (HCC) detection and liver segmentation [31]. While these advanced segmentation methods have demonstrated significant improvements over traditional techniques, the effectiveness of the proposed models can still be limited by under- and over-segmentation caused by low contrast, noise, asymmetrical edges, and blur [32,33,34].

The authors in [35] developed a multi-scale nested-UNet framework (MSN-Net) for segmentation, reducing gradient-descent issues related to the built-in semantic gap; this parallel training method, however, led to computational complexity. SVM-based liver cancer analysis was presented in MATLAB by [36] and provided an accuracy of 87%. For MRI images, [37] recommended a watershed approach to separate cancer cells in the scans; the Otsu method was then used to improve image quality. DL provides a straightforward approach to standardizing pixel values within images, ensuring that the extracted features accurately represent the image content [38]. The precision of the task heavily relies on the nature of these extracted features, particularly in pre-processed images. Ultimately, feature extraction is recognized as the foremost aspect of DL for object recognition within an image, and this remains a central focus of current research efforts [39]. ML practice has substantially improved efficiency in radiological analysis and holds promise in addressing gaps within such classification processes [40, 41]. Unlike conventional ML methods, the FCNN can discover features that are not apparent in radiological practice; it has recently been used for multiple sclerosis lesion segmentation in the medical sector.

Using three-fold cross-validation, Ben-Cohen et al. [42] reported results from an FCN for liver and lesion recognition, contrasted with patch-based CNN classification on a small, sparse dataset. The fully automatic strategy attained a true-positive rate of 0.89 at 0.8 false positives. Besides, an unsupervised technique has also been proposed in [43] for cancer detection; this classification combines optical measurements with compositional schemes, though the technology remains error-prone. A deep patch-based CNN has been applied to the detection and segmentation of cancer, centered on abnormalities in medical images [44]. The key benefit of this automated detection method lies in its remarkable precision, with the deep neural network classifier achieving an impressive 99.41% accuracy while incurring minimal validation loss. The primary method for liver tumor detection involves a DNN model built around feature discovery; features were identified through the ANOVA approach combined with hybrid feature selection (HFS) on microarray data. The present study applies the advanced DCNN and HFCNN frameworks to address these challenges and overcome such issues.

Methodology

At a theoretical level, DCNN and HFCNN exhibit distinct architectural designs and feature extraction mechanisms. DCNNs typically follow a sequential structure comprising convolutional layers, pooling layers, and fully connected layers, enabling them to hierarchically learn increasingly abstract features from input data. In contrast, HFCNNs extend this architecture by introducing a hierarchical fusion mechanism: feature maps from different layers of the network are fused at multiple scales and levels of abstraction, allowing fine-grained local details to be integrated with high-level global context. While DCNNs rely primarily on the sequential processing of convolutional layers to extract features, HFCNNs leverage hierarchical fusion to integrate features across multiple layers, capturing both local patterns and global context more comprehensively. Specifically, the HFCNN architecture includes a multi-level hierarchical fusion mechanism that integrates features at different scales to improve detection accuracy for liver cancer. Unlike a traditional DCNN, which typically processes features in a sequential, layer-by-layer fashion, the HFCNN employs parallel pathways for feature extraction at multiple levels, followed by fusion at a higher stage to combine the complementary information from different levels. This fusion strategy enhances the model’s ability to capture both low-level and high-level features simultaneously. To aid in understanding these differences, a visual representation of both architectures is included; the diagram shows the key layers of both networks, highlighting the fusion points in the HFCNN and how they differ from the standard DCNN structure. This integration of multi-scale and multi-level features makes HFCNNs particularly well-suited for tasks requiring nuanced understanding of visual data, such as medical image analysis.
Consequently, while DCNNs are versatile for various computer vision tasks, HFCNNs excel in tasks where the integration of contextual information is crucial, such as segmentation and classification in medical imaging applications. Figure 2 below shows the HFCNN flow diagram with its functional features in a visual representation.

Fig. 2

Flow diagram of HFCNN

In the DCNN approach, three phases exist: image enhancement, segmentation, and detection. For image and contrast enhancement, Gaussian filtering is utilized; the UNet approach is then deployed for computerized segmentation, and cancer detection is undertaken with the deep CNN. Gaussian filtering is applied to smooth the input images and reduce noise, which is crucial for enhancing the contrast of tumor regions. By applying a Gaussian filter, we effectively remove high-frequency noise and retain essential structures in the liver tissue, which improves the accuracy of subsequent tumor detection. This preprocessing step helps the network focus on the key features in the liver scans without being distracted by irrelevant noise. Gaussian filtering attenuates high-frequency components and their disturbances while smoothing edges. Combining this filtering with contrast-limited adaptive histogram equalization (CLAHE) enhances image contrast while simultaneously smoothing edges. The CLAHE technique is applied to enhance the local contrast in the image, particularly in regions with low intensity variations. This is important for liver cancer detection, as tumors often present subtle intensity differences that might be overlooked in uniformly processed images. CLAHE adjusts the contrast locally, making the boundaries of the tumor more distinct and enhancing the visibility of the affected areas in the liver.
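The enhancement stage described above can be sketched as follows. Note this is a simplified illustration, not the paper's implementation: SciPy's Gaussian filter performs the smoothing, and a global histogram equalization stands in for CLAHE (which applies the same CDF remapping per tile with a clip limit); the synthetic slice is a stand-in for a real LiTS CT image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_ct_slice(img, sigma=1.0):
    # Gaussian smoothing suppresses high-frequency noise while keeping
    # larger anatomical structures intact.
    smoothed = gaussian_filter(img.astype(np.float64), sigma=sigma)
    # Histogram equalization: remap each intensity to its CDF value.
    # (CLAHE is the tile-based, clip-limited variant of this step.)
    q = np.clip(np.round(smoothed), 0, 255).astype(np.uint8)
    hist = np.bincount(q.ravel(), minlength=256)
    cdf = hist.cumsum() / q.size
    return (cdf[q] * 255).astype(np.uint8)

# Synthetic low-contrast "slice" standing in for a real CT image.
rng = np.random.default_rng(0)
slice_u8 = np.clip(rng.normal(120, 10, (128, 128)), 0, 255).astype(np.uint8)
enhanced = enhance_ct_slice(slice_u8)
```

After equalization the narrow intensity band around the mean is spread over the full 0-255 range, which is the contrast-stretching effect the text relies on.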

Figure 3 exhibits the flow diagram and architecture of the computerized DCNN. The initial UNet focuses on liver segmentation, while the second UNet extracts detailed edge information related to complex liver structures within CT images. Combining the outputs of these two UNet layers constrains the boundaries of the liver object, countering under- and over-segmentation. Kirsch’s filter is a convolutional filter used in image processing and computer vision, primarily employed for edge detection in digital images. The filter is named after its creator, Russell Kirsch, who developed a set of masks or kernels to perform convolution operations on an image. These masks are designed to detect edges and other important image features by measuring the gradient, or change in intensity, at each pixel. Kirsch’s filter consists of a set of eight masks, as shown in Fig. 4, each sensitive to edges at a particular orientation (e.g., horizontal, vertical, diagonal). By applying these masks to an image through convolution, one can highlight the edges in various directions. Through Kirsch’s operator, the gradient obtained by convolution of the CT image can be formulated as Eq. (1),

$$K_{a}\left(b,k\right)=\sum_{i=-1}^{1}\sum_{j=-1}^{1}i_{a}\left(b+i,k+j\right)\cdot H_{a}\left(i,j\right)$$
(1)
Fig. 3

Flow diagram and architecture of computerized liver cancer detection using DCNN

Fig. 4

Kirsch-based responses for the edge under various paths [18]

Here, \(K_{a}\left(b,k\right)\) is the gradient output of Kirsch’s operator, \(H_{a}\) is the Kirsch kernel operator, and \(i_{a}\left(b+i,k+j\right)\) denotes the liver CT image, whose rows and columns are indexed by b and k.

For the overall eight directions, the gradient output under Kirsch’s operation is given as:

$$K_{max}\left(b,k\right)=\max\left(K_{1}\left(b,k\right),\dots,K_{8}\left(b,k\right)\right)$$
(2)
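Equations (1) and (2) can be sketched directly in code. The following minimal NumPy/SciPy implementation builds the eight directional masks by rotating the outer ring of coefficients and keeps the per-pixel maximum response; the 5/−3 ring values are the standard Kirsch coefficients, while the helper names are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

# Positions of the 3x3 ring, clockwise from the top-left corner.
RING_COORDS = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def kirsch_edges(img):
    """Eqs. (1)-(2): convolve the image with each of the eight
    directional Kirsch masks H_a and keep the per-pixel maximum K_max."""
    base_ring = np.array([5, 5, 5, -3, -3, -3, -3, -3])
    responses = []
    for r in range(8):                        # eight compass directions
        mask = np.zeros((3, 3))
        for v, (i, j) in zip(np.roll(base_ring, r), RING_COORDS):
            mask[i, j] = v
        responses.append(convolve(img.astype(float), mask, mode='nearest'))
    return np.max(responses, axis=0)

# A vertical step edge: the strongest directional response is 3 * 5 = 15.
step = np.zeros((6, 6))
step[:, 3:] = 1.0
edges = kirsch_edges(step)
```

Flat regions yield zero response (the ring coefficients sum to zero), so only intensity transitions survive, which is exactly the edge map the dual-UNet stage consumes.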

The resulting output is aimed at tasks corresponding to edge detection, image analysis, and feature extraction. Kirsch’s filter is a popular tool in image processing for tasks like edge detection, pattern recognition, and object detection; it is part of a family of filters and operators designed to enhance or extract specific features from digital images. UNet is widely exploited in medical image segmentation tasks, such as identifying tumors in MRI scans, segmenting cells in microscopy images, and many other applications where precise image segmentation is required. UNet is typically trained on a dataset with annotated images. Parameters of the network are optimized through gradient descent and backpropagation to minimize the chosen loss function [45, 46]. In the DCNN, five CNN layers (i.e., convolution layer, maximum pooling layer, fully connected (FC) layer, and rectified linear unit (ReLU)) are organized consecutively [47, 48]. The architectural diagram of UNet is shown below in Fig. 5.

Fig. 5

UNet architecture [26]

Within the convolution stage, the input CT image fed to the system is mapped with the filters, as expressed in equation form:

$$H_{conv}\left(a,b\right)=\sum_{i=1}^{row}\sum_{j=1}^{column}Lim\left(i,j\right)\cdot D\left(a-i,b-j\right)$$
(3)

Here, Lim denotes the input image and D the filter applied to it. The activation that follows augments the nonlinearity of the convolution features; furthermore, the training speed can be increased. This DCNN is evaluated on the LiTS dataset.

The output of the ReLU layer is equated below,

$$H_{ReLU}\left(i,j\right)=\begin{cases}0,&H_{conv}\left(i,j\right)<0\\H_{conv}\left(i,j\right),&H_{conv}\left(i,j\right)\geq 0\end{cases}$$
(4)

Dimensions are reduced with the maximum pooling layer; an appropriate window size for the network is 2\(\times\)2. Pooling also mitigates over-fitting and reduces the computational burden of the trained parameters. Then,

$$H_{maxpool}=\max\,H_{ReLU}\left(i:i+2,\;j:j+2\right)$$
(5)

Every neuron in the fully connected network is connected to the flattened feature vector, and the softmax loss function of the classifier computes the probability of the predicted output. HFCNN can process arbitrary inputs and produce accurate outputs efficiently [49]. In this procedure, the loss function for object segmentation in imaging is assessed similarly to patch-based methods [50,51,52].
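The DCNN building blocks of Eqs. (3)-(5), together with the softmax output described above, can be sketched as small NumPy functions. This is an illustrative, unoptimized sketch of the operations, not the paper's actual implementation:

```python
import numpy as np

def conv2d_valid(img, kern):
    """Valid 2-D convolution, the discrete form of Eq. (3). The kernel
    is flipped so the sum matches the D(a - i, b - j) indexing of true
    convolution."""
    kh, kw = kern.shape
    fk = kern[::-1, ::-1]
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for a in range(h):
        for b in range(w):
            out[a, b] = np.sum(img[a:a + kh, b:b + kw] * fk)
    return out

def relu(x):
    """Eq. (4): zero out negative responses."""
    return np.maximum(x, 0.0)

def maxpool2x2(x):
    """Eq. (5): non-overlapping 2x2 maximum pooling."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(z):
    """Class probabilities computed from the FC-layer output."""
    e = np.exp(z - z.max())   # shift by max for numerical stability
    return e / e.sum()
```

Chaining these as `softmax(head(maxpool2x2(relu(conv2d_valid(img, k))).ravel()))`, for some hypothetical fully connected `head`, mirrors the consecutive layer organization described above.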

Figure 6 illustrates the advanced HFCNN framework developed for liver cancer detection. Unlike patch-based approaches, HFCNN processes whole images rather than small patches, eliminating the requirement for repetitive overlapping patches. This effectively prevents redundant estimations when patches overlap, ultimately enhancing the resolution of the final image output. The network uses a convolution configuration of 2\(\times\)20 filters of size 5\(\times\)5, 2\(\times\)2 maximum pooling, a fully connected layer of 500 units, and two further fully connected layers.

Fig. 6

HFCNN framework for liver cancer detection

The Autoencoder objective is represented as

$$y=f_{j}\left(x\right)=\text{tanh}\left(G_{j}x+b_{j}\right)$$
(6)

“tanh” is the activation function of the input-output (I/O) layers. Here x and y are vectors of dimensions m and n respectively, \(G_{j}\) is a weight matrix of size n \(\times\) m, \(b_{j}\) an intercept vector of dimension n, and \(G_{j}x\) gives a vector of size n. The implementation uses three hidden layers, and the algorithm is processed during each pass over the training data. The joint distribution of the fully connected neural network with K layers, with observed vector x and hidden layers \(s^{l}\), is expressed as follows:

$$Q\left(x,s^{1},\dots,s^{m}\right)=\left(\prod_{l=1}^{m-2}Q\left(s^{l}\mid s^{l+1}\right)\right)Q\left(s^{m-1},s^{m}\right)$$
(7)

Equation (7) expresses the joint distribution over the hidden layers. It is difficult to define a single segmentation algorithm, because most algorithms amalgamate multiple techniques and employ various image indicators to improve their segmentation performance.
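As an illustration of Eq. (6), the following sketch stacks three tanh encoder layers; the layer sizes, random weights, and input are illustrative assumptions, not the trained values of the study.

```python
import numpy as np

def encoder_layer(x, G, b):
    """One autoencoder layer, Eq. (6): y = tanh(G x + b),
    mapping an m-dimensional input to n hidden units."""
    return np.tanh(G @ x + b)

rng = np.random.default_rng(0)
sizes = [64, 32, 16, 8]         # input plus three hidden layers (illustrative)
params = [(rng.normal(scale=0.1, size=(n, m)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

h = rng.normal(size=sizes[0])   # stand-in input feature vector
for G, b in params:             # stack the three hidden layers
    h = encoder_layer(h, G, b)
```

Each \(G_j\) here is n \(\times\) m for an m-dimensional input and n hidden units, matching the dimensions stated after Eq. (6); the tanh keeps every hidden activation strictly inside (-1, 1).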

Results and discussions

The overall implementation is carried out through simulation in MATLAB on a laptop equipped with a Windows operating system, a Core i7 processor, and 20 GB of RAM. The implementation uses MATLAB toolboxes for signal processing, DL, and image processing. The assessment of the anticipated scheme has been conducted using the Liver Tumor Segmentation (LiTS) dataset, which comprises 200 CT liver images. 130 CT scans are partitioned for training and the rest for testing.

The UNet segmentation outcomes for liver tumors are demonstrated in Fig. 7. From these results, a comparison with different state-of-the-art methods is developed.

Fig. 7

Segmentation outcomes (a) UNet (b) HFCNN

Table 1 lists the hyperparameters set for the network. The number of trainable parameters increases as more layers are added, because each layer in a CNN typically contains learnable weights and biases, and the number of these parameters grows with the depth of the network.

Table 1 DCNN hyperparameters

As shown in Fig. 8a, b, the accuracy and loss results for training are attained using the ADAM optimizer. ADAM automatically adapts the learning rate for each parameter based on the magnitude of the first and second moments. This adaptivity helps overcome the problem of manually choosing a suitable learning rate.

Fig. 8

a Training accuracy of the DCNN. b Training loss of the DCNN

Figure 9a, b depicts the confusion matrix of 3 layered DCNN and HFCNN. The confusion matrix is a valuable tool for assessing the performance of classification models and gaining insights into their strengths and weaknesses.

Fig. 9

a. Confusion matrix of 3-layered DCNN. b. Confusion matrix of 3-layered HFCNN

Figure 10 shows the DSC of the methods subjected to comparison. DSC quantifies the degree of resemblance concerning the region segmented (predicted) with the true zone or ground-truth region. This coefficient remains particularly useful in evaluating how well an automated or semi-automatic segmentation method performs.

Fig. 10

Dice similarity coefficient

$$\mathrm{DSC}=\frac{2\left|\mathrm{Prediction}\cap\mathrm{Ground\;Truth}\right|}{\left|\mathrm{Prediction}\right|+\left|\mathrm{Ground\;Truth}\right|}$$
(8)

The Dice coefficient for DCNN is 0.91, and for HFCNN it is 0.93. Using the values in the confusion matrix, various performance metrics for a classification model can be computed, including:

$$\mathrm{Precision}:\;\mathrm{TP}\;/\;(\mathrm{TP}\;+\;\mathrm{FP})$$
(9)
$$\mathrm{Accuracy}:\;(\mathrm{TP}\;+\;\mathrm{TN})\;/\;(\mathrm{TP}\;+\;\mathrm{TN}\;+\;\mathrm{FP}\;+\;\mathrm{FN})$$
(10)
$$\mathrm{Specificity}\;(\mathrm{True}\;\mathrm{Negative}\;\mathrm{Rate}):\;\mathrm{TN}\;/\;(\mathrm{TN}\;+\;\mathrm{FP})$$
(11)
$$\mathrm{Recall}\;(\mathrm{Sensitivity}\;\mathrm{or}\;\mathrm{True}\;\mathrm{Positive}\;\mathrm{Rate}):\;\mathrm{TP}\;/\;(\mathrm{TP}\;+\;\mathrm{FN})$$
(12)
$$\mathrm{False}\;\mathrm{Positive}\;\mathrm{Rate}\;(\mathrm{FPR}):\;\mathrm{FP}\;/\;(\mathrm{TN}\;+\;\mathrm{FP})$$
(13)
$$\mathrm{False}\;\mathrm{Negative}\;\mathrm{Rate}\;(\mathrm{FNR}):\;\mathrm{FN}\;/\;(\mathrm{TP}\;+\;\mathrm{FN})$$
(14)
$$\mathrm F1-\mathrm{Score}:\;2\;\ast\;(\mathrm{Precision}\;\ast\;\mathrm{Recall})\;/\;(\mathrm{Precision}\;+\;\mathrm{Recall})$$
(15)
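Equations (8)-(15) follow directly from the confusion-matrix counts and binary masks. A compact sketch, using hypothetical example counts rather than the study's results:

```python
import numpy as np

def classification_metrics(tp, tn, fp, fn):
    """Metrics of Eqs. (9)-(15), computed from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                     # sensitivity / TPR
    return {
        "precision": precision,
        "recall": recall,
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "specificity": tn / (tn + fp),          # true negative rate
        "fpr": fp / (tn + fp),
        "fnr": fn / (tp + fn),
        "f1": 2 * precision * recall / (precision + recall),
    }

def dice(pred, truth):
    """Eq. (8): Dice similarity between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())
```

For example, hypothetical counts tp=90, tn=85, fp=10, fn=15 give a precision of 0.9 and an accuracy of 0.875; `dice` accepts the boolean prediction and ground-truth masks produced by a segmentation model.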

Table 2 tabulates the comparative assessment of the two techniques, DCNN and HFCNN. The accuracy of HFCNN is superior to that of the DCNN for segmentation and detection of liver cancer.

Table 2 Performance comparison of metrics for DCNN and HFCNN

Conclusion

This research examines the challenges that various ML and DL techniques face in segmenting and detecting liver cancer, comparing two advanced approaches, DCNN and HFCNN. The evaluation of DCNN- and HFCNN-based liver cancer detection considers several metrics, including precision, recall, accuracy, and F1-score; notably, HFCNN achieves a superior accuracy of 93.85% compared to DCNN. Combining methods can be even more effective, particularly by incorporating systems that process substantial datasets in real-time scenarios. To address class imbalance, data augmentation techniques can be employed to synthetically generate liver CT images, potentially improving model performance on rare cases.

As with the liver, segmenting the kidneys from abdominal images is challenging, and their automated segmentation with probabilistic methods [53] and traditional neural networks [54] may not always be effective; the comparison performed in this work could therefore be repeated for kidney segmentation in future work. Comparative evaluations of DCNN- and HFCNN-based methods can thus be helpful to many researchers.

Future research could explore several key areas. Multi-modal fusion, integrating MRI, PET, and ultrasound with CT scans, could enhance liver cancer detection accuracy by providing complementary information; this approach could also extend to kidney segmentation, addressing the limitations of traditional methods. Interoperability and integration of diagnostic systems into existing clinical infrastructure could improve workflow efficiency. The development of 3D and 4D imaging models could aid in tracking tumor progression and response to treatment. Finally, telemedicine integration could provide real-time consultations, benefiting regions with limited healthcare access. Expanding HFCNN-based methods to these areas has the potential to transform cancer diagnosis and treatment and to drive advances in medical imaging for other organs.
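The data-augmentation idea raised above can be sketched as follows. This is a minimal, hedged illustration using simple geometric transforms (flips and right-angle rotations) on a toy 2-D array standing in for a CT slice; real augmentation pipelines for liver CT would typically use elastic deformations, intensity perturbations, or generative models, and the `augment_slice` helper and array shapes here are assumptions for illustration, not the paper's setup.

```python
import random

def augment_slice(ct_slice, rng):
    """Return a randomly flipped/rotated copy of a 2-D slice (list of lists).

    Geometric transforms preserve pixel intensities, so labels for a
    segmentation mask can be transformed identically.
    """
    out = [row[:] for row in ct_slice]
    if rng.random() < 0.5:                      # horizontal flip
        out = [row[::-1] for row in out]
    for _ in range(rng.randrange(4)):           # rotate by 0/90/180/270 degrees
        out = [list(row) for row in zip(*out[::-1])]
    return out

rng = random.Random(0)
slice_ = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy "CT slice"
augmented = [augment_slice(slice_, rng) for _ in range(4)]   # 4 synthetic variants
```

Each call yields a variant with the same intensity values rearranged, which is the property that lets augmented samples inherit their ground-truth masks.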

Data availability

Data will be made available on reasonable request; readers may contact Sandeep Dwarkanth Pande in this regard.

References

  1. Sia D, Villanueva A, Friedman SL, Llovet JM. Liver cancer cell of origin, molecular class, and effects on patient prognosis. Gastroenterology. 2017;152(4):745–61.

  2. Chen L, Wei X, Gu D, Xu Y, Zhou H. Human liver cancer organoids: Biological applications, current challenges, and prospects in hepatoma therapy. Cancer Lett. 2023;555:216048.

  3. Bai Z, Jiang H, Li S, Yao YD. Liver tumor segmentation based on multi-scale candidate generation and fractal residual network. IEEE Access. 2019;7:82122–33.

  4. Khan N, Ahmed I, Kiran M, Adnan A. Overview of technical elements of liver segmentation. Int J Adv. 2016;7(12):271–8.

  5. Reis HC, Turk V, Khoshelham K, Kaya S. InSiNet: a deep convolutional approach to skin cancer detection and segmentation. Med Biol Eng Comput. 2022;1–20.

  6. Ganesan R, Yoon SJ, Suk KT. Microbiome and metabolomics in liver cancer: scientific technology. Int J Mol Sci. 2022;24(1):537.

  7. Jiang H, Diao Z, Shi T, Zhou Y, Wang F, Hu W, et al. A review of deep learning-based multiple-lesion recognition from medical images: classification, detection and segmentation. Comput Biol Med. 2023;157:106726. https://doi.org/10.1016/j.compbiomed.2023.106726.

  8. Men K, Chen X, Zhang Y, Zhang T, Dai J, Yi J, Li Y. Deep deconvolutional neural network for target segmentation of nasopharyngeal cancer in planning computed tomography images. Front Oncol. 2017;7:315.

  9. Wu K, Chen X, Ding M. Deep learning-based classification of focal liver lesions with contrast-enhanced ultrasound. Optik. 2014;125(15):4057–63.

  10. Trivizakis E, Manikis GC, Nikiforaki K, Drevelegas K, Constantinides M, Drevelegas A, Marias K. Extending 2-D convolutional neural networks to 3-D for advancing deep learning cancer classification with application to MRI liver tumor differentiation. IEEE J Biomed Health Informat. 2019;23(3):923–30.

  11. Göçeri E, Ünlü MZ, Dicle O. A comparative performance evaluation of various approaches for liver segmentation from SPIR images. Turk J Elec Eng Comp Sci. 2014;22(6):1834–46. https://doi.org/10.3906/elk-1304-36.

  12. Göçeri E, Gürcan MN, Dicle O. Fully automated liver segmentation from SPIR image series. Comput Biol Med. 2014;53:265–78. https://doi.org/10.1016/j.compbiomed.2014.08.009.

  13. Dura E, Domingo J, Göçeri E, et al. A method for liver segmentation in perfusion MR images using probabilistic atlases and viscous reconstruction. Pattern Anal Applic. 2018;21:1083–95. https://doi.org/10.1007/s10044-017-0666-z.

  14. Goceri E. A comparative evaluation for liver segmentation from SPIR images and a novel level set method using signed pressure force function [dissertation]. Izmir Institute of Technology (Turkey); 2013. p. 1–136.

  15. Goceri E, Unlu MZ, Guzelis C, Dicle O. An automatic level set based liver segmentation from MRI data sets. In: 2012 3rd International Conference on Image Processing Theory, Tools and Applications (IPTA), Istanbul, Turkey; 2012. p. 192–7. https://doi.org/10.1109/IPTA.2012.6469551.

  16. Dura E, Domingo J, Göçeri E. Iteratively learning a liver segmentation using probabilistic atlases: preliminary results. In: 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Anaheim, CA, USA; 2016. p. 593–8. https://doi.org/10.1109/ICMLA.2016.0104.

  17. Anand L, Maurya M, Seetha J, Nagaraju D, Ravuri A, Vidhya RG. An intelligent approach to segment liver cancer using Machine Learning Method. In: 2023 4th Int Conf Electronics Sustainable Communication Systems (ICESC), IEEE; 2023. p. 1488–1493.

  18. Dong X, Zhou Y, Wang L, Peng J, Lou Y, Fan Y. Liver cancer detection using hybridized fully convolutional neural network based on deep learning framework. IEEE Access. 2020;8:129889–98.

  19. Vasundhara N, Nandan AS, Hemanth SV, Macherla S, Madhura GK. An efficient biomedical solicitation in liver cancer classification by deep learning approach. In: Proc IEEE Int Conf Integr Circuits Commun Syst (ICICACS), 2023. p. 1–5.

  20. Linguraru MG, et al. Tumor burden analysis on computed tomography by automated liver and tumor segmentation. IEEE Trans Med Imaging. 2012;31(10):1965–76.

  21. Seo KS. Automatic hepatic tumor segmentation using composite hypotheses. In: Int Conf Image Anal Recognit; 2005. p. 92–929.

  22. Zhang X, Li Y, Wu Z, Zhang W. Multi-level fusion and attention-guided CNN for image dehazing. IEEE Trans Circuits Syst Video Technol. 2022;32(7):4226–37. https://doi.org/10.1109/TCSVT.2022.3143177.

  23. Lu W, Zhao H, He Q, Zhang S. Category-consistent deep network learning for accurate vehicle logo recognition. IEEE Trans Neural Networks Learn Syst. 2020;31(6):1879–90. https://doi.org/10.1109/TNNLS.2019.2941557.

  24. Chen MR, Zeng GQ, Lu KD. A many-objective population extremal optimization algorithm with an adaptive hybrid mutation operation. Inf Sci. 2019;501:287–300. https://doi.org/10.1016/j.ins.2019.07.043.

  25. Shi B, Chen J, Chen Y, Li Q. Prediction of recurrent spontaneous abortion using evolutionary machine learning with joint self-adaptive slime mould algorithm. Comput Biol Med. 2020;117:103588. https://doi.org/10.1016/j.compbiomed.2020.103588.

  26. Kim SH, Lee JM, Kim HY, et al. Deep learning in food category recognition. Inf Fusion. 2023;98:101859.

  27. Wang Y, Lee SJ, Choi YJ, et al. A cerebral microbleed diagnosis method via FeatureNet and ensembled randomized neural networks. Appl Soft Comput. 2021;109:107567. https://doi.org/10.1016/j.asoc.2021.107567.

  28. Zhang L, Liu F, Yao W, et al. CTBViT: a novel ViT for tuberculosis classification with efficient block and randomized classifier. IEEE Access. 2023;11:38458–69.

  29. Chartrand G, Cresson T, Chav R, Gotra A, Tang A, Guise JAD. Liver segmentation on CT and MR using laplacian mesh optimization. IEEE Trans Biomed Eng. 2017;64(9):2110–21.

  30. Goceri E. Automatic labeling of portal and hepatic veins from MR images prior to liver transplantation. Int J Comput Assist Radiol Surg. 2016;11:2153–61.

  31. AlZu’bi S, Islam N, Abbod M. Multiresolution analysis using wavelet, ridgelet, and curvelet transforms for medical image segmentation. Int J Biomed Imaging. 2011;2011:136034.

  32. Mahr A, Levegrun S, Bahner ML, Kress J, Zuna I, Schlegel W. Usability of semi-automatic segmentation algorithms for tumor volume determination. Invest Radiol. 1999;34(2):143–50.

  33. Li X, Chen H, Qi X, Dou Q, Fu CW, Heng PA. H-DenseUNet: hybrid densely connected UNet for liver and tumor segmentation from CT volumes. IEEE Trans Med Imag. 2018;37(12):2663–74.

  34. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. Proc int conf Med Image Comput Comput-Assist Intervent. Cham, Switzerland: Springer; 2015. p. 234–41.

  35. Du G, Cao X, Liang J, Chen X, Zhan Y. Medical image segmentation based on U-net: A review. J Imag Sci Technol. 2020;64(2):020508-1-020508-12.

  36. Siddique N, Paheding S, Elkin CP, Devabhaktuni V. U-net and its variants for medical image segmentation: a review of theory and applications. IEEE Access. 2021;9:82031–57.

  37. Fan T, Wang G, Wang X, Li Y, Wang H. MSN-Net: a multi-scale context nested U-Net for liver segmentation. Signal Image Video Process. 2021;15(6):1089–97.

  38. Vadali S, Deekshitulu GVS, Murthy JVR. Analysis of liver cancer using data mining SVM algorithm in MATLAB. Soft Computing for Problem solving: SocProS. Volume 1. Singapore: Springer; 2019. pp. 163–75.

  39. Dutta A, Dubey A. Detection of liver cancer using image processing techniques. In: Proc Int Conf Commun Signal Process (ICCSP); 2019. p. 281–285.

  40. Liu Y, Liu Z, Li H, et al. Model-based segmentation of liver tumors in dynamic contrast-enhanced MRI images. Phys Med Biol. 2012;57(3):611–26.

  41. He Y, Wu Y, Shi X, et al. Combined classification and segmentation approach for liver tumor analysis using MRI images. BioMed Eng OnLine. 2016;15(1):1–11.

  42. Islam MS. Modeling a hybrid system for liver tumor detection: image segmentation and classification using a deep convolutional neural network. Computers. 2020;9(3):56.

  43. Bu Y, Lu X, Jiang J. Denoising autoencoder-based liver cancer detection and segmentation: a review. Neural Comput Appl. 2023;35(2):949–59.

  44. Asaduzzaman M, Ahmed J, Karim M, et al. Automated liver tumor segmentation and classification using deep learning and transfer learning techniques. Comput Biol Med. 2023;150:106393.

  45. Shah A, Ghazal M, Ali S. Convolutional neural network-based hybrid liver tumor detection using MR images. Comput Biol Med. 2023;137:104650.

  46. Kandasamy S, Karthikeyan P, Prabhu S, et al. A review on deep learning-based liver cancer detection and segmentation methods. Artif Intell Med. 2023;133:102380.

  47. Garcia A, Garcia R, Lujan M, et al. Multimodal deep learning approaches for liver tumor segmentation: a systematic review. Med Image Anal. 2023;77:102411.

  48. Durdu A, Gozde T, Engin U, et al. Hybrid model-based liver tumor detection in CT and MRI images using convolutional neural networks. Biol Psychol. 2022;134:19–29.

  49. Kumar R, Singh R, Sharma G, et al. Detection and classification of liver tumors using machine learning and image segmentation methods. Healthc Technol Lett. 2022;9(3):168–75.

  50. Ijaz M, Baig H, Imran M. Convolutional neural networks for automated liver cancer detection in CT scans. J Med Imaging Health Inf. 2022;12(10):2396–402.

  51. Sharif M, Zaidi F, Anwar S. Multi-channel liver cancer detection using deep learning models: a comparative study. Comput Biol Med. 2022;137:104712.

  52. Bhattacharya P, Aitkenhead M, Dutta A, et al. Deep learning in liver cancer detection and segmentation: a comprehensive review. Biol Med. 2023;17(1):42–59.

  53. Shen X, Zhang L, Yang F, et al. U-net-based deep learning model for liver tumor segmentation in CT and MRI images. J Med Imaging. 2023;10(2):103–15.

  54. Li Y, Zhang Y, Wang Y, et al. A novel hybrid deep learning approach for liver cancer detection and diagnosis from CT images. Int J Comput Assist Radiol Surg. 2023;18(1):124–36.

Acknowledgements

This research was supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R234), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Funding

No funding is available.

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed equally to this work.

Corresponding authors

Correspondence to Sandeep Dwarkanth Pande, Ala Saleh Alluhaidan or Ebenezer Bonyah.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

About this article


Cite this article

Pande, S.D., Kalyani, P., Nagendram, S. et al. Comparative analysis of the DCNN and HFCNN Based Computerized detection of liver cancer. BMC Med Imaging 25, 37 (2025). https://doi.org/10.1186/s12880-025-01578-4


  • DOI: https://doi.org/10.1186/s12880-025-01578-4
