
Additional information
- Ethical approval: N/A
- Consent: N/A
- Funding: No industry funding
- Conflicts of interest: N/A
- Author contribution: Sonia Suneja – Conceptualization, Writing – original draft, review and editing.
- Guarantor: Dr. Manvinder Sharma
- Provenance and peer-review: Unsolicited and externally peer-reviewed
- Data availability statement: N/A
Keywords: Pancreatic tumor, Pancreatic cancer diagnosis, Pancreatic ductal adenocarcinoma, Deep learning, Artificial intelligence.
Peer Review
Received: 9 December 2025
Last revised: 26 February 2026
Accepted: 1 March 2026
Version accepted: 6
Published: 8 March 2026
Plain Language Summary Infographic

Abstract
Pancreatic ductal adenocarcinoma (PDAC) remains among the most lethal solid malignancies globally, largely due to delayed diagnosis and the limited sensitivity of conventional imaging for early-stage lesions. The substantial mortality burden underscores the urgent need for improved diagnostic strategies capable of identifying subtle radiological patterns in contrast-enhanced computed tomography (CT) scans. This review systematically evaluates AI-based techniques developed between 2018 and 2025 to analyze pancreatic tumors using CT images. Eligible studies were identified through systematic searches of PubMed, Web of Science, Scopus, and IEEE Xplore, conducted in accordance with the PRISMA 2020 guidelines; CT was the only imaging modality considered. The electronic searches identified 236 records.
After removal of 41 duplicates, 195 records were screened, of which 78 were excluded following title and abstract screening. Full-text reports were sought for 117 studies; 36 could not be retrieved. Of the 81 reports assessed for eligibility, 48 were excluded (20 non-CT imaging modalities; 9 not specific to pancreatic cancer/PDAC; 19 insufficient methodological detail), leaving 33 included studies. These were systematically categorized into four themes based on their primary focus: (i) AI-driven segmentation for pancreatic tumor localization, (ii) deep learning-based tumor classification, (iii) CT-based radiomics and feature-driven analysis, and (iv) early detection models. By consolidating recent advances in AI-driven frameworks that integrate CT imaging data, this review aims to enhance diagnostic accuracy and enable earlier identification of PDAC.
Introduction
Pancreatic cancer is one of the deadliest cancers: because most patients are diagnosed at an advanced stage, its prognosis is usually poor.1 Although relatively less common than other malignancies, pancreatic cancer accounts for a disproportionately high number of cancer-related deaths. In developed nations, pancreatic cancer ranks seventh among cancer-driven deaths, according to the World Health Organization, and it is projected to become the second leading cause of cancer-related death by 2030.1–3 Gender differences are evident: men exhibit a slightly higher incidence rate of 5.7 per 100,000 (34,530 cases) and a mortality rate of 4.5% (27,270 deaths), compared with women, who have an incidence rate of 4.9 per 100,000 (31,910 cases) and a mortality rate of 4.0%.1,2 This underscores the urgent need for improved diagnostic and treatment methodologies.4 Traditional imaging modalities, including CT scans, have been integral to diagnosing pancreatic cancer; however, they are often limited by their inability to accurately detect small or early-stage tumors.5,6
Over the last few years, image analysis by means of artificial intelligence (AI), particularly deep learning (DL), has shown very favorable performance as an aid to routine radiological assessment. AI-powered systems can reliably and reproducibly perform feature identification, fine texture assessment, volumetric quantification, and risk classification of patterns not immediately visible to the human eye. These developments assist in many important activities, such as early tumor detection and lesion localization. This review assesses the performance of these techniques, outlining their respective strengths and limitations. Distinct from earlier reviews, this paper systematically classifies AI models based on their functional approaches, such as segmentation, classification, or hybrid techniques, while also emphasizing the datasets employed, the evaluation metrics applied, and their practical relevance to clinical settings. Furthermore, the study proposes future research avenues to help bridge the gap between technological innovation and clinical application.
The primary objective of this study is to assess the current state of AI models for pancreatic cancer diagnosis and to compare different AI frameworks and techniques. It also identifies key challenges and opens new avenues for future research, highlighting essential gaps that must be filled for the appropriate design and implementation of innovative AI-based solutions in the clinical setting. The remainder of the paper is organized as follows. The section “Pancreas Tumors in the Human Body” provides background on pancreatic tumors. The “Methods” section presents the selected studies and summarizes their characteristics. The “Results” section reports the results of the systematic analysis. A comparative analysis of AI models follows in the section “Comparative Analysis of AI Techniques”. The section “Challenges” discusses the challenges and limitations associated with current diagnostic approaches. The sections “Future Directions and Scope” and “Conclusions” outline future research directions and conclude the study.
Pancreas Tumors in the Human Body
Tumors are masses of abnormal pancreatic cells that proliferate uncontrollably and disrupt normal tissue function, leading to malignant transformation and pancreatic cancer. The most common type is ductal adenocarcinoma, in which the cells lining the pancreatic ducts are involved.7 Patients with pancreatic cancer may experience symptoms such as abdominal pain, loss of appetite, reduced energy levels, weight loss, and jaundice. Unfortunately, the disease is often detected only after it has metastasized.4,7
Exocrine Pancreatic Cancer
The pancreas is an elongated retroperitoneal gland located in the upper abdomen, extending from the duodenal loop to the splenic hilum, and anatomically divided into the head, neck, body, and tail. Its exocrine component constitutes the majority of pancreatic tissue and is responsible for the production and secretion of digestive enzymes into the pancreatic ducts, facilitating nutrient breakdown and absorption in the duodenum. Exocrine pancreatic cancer arises from the epithelial cells involved in enzyme production and ductal transport. The exocrine portion comprises acinar cells and ductal structures, both of which contribute to digestive enzyme synthesis and secretion. The most common histological subtype is pancreatic ductal adenocarcinoma (PDAC), which originates from the epithelial lining of the pancreatic ducts. PDAC is highly aggressive and frequently diagnosed at advanced stages due to the nonspecific or subtle nature of early clinical symptoms. These tumors demonstrate a strong propensity for early metastasis, particularly to the liver, lungs, and peritoneal cavity.8
Endocrine Pancreatic Cancer
The endocrine pancreas is the part of the pancreas composed of hormone-producing cells, which are arranged in isolated clusters known as the islets of Langerhans. Endocrine cells secrete hormones directly into the bloodstream, among them insulin and glucagon, which regulate glucose and overall metabolism.1,9 Endocrine pancreatic tumors, also called pancreatic neuroendocrine tumors, are derived from the islet cells of the pancreas and are rare compared with exocrine tumors. Certain tumors produce hormones in excess and are associated with a clinical syndrome, while others are non-functional and may remain asymptomatic for a long time. Prognosis and treatment options vary significantly; some tumors are indolent and possibly asymptomatic.4,10
Methods
Study Selection Process
In accordance with PRISMA 2020 guidelines, a systematic literature search was executed in four electronic databases: IEEE Xplore, PubMed, Scopus, and Web of Science to find relevant studies. Initially, there were 236 records identified for inclusion, of which 41 duplicate or clearly irrelevant records were removed before screening, and 195 records were screened based on title and abstract. A total of 78 records were excluded as they were either not related to AI-based pancreatic cancer diagnosis or did not involve CT imaging. The remaining 117 reports were sought for full-text retrieval, of which full text for 36 articles was not available as access was restricted. An eligibility assessment was done for 81 full-text articles, of which 48 investigations were excluded according to the predefined exclusion criteria. Ultimately, 33 studies were found to have met all eligibility criteria and were thus included in qualitative synthesis.
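The arithmetic of this record flow can be sanity-checked in a few lines. The following is a minimal sketch; the counts are exactly those reported above, and the variable names are purely illustrative:

```python
# Sanity check of the PRISMA 2020 record flow reported in this review.
identified = 236
duplicates_removed = 41
screened = identified - duplicates_removed                        # 195
excluded_title_abstract = 78
sought_for_retrieval = screened - excluded_title_abstract         # 117
not_retrieved = 36
assessed_for_eligibility = sought_for_retrieval - not_retrieved   # 81
excluded_full_text = 20 + 9 + 19  # non-CT; non-PDAC; insufficient detail
included = assessed_for_eligibility - excluded_full_text
print(screened, assessed_for_eligibility, included)  # 195 81 33
```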
As illustrated in Figure 1, a significant upward trend is observed, indicating an increasing number of publications over time. These studies were shortlisted based on their title, abstract, and full text. The study followed the PRISMA 2020 flow diagram in Figure 2, which outlines the identification, screening, eligibility assessment, and inclusion process.11,12 The review protocol was not prospectively registered. Full-text articles of potentially eligible studies were independently assessed by two reviewers against the predefined inclusion criteria.


Figure 2: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram.12
Search String
A systematic review of the literature identified studies of AI-based pancreatic cancer diagnosis with CT imaging. A core search strategy was created and modified for each database to conform to specific syntax and indexing rules. All searches were conducted manually on 15 July 2025. No automation tools were used in any of the searches. All databases were consistently filtered for language (English) and publication year (2018–2025). Table 1 summarizes the search strategy and the filters applied for each database.
| Table 1: Database-specific Search String and Filters. | |||
| Database | Search Date | Search String | Filters Applied |
| PubMed | 15 July 2025 | ((“Pancreatic Cancer” OR “Pancreatic Ductal Adenocarcinoma” OR “PDAC”) AND (“deep learning” OR “machine learning” OR “artificial intelligence”) AND (“CT scan” OR “computed tomography”)) | Language = English; Publication years = 2018–2025 |
| Scopus | 15 July 2025 | (TITLE-ABS-KEY (“Pancreatic Cancer” OR “Pancreatic Ductal Adenocarcinoma” OR “PDAC”) AND TITLE-ABS-KEY (“deep learning” OR “machine learning” OR “artificial intelligence”) AND TITLE-ABS-KEY (“CT scan” OR “computed tomography”)) | Language = English; Publication years = 2018–2025 |
| Web of Science | 15 July 2025 | TS= (“Pancreatic Cancer” OR “Pancreatic Ductal Adenocarcinoma” OR “PDAC”) AND TS= (“CT scan” OR “computed tomography”) AND TS= (“deep learning” OR “machine learning” OR “artificial intelligence”) | Language = English; Publication years = 2018–2025; Source type = Article |
| IEEE Xplore | 15 July 2025 | (“Pancreatic Cancer” OR “Pancreatic Ductal Adenocarcinoma” OR “PDAC”) AND (“CT scan” OR “computed tomography”) AND (“deep learning” OR “machine learning” OR “artificial intelligence”) | Language = English; Publication years = 2018–2025; Source type = Journal and Conference |
Inclusion Criteria
Studies were included in the systematic review if they met all of the following criteria:
- The study was published in peer-reviewed journals or presented at reputable international conferences.
- The article was written in the English Language.
- The study was published between January 2018 and July 2025, reflecting recent advances in AI.
- The title or abstract explicitly contained keywords related to pancreatic cancer or PDAC and to AI or DL.
- The study focused on AI-based diagnosis, detection, segmentation, or classification of pancreatic cancer.
- The imaging modality used was CT only.
- The study reported original experimental results, including model architecture, dataset characteristics, and performance metrics.
Exclusion Criteria
Studies were excluded if any of the following conditions applied:
- The article was published in a language other than English.
- The study was published before 2018.
- The full text of the article was unavailable.
- The study did not involve pancreatic cancer or PDAC.
- The study relied exclusively on non-CT imaging modalities (e.g., magnetic resonance imaging (MRI), positron emission tomography (PET), or endoscopic ultrasound (EUS)).
- The study focused on biopsy-based, genomic, histopathological, or non-imaging approaches.
- The study did not employ machine learning (ML) or deep learning (DL) techniques.
- Review articles, editorials, letters, abstracts without full papers, and non-experimental studies were excluded.
Risk of Bias Assessment Using QUADAS-2
The methodological quality and risk of bias of diagnostic accuracy studies were assessed using the QUADAS-2 tool. The authors performed descriptive comparisons across included studies for the four QUADAS-2 risk-of-bias domains: patient selection, index test, reference standard, and flow and timing. Some risk of bias was identified across domains, and applicability concerns were noted for the first three domains. Results of this assessment are presented as descriptive summaries, and the outcome of the evaluation assisted in the interpretation of reported findings.13
PROBAST Assessment for Prediction Models
Prediction and classification studies were evaluated using the Prediction Model Risk of Bias Assessment Tool (PROBAST). The assessment covered four domains: participants, predictors, outcome, and analysis. Given the heterogeneity of included studies, PROBAST was applied descriptively at the review level to identify common sources of bias rather than to generate pooled scores. The findings were used to contextualize the reliability and generalizability of reported model performance.14
Radiomics Quality Score Assessment
The radiomics quality score (RQS) was applied only to radiomics-based studies to assess their methodological rigor and reproducibility. Assessment was done on important factors like quality of imaging protocol, stability of radiomics features, validation strategy, demonstration of clinical utility, and transparency of the protocol. The overall methodological quality was qualitatively interpreted based on RQS criteria.15
Data Extraction and Synthesis
Data were extracted independently from each included study using a predefined structured format. The variables extracted included year of publication, dataset features (sample size, source, type), imaging details, model type (ML, DL, hybrid), task (segmentation/classification), performance metrics (accuracy, sensitivity, specificity, area under the curve (AUC), Dice similarity coefficient (DSC), F1-score), and validation strategy.16 Owing to methodological heterogeneity across studies (e.g., datasets, model architectures, evaluation metrics), a meta-analysis was not feasible; findings were instead synthesized narratively and in tabular form developed for this review.
Results
This paper surveys a total of 33 research studies from 2018 to 2025 focusing on PDAC diagnosis. In this section, we describe AI-driven segmentation models for tumor localization, DL models for tumor classification, radiomics-based approaches using CT imaging, and early detection models. The performance of each category was evaluated using standard metrics such as DSC, sensitivity, specificity, and AUC, with attention to clinical utility and feasibility of deployment.
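To make these metrics concrete, the following is a minimal NumPy sketch of voxel-wise DSC, sensitivity, and specificity computed on toy binary masks (synthetic data; this is not the code used by any of the included studies):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def sensitivity_specificity(pred, truth):
    """Voxel-wise sensitivity (recall) and specificity for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn), tn / (tn + fp)

# Toy 1-D "masks" standing in for CT segmentation volumes.
truth = np.array([0, 1, 1, 1, 0, 0])
pred = np.array([0, 1, 1, 0, 0, 1])
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

The same formulas apply unchanged to 3D volumes, since NumPy reduces over all voxels regardless of array shape.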
AI-Driven Segmentation for Pancreas Tumor Localization
Among the segmentation studies examined, integrating linear self-attention with nnU-Net achieved a DSC of 88.3% on the MSD dataset, while ADAU-Net achieved a DSC of 83.76% on the NIH dataset.17 Deep Q-networks18 combined with U-Net architectures achieved a DSC of 86.93% on the NIH pancreas segmentation dataset.10 For pancreatic subregion segmentation, a mean DSC of 0.70 (70%) was achieved.
Tumor Classification Using Deep Learning Models
Hybrid architectures such as the Mutual Information Minimization and Cross-Modal Fusion Network (MIM-CMFNet), which combines mutual information minimization with cross-modal fusion, achieved a Dice coefficient of 73.14%.19 A CAD tool integrating five convolutional neural network (CNN) classifiers20 reported 89.9% sensitivity and 95.9% specificity for detecting small pancreatic malignancies (<2 cm), demonstrating the strength of ensemble learning for identifying minute and otherwise easily missed tumors. Despite high classification metrics, these models often struggle with imbalanced datasets and generalizability across imaging centers, pointing to the need for larger and more diverse training cohorts.
Radiomics in Pancreatic Cancer Diagnosis
Radiomics-based studies extracted quantitative imaging features (QIFs) from pre-diagnostic CT scans using neighborhood component analysis (NCA) and principal component analysis (PCA), achieving 94%–95% accuracy for tumor detection.21 Radiomics-based analysis of texture, shape, and volumetric features has been effectively utilized in tumor heterogeneity assessment, as seen in hybrid architectures like DSD-ASPP-Net,22 which achieved a DSC of 91.64% on a local hospital dataset, suggesting that texture and spatial information can significantly enhance both segmentation and classification performance. However, reproducibility remains a concern due to differences in imaging protocols, scanner types, and feature extraction pipelines, emphasizing the need for standardization prior to clinical translation.
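To illustrate the dimensionality reduction step in such radiomics pipelines, the following is a minimal NumPy sketch of PCA via SVD on a synthetic feature matrix (NCA and the actual radiomics feature extraction are omitted; the data and function name are illustrative only):

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project a (samples x features) matrix onto its top principal
    components via SVD -- a minimal stand-in for the PCA step."""
    centered = features - features.mean(axis=0)
    # Rows of vt are the principal axes of the centered data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Toy matrix: 6 "patients" x 4 quantitative imaging features.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
Z = pca_reduce(X, 2)
print(Z.shape)  # (6, 2)
```

In practice, the reduced feature matrix `Z` would then feed a downstream classifier, with the projection fitted on training data only to avoid leakage.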
Early Detection Models
The large-scale pancreatic detection model addressed the challenge of identifying small and isodense tumors using non-contrast CT scans, achieving an AUC of 0.986 to 0.996.23 Such high diagnostic accuracy at early stages of disease progression suggests that AI models can significantly enhance the likelihood of successful therapeutic interventions. Nevertheless, the reliance on subtle imaging features demands highly sensitive models, and the generalizability of these early detection models needs further validation across multicenter datasets and diverse demographic representation.24
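The AUC values cited for these models can be read as the probability that a randomly chosen cancer scan receives a higher model score than a randomly chosen control scan. A minimal illustration with hypothetical scores:

```python
def auc_score(scores_pos, scores_neg):
    """Empirical AUC: the probability that a positive case receives a
    higher model score than a negative case (ties count as half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for cancer (positive) and control (negative) scans.
print(auc_score([0.9, 0.8, 0.7], [0.6, 0.4]))  # 1.0 (perfect separation)
```

An AUC of 0.986–0.996, as reported above, therefore means positives almost always outrank negatives, although it says nothing about calibration of the scores themselves.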
Reporting Quality Assessment (CLAIM 2024)
Assessment using the CLAIM 2024 checklist revealed sub-optimal adherence to several essential reporting standards among the 33 included studies. External validation was explicitly reported in only 5 of 33 studies (15.2%), while calibration analyses were described in 2 studies (6.1%). Measures of statistical uncertainty, such as confidence intervals or variance estimates for performance metrics, were provided in 8 studies (24.2%). Public availability of source code, trained model weights, or reproducible pipelines was reported in 3 studies (9.1%). Robustness analyses, including evaluation under dataset shift conditions such as multicenter or cross-institutional validation, were performed in only 4 studies (12.1%). Collectively, these findings highlight significant limitations in transparency, reproducibility, and clinical generalizability across current CT-based AI research for pancreatic cancer detection.
Stratified Performance by Validation Type
Only a small subset of studies was externally validated (n = 5). Performance trends on external validation datasets were heterogeneous rather than displaying a clear pattern of inferior performance. Javed et al.17 noted a minor decrease in classification accuracy between internal (93%) and external (89.3%) datasets.25 The public cohort from Chen et al.21 was relatively stable compared to a large institutional cohort.20 Cao et al.23 reported highly consistent AUC values (0.986–0.996) on large multi-institutional validation datasets. Ramaekers et al.26 reported a high AUC on an external dataset (0.99), but with reduced specificity. Across studies involving external evaluation, classification accuracy ranged from 89.3% to 93%, AUC from 0.81 to 0.99, and segmentation DSC from 0.64 to 0.86.27
Comparative Analysis of AI Techniques
The area of pancreatic cancer diagnosis using AI and advanced CT imaging has evolved rapidly in recent years. A multitude of research organizations have contributed significantly to this field. Tables 2 and 3 summarize studies published between 2018 and 2025, focusing on detection, segmentation, and classification of pancreatic cancer using DL and ML techniques.
Table 2 highlights that encoder–decoder frameworks based on U-Net architectures are used in the majority of segmentation studies. U-Net-like architectures preserve spatial hierarchies through skip connections, which are particularly beneficial for anatomical structures with irregular boundaries. Boers et al.37 developed an interactive 3D U-Net capable of reducing slice-wise inconsistencies and the need for user re-annotation; using volumetric convolutions and the Adam optimizer on a private cohort of CT scans, they achieved a DSC of 78%. Subsequent studies mainly focused on enhancing feature representation through salience awareness and multi-scale refinement. Hu et al.22 proposed a framework based on DenseASPP to iteratively refine the dissimilarity between the region of interest and the background; the DSC on the NIH dataset ranged from 67.19% to 91.64%. Z. Chen et al.36 and W. Li et al.24 employed multi-scale feature fusion strategies.
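The encoder–decoder idea behind these U-Net variants can be sketched with toy NumPy operations. Learned convolutions are omitted, so this only illustrates how a skip connection recombines fine encoder detail with upsampled decoder features (all names and shapes are illustrative):

```python
import numpy as np

def downsample(x):
    """2x2 max pooling on a square feature map -- a toy encoder step."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour 2x upsampling -- a toy decoder step."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

enc1 = np.arange(16, dtype=float).reshape(4, 4)  # encoder feature map
bottleneck = downsample(enc1)                    # coarse 2x2 representation
dec1 = upsample(bottleneck)                      # decoder output, back to 4x4
# Skip connection: stack encoder features with the upsampled decoder
# features along a channel axis, preserving fine spatial detail that the
# bottleneck alone would lose.
merged = np.stack([enc1, dec1], axis=0)
print(merged.shape)  # (2, 4, 4)
```

In a real U-Net the merged channels are processed by further convolutions; the sketch shows only why the concatenation restores boundary detail lost during pooling.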
More recently, studies have emphasized cross-dataset generalization and architectural sophistication. Mahmoudi et al.30 combined the outputs of Attention U-Net and Texture Attention U-Net (TAU-Net) for the pancreas and PDAC mass regions, respectively; the reported mean DSC was approximately 0.64 for PDAC mass segmentation, highlighting the persistent performance gap between organ and tumor segmentation. Reinforcement learning-based anatomical navigation proposed by Amiri et al.29 further extended segmentation to pancreatic subregions and ducts, achieving a mean DSC of 0.70. Large-scale evaluations across the Medical Segmentation Decathlon (MSD), The Cancer Imaging Archive (TCIA), and National Institutes of Health (NIH) datasets by Mukherjee et al.27 and Yang et al.39 demonstrated that segmentation performance remains highly dataset-dependent, underscoring challenges related to domain shift and annotation variability.
| Table 2: Overview of CT-based machine learning and deep learning studies for pancreatic tumor segmentation. | ||||||||
| Study/Year | Dataset Source | Patient (n) | CT scans (n) | Ground Truth | Validation Type | Model and Architecture | Primary Metric (DSC) | 95% CI |
| (Qiu et al., 2024)28 | MSD + NIH | NR | 2537 (MSD), 82 (NIH) | Public dataset annotation | Cross-dataset validation | DL cascading model | 59.24% (MSD), 87.63% (NIH) | CI NR |
| (Amiri et al., 2024)29 | Pancreas sub-region + Duct dataset | NR | 82 +37 | Single radiologist manual annotation | Internal dataset validation | Reinforcement learning-based anatomical maps for pancreas sub-region and duct segmentation | 70% | CI NR |
| (Mukherjee et al., 2023)27 | TCIA + MSD | 41 (TCIA), 152 (MSD) | 41 + 152 Volumes | Public dataset expert annotation | Public cross-dataset validation | Bounding-box-based 3D convolutional neural network (CNN) | 84.0 ± 8% (TCIA), 82 ± 6% (MSD) | CI NR |
| (Mahmoudi et al., 2022)30 | MSD + University Hospital | 138 (MSD) + 19 (Hospital) | NR | Histopathology + Radiologist annotation | External institutional validation | Hybrid model that combines Attention U-Net and Texture Attention U-Net (TAU-Net) | 72.7% (Pancreas), 60.6% (PDAC) | CI NR |
| (M. Li et al., 2022)31 (AX-Unet) | NIH + MSD | NR | 82 (NIH), NR MSD | Public dataset annotation | Cross-dataset validation | AX-Unet | 87.7 ± 3.8% (NIH), 85.9 ± 5.1% (MSD) | CI NR |
| (M. Li et al., 2022)31 (ADAU-Net) | NIH | NR | 82 | Public dataset annotation | Internal validation | Attention-guided Duplex Adversarial U-Net (ADAU-Net) | 83.76% | CI NR |
| (M. Li et al., 2021)32 | NIH | NR | 82 | Public dataset annotation | Internal validation | Multi-level pyramidal pooling residual U-Net integrated with an adversarial mechanism | 81.36% | CI NR |
| (Wang et al., 2021)33 | NIH | NR | 82 | Public dataset annotation | Internal validation | View Adaptive 3D U-Net (VA-3DUNet) | 86.19% | CI NR |
| (Tian et al., 2021)34 | NIH | NR | 82 | Public dataset annotation | Internal validation | Markov Chain Monte Carlo (MCMC) guided convolutional neural network (CNN) approach | 78.13% | CI NR |
| (Hu et al., 2021)22 | NIH | NR | 82 | Public dataset annotation | Internal validation | DenseASPP model that learns the pancreas location and probability map | 67.19% – 91.64% | CI NR |
| (W. Li et al., 2021)24 | NIH + MSD | NR | 82 + 281 | Public dataset annotation | Cross-dataset validation | MAD-Unet | 88.52% | CI NR |
| (Xue et al., 2021)35 | NIH | NR | 82 | Public dataset annotation | Internal validation | Cascaded multitask 3-D fully convolutional network (FCN) | 86.4% | CI NR |
| (Z. Chen et al., 2020)36 | NIH | NR | 82 | Public dataset annotation | Internal validation | Multi-scale feature fusion (MsFF) model | 87.26% | CI NR |
| (Boers et al., 2020)37 | Internal dataset | NR | 100 | Public dataset annotation | Internal validation | iUnet – interactive version of the U-net architecture | 78% | CI NR |
| (Liu et al., 2020)38 | NIH | NR | 82 | Public dataset annotation | Internal validation | Ensemble model that combines five different CNNs based on the U-Net architecture | 84.10 ± 4.91% | CI NR |
| (Man et al., 2019)18 | NIH | NR | 82 | Public dataset annotation | Internal validation | Deep Q Network (DQN) for localization and a deformable U-Net for segmentation | 86.93 ± 4.92% | CI NR |
Table 3 shows a considerably broader range of model architectures, including classical ML/DL, transformers, and hybrid pipelines. Several CNN-based classifiers were trained on publicly available CT-image datasets hosted on Kaggle. Nadeem et al.6 proposed a multi-stage pipeline for multi-class pancreatic lesion classification, involving anisotropic diffusion filtering-based preprocessing, U-Net-based watershed segmentation, and AlexNet classification, and reported very high accuracy (98.72%) and AUC (0.9979). Hybrid strategies combining transfer learning with traditional classifiers have also become more common. For example, Alaca and Akmeşe5 utilized DenseNet121 and InceptionV3 as feature extractors, followed by k-nearest neighbors, support vector machines, and random forests, and found balanced accuracies of up to 92.5%. Similarly, Alaca7 used DARTS-optimized MobileViT models. Institutional and clinically curated datasets remain relatively limited in the current literature.
However, such studies offer a more realistic estimation of performance. Javed et al.17 and Mitrea et al.43 validated their models on internally and externally sourced cohorts. Although the reported accuracy values were slightly lower in externally validated cohorts, these findings provide more clinically meaningful evidence of generalizability. Earlier, feature-engineering-driven works, such as Choi et al.47 and Chu et al.48, included texture descriptors, CA19-9 biomarkers, and volumetric and radiomics features. Conventional ML approaches may therefore still yield competitive results in clinically constrained settings with limited data availability. Overall, the classification literature shows extensive variability across datasets and validation designs.
| Table 3: Overview of CT-based machine learning and deep learning studies for pancreatic cancer classification. | ||||||||
| Study/Year | Patients (n) | CT scans (n) | Dataset and N | Ground Truth | Validation Type | Model /Architecture | Primary Metrics | 95% CI |
| (Nadeem et al., 2025)6 | NR | NR | “Pancreatic CT Images” dataset on Kaggle | Dataset-provided labels | Internal Validation | AlexNet | Acc 98.72%, AUC 0.9979 | CI NR |
| (Alaca and Akmeşe, 2025)5 | NR | NR | “Pancreatic CT Images” dataset on Kaggle | Dataset-provided labels | Internal Validation | DenseNet 121 + KNN/ RF/ SVM /Inception V3 hybrids | Balanced ACC 77.5%–92.5% | CI NR |
| (Alaca, 2025)7 | NR | NR | “Pancreatic CT Images” dataset on Kaggle | Dataset-provided labels | Internal Validation | DARTS-MobileViT | Acc 97.33%, F1-score 96.25% | CI NR |
| (Thanya and Jeslin, 2025)40 | NR | NR | Institutional CT | Reference Standard NR | Internal Validation | DeepOptimalNet | Acc 98.78%–99.87%, F1-score 97% | CI NR |
| (J. Li et al., 2025)8 | NR | NR | Huaihe Hospital, China | Reference standard NR | Internal Validation | RPMSNet | Acc 73.67%, Sensitivity 71.54%, precision 76.78%, F1-score 72.61%, AUC 81.03% | CI NR |
| (Mahendran et al., 2025)41 | NR | NR | Kaggle dataset Pancreatic CT Images | Dataset-provided labels (NR) | Internal Validation | Transformer-based PancreasNet | Acc 92.4%, Specificity 90.7%, Recall 93.1% | CI NR |
| (Babaei et al., 2024)42 | NR | 281 | MSD | Public dataset annotation | Internal Validation | Denoising Diffusion models (DDPMs) anomaly detection method | Acc 81.6% | CI NR |
| (Mitrea et al., 2024)43 | NR | NR | Institutional CT | Reference standard NR | Internal validation | Hybrid neural recognition pipeline | Acc 98% | CI NR |
| (Ramaekers et al., 2024)26 | NR | NR | Internal dataset (The Netherlands), MSD dataset (USA) | Radiologist + Pathology confirmation | Public cross-dataset validation | 3D U-Net (DL Model) | AUC 0.99, Sensitivity 1.00, Specificity 0.86 | CI NR |
| (W. Chen et al., 2023)21 | NR | 694 | Internal Dataset | Reference standard NR | Internal Validation | Quantitative imaging features (QIFs) extracted using NCA and PCA | Acc 94%–95% | CI NR |
| (P. T. Chen et al., 2023)20 | 546 (NTUH) | 281 (MSD), 82 (TCIA), 30 (Synapse) | National Taiwan University Hospital + MSD (USA) + TCIA (USA) + Synapse (China) | Pathology-confirmed PDAC | Multi-center validation | 5CNN CAD | Sensitivity 89.9%, Specificity 95.9% | CI NR |
| (Cao et al., 2023b)44 | NR | 3208 | Shanghai Institution of Pancreatic Diseases (SIPD) Dataset | Pathology-confirmed PDAC | Multi-institutional validation | PANDA (DL) | AUC 0.986–0.996 | CI NR |
| (Shi et al., 2023)45 | NR | 71 (UNMC), 103 (MSD), 80 (TCIA) | University of Nebraska Medical Centre (USA), MSD (USA), TCIA (USA) | Reference standard NR | Cross-dataset validation | 3DGAUnet + GAN classifier | Not stated | CI NR |
| (Javed et al., 2022)17 | NR | 58 + 42 | Internal + External Dataset (USA) | Histopathology-confirmed PDAC | External independent validation | Subregional risk prediction model | Acc 93% (Int), 89.3% (Ext) | CI NR |
| (Zhang et al., 2020)46 | NR | 2890 | Affiliated Hospital of Qingdao University (China) | Reference standard NR | Internal validation | ResNet-101, Augmented Feature Pyramid Networks, Self-adaptive Feature Fusion and Dependencies Computation Module | Acc 90.18% | CI NR |
| (Choi et al., 2020)47 | 183 | NR | Seoul St. Mary’s Hospital (South Korea) | Pathology + CA 19-9 levels | Internal Validation | Clinical-imaging predictive model | AUC 0.71 | CI NR |
| (Chu et al., 2019)48 | 190 | 380 | Johns Hopkins University (USA) | Histopathology-confirmed PDAC | Internal train–validation split (255 train / 125 validation) | Radiomics + Random forest | Acc 99.2%, Sensitivity 100%, Specificity 98.5%, AUC 99.9% | CI NR |
Overall, segmentation studies yield relatively stable DSC values on standardized datasets, such as NIH and MSD, although segmenting a tumor remains harder than segmenting the whole organ. Classification studies report high accuracy on publicly available datasets but show high variability across institutional patient cohorts. This indicates that dataset heterogeneity and the scarcity of external validation remain major obstacles to clinical translation.
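The metrics tabulated above (DSC for segmentation; accuracy, sensitivity, and specificity for classification) can be made concrete with a minimal sketch. The following illustration uses toy binary labels, not data from any reviewed study, and is intended only to show how the reported numbers are defined.

```python
# Illustrative definitions of the evaluation metrics reported in the table.
# The 1-D "masks" below are hypothetical stand-ins for flattened CT volumes.

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def confusion_metrics(pred, truth):
    """Accuracy, sensitivity (recall), and specificity from binary labels."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

pred = [1, 1, 0, 0, 1, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1, 1, 0]
print(round(dice_coefficient(pred, truth), 3))  # 0.75
print(confusion_metrics(pred, truth))
```

This also illustrates why DSC is the preferred segmentation metric: unlike accuracy, it ignores the dominant background class, which matters for a small organ such as the pancreas.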
Challenges
Despite notable advancements, detecting very small (<1 cm) or isodense tumors remains a significant challenge, and overcoming it is crucial for improving patient outcomes. Although reinforcement learning-based anatomical maps utilize attention mechanisms and probability maps to segment pancreas regions, a notable gap remains in research specifically targeting fine-grained segmentation of the pancreatic duct, which is essential for diagnosing conditions such as PDAC.11 The 3DGAUnet approach enhances volumetric feature representation and provides more detailed tumor segmentation, addressing the gap in accurate and effective segmentation of PDAC and its subregions. The large-scale pancreatic cancer detection model addresses the challenge of detecting very small or isodense tumors, a significant gap in early-stage detection accuracy, and showed good potential applicability to other non-contrast CT protocols for early pancreatic cancer.48
Clinical Translations and Limitations
Most of the AI models reviewed show promising performance, but few are clinically ready. Single-center datasets, particularly retrospective ones, have limited generalizability due to restricted demographic and scanner diversity. External validation was infrequently performed, increasing vulnerability to domain shift when models are applied to data from different scanners, institutions, or patient groups. The frequent absence of calibration analyses undermines confidence in probabilistic clinical decision making. Moreover, regulatory and reporting frameworks such as TRIPOD-AI, STARD-AI, and DECIDE-AI were inconsistently followed, and reporting quality was variable. Future studies should prioritize prospective multicenter validation, calibration analysis, and decision-curve assessment to enhance clinical reliability.
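Two of the simplest quantities reported in a calibration analysis can be sketched as follows. The probabilities and outcome labels below are hypothetical and do not come from any reviewed study; this is a minimal illustration of what such an analysis measures, not a full calibration workflow.

```python
# Two basic calibration summaries for a probabilistic risk model.
# probs: predicted PDAC risk per patient; labels: observed outcome (0/1).

def brier_score(probs, labels):
    """Mean squared error between predicted probability and outcome
    (lower is better; 0 is perfect)."""
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(labels)

def calibration_in_the_large(probs, labels):
    """Mean predicted risk minus observed event rate
    (0 indicates the model is calibrated on average)."""
    return sum(probs) / len(probs) - sum(labels) / len(labels)

probs = [0.9, 0.8, 0.3, 0.2, 0.7, 0.1]   # hypothetical predictions
labels = [1, 1, 0, 0, 1, 0]              # hypothetical outcomes
print(round(brier_score(probs, labels), 3))  # 0.047
print(round(calibration_in_the_large(probs, labels), 3))
```

A model can have a high AUC yet poor calibration, which is why discrimination metrics alone are insufficient for clinical decision support.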
Generalizability and Deployment Considerations
In externally validated studies, performance was more heterogeneous, indicating that generalizability cannot be taken for granted; it depends on dataset diversity, model training scale, architecture, and the design of validation experiments. Larger, multi-institutional training datasets improved both model performance and its consistency across cohorts, whereas models developed on single-institution datasets showed a mild decrease in performance on independent datasets. This finding indicates that multi-center prospective validation sets are necessary for controlling dataset shift and improving clinical reliability.
Future Directions and Scope
The literature reviewed above highlights the challenges involved in pancreatic cancer detection and segmentation. Numerous studies report similar obstacles, including limited data availability, heterogeneous imaging data, poor generalizability, and clinical translation issues. For the field to advance beyond the current state of the art, new methodologies, multimodal investigations, and more clinically relevant diagnostic models are needed. A promising direction involves advanced DL architectures, particularly 3D convolutional networks integrated with attention mechanisms. Recent studies show that attention-based frameworks improve simultaneous segmentation of the pancreas, its subregions, and tumors. To illustrate this concept, PanSegNet, proposed by Zhang et al.,50 integrates linear self-attention modules into the encoder–decoder of nnU-Net and achieved DSCs over 88% for the pancreas on multi-center CT images of 140 cases from the MSD challenge.49,50
Future research should explore hierarchical and multi-level attention, transformer-based encoders, and lightweight attention modules for clinical deployment. There is also an emerging research trend toward hybrid and efficiency-oriented models. Li et al.24 proposed attention-augmented adversarial U-Nets, and Amiri et al.29 proposed reinforcement learning-based pancreas anatomical mapping with promising accuracy. Moreover, these models can maintain high accuracy with fewer parameters and less computation, meaning an optimized model can run in resource-constrained real-world environments. Going forward, hybrid model development and application on larger datasets, with patient data from multiple institutions, should be explored.
Combining different data types is an important direction for the future, enabling holistic disease characterization for early prediction and intervention. For instance, a large-scale ensemble detection model combining CT and clinical features by Chen et al.20 achieved an AUC of 0.95 with good sensitivity.8 In addition, a radiomics-based early prediction framework by Chen W. et al.21 predicted pancreatic cancer up to 36 months before clinical diagnosis. Further research on effective multimodal fusion methods and longitudinal modeling can pave the way for personalized risk stratification and early intervention. Beyond this, architectural designs that support model generalization and interpretability are needed; above all, developing such strategies depends on the availability of data from large cohorts and variation among institutions.
Hence, a critical milestone on the translation pathway is leveraging federated learning (FL) paradigms. FL enables multi-institutional model training without sharing private data. Importantly, future research must ensure rigorous clinical validation, explainability, and seamless workflow integration. Although high accuracy is reported in some experimental settings, most of these studies are retrospective and lack prospective validation and an assessment of interpretability. Aligning future research directions explicitly with the identified limitations—such as poor external validation, limited subregion focus, and absence of decision-support integration—may facilitate the translation of AI models into meaningful improvements in pancreatic cancer screening, diagnosis, and patient outcomes.
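The core aggregation step of FL can be sketched with federated averaging (FedAvg): each institution trains locally and shares only model weights, weighted by local cohort size, so no patient data leave the site. The hospitals, cohort sizes, and weight vectors below are hypothetical, and real deployments would add secure aggregation and many training rounds.

```python
# Minimal FedAvg sketch: weights are plain lists standing in for model
# parameters; all names and numbers are hypothetical.

def fed_avg(client_weights, client_sizes):
    """Average client model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three hypothetical hospitals with different cohort sizes; each sends
# only its locally trained weights to the central server.
weights = [[0.2, 0.4], [0.6, 0.0], [0.4, 0.8]]
sizes = [100, 300, 600]
print([round(v, 2) for v in fed_avg(weights, sizes)])  # [0.44, 0.52]
```

Weighting by cohort size keeps the global model from being dominated by small sites, while larger institutions contribute proportionally more, which mirrors the multi-institutional training advantage noted above.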
Conclusion
AI-assisted detection and characterization of pancreatic cancer has moved rapidly from emerging research to common practice, and the growing interest in clinical translation represents more than a methodological advancement. The analysis conducted in this review shows significant improvement in ML and DL architectures: CNNs, attention-based methodologies, and hybrid frameworks have demonstrated substantial progress in pancreas segmentation, tumor detection, and early risk prediction. These approaches have shown strong performance in controlled study settings, including on the long-standing problem of early detection of pancreatic cancer. Nevertheless, early tumor detection remains technically challenging due to class imbalance, subtle imaging features, and anatomical heterogeneity.
Moreover, anatomical variability and tumor subtype heterogeneity further complicate model development; these challenges are common to pancreatic cancer and other oncological conditions. Significant barriers to large-scale clinical translation remain, including model interpretability and prospective clinical evaluation in multi-institutional studies, integration into clinical workflows, domain adaptation, data harmonization, and robust model generalization across imaging protocols and scanner types.
Addressing these barriers will require complex DL architectures, attention and transformer-based mechanisms, compensation for data scarcity through federated learning and generative modelling, and common evaluation protocols across heterogeneous populations and imaging platforms. Most importantly, the use of AI should not be restricted to detection and segmentation; it must also evaluate treatment response, characterize the tumor, and predict disease progression. In summary, successful translation of AI technologies could radically change how pancreatic cancer is managed, contributing to earlier detection, personalized treatment planning, improved patient survival, and reduced mortality.
List of Abbreviations
- AI: Artificial Intelligence
- AUC: Area Under Curve
- CNN: Convolutional Neural Network
- CT: Computed Tomography
- DL: Deep Learning
- DSC: Dice Similarity Coefficient
- EUS: Endoscopic Ultrasound
- FL: Federated Learning
- MIM-CMFNet: Mutual Information Minimization and Cross-Modal Fusion Network
- ML: Machine Learning
- MRI: Magnetic Resonance Imaging
- MSD: Medical Segmentation Decathlon
- NCA: Neighborhood Component Analysis
- NIH: National Institutes of Health
- PCA: Principal Component Analysis
- PDAC: Pancreatic Ductal Adenocarcinoma
- PET: Positron Emission Tomography
- PROBAST: Prediction Model Risk of Bias Assessment Tool
- QIFs: Quantitative Imaging Features
- RQS: Radiomics Quality Score
- TCIA: The Cancer Imaging Archive
References
- Siegel RL, Giaquinto AN, Jemal A. Cancer statistics 2024. CA Cancer J Clin. 2024;74(1). https://doi.org/10.3322/caac.21820
- Bray F, Laversanne M, Sung H, Ferlay J, Siegel RL, Soerjomataram I, et al. Global cancer statistics 2022: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2024;74(3). https://doi.org/10.3322/caac.21834
- Desai K, Baralo B, Kulkarni A, Keshava VE, Iqbal S, Ali H, Prabhakaran Y, Thirumaran R. Cancer statistics: The United States vs. worldwide; 2025.
- Gerasimenko JV, Gerasimenko OV. The role of Ca2+ signalling in the pathology of exocrine pancreas. Cell Calcium. 2023;112. https://doi.org/10.1016/j.ceca.2023.102740
- Alaca Y, Akmeşe ÖF. Pancreatic tumor detection from CT images converted to graphs using Whale Optimization and Classification Algorithms with Transfer Learning. Int J Imaging Syst Technol. 2025;35(2). https://doi.org/10.1002/ima.70040
- Nadeem A, Ashraf R, Mahmood T, Parveen S. Automated CAD system for early detection and classification of pancreatic cancer using deep learning model. PLoS ONE. 2025;20(1). https://doi.org/10.1371/journal.pone.0307900
- Alaca Y. Machine learning via DARTS-optimized MobileViT models for pancreatic cancer diagnosis with graph-based deep learning. BMC Med Inform Decis Mak. 2025;25(1). https://doi.org/10.1186/s12911-025-02923-x
- Li J, Li X, Chen Y, Wang Y, Wang B, Zhang X, et al. Mesothelin expression prediction in pancreatic cancer based on multimodal stochastic configuration networks. Med Biol Eng Comput. 2025;63(4):1117–29. https://doi.org/10.1007/s11517-024-03253-2
- Daher et al., 2024
- Jannin A, Dessein AF, Do Cao C, Vantyghem MC, Chevalier B, Van Seuningen I, et al. Metabolism of pancreatic neuroendocrine tumors: what can omics tell us? Front Endocrinol. 2023;14. https://doi.org/10.3389/fendo.2023.1248575
- Jin D, Khan NU, Gu W, Lei H, Goel A, Chen T. Informatics strategies for early detection and risk mitigation in pancreatic cancer patients. Neoplasia. 2025;60. https://doi.org/10.1016/j.neo.2025.101129
- Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372. https://doi.org/10.1136/bmj.n71
- Wade R, Corbett M, Eastwood A. Quality assessment of comparative diagnostic accuracy studies: our experience using a modified version of the QUADAS-2 tool. Res Synth Methods. 2013;4(3). https://doi.org/10.1002/jrsm.1080
- Kaul T, Damen JA, Wynants L, Van Calster B, van Smeden M, Hooft L, et al. Assessing the quality of prediction models in health care using the Prediction model Risk Of Bias ASsessment Tool (PROBAST): an evaluation of its use and practical application. J Clin Epidemiol. 2025;181. https://doi.org/10.1016/j.jclinepi.2025.111732
- Spadarella G, Stanzione A, Akinci D’Antonoli T, Andreychenko A, Fanni SC, Ugga L, et al. Systematic review of the radiomics quality score applications: an EuSoMII Radiomics Auditing Group Initiative. Eur Radiol. 2023;33(3). https://doi.org/10.1007/s00330-022-09187-3
- Erickson BJ, Kitamura F. Magician’s corner: 9. performance metrics for machine learning models. Radiol Artif Intell. 2021;3(3). https://doi.org/10.1148/ryai.2021200126
- Javed S, Qureshi TA, Deng Z, Wachsman A, Raphael Y, Gaddam S, et al. Segmentation of pancreatic subregions in computed tomography images. J Imaging. 2022;8(7). https://doi.org/10.3390/jimaging8070195
- Man Y, Huang Y, Feng J, Li X, Wu F. Deep Q learning driven CT pancreas segmentation with geometry-aware U-Net. IEEE Trans Med Imaging. 2019;38(8):1971–80. https://doi.org/10.1109/TMI.2019.2911588
- Karpińska M, Czauderna M. Pancreas—its functions, disorders, and physiological impact on the mammals’ organism. Front Physiol. 2022;13. https://doi.org/10.3389/fphys.2022.807632
- Chen PT, Wu T, Wang P, Chang D, Liu KL, Wu MS, et al. Pancreatic cancer detection on CT scans with deep learning: a nationwide population-based study. Radiology. 2023;306(1):172–82. https://doi.org/10.1148/radiol.220152
- Chen W, Zhou Y, Asadpour V, Parker RA, Puttock EJ, Lustigova E, et al. Quantitative radiomic features from computed tomography can predict pancreatic cancer up to 36 months before diagnosis. Clin Transl Gastroenterol. 2023;14(1):e00548. https://doi.org/10.14309/ctg.0000000000000548
- Hu P, Li X, Tian Y, Tang T, Zhou T, Bai X, et al. Automatic pancreas segmentation in CT images with distance-based saliency-aware DenseASPP network. IEEE J Biomed Health Inform. 2021;25(5):1601–11. https://doi.org/10.1109/JBHI.2020.3023462
- Cao K, Xia Y, Yao J, Han X, Lambert L, Zhang T, et al. Large-scale pancreatic cancer detection via non-contrast CT and deep learning. Nat Med. 2023a;29(12):3033–43. https://doi.org/10.1038/s41591-023-02640-w
- Li W, Qin S, Li F, Wang L. MAD-UNet: a deep U-shaped network combined with an attention mechanism for pancreas segmentation in CT images. Med Phys. 2021;48(1):329–41. https://doi.org/10.1002/mp.14617
- Javed S, Qureshi TA, Gaddam S, Wang L, Azab L, Wachsman AM, et al. Risk prediction of pancreatic cancer using AI analysis of pancreatic subregions in computed tomography images. Front Oncol. 2022;12. https://doi.org/10.3389/fonc.2022.1007990
- Ramaekers M, Viviers CGA, Hellström TAE, Ewals LJS, Tasios N, Jacobs I, et al. Improved pancreatic cancer detection and localization on CT scans: a computer-aided detection model utilizing secondary features. Cancers. 2024;16(13). https://doi.org/10.3390/cancers16132403
- Mukherjee S, Korfiatis P, Khasawneh H, Rajamohan N, Patra A, Suman G, et al. Bounding box-based 3D AI model for user-guided volumetric segmentation of pancreatic ductal adenocarcinoma on standard-of-care CTs. Pancreatology. 2023;23(5):522–9. https://doi.org/10.1016/j.pan.2023.05.008
- Qiu D, Ju J, Ren S, Zhang T, Tu H, Tan X, et al. A deep learning-based cascade algorithm for pancreatic tumor segmentation. Front Oncol. 2024;14. https://doi.org/10.3389/fonc.2024.1328146
- Amiri S, Vrtovec T, Mustafaev T, Deufel CL, Thomsen HS, Sillesen MH, et al. Reinforcement learning-based anatomical maps for pancreas subregion and duct segmentation. Med Phys. 2024;51(10):7378–92. https://doi.org/10.1002/mp.17300
- Mahmoudi T, Kouzahkanan ZM, Radmard AR, Kafieh R, Salehnia A, Davarpanah AH, et al. Segmentation of pancreatic ductal adenocarcinoma (PDAC) and surrounding vessels in CT images using deep convolutional neural networks and texture descriptors. Sci Rep. 2022;12(1). https://doi.org/10.1038/s41598-022-07111-9
- Li M, Lian F, Li Y, Guo S. Attention-guided duplex adversarial U-net for pancreatic segmentation from computed tomography images. J Appl Clin Med Phys. 2022;23(4). https://doi.org/10.1002/acm2.13537
- Li M, Lian F, Wang C, Guo S. Accurate pancreas segmentation using multi-level pyramidal pooling residual U-Net with adversarial mechanism. BMC Med Imaging. 2021;21(1). https://doi.org/10.1186/s12880-021-00694-1
- Wang Y, Zhang J, Cui H, Zhang Y, Xia Y. View adaptive learning for pancreas segmentation. Biomed Signal Process Control. 2021;66. https://doi.org/10.1016/j.bspc.2020.102347
- Tian M, He J, Yu X, Cai C, Gao Y. MCMC-guided CNN training and segmentation for pancreas extraction. IEEE Access. 2021;9:90539–54. https://doi.org/10.1109/ACCESS.2021.3070391
- Xue J, He K, Nie D, Adeli E, Shi Z, Lee SW, et al. Cascaded multitask 3-D fully convolutional networks for pancreas segmentation. IEEE Trans Cybern. 2021;51(4):2153–65. https://doi.org/10.1109/TCYB.2019.2955178
- Chen Z, Wang X, Yan K, Zheng J. Deep multi-scale feature fusion for pancreas segmentation from CT images. Int J Comput Assist Radiol Surg. 2020;15(3):415–23. https://doi.org/10.1007/s11548-020-02117-y
- Boers TGW, Hu Y, Gibson E, Barratt DC, Bonmati E, Krdzalic J, et al. Interactive 3D U-net for the segmentation of the pancreas in computed tomography scans. Phys Med Biol. 2020;65(6). https://doi.org/10.1088/1361-6560/ab6f99
- Liu S, Yuan X, Hu R, Liang S, Feng S, Ai Y, et al. Automatic pancreas segmentation via coarse location and ensemble learning. IEEE Access 2020;8:2906–14. https://doi.org/10.1109/ACCESS.2019.2961125
- Yang J, Qiu P, Zhang Y, Marcus DS, Sotiras A. D-net: dynamic large kernel with dynamic feature fusion for volumetric medical image segmentation. Biomed Signal Process Control. 2026;113. http://arxiv.org/abs/2403.10674
- Thanya T, Jeslin T. DeepOptimalNet: optimized deep learning model for early diagnosis of pancreatic tumor classification in CT imaging. Abdom Radiol. 2025;50(9):4181–211. https://doi.org/10.1007/s00261-025-04860-9
- Mahendran RK, Aniruddhan P, Kumar P. PancreasNet: a transformer-based progressive residual network for comprehensive pancreatic cancer detection using CT images. 10th International Conference on Wireless Communications Signal Processing and Networking WiSPNET 2025. https://doi.org/10.1109/WiSPNET64060.2025.11004859
- Babaei R, Cheng S, Thai T, Zhao S. Pancreatic tumor segmentation as anomaly detection in ct images using denoising diffusion models. 2024. http://arxiv.org/abs/2406.02653
- Mitrea D, Brehar R, Itu R, Nedevschi S, Socaciu M, Badea R. Pancreatic tumor recognition from CT images through advanced deep learning techniques. Proceedings of the 24th IEEE International Conference on Automation Quality and Testing Robotics AQTR 2024. https://doi.org/10.1109/AQTR61889.2024.10554139
- Cao et al. (2023b)
- Shi Y, Tang H, Baine MJ, Hollingsworth MA, Du H, Zheng D, et al. 3DGAUnet: 3D generative adversarial networks with a 3D U-net based generator to achieve the accurate and effective synthesis of clinical tumor image data for pancreatic cancer. Cancers. 2023;15(23). https://doi.org/10.3390/cancers15235496
- Zhang Z, Li S, Wang Z, Lu Y. A novel and efficient tumor detection framework for pancreatic cancer via CT images. Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS. 2020. https://doi.org/10.1109/EMBC44109.2020.9176172
- Choi MH, Yoon SB, Song M, Lee IS, Hong TH, Lee MA, et al. Benefits of the multiplanar and volumetric analyses of pancreatic cancer using computed tomography. PLoS ONE. 2020;15(10). https://doi.org/10.1371/journal.pone.0240318
- Chu LC, Park S, Kawamoto S, Fouladi DF, Shayesteh S, Zinreich ES, et al. Utility of CT radiomics features in differentiation of pancreatic ductal adenocarcinoma from normal pancreatic tissue. Am J Roentgenol. 2019;213(2):349–57. https://doi.org/10.2214/AJR.18.20901
- Antonelli M, Reinke A, Bakas S, Farahani K, Kopp-Schneider A, Landman BA, et al. The medical segmentation decathlon. Nat Commun. 2022;13(1). https://doi.org/10.1038/s41467-022-30695-9
- Zhang Z, Keles E, Durak G, Taktak Y, Susladkar O, Gorade V, et al. Large-scale multi-center CT and MRI segmentation of pancreas with deep learning. Med Image Anal. 2025;99:103382. https://doi.org/10.1016/j.media.2024.103382