Constraint-Embedded Deep Learning for Breast Cancer Diagnosis Using Thermogram Images: An Experimental Study

Panidhar Karanam ORCiD, Subhadra Kompella, Suresh Chittineni, Venkata Sai Dhruv Nadamuni and Sai Manikanta Anna
Department of CSE, Gitam (Deemed to be) University, Visakhapatnam, India
Correspondence to: Panidhar Karanam, panidharkaranam858@gmail.com

Premier Journal of Science

Additional information

  • Ethical approval: N/a
  • Consent: N/a
  • Funding: No industry funding
  • Conflicts of interest: N/a
  • Author contribution: Panidhar Karanam, Subhadra Kompella, Suresh Chittineni, Venkata Sai Dhruv Nadamuni and Sai Manikanta Anna – Conceptualization, Writing – original draft, review and editing
  • Guarantor: Panidhar Karanam
  • Provenance and peer-review:
    Unsolicited and externally peer-reviewed
  • Data availability statement: N/a

Keywords: Thermogram-based breast cancer diagnosis, Constraint-embedded convolutional networks, Infrared thermal imaging, Ensemble boosting integration, Domain-specific medical constraints.

Peer Review
Received: 14 August 2025
Last revised: 23 September 2025
Accepted: 29 September 2025
Version accepted: 2
Published: 20 October 2025

Plain Language Summary Infographic
“Vibrant infographic illustrating constraint-embedded deep learning for breast cancer diagnosis using thermogram images, featuring icons of thermography, AI chip, brain, and a doctor holding a checklist symbolizing constraints reasoning and medical standard compliance.”
Abstract

Artificial intelligence (AI) breakthroughs have greatly changed medical diagnostics, especially the early detection of diseases such as breast cancer. Thermography, a non-invasive imaging method, offers a promising alternative to conventional diagnostic techniques by capturing temperature changes in breast tissue to facilitate early abnormality detection. Most AI algorithms, though, focus on prediction accuracy at the expense of important domain-specific constraints necessary in clinical practice. This oversight tends to produce models that work well in benchmark environments but fail in actual medical settings because of demographic variation, imaging variability, and the diagnostic thresholds of early-stage disease. To address this deficiency, this paper embeds constraint reasoning into AI models to achieve both high predictive accuracy and compliance with medical standards. The proposed approach uses constraints that restrict outputs to valid ranges, prevent risk factor overestimation, ensure feature scaling consistency, and eliminate unrealistic values of medical parameters. By integrating such constraints, the model gains robustness, reliability, and explainability, reducing systematic errors and improving compliance with clinical expectations. This makes AI-based diagnostic tools clinically relevant, improves decision-making, and ultimately yields more accurate and reliable breast cancer diagnosis in medical practice.

Introduction

Medical diagnosis has been transformed by artificial intelligence (AI), which has made it possible to use automated, highly accurate disease detection techniques. Due to its high global incidence, which accounts for about 30% of newly diagnosed cancer cases in women, breast cancer detection has drawn particular attention among other applications.1 Enhancing survival rates requires early detection, but conventional screening procedures like mammograms and biopsies, while useful, are often intrusive, expensive, or expose patients to radiation. Thermography, a novel imaging technique that identifies temperature variations in breast tissue and may reveal hidden anomalies, has been developed as a non-invasive alternative. However, the primary focus of existing deep learning-based medical imaging solutions is prediction accuracy optimization, with comparatively little attention paid to crucial domain-specific limitations required for real-world clinical application. This limitation frequently results in models that perform flawlessly in experimental settings but struggle to generalize across diverse patient populations and imaging configurations. Furthermore, the limitations of translation invariance, local connectivity, and gradually declining spatial resolution2 are inherent to convolutional neural networks (CNNs), which are widely employed in medical imaging and may restrict their ability to detect breast cancer. Figure 1 shows two distinct images: Benign tissue is indicated by a normal image, while malignant tissue is indicated by an abnormal image.

Fig 1 | Benign vs malignant tissues
Figure 1: Benign vs malignant tissues.

This project, “Embedding Constraint Reasoning in AI Models for Breast Cancer Diagnosis Using Thermogram Images,” aims to address these issues by introducing domain-specific constraints into the learning algorithm itself. The method starts with a constraint layer, then moves on to a CNN structure for feature extraction, boosting mechanisms for improved performance, and a final decision layer for predictions that can be used in clinical settings. When tested using performance metrics, CNNs have demonstrated improved accuracy and recall thanks to their use of multiple layers and large datasets for training.3 Furthermore, the integration of constraint solving and machine learning aligns with emerging trends in these fields, resulting in AI-based diagnostic tools that are both clinically valid and accurate.4 This framework reduces misdiagnoses and improves model robustness for a range of patient populations by integrating constraints into the model architecture, which ensures that predictions adhere to medical guidelines. All things considered, this work advances the use of AI in healthcare by developing a reliable diagnostic tool that can support the early and confident detection of breast cancer, improving patient outcomes.

Literature Review

In terms of deep learning-based breast cancer detection, Residual Network 101 (ResNet101) achieves accuracies of 100%, 98.31%, and 98.31% with the Stochastic Gradient Descent with Momentum (SGDM), Adaptive Moment Estimation (ADAM), and Root Mean Square Propagation (RMSprop) optimizers, respectively. Visual Geometry Group 16 (VGG16) is robust to the choice of training setup, as evidenced by its 98.31% accuracy with the same optimizers. The model’s diagnostic potential is further increased by using Matrix Laboratory (MATLAB)-based processing to identify cancerous areas in thermograms. These findings highlight how crucial model selection and optimization techniques are to achieving high classification accuracy in medical images.5 Studies have shown that Inception Mv4 is very effective at detecting breast cancer in real time, performing better and even more efficiently when used in conjunction with in situ cooling techniques.6

The performance of the suggested approach is evaluated using the Database for Mastology Research (DMR-IR), which contains extensive and well-established thermal breast image data. When compared to several state-of-the-art methods, the method achieves the highest classification accuracy of 98.80%, confirming its effectiveness in detecting breast cancer. This improved performance results from the combination of deep learning and constraint reasoning, which together improve prediction accuracy and clinical significance. These findings support the idea that AI-based thermographic inspection could be a viable substitute in the future for early breast cancer detection, leading to improved health outcomes.7 The superiority of hybrid models is further demonstrated by the DenseNet121+ELM model, which achieved training and testing accuracy rates of 99.47% and 99.14%, respectively.8

Medical imaging and illness diagnosis have also made use of constraint-based AI models. Research has shown that more effective structured prediction and content generation architectures exist that, although domain-constrained, could take the place of current models. Although this is still largely speculative, combining them with the predictions of large language models is being explored to improve interpretability. The need for robust and comprehensible AI models in the healthcare industry has been confirmed by the 98.95% accuracy of CNN-based models optimized with Bayesian algorithms in the detection of breast cancer from thermogram images. A comparative analysis of deep CNN architectures has also been conducted to distinguish between breast cancer and normal breast conditions, revealing differences in model performance across networks.9 Because medical applications can be matters of life and death, even small misclassifications can have serious repercussions, which is why highly reliable AI-powered diagnostic tools are required.

Table 1 compares recent thermogram-based breast cancer detection models, focusing on the models’ classification accuracy and area under the ROC curve (AUC). The TokenMixer Hybrid Architecture and Vision Transformer (ViT) models successfully extract features from thermal images, as evidenced by their high accuracy and AUC. The Deep CNN with Attention Mechanisms, which exhibits superior accuracy and AUC,10,11 demonstrates the benefits of integrating attention mechanisms for enhanced detection performance. This comparison highlights the improvements made by modern deep learning techniques while guaranteeing fair evaluation on identical datasets and splits. However, population heterogeneity and variations in imaging conditions make traditional thermographic analysis unreliable. Although it has been demonstrated that ensemble boosting techniques improve tumor stage and malignancy classification,12 they do not always impose domain-specific constraints that are readily mapped to clinical requirements. By using medically relevant constraints to ensure stable, comprehensible, and clinically valid outputs, the constraint-embedded CNN model described here fills the gap. In contrast to earlier work that prioritizes accuracy alone, our approach enhances reliability by avoiding physically implausible temperature values, enforcing feature consistency, and improving decisions in real-world medical practice.

Table 1: Comparative evaluation of recent thermogram-based breast cancer detection models.
Model                               | Accuracy (%) | AUC (ROC) (%)
TokenMixer Hybrid Architecture      | 97.02        | 99.70
Vision Transformer (ViT)            | 96.53        | 99.38
Deep CNN with Attention Mechanisms  | 99.46        | 99.80
Methodology

Thermogram Images

Using high-resolution infrared cameras, thermography, also known as infrared imaging, is a non-invasive technique that captures the heat patterns generated by the human body. The cameras record variations in skin temperature and convert them into thermogram images, which show different temperatures as gradients of color. Compared to normal tissues, cancerous tissues generate more heat due to their higher metabolic rates and increased blood flow. Thermal asymmetry, a measurement of this temperature differential, may indicate aberrant cell growth. Since there is no radiation involved, the procedure is completely safe and can be used for early screening and ongoing monitoring. Various imaging modalities have been developed to detect breast cancer symptoms in the past. Apart from mammography, thermography is another, relatively inexpensive imaging technique used to identify breast abnormalities.13 The pictures that provide information about malignant tissue are displayed in Figure 2A. Furthermore, Figure 2B shows the benign and malignant masses from various samples. The architecture described in this paper was trained using these images.

Fig 2 | Benign vs malignant tissues
Figure 2: Benign vs malignant tissues.
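The thermal asymmetry measure mentioned above can be sketched numerically. The halving-and-comparison scheme and the toy "temperature" arrays below are illustrative assumptions, not the clinical measure itself:

```python
import numpy as np

def thermal_asymmetry(thermogram: np.ndarray) -> float:
    """Absolute difference between the mean temperatures of the left and
    right halves of a thermogram; larger values hint at abnormal tissue.
    Illustrative only -- clinical asymmetry measures are more involved."""
    w = thermogram.shape[1]
    left = thermogram[:, : w // 2]
    right = thermogram[:, w - w // 2 :]
    return float(abs(left.mean() - right.mean()))

symmetric = np.full((4, 6), 32.0)    # uniform 32 °C "image"
warm_left = symmetric.copy()
warm_left[:, :3] += 1.5              # left side runs 1.5 °C warmer

print(thermal_asymmetry(symmetric))  # 0.0
print(thermal_asymmetry(warm_left))  # 1.5
```

A real pipeline would compare anatomically corresponding regions after segmentation rather than raw image halves.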

Because it can detect abnormal vascular patterns and temperature variations that may point to the presence of a tumor, thermography is helpful in the detection of breast cancer. Thermography can detect physiological changes in breast tissue, unlike mammography, which uses structural imaging. This makes it particularly useful for early detection, especially in women with dense breast tissue, where mammography may not be as effective.14 Additionally, thermography reduces patient discomfort by providing a contact-free, painless screening method. However, despite being a helpful adjunct, it is not a diagnostic method in and of itself; rather, it is typically used in conjunction with ultrasound and mammography to improve the detection of breast cancer.

Data Acquisition and Preprocessing

The dataset utilized in this study is a compilation of several publicly accessible thermographic breast cancer image sources, such as Kaggle images and extra images gathered from nearby hospitals with the proper authorization. A balanced representation for training is ensured by the dataset’s 3,042 thermal breast images, of which 1,520 are classified as benign and 1,522 as malignant. The dataset naturally contains differences in acquisition devices, patient demographics, and imaging conditions because the images come from a variety of sources. This improves the suggested model’s generalizability and lowers the possibility of overfitting to a particular population or imaging configuration. We used a patient-level partitioning strategy, allocating all images from a single patient to a single split in order to guarantee robust evaluation and prevent data leakage. We used a five-fold GroupKFold cross-validation approach, with 70% training, 15% validation, and 15% testing for each fold. We maintained strict independence between training and testing data by grouping images at the patient level, ensuring that no patient’s images were shared across folds. Despite the lack of an independent external test set, we reported the mean ± standard deviation of evaluation metrics across the five folds to reduce the risk of overfitting. We fixed a global random seed of 42 for reproducibility, which allowed for consistent dataset partitioning and model evaluation across all experiments.
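The patient-level five-fold split described above can be sketched with scikit-learn's GroupKFold; the patient IDs, feature array, and labels below are synthetic placeholders:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(42)           # global seed, as in the paper
groups = np.repeat(np.arange(20), 4)      # 20 hypothetical patients x 4 images
X = rng.normal(size=(len(groups), 8))     # placeholder feature vectors
y = rng.integers(0, 2, size=len(groups))  # benign = 0 / malignant = 1

gkf = GroupKFold(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(gkf.split(X, y, groups)):
    # No patient may appear in both the training and test partitions.
    assert set(groups[train_idx]).isdisjoint(groups[test_idx])
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test images")
```

Because splitting is done on patient IDs rather than images, all images from one patient always land in the same fold, which is exactly the leakage guarantee described above.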

Extensive preprocessing techniques were consistently used across the dataset to enhance model performance and ensure stable training. All images were resized to 224 × 224 pixels and converted to three-channel RGB format in order to fit the model architecture. Intensity values were normalized using the dataset mean and standard deviation that were determined exclusively from the training folds in order to stop information from leaking from the test data. Only the training set was subjected to data augmentation in order to improve robustness against various imaging conditions. This included random horizontal flips (p=0.5), minor rotations within ±5°, and contrast adjustments. Together, these preprocessing methods helped the model achieve a high classification accuracy of 95.67%.
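The leakage-free normalization step can be sketched as follows; the synthetic "thermogram" arrays and the choice of global mean/std statistics are assumptions for illustration:

```python
import numpy as np

def fit_normalizer(train_images: np.ndarray):
    """Compute normalization statistics from the TRAINING fold only,
    so no information leaks from validation or test data."""
    return train_images.mean(), train_images.std()

def normalize(images: np.ndarray, mean: float, std: float) -> np.ndarray:
    return (images - mean) / (std + 1e-8)

rng = np.random.default_rng(42)
train = rng.normal(30.0, 2.0, size=(100, 224, 224))  # synthetic "thermograms"
test = rng.normal(31.0, 2.0, size=(20, 224, 224))

mean, std = fit_normalizer(train)    # statistics from the train fold only
train_n = normalize(train, mean, std)
test_n = normalize(test, mean, std)  # the SAME statistics reused at test time
```

The key point is that `fit_normalizer` never sees `test`; re-computing statistics on the test fold would be exactly the leakage the paper guards against.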

Constraint Reasoning

A model called On-the-fly Constraint Programming Search (OTFS), based on GCSP, has been proposed to successfully solve Model Checking problems with a large number of functional constraints.15 Constraint reasoning is needed for the enforcement of domain constraints upon machine learning models, especially for important applications such as breast cancer diagnosis based on thermogram images. Various techniques have already proved that machine learning can improve search processes in constraint solving with improved efficiency and accuracy.16 Additionally, machine learning and data mining methods can be utilized to complement constraint-solving techniques to enable AI systems to make more rational and medically sound decisions.17 Incorporating domain knowledge represented in terms of constraints into deep neural models has received substantial research focus, given that it enhances model reliability, robustness, and interpretability in a clinical context.18 In this project, incorporating constraints guarantees that model outputs conform to medical expectations, reducing the risk of false predictions and building confidence in AI-based diagnostics.

1. Positivity Constraint: The positivity constraint ensures that all values are non-negative since negative intensity values in thermography have no physical meaning. It is defined as: (1)

f(x) = max(x, 0)

Backpropagation (a.e.): (2)

df/dx = 1 if x > 0; 0 if x ≤ 0

2. Upper Bound Constraint: The upper bound constraint prevents the outputs from exceeding a specified maximum value, keeping them within a clinically relevant range: (3)

f(x) = min(x, max_value)

Backpropagation: (4)

df/dx = 1 if x < max_value; 0 if x ≥ max_value

3. Normalization Constraint: The normalization constraint rescales inputs to ensure numerical stability and consistency across samples: (5)

f(x) = x / (‖x‖ + ε)

Backpropagation (vector–Jacobian form): (6)

For an upstream gradient g: ∇x = g / (‖x‖ + ε) − x (xᵀg) / (‖x‖ (‖x‖ + ε)²)

4.  Non-Negativity Constraint: This constraint preserves values in positive form using an absolute transformation: (7)

f(x) = |x|

Backpropagation (a.e.): (8)

df/dx = 1 if x ≥ 0; −1 if x < 0

5.  Clipping Constraint: Clipping limits values to a fixed interval to avoid extreme, clinically implausible outputs: (9)

f(x) = clip(x, min_value, max_value)

Backpropagation: (10)

df/dx = 1 if min_value < x < max_value; 0 otherwise

Training objective with penalties (optional to include): (11)

L_total = L_BCE + Σᵢ λᵢ Rᵢ, where each Rᵢ penalizes violations of constraint i and λᵢ weights its contribution.
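A minimal sketch of the constraint layer chaining the constraints of Eqs. (1), (3), (5), (7), and (9) is given below; the application order, the 0–45 temperature bounds, and the value of ε are illustrative assumptions not fixed by the text:

```python
import numpy as np

EPS, MIN_T, MAX_T = 1e-8, 0.0, 45.0  # assumed clinically plausible bounds

def positivity(x):      return np.maximum(x, 0.0)             # Eq. (1)
def upper_bound(x):     return np.minimum(x, MAX_T)           # Eq. (3)
def normalization(x):   return x / (np.linalg.norm(x) + EPS)  # Eq. (5)
def non_negativity(x):  return np.abs(x)                      # Eq. (7)
def clipping(x):        return np.clip(x, MIN_T, MAX_T)       # Eq. (9)

def constraint_layer(x: np.ndarray) -> np.ndarray:
    """Apply the five constraints in sequence (order is an assumption)."""
    return normalization(clipping(upper_bound(positivity(non_negativity(x)))))

x = np.array([-3.0, 12.0, 50.0, 30.0])  # raw activations, some implausible
out = constraint_layer(x)
print(out)                   # every entry non-negative and bounded
print(np.linalg.norm(out))   # ≈ 1 after the normalization constraint
```

Each function is piecewise-differentiable, matching the backpropagation rules given above; in a deep learning framework, automatic differentiation would produce those same gradients.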

The ablation study assesses each constraint’s unique contribution to the overall performance of the model, as summarized in Table 2. The maximum accuracy, precision, and recall are 95.67%, 90.12%, and 85.78%, respectively, for the complete model that takes into account all constraints. Performance noticeably declines when any constraint is removed, underscoring their collective significance. Since its removal yields the lowest accuracy and recall, the normalization constraint has the biggest effect among them. Likewise, the upper bound and positivity constraints play a significant role in preserving prediction stability and clinical significance. These findings show that in order to attain the best diagnostic performance, all constraints work in concert with one another. In data preprocessing, positivity and non-negativity constraints are imposed on thermographic images to eliminate invalid negative intensity values and maintain the physical plausibility of all input features. The normalization constraint rescales temperature variations across different patients and maintains numerical stability while facilitating more stable learning. In the model architecture, clipping and upper bound constraints are incorporated in activation functions and post-processing layers to keep predictions within medically plausible temperature ranges, avoiding unrealistic or extreme predictions by the model. These constraints are applied throughout training using customized loss functions and gradient-based mechanisms, such that the model learns within medical standards.

Table 2: Impact of each constraint.
Configuration                      | Accuracy (%) | Precision (%) | Recall (%)
Full Model (All Constraints)       | 95.67        | 90.12         | 85.78
Without Positivity Constraint      | 92.14        | 87.35         | 82.16
Without Upper Bound Constraint     | 93.08        | 88.21         | 83.42
Without Normalization Constraint   | 91.32        | 85.67         | 80.73
Without Non-Negativity Constraint  | 92.87        | 86.95         | 81.64
Without Clipping Constraint        | 93.51        | 88.09         | 83.05
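The customized, penalized training objective of Eq. (11) can be sketched as follows; the quadratic penalty form and the weights λ are illustrative assumptions, since the text leaves them open:

```python
import numpy as np

def bce(y_true: np.ndarray, p: np.ndarray, eps: float = 1e-12) -> float:
    """Binary cross-entropy, the base term of Eq. (11)."""
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

def constraint_penalties(z: np.ndarray, max_value: float = 45.0):
    """Quadratic penalties for violating positivity and the upper bound
    (hypothetical forms; any differentiable violation measure would do)."""
    r_pos = float(np.mean(np.minimum(z, 0.0) ** 2))              # z < 0
    r_ub = float(np.mean(np.maximum(z - max_value, 0.0) ** 2))   # z > max
    return r_pos, r_ub

y = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.2, 0.8, 0.6])        # predicted probabilities
z = np.array([-1.0, 10.0, 50.0, 30.0])    # pre-constraint activations

lam1, lam2 = 0.1, 0.1                     # assumed penalty weights
r_pos, r_ub = constraint_penalties(z)
total = bce(y, p) + lam1 * r_pos + lam2 * r_ub
print(round(total, 4))                    # 0.9156
```

The penalties vanish for outputs already inside the valid range, so a well-constrained model is trained on essentially the plain cross-entropy loss.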
Integrating CNN with Boosting for enhanced prediction

Because Convolutional Neural Networks (CNNs) can learn complex image patterns and spatial hierarchies, they have excelled at medical image analysis. To improve deep learning models’ capacity to extract useful features from medical images, DenseNet121, a CNN extension consisting of 120 convolutional layers and four average pooling layers, has been used.19 However, CNNs generalize poorly when dealing with noisy data, class imbalance, and overfitting, especially in medical imaging, where datasets are small. CNNs’ predictive power and generalization can be improved by using boosting algorithms such as Gradient Boosting and Adaptive Boosting (AdaBoost) to compensate for these drawbacks. In order to guide model improvements toward hard examples and refine decision boundaries for better prediction, boosting gradually modifies sample weights. By combining CNNs and boosting, we can take advantage of deep learning’s ability to extract features and boosting’s capacity for adaptive learning, which allows the model to learn from its incorrectly classified examples and improve over time.

This technique uses a CNN as a feature extractor, learning hierarchical representations for the thermogram images and classifying them using a boosting algorithm such as AdaBoost or Gradient Boosting. While Gradient Boosting uses gradient descent to minimize a loss function in an effort to improve prediction, AdaBoost achieves this by iteratively training a large number of weak classifiers sequentially and modifying their weights based on the error in the previous iteration. After that, the CNN features are fed into the boosting classifier, which increases the weights of the incorrectly classified examples. This forces the model to concentrate on difficult patterns and boosts accuracy. False positives and false negatives, two major problems in medical diagnosis, are reduced by this combination. The accuracy of breast tumor diagnosis for tumor stage and malignancy detection is also significantly improved by ensemble boosting.12 Compared to traditional methods, CNNs allow for extensive feature extraction, which further advances medical imaging research.20 This combined approach is more suitable for practical diagnostic applications because it offers enhanced accuracy, interpretability, and clinical reliability.
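The CNN-features-into-boosting pipeline described above can be sketched with scikit-learn's AdaBoostClassifier; the synthetic feature matrix below stands in for the real CNN embeddings (the paper's flattened vectors have 50,176 dimensions):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Placeholder for CNN-extracted feature vectors; a small synthetic
# stand-in keeps the sketch fast and self-contained.
rng = np.random.default_rng(42)
n = 400
features = rng.normal(size=(n, 16))
labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)

# Boosting classifier trained on top of the frozen CNN features:
# each round re-weights misclassified examples, sharpening the boundary.
clf = AdaBoostClassifier(n_estimators=100, random_state=42)
clf.fit(features[:300], labels[:300])
acc = clf.score(features[300:], labels[300:])
print(f"held-out accuracy: {acc:.2f}")
```

In the real system the `features` array would come from the CNN's penultimate layer; `GradientBoostingClassifier` could be swapped in for the gradient-descent variant the text mentions.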

Proposed Architecture of the Model

The proposed CNN architecture, shown in Figure 3, employs a structured pipeline of feature extraction → constraint enforcement → classification → boosting → decision making to ensure accuracy and clinical reliability. The model begins with an input layer (224 × 224 × 3) to standardize thermal breast images. The CNN module consists of four convolutional blocks with filters (32, 64, 128, 256), each of which has ReLU, max pooling, and batch normalization. These blocks preserve discriminative patterns while progressively reducing the spatial size. The extracted features are flattened into a 1D vector (50,176 features) and then passed to fully connected dense layers with dropout (0.5) to avoid overfitting. A constraint layer then applies non-negativity, positivity, normalization, clipping, and upper-bound restrictions to ensure that values remain within clinically meaningful ranges. Finally, boosting refines the decision boundaries through further dense layers, and a softmax function generates a probability score with which the model consistently differentiates between benign and malignant cases.

Fig 3 | Architecture of the proposed model
Figure 3: Architecture of the proposed model.
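The 50,176-dimensional flattened vector quoted above follows directly from the block structure, as this shape bookkeeping confirms (assuming one 2 × 2 max-pooling halving per block):

```python
# Track the feature-map shape through the four conv blocks
# (filters 32, 64, 128, 256); each block halves the spatial size.
size, channels = 224, 3
for filters in (32, 64, 128, 256):
    channels = filters   # convolution sets the channel count
    size //= 2           # 2x2 max pooling halves height and width

flat = size * size * channels
print(size, channels, flat)  # 14 256 50176
```

So the final feature map is 14 × 14 × 256, and flattening it yields exactly the 50,176 features fed into the dense layers.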
Results

Figure 4 illustrates the confusion matrix, which gives a general evaluation of how well the model is performing by showing correctly as well as wrongly classified cases. Here, 1477 benign cases were correctly classified, and 43 benign cases were incorrectly classified as malignant. Likewise, 1433 cases of malignancy were correctly classified, but 89 malignancy cases were wrongly identified as benign. This gives a total of 132 misclassifications out of 3042 images, resulting in an overall accuracy of 95.67%. The confusion matrix can be used to evaluate not only accuracy but also the model’s capacity to reduce false positives and false negatives, both of which are of utmost importance in medical diagnostics. Through the analysis of these errors, model reliability can be improved and potential risks in real-world clinical use can be minimized.

Fig 4 | Confusion matrix
Figure 4: Confusion matrix.
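The headline metrics can be recovered (up to rounding) directly from the confusion-matrix counts; sensitivity and specificity below are the standard definitions, computed here rather than quoted from the paper:

```python
# Counts from the confusion matrix reported in Figure 4.
tn, fp = 1477, 43   # benign correctly / wrongly classified
tp, fn = 1433, 89   # malignant correctly / wrongly classified

total = tn + fp + tp + fn
accuracy = (tp + tn) / total
sensitivity = tp / (tp + fn)   # recall for the malignant class
specificity = tn / (tn + fp)   # true-negative rate for the benign class
print(f"accuracy={accuracy:.4f} sensitivity={sensitivity:.4f} "
      f"specificity={specificity:.4f}")
# accuracy=0.9566 sensitivity=0.9415 specificity=0.9717
```

Note that the 89 false negatives dominate the error budget, which is why the text emphasizes reducing missed malignancies as the priority in clinical use.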

A comparison of the suggested constraint-embedded CNN with boosting and a conventional CNN across important performance metrics is shown in Table 3. The suggested model exhibits superior classification capability with a significantly higher accuracy of 95.67 ± 0.42% as opposed to 86.93 ± 0.85% for the traditional CNN. Likewise, the model’s improved capacity to lower false positives and false negatives—a crucial prerequisite in medical diagnostics—is demonstrated by the significantly higher precision (90.12 ± 0.51%), recall (85.78 ± 0.66%), and F1-score (87.89 ± 0.59%). Additionally, the model’s enhanced ability to differentiate between benign and malignant cases is highlighted by its AUC-ROC of 96.45 ± 0.38%. The integration of constraints and boosting techniques, which together improve generalization, robustness, and interpretability, is responsible for these steady gains.

Table 3: Comparative analysis of proposed model vs. traditional CNN.
Metric     | Proposed Model (%) | Traditional CNN (%)
Accuracy   | 95.67 ± 0.42       | 86.93 ± 0.85
Precision  | 90.12 ± 0.51       | 81.50 ± 0.77
Recall     | 85.78 ± 0.66       | 77.00 ± 0.92
F1-Score   | 87.89 ± 0.59       | 78.90 ± 0.83
AUC-ROC    | 96.45 ± 0.38       | 87.50 ± 0.74

Figure 5 displays the performance indicators that measure the efficacy of the suggested model. The proposed model attains an accuracy of 95.67%, showing robust generalization to unseen data. The use of constraints ensures clinically valid predictions, while boosting procedures improve decision boundaries, substantially lowering false positives and false negatives. The AUC-ROC of 96.45% further solidifies the reliability of the model in discriminating accurately between benign and malignant cases. Also, the precision of 90.12% and the recall of 85.78% point to the model’s ability to reduce misdiagnoses, which is essential for healthcare use, where detection at an early stage is critical.

Fig 5 | Performance metrics of the proposed model
Figure 5: Performance metrics of the proposed model.

Figure 6 is a comparative line graph showing a training accuracy of 96.24% and a testing accuracy of 95.67% across consecutive epochs. The model is computationally light and deployable in real-world clinical settings. Inference on an 11th Gen Intel Core i7-1165G7 processor with 12 GB RAM is much slower than on high-end GPUs but remains within manageable limits for small-scale evaluation. The model size of 112 MB, determined by the CNN architecture and constraint inclusion, strikes an optimal balance between performance and memory consumption. The absence of a dedicated GPU might limit large-scale deployment; however, optimizations such as quantization or model pruning could increase efficiency.

Fig 6 | Testing vs. validation accuracy over training
Figure 6: Testing vs. validation accuracy over training.

When tested with low-resolution images, the model maintains a classification accuracy of 91.3%, confirming its robustness under compromised imaging conditions. The dataset used for experimentation is open source and does not disclose any patients’ private data; however, maintaining demographic diversity is still necessary for wider clinical utility. In addition, multi-center validation and regulatory approvals are critical steps prior to real-world deployment. Computational efficiency remains very high on specialized hardware, though tuning might be necessary for resource-constrained devices. Finally, embedding domain knowledge within the model allows predictions to fit clinical expectations, providing meaningful and reliable outputs for medical assistance.

Conclusion and Future Work

In order to create a reliable model for identifying breast cancer from thermogram images, we have effectively combined CNN, boosting algorithms, and constraint reasoning in this work. With a very high accuracy of 95.67% and balanced precision, recall, and F1-score, the results show how effective domain-specific constraints are when applied to medical AI systems. By adding constraints like positivity, normalization, and clipping, the model’s predictive performance was enhanced and it complied with medical guidelines, making it applicable in real-world clinical settings. In subsequent research, we plan to work with hospitals and other healthcare facilities to test the model on real clinical datasets and make sure it applies to different patient groups. Conducting multi-center studies to assess its performance on different imaging devices and across patient groups will be a crucial first step. In order to better extract features and generalize across diverse patient populations, we also intend to look into alternative deep learning architectures, such as Vision Transformers (ViTs), Capsule Networks, and attention-based CNNs. To further improve the model’s fit to real diagnostic needs, we also plan to incorporate increasingly complex medical constraints tailored to specific clinical settings. Our goal is to reduce false positives and false negatives, increase prediction accuracy, and further integrate expert-led constraints to make AI-based breast cancer diagnosis more dependable and clinically valuable.

When using deep learning for breast cancer diagnosis, factors other than algorithmic performance need to be taken into account. To preserve patient privacy, thermogram data must be de-identified, securely stored, and subject to restricted access in compliance with HIPAA and GDPR. Another problem is dataset bias, which highlights the necessity of outside validation and balanced datasets because models trained on imaging protocols or small demographics may not generalize well. Clinical translation also requires adherence to regulatory pathways that assess safety, reproducibility, and utility, such as FDA and CE approvals. These components ensure that the proposed constraint-embedded CNN is accurate, ethical, and clinically reliable.

References
  1. Siegel MB, He X, Hoadley KA, Hoyle A, Pearce JB, Garrett AL, Kumar S, Moylan VJ, Brady CM, Van Swearingen AE, et al. Integrated RNA and DNA sequencing reveals early drivers of metastatic breast cancer. J Clin Invest. 2018;128(4):1371–83. https://doi.org/10.1172/JCI99380
  2. Mantravadi BS, Kandula D, Nukapeyi S. Forecast of COVID-19 by chest X-ray images using CNN algorithm with sequential and DenseNet models. In: 2022 International Conference on Breakthrough in Heuristics And Reciprocation of Advanced Technologies (BHARAT). IEEE; 2022. p. 121–6.
  3. Sarkar S, Chakraborty S, Bhowmik L, Paul R, Ghosh A. Classification of breast cancer using deep CNN: A comparative analysis. In: International Conference on Recent Advances in Artificial Intelligence & Smart Applications. Springer; 2023. p. 261–8.
  4. Passerini A, Tack G, Guns T. Introduction to the special issue on combining constraint solving with mining and learning. Constraints. 2017;22(2):125–8. https://doi.org/10.1007/s10601-017-9277-1
  5. Nehal M. Intelligent computing using deep learning for screening of breast cancer from breast thermograms. In: International Conference on Signal, Machines, Automation, and Algorithm. Springer; 2023. p. 181–95. https://doi.org/10.1007/978-981-99-6821-5_19
  6. Al Husaini MAS, Habaebi MH, Islam MR. Real-time thermography for breast cancer detection with deep learning. Discov Artif Intell. 2024;4(1):57. https://doi.org/10.1007/s44163-024-00157-z
  7. Dey S, Roychoudhury R, Malakar S, Sarkar R. Screening of breast cancer from thermogram images by edge detection aided deep transfer learning model. Multimed Tools Appl. 2022;81(7):9331–49. https://doi.org/10.1007/s11042-021-11296-x
  8. Pattanaik RK, Mishra S, Siddique M, Gopikrishna T, Satapathy S. Breast cancer classification from mammogram images using extreme learning machine-based DenseNet121 model. J Sens. 2022;2022:2731364. https://doi.org/10.1155/2022/2731364
  9. Chakravarthy S, Bharanidharan N, Khan SB, Kumar VV, Mahesh T, Almusharraf A, Albalawi E. Multi-class breast cancer classification using CNN features hybridization. Int J Comput Intell Syst. 2024;17(1):191. https://doi.org/10.1007/s44196-023-00239-z
  10. Abimouloud ML, Bensid K, Elleuch M, Ammar MB, Kherallah M. Advancing breast cancer diagnosis: token vision transformers for faster and accurate classification of histopathology images. Vis Comput Ind Biomed Art. 2025;8(1):1. https://doi.org/10.1186/s42492-024-00178-x
  11. Alshehri A, AlSaeed D. Breast cancer detection in thermography using convolutional neural networks (CNNs) with deep attention mechanisms. Appl Sci. 2022;12(24):12922. https://doi.org/10.3390/app122412922
  12. Osman AH, Aljahdali HMA. An effective ensemble boosting learning method for breast cancer virtual screening using neural network model. IEEE Access. 2020;8:39165–74. https://doi.org/10.1109/ACCESS.2020.2975284
  13. Singh D, Singh AK. Role of image thermography in early breast cancer detection—past, present and future. Comput Methods Programs Biomed. 2020;183:105074. https://doi.org/10.1016/j.cmpb.2019.105074
  14. Jalloul R, Krishnappa CH, Agughasi VI, Alkhatib R. Enhancing early breast cancer detection with infrared thermography: a comparative evaluation of deep learning and machine learning models. Technologies. 2024;13(1):7. https://doi.org/10.3390/technologies13010007
  15. Régin F, De Maria E. Generative constraint programming re-visited. In: 2024 IEEE 36th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE; 2024. p. 18–26. https://doi.org/10.1109/ICTAI58353.2024.00014
  16. Popescu A, Polat-Erdeniz S, Felfernig A, Uta M, Atas M, Le VM, Pilsl K, Enzelsberger M, Tran TNT. An overview of machine learning techniques in constraint solving. J Intell Inf Syst. 2022;58(1):91–118. https://doi.org/10.1007/s10844-021-00683-z
  17. Passerini A, Tack G, Guns T. Introduction to the special issue on combining constraint solving with mining and learning. Constraints. 2017;22(2):125–8. https://doi.org/10.1007/s10601-017-9277-1
  18. Bizzarri A, Fraccaroli M, Lamma E, Riguzzi F. Integration between constrained optimization and deep networks: a survey. Front Artif Intell. 2024;7:1414707. https://doi.org/10.3389/frai.2024.1414707
  19. Singh R, Prabha C. Enhancing accuracy in detection of glioma tumor using DenseNet CNN. In: 2023 International Conference on Research Methodologies in Knowledge Management, Artificial Intelligence and Telecommunication Engineering (RMKMATE). IEEE; 2023. p. 1–5. https://doi.org/10.1109/RMKMATE58397.2023.10360805
  20. Alanazi SA, Kamruzzaman M, Sarker MNI, Alruwaili M, Alhwaiti Y, Alshammari N, Siddiqi MH. Boosting breast cancer detection using convolutional neural network. J Healthc Eng. 2021;2021:5528622. https://doi.org/10.1155/2021/5528622

