Revised Surgical CAse REport (SCARE) guideline: An update for the age of Artificial Intelligence

Ahmed Kerwan1 ORCiD, Ahmed Al-Jabir2 ORCiD, Ginimol Mathew3 ORCiD, Catrin Sohrabi3 ORCiD, Rasha Rashid4 ORCiD, Thomas Franchi5 ORCiD, Maria Nicola6 ORCiD, Maliha Agha7 ORCiD, Riaz A. Agha7 ORCiD; SCARE Group

  1. Harvard T.H. Chan School of Public Health, Boston, USA
  2. University College London Hospital, London, UK
  3. Royal Free London NHS Foundation Trust, London, UK
  4. Imperial College School of Medicine, London, UK
  5. Wellington Regional Hospital, Te Whatu Ora Capital Coast and Hutt Valley, Wellington, New Zealand
  6. Imperial College London, London, UK
  7. Premier Science, London, UK

Correspondence to: Riaz Agha, Premier Science, riaz@premierscience.com

Download the SCARE 2025 checklist

DOI: https://doi.org/10.70389/PJS.100079

SCARE Group Contributors

  1. Achilleas Thoma, McMaster University, Canada
  2. Alessandro Coppola, Sapienza University of Rome, Italy
  3. Andrew J Beamish, Swansea Bay University Health Board, Swansea University, UK
  4. Ashraf Noureldin, Almana Hospital, Khobar, Saudi Arabia
  5. Ashwini Rao, Manipal Academy of Higher Education Manipal, India
  6. Baskaran Vasudevan, MIOT Hospital, Chennai, India
  7. Ben Challacombe, Guy’s and St Thomas’ Hospitals, UK
  8. C S Pramesh, Tata Memorial Hospital, Homi Bhabha National Institute and National Cancer Grid, India
  9. Duilio Pagano, IRCCS-ISMETT – UPMC Italy, Italy
  10. Frederick Heaton Millham, Harvard Medical School, USA
  11. Gaurav Roy, Cactus Communications Pvt Ltd, India
  12. Huseyin Kadioglu, Saglik Bilimleri Universitesi, Turkiye
  13. Iain James Nixon, NHS Lothian, UK
  14. Indraneil Mukherjee, Staten Island University Hospital Northwell Health, USA
  15. James Anthony McCaul, Queen Elizabeth University Hospital Glasgow and Institute for Cancer Therapeutics University of Bradford, UK
  16. James Ngu, Changi General Hospital, Singapore
  17. Joerg Albrecht, Cook County Health, USA
  18. Juan Gomez Rivas, Hospital Clinico San Carlos, Madrid, Spain
  19. K Veena L Karanth, District Hospital Udupi, India
  20. Kandiah Raveendran, Fatimah Hospital, Malaysia
  21. M Hammad Ather, Aga Khan University, Pakistan
  22. Mangesh A. Thorat, Centre for Cancer Screening, Prevention and Early Diagnosis, Wolfson Institute of Population Health, Queen Mary University of London, London, UK; Breast Services, Homerton University Hospital, London, UK
  23. Mohammad Bashashati, Dell Medical School, UT Austin, USA
  24. Mushtaq Chalkoo, Government Medical College, Srinagar, Kashmir, India
  25. Oliver J. Muensterer, Dr. von Hauner Children’s Hospital, LMU Medical Center, Munich, Germany
  26. Patrick Bradley, Nottingham University Hospital, UK
  27. Prabudh Goel, All India Institute of Medical Sciences, New Delhi, India
  28. Prathamesh Pai, P D Hinduja Hospital, Khar, India
  29. Priya Shinde, Homerton University Hospital, UK
  30. Priya Ranganathan, Tata Memorial Centre, India
  31. Raafat Yahia Afifi Mohamed, Cairo University, Egypt
  32. Richard David Rosin, University of the West Indies Barbados, Barbados
  33. Roberto Cammarata, Fondazione Policlinico Campus Biomedico, Italy
  34. Roberto Coppola, Campus Bio Medico University, Italy
  35. Rolf Wynn, UiT The Arctic University of Norway, Norway
  36. Salim Surani, Texas A&M University, USA
  37. Salvatore Giordano, University of Turku, Finland
  38. Samuele Massarut, Centro di Riferimento Oncologico Aviano IRCCS, Italy
  39. Shahzad G. Raja, Harefield Hospital, UK
  40. Somprakas Basu, All India Institute of Medical Sciences Rishikesh, India
  41. Syed Ather Enam, Aga Khan University, Pakistan
  42. Teo Nan Zun, Changi General Hospital, Singapore
  43. Todd Manning, Bendigo Health and Monash University, Australia
  44. Veeru Kasivisvanathan, University College London, UK
  45. Vincenzo La Vaccara, Fondazione Policlinico Campus Bio-Medico di Roma, Italy
  46. Zubing Mei, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, China
Premier Journal of Science

Additional information

  • Ethical approval: N/A
  • Consent: N/A
  • Funding: None
  • Conflicts of interest: The authors have no financial, consultative, institutional, or other relationships that might lead to bias or a conflict of interest.
  • Author contribution: R.A.A.: conceptualisation and study design, supervision of the Delphi process, data interpretation, manuscript drafting and critical revision, approval of the final manuscript. A.K., A.A.-J., C.S., T.F., G.M., M.N., R.R., M.A., R.A.A.: participation in study design, generation of Delphi survey materials, data collection and analysis, contribution to drafting of new checklist items, manuscript writing and revision, approval of the final manuscript.
  • Guarantor: Riaz A Agha
  • Provenance and peer-review:
    Unsolicited and externally peer-reviewed
  • Data availability statement: The Delphi survey data that informed this guideline (individual expert ratings and comments) are confidential and not publicly available, in accordance with the consensus process protocol. All relevant aggregated results are reported in this article.

Keywords: SCARE guideline update, artificial intelligence in surgery, Delphi consensus process, AI transparency and ethics, AI reporting standards

Received: 20 May 2025
Revised: 22 May 2025
Accepted: 22 May 2025
Published: 23 May 2025

Abstract

Introduction: Artificial intelligence (AI) is rapidly transforming healthcare and scientific publishing, and reporting guidelines need to be updated to take this advance into account. The SCARE Guideline 2025 update adds a new AI-focused domain to promote transparency, reproducibility, and ethical integrity in surgical case reports involving AI.

Methods: A Delphi consensus exercise was conducted to update the SCARE guidelines. A panel of 49 surgical and scientific experts was invited to rate proposed new items. In round 1, participants scored each item on a 9-point Likert scale and provided feedback. Items not meeting consensus were revised or discarded.

Results: The response rate in the first round was 94% (46/49). Ratings were analysed for agreement levels, and consensus was reached on all six proposed AI-related items. A revised SCARE checklist incorporating these new AI-related items is presented. Authors are now expected to disclose AI involvement not only in patient care but also in manuscript preparation, as exemplified by this paper.

Conclusion: The SCARE 2025 guideline provides an up-to-date framework for surgical case reports in the era of AI. Through a robust consensus process, we have added specific reporting criteria for AI to ensure that any use of artificial intelligence in a case report is clearly documented, explained and discussed including with respect to bias and ethics. This update will help maintain the quality, transparency, and clinical relevance of case reports, ultimately improving their educational value and trustworthiness for the surgical community.

Highlights
  • The SCARE 2025 update introduces a new Artificial Intelligence (AI) domain (checklist items 5a–5f) to ensure transparency in surgical case reports where AI is involved.
  • The revised guideline was developed via a one-round Delphi consensus exercise among 49 international experts, with 94% responding and showing strong agreement on all new AI-related items.
  • Six new checklist items cover identification of AI use, detailed reporting of AI methods, data and validation, bias mitigation, and ethical considerations in case reports.
  • In line with emerging publication standards, the authors used a generative AI tool for language editing of this manuscript and have transparently declared this use, exemplifying the new recommendations for AI disclosure.

Introduction

The concept of AI dates back to Turing’s seminal question “Can machines think?” in 1950.1 The official birth of AI as a field is traced to the 1956 Dartmouth conference led by John McCarthy, which coined the term “artificial intelligence” and conjectured that aspects of learning and intelligence could be simulated by machines.2 In the decades since, AI has transitioned from theory to real-world applications: the global AI market was valued at approximately $638 billion in 2024,3 and its impact on the world’s economy is projected to reach $15.7 trillion by 2030.4 This explosive growth (illustrated in Figure 1) is driven by breakthroughs in machine learning, big data, cloud computing and computational power.

Figure 1: Projected growth of the global artificial intelligence market. Source: PwC Global AI Study, 2024.

In medicine and surgery, AI applications are increasingly prevalent. Early successes of medical AI have been seen in diagnostic specialties – for example, AI-driven image analysis in radiology and pathology has achieved impressive accuracy, often exceeding human performance in detecting subtle findings.5 In surgical disciplines, AI is being explored for diagnostics, decision support, enhancing preoperative planning, intraoperative guidance (such as robotic surgery and real-time decision support), and postoperative outcome prediction.5 These technologies promise to augment the surgeon’s capabilities and personalise patient care. However, with this promise comes new responsibility: clinicians and researchers must ensure that when AI is involved in patient management, it is reported transparently and with sufficient detail to appraise its validity and safety.

Recognising this trend, the Surgical CAse REport (SCARE) guidelines – originally introduced in 20166 and last updated in 20237 – required further revision to urgently address AI-related reporting.8 Developed by Agha et al., the original SCARE checklist aimed to improve the clarity, consistency, and educational value of case reports in surgery.6 Subsequent updates in 2018,9 2020,10 and 20237 expanded and refined the criteria in response to feedback and evolving best practices. These guidelines have significantly improved reporting quality in surgical case publications, although adherence by authors and journals has varied.11

Despite the growing presence of AI in healthcare, a gap existed in the SCARE 2023 checklist – it contained no specific items addressing how to report the use of AI in a case report. Omission of such details could lead to under-reporting of critical information. In other study designs, the need for AI-specific reporting guidelines has been recognised; for instance, the CONSORT-AI and SPIRIT-AI extensions have been published to guide reporting of clinical trials and protocols involving AI.8 To ensure that surgical case reports keep pace with these developments, an update to the SCARE guidelines was imperative. This 2025 update focuses on integrating AI-related reporting standards into the established SCARE structure.

In this paper, we describe the methods and outcomes of the SCARE 2025 guideline update. We introduce a new domain of checklist items dedicated to AI and elaborate on their rationale. We also discuss the importance of these additions in the context of transparency, bias mitigation, and reproducibility, which are crucial for maintaining trust in both case reports and AI systems. Notably, in alignment with the principle of transparency, we also document our own use of AI during the preparation of this manuscript, as recommended by emerging editorial policies.12-14 The updated SCARE 2025 guideline will help authors of case reports provide clear and accountable descriptions when AI is part of patient care or part of the report-generation process. Ultimately, this will enhance the value of case reports as scholarly contributions in an era where AI is becoming an integral part of healthcare.

Materials and Methods

Guideline development approach

The SCARE 2025 update was developed through a Delphi consensus process, consistent with the approach used in prior SCARE updates.6,15 An initial meeting of the SCARE Group steering committee was held to brainstorm important updates to the SCARE guideline. The senior author (R.A.A.) put forward AI as an important, timely and critical update to be made at this time. Relevant AI-specific items were then drafted, edited and approved for submission to a Delphi panel of experts. Invitations were sent via email to 49 experts in surgery, medicine and related fields. Invitees were provided with a summary of the proposed new items (focused on AI reporting) and asked to participate in the consensus exercise.

In round 1, panellists rated each proposed checklist item on a 1–9 Likert scale (where 1 = strongly disagree and 9 = strongly agree) to indicate their agreement that the item should be included in the updated guidelines. Participants could also provide free-text comments suggesting modifications or justifications. We included six candidate items (labelled 5a to 5f) in the domain of “Artificial Intelligence”, drafted from a preliminary literature review of AI reporting recommendations and input from the guideline authors. After round 1, responses were analysed for consensus. An item was defined as achieving consensus for inclusion if ≥70% of respondents rated it 7–9 (agree) and <15% rated it 1–3 (disagree). This threshold was established a priori, in line with common Delphi methodology.6,15 Items that met consensus were provisionally accepted.

Data collection and analysis

The Delphi round was conducted via an online survey platform (Google Forms). Responses were collected anonymously, with panellists identified only by a study ID for tracking response rates. Quantitative data from Likert ratings were exported to Microsoft Excel for calculation of descriptive statistics. For each item, we computed the percentage of respondents who rated it in the high agreement range (7–9), moderate agreement range (4–6), and low agreement range (1–3). These are presented as consensus metrics. Table 1 summarises the score distribution for each new item in the final Delphi round.
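The banding and consensus rule described above can be expressed in a few lines of code. The sketch below uses hypothetical ratings (the actual Delphi responses are confidential, as noted in the data availability statement) to illustrate how the band percentages reported in Table 1 and the a priori inclusion rule (≥70% of respondents scoring 7–9 and <15% scoring 1–3) are computed:

```python
def band_percentages(ratings):
    """Share of 9-point Likert ratings falling in the 1-3, 4-6 and 7-9 bands."""
    n = len(ratings)
    low = sum(1 for r in ratings if 1 <= r <= 3) / n
    mid = sum(1 for r in ratings if 4 <= r <= 6) / n
    high = sum(1 for r in ratings if 7 <= r <= 9) / n
    return low, mid, high

def reaches_consensus(ratings, agree_threshold=0.70, disagree_cap=0.15):
    """Consensus for inclusion: >=70% agree (7-9) and <15% disagree (1-3)."""
    low, _, high = band_percentages(ratings)
    return high >= agree_threshold and low < disagree_cap

# Hypothetical responses for one item from 46 panellists (not the real data)
example = [9] * 30 + [8] * 8 + [7] * 4 + [5] * 3 + [2] * 1

low, mid, high = band_percentages(example)
print(f"1-3: {low:.1%}  4-6: {mid:.1%}  7-9: {high:.1%}")
print("Consensus reached:", reaches_consensus(example))
```

With these illustrative ratings, 42 of 46 responses (91.3%) fall in the 7–9 band and 1 of 46 (2.2%) in the 1–3 band, so the item would meet the inclusion rule.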

Throughout the process, participants were encouraged to be critical and ensure each item added real value to the checklist. The high response rate (46 of 49 invited experts, i.e. 94%) and detailed comments provided indicate robust engagement from the expert panel. All data collected in the Delphi surveys were handled confidentially and were used solely for the purposes of this guideline development.

Integration into the SCARE checklist

After the Delphi process, the steering committee finalised the phrasing of each new item (5a–5f) based on the panel’s preferred wording. The new domain was inserted into the SCARE checklist as Section 5, titled “Artificial Intelligence”, following the Abstract section (Section 4) and preceding the previous Introduction section (now renumbered as Section 6 in the 2025 checklist). This renumbering maintains logical flow: the checklist now first addresses Title, Keywords, Highlights and Abstract (Sections 1–4), then the presence of any AI element (Section 5), then the Introduction of the case (Section 6), and so forth. The rest of the SCARE 2023 items were retained with minimal or no changes, aside from renumbering (e.g. item 5a “Background” in SCARE 2023 is now item 6a in SCARE 2025, and so on).

The final SCARE 2025 checklist thus contains a total of 48 items (up from 42 items in SCARE 2023) spanning all domains of a surgical case report. Table 2 provides the verbatim wording of the six new AI-focused items (5a–5f). These items are intended to be used by authors when preparing case reports: if an AI tool or algorithm was involved in the case in any manner, the author should address each of these points in the appropriate section of their report. If no AI was involved, these items would simply be marked “not applicable.” In the revised checklist document (available as supplementary material and on the SCARE website), the new items are highlighted for ease of adoption by authors and journal editors.

Results

Response rate

Forty-six of the 49 invited experts participated in the Delphi consensus exercise, a response rate of 94%. Their characteristics by specialty and country are shown in Figures 2 and 3 below.

Characteristics of participants

Figure 2. A bar chart showing the specialties in which responding participants practise.
Figure 3. A bar chart showing the countries from which responding participants come.

Delphi consensus outcomes

Table 1 below shows the Delphi consensus scores for the new AI-related checklist items (Section 5, “Artificial Intelligence”). Each value represents the percentage of Delphi panel participants giving a score in that range on the 9-point Likert scale during the final round. Consensus for inclusion was defined as ≥70% of respondents scoring 7–9. All items exceeded this threshold by a wide margin (Figures 4–10).

Table 1: Delphi consensus scores for new AI-related checklist items
Item | Summary of item | 1–3 (Disagree) | 4–6 (Neutral) | 7–9 (Agree)
5 | AI usage declaration | 0 | 0 | 100% (46/46)
5a | Purpose and Scope of AI Use | 2.2% (1/46) | 6.5% (3/46) | 91.3% (42/46)
5b | AI Tool(s) and Configuration | 6.5% (3/46) | 15.2% (7/46) | 78.3% (36/46)
5c | Data Inputs and Safeguards | 6.5% (3/46) | 15.2% (7/46) | 78.3% (36/46)
5d | Human Oversight and Verification | 6.5% (3/46) | 2.2% (1/46) | 91.3% (42/46)
5e | Bias, Ethics and Regulatory Compliance | 4.3% (2/46) | 8.7% (4/46) | 87.0% (40/46)
5f | Reproducibility and Transparency | 8.7% (4/46) | 17.4% (8/46) | 73.9% (34/46)

Following consensus, the six AI items were formally added to the SCARE checklist. Free-text comments from contributors led to minor changes, such as stating whether the AI was integrated with any other systems (added to item 5b), acknowledging the limitations of AI use (added to item 5d) and enabling attempts at independent replication of the query/input (added to item 5f). The wording of each item, as finalised, is shown in Table 2. Briefly, these items require authors to: 5a) state the purpose and scope of any AI use in the case; 5b) provide details of the AI tool(s) and configuration (name, version, source); 5c) describe the data inputs provided to the AI and the safeguards applied; 5d) report the human oversight and verification of AI outputs; 5e) discuss any biases, limitations, ethical or regulatory issues related to the AI’s use; and 5f) provide the information needed for reproducibility and transparency.

No changes were made to the core content of other sections of the checklist (Title, Abstract, Patient Information, etc.) aside from renumbering due to the insertion of the new section. One minor addition was an explanatory note in the checklist introduction: authors are advised that if AI was not involved in their case, they may skip Section 5, but if AI contributed to diagnosis, management, or even manuscript preparation, the relevant items in Section 5 should be addressed. This ensures that the checklist remains adaptable to all case reports, whether or not AI is a factor.

Figure 4: Delphi consensus results graph for new AI-related checklist item 5.
Figure 5: Delphi consensus results graph for new AI-related checklist item 5a: Purpose and Scope of AI Use.
Figure 6: Delphi consensus results graph for new AI-related checklist item 5b: AI Tool(s) and Configuration.
Figure 7: Delphi consensus results graph for new AI-related checklist item 5c: Data Inputs and Safeguards.
Figure 8: Delphi consensus results graph for new AI-related checklist item 5d: Human Oversight and Verification.
Figure 9: Delphi consensus results graph for new AI-related checklist item 5e: Bias, Ethics and Regulatory Compliance.
Figure 10: Delphi consensus results graph for new AI-related checklist item 5f: Reproducibility and Transparency.

Table 2 shows New SCARE 2025 checklist items (Section 5: Artificial Intelligence). Each item should be addressed in the case report if applicable. “AI” refers to any artificial intelligence or machine-learning system relevant to the case. These items are intended to ensure transparency and reproducibility when AI is part of a surgical case report.

Table 2. New SCARE 2025 checklist items
5. AI usage declaration
  – Declare whether any AI was used in the research and manuscript development. If no, proceed to item 6; if yes, proceed to item 5a.

5a. Purpose and Scope of AI Use
  – Precisely state why AI was employed (e.g. development of research questions, language drafting, statistical analysis/summarisation, image annotation).
  – Was generative AI utilised and, if so, how?
  – Clarify the stage(s) of the reporting workflow affected (planning, writing, revisions, figure creation).
  – Confirm that the author(s) take responsibility for the integrity of the content affected or generated.

5b. AI Tool(s) and Configuration
  – Name each system (vendor, model, major version/date).
  – State the date(s) on which it was used.
  – Specify relevant parameters (e.g. prompt length, plug-ins, fine-tuning, temperature).
  – Declare whether the tool operated locally on-premises or via a cloud API, and any integrations with other systems.

5c. Data Inputs and Safeguards
  – Describe the categories of data provided to the AI (patient text, de-identified images, literature abstracts).
  – Confirm that all inputs were de-identified and compliant with GDPR/HIPAA.
  – Note any institutional approvals or data-sharing agreements obtained.

5d. Human Oversight and Verification
  – Identify the supervising author(s) who reviewed every AI output.
  – Detail the process for fact-checking and clinical accuracy checks.
  – State whether any AI-generated text/figures were edited or discarded.
  – Acknowledge the limitations of AI and its use.

5e. Bias, Ethics and Regulatory Compliance
  – Outline steps taken to detect and mitigate algorithmic bias (e.g. cross-checking against under-represented populations).
  – Affirm adherence to relevant ethical frameworks.
  – Disclose any conflicts of interest or financial ties to AI vendors.

5f. Reproducibility and Transparency
  – Provide the exact prompts or code snippets (as supplementary material if lengthy).
  – Supply version-controlled logs or model cards where possible.
  – If applicable, state the repository, hyperlink or digital object identifier (DOI) where AI-generated artefacts can be accessed, enabling attempts at independent replication of the query/input.

The above items (5a–5f) now form an integral part of the SCARE 2025 checklist. An author writing up a case report is expected to incorporate this information into the relevant sections of their manuscript. For example, item 5a would typically be covered in the Introduction or Case Presentation section of a report (where the setting and tools of care are described), whereas items 5b–5d might appear in the Case Presentation or Results section (detailing what AI was used and its performance), and items 5e–5f are likely to be addressed in the Discussion section (reflecting on biases and ethical considerations). By structuring the reporting in this way, readers of the case report will gain a clear understanding of what AI was used, why it was used, how it functioned, and what its limitations are in the context of the case. This level of detail is crucial for interpreting the case’s findings, especially as AI algorithms can significantly influence clinical outcomes.

Discussion

The SCARE 2025 guideline represents a proactive evolution of surgical case report standards in response to the growing influence of AI in healthcare. Compared to the SCARE 2023 update, which primarily refined existing sections, the defining feature of SCARE 2025 is the introduction of an entirely new domain dedicated to artificial intelligence. This addition marks a significant broadening of the checklist’s scope – acknowledging that “case reports” may now involve not only human clinicians and patients, but also AI tools as part of the diagnostic or therapeutic narrative. By explicitly addressing AI, the updated guidelines aim to enhance transparency and reproducibility in case reports. This aligns with broader efforts in medical research to improve reporting of AI.8

Transparency in reporting AI is the overarching theme of the new domain. Just as SCARE champions transparency in clinical reporting, we recognise that AI algorithms must not become “black boxes” in case descriptions. Item 5a ensures that authors explicitly declare the use of AI, preventing scenarios where AI’s involvement might be obscured or assumed. This is analogous to disclosing a diagnostic test or a surgical device – readers deserve to know if AI was behind a key decision or outcome. Transparency is also reinforced by items 5b (tool identification) and 5c (data inputs and safeguards), which compel authors to provide enough technical detail for readers to grasp what the AI tool actually is. These items promote reproducibility: a future researcher or clinician reading the case report should be able to identify the same AI tool, understand its training context, and thereby judge whether the case’s insights are transferable or credible in other settings.

The emphasis on bias mitigation and ethical considerations (items 5e and 5f) addresses increasing concerns about AI in medicine. AI systems, especially those based on machine learning, can inadvertently carry biases from their training data. If not reported, such biases could lead to misinterpretation of a case – for example, an AI diagnostic tool might perform poorly on certain demographic groups, which would be highly relevant if the case patient belongs to that group. By asking authors to discuss AI biases and limitations, SCARE 2025 aligns with the ethical principle of “do no harm” in publishing. It forces a moment of reflection: the case author must consider what AI might have missed or where it might be wrong. This practice can help mitigate over-reliance on AI and encourages authors to validate AI outputs with clinical judgment. In a broader sense, it contributes to the literature on AI by documenting real-world challenges and failures, not just successes, thereby preventing publication bias in favour of positive AI results.

Future directions for SCARE and AI in surgical case reports may include further refinements as the technology evolves. We expect that as more case reports are published under the SCARE 2025 criteria, a body of examples will accumulate, illustrating how authors have implemented these items. We will monitor the uptake of the AI domain – for instance, tracking whether authors encounter difficulties in obtaining certain information about proprietary AI tools. If so, this might spur collaborations between clinicians and AI developers to improve transparency (e.g., requiring companies to provide model details when their AI is used in published case reports). Additionally, while our current items focus on AI in patient care, future updates might consider AI used in writing or reviewing case reports. In fact, the academic community is actively discussing standards for disclosing AI assistance in manuscript preparation. In this SCARE update, we touch on that aspect by encouraging disclosure (item 5f covers AI use in manuscript preparation where it involves patient data or content generation). It is plausible that a formal guideline for reporting the use of generative AI in scientific writing will emerge; until then, we have set a precedent by openly stating our use of an AI language model for editing this paper.

It is worth reflecting on the limitations of our guideline update process. First, our Delphi panel, while diverse, was limited to 49 invitees with 46 responders. Important perspectives, such as patients or regulators, were not directly represented. Patients especially might have views on how they want AI usage reported in cases (perhaps desiring even more clarity on consent and privacy). In future guideline efforts, including patient representatives as well as other perspectives could be valuable. Second, the AI domain items are somewhat general and meant to apply across all types of AI. AI in surgery can range from simple diagnostic apps to complex autonomous robots; not every item will fit perfectly to every scenario. We attempted to strike a balance with broad wording, but there may be cases that require interpretation of how to apply an item. We will rely on the judgment of authors, reviewers, and editors to implement these guidelines sensibly on a case-by-case basis. Third, as with any consensus-based guideline, there is a degree of subjectivity in what was included and the language in which it is expressed. It is possible that some readers will feel an important AI-related item is missing. We welcome feedback from the surgical community, as the SCARE guideline is meant to be iterative – future revisions (beyond 2025) can certainly expand or adjust the AI domain as needed.

One immediate challenge is dissemination and training. Introducing six new items means authors must be educated about them. We plan to disseminate the SCARE 2025 checklist through the EQUATOR Network website, the Premier Science Journals, and presentations at surgical conferences. Additionally, we will encourage journals to require SCARE 2025 adherence in their case report submissions, as endorsement by journals greatly drives usage. Experience from previous SCARE iterations showed that when journal editors mandate the checklist and when authors see the benefit (in improved clarity of their reports), compliance increases. We anticipate a similar positive impact: clearer reporting of cases involving AI, which in turn will make it easier for readers to learn from those cases or even reproduce aspects of them (for instance, using the same AI tool on a similar patient). Ultimately, better reported case studies can feed into higher-level evidence; a well-documented case of AI successfully detecting a rare complication could spur larger studies or inspire others to utilise that AI tool.

During the preparation of this guideline manuscript, we made use of generative AI as a writing aid. Specifically, the tool was used in the later stages to assist with polishing language. No content generation (ideas or drafting of sections) was delegated to AI; it was employed similarly to a grammar/style assistant under close human oversight. We mention this to practice what we preach: transparency about AI usage. As journals and publishers, as well as COPE and WAME, increasingly require disclosure of AI assistance,12-14 we demonstrate that such disclosure is feasible and can be done without undermining the credibility of the work. The final content was rigorously verified by all authors to eliminate any potential AI-introduced errors (such as incorrect references or “hallucinated” facts). We found that using AI in this limited capacity did improve efficiency in editing, but human expertise remained essential for the substance and accuracy of the guideline. This experience underscores a broader point: AI can be a valuable tool in medical writing and research, but it must be applied responsibly and transparently.

Conclusion

Through a structured Delphi consensus and in response to the rapid expansion of artificial intelligence in healthcare, we have updated the SCARE guideline to produce SCARE 2025, a comprehensive reporting guideline for surgical case reports in the age of AI. The addition of the new AI-focused domain (items 5a–5f) fills a critical gap, ensuring that any use of AI in a case is transparently reported with details on its implementation, validation, and ethical considerations. This update preserves the familiar structure of the SCARE checklist while integrating modern considerations, thereby enabling authors to produce case reports that are both up-to-date and rigorous. By following SCARE 2025, clinicians and researchers will improve the clarity and reliability of case reports, facilitating better knowledge sharing and ultimately enhancing patient care. As surgical practice increasingly intersects with advanced technologies, SCARE 2025 will help maintain the integrity and educational value of case reports, ensuring they remain a cornerstone of surgical literature in the years to come.

References

1. Turing AM. Computing machinery and intelligence. Mind. 1950;59(236):433-460.
https://doi.org/10.1093/mind/LIX.236.433
 
2. Artificial Intelligence (AI) coined at Dartmouth. Dartmouth College [cited 2025 May 18]. Available from: https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth
 
3. Artificial Intelligence (AI) market size to hit USD 3,680.47 bn by 2034. Precedence Research [cited 2025 May 18]. Available from: https://www.precedenceresearch.com/artificial-intelligence-market
 
4. PricewaterhouseCoopers. PwC’s global Artificial Intelligence study: sizing the prize [cited 2025 May 18]. Available from: https://www.pwc.com/gx/en/issues/artificial-intelligence/publications/artificial-intelligence-study.html
 
5. McCartney J. AI is poised to “revolutionize” surgery. American College of Surgeons [cited 2025 May 19]. Available from: https://www.facs.org/for-medical-professionals/news-publications/news-and-articles/bulletin/2023/june-2023-volume-108-issue-6/ai-is-poised-to-revolutionize-surgery/
 
6. Agha RA, Fowler AJ, Saeta A, Barai I, Rajmohan S, Orgill DP, et al. The SCARE statement: consensus-based Surgical CAse REport guidelines. International Journal of Surgery. 2016 Oct; 34:180-6.
 
7. Sohrabi C, Mathew G, Maria N, Kerwan A, Franchi T, Agha RA. The SCARE 2023 guideline: updating consensus Surgical CAse REport (SCARE) guidelines. International Journal of Surgery. 2023 Apr 5; 109(5):1136-40.
https://doi.org/10.1097/JS9.0000000000000373
 
8. Liu X, Rivera SC, Moher D, Calvert MJ, Denniston AK. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. BMJ. 2020 Sep 9;370:m3164.
https://doi.org/10.1136/bmj.m3164
 
9. Agha RA, Borrelli MR, Farwana R, Koshy K, Fowler AJ, Orgill DP, et al. The SCARE 2018 statement: updating consensus Surgical CAse REport (SCARE) guidelines. International Journal of Surgery. 2018 Dec; 60:132-6.
 
10. Agha RA, Franchi T, Sohrabi C, Mathew G, Kerwan A, Thoma A, et al. The SCARE 2020 guideline: updating consensus Surgical CAse REport (SCARE) guidelines. International Journal of Surgery. 2020 Dec; 84:226-30.
 
11. Agha RA, Farwana R, Borrelli MR, Tickunas T, Kusu-Orkar T, Millip MC, et al. Impact of the SCARE guideline on the reporting of surgical case reports: A before and after study. International Journal of Surgery. 2017 Sept;45:144-8.
https://doi.org/10.1016/j.ijsu.2017.07.099
 
12. Science journals set new authorship guidelines for AI-generated text [Internet]. U.S. Department of Health and Human Services; [cited 2025 May 18]. Available from: https://factor.niehs.nih.gov/2023/3/feature/2-artificial-intelligence-ethics
 
13. COPE Council. COPE position statement: authorship and AI tools. Committee on Publication Ethics; 2023 [cited 2025 May 19]. Available from: https://doi.org/10.24318/cCVRZBms

14. Zielinski C, Winker MA, Aggarwal R, Ferris LE, Heinemann M, Lapeña JF, et al. Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. World Association of Medical Editors; 2023 May 31 [cited 2025 May 19]. Available from: https://wame.org/page3.php?id=106
https://doi.org/10.25100/cm.v54i3.5868

15. Nasa P, Jain R, Juneja D. Delphi methodology in healthcare research: How to decide its appropriateness. World J Methodol. 2021 Jul 20; 11(4):116-29.
https://doi.org/10.5662/wjm.v11.i4.116

Appendix – SCARE 2025 revised checklist
SCARE Guideline Checklist 2025
Each entry below gives the topic, the item number, and its description; authors should record the manuscript page number on which each item is reported.
Title (Item 1)
– The words ‘case report’ should appear in the title.
– The title should be concise and highlight the area of focus (e.g. presentation, patient population, diagnosis, surgical intervention, or outcome).
Key Words (Item 2)
– Include three to six keywords that identify what is covered in the case report (e.g. patient population, diagnosis or surgical intervention).
– Include ‘case report’ as one of the keywords.
Highlights (Item 3)
– Include three to five bullet points that capture the novel findings of the report.
– These should focus on providing a brief background to the report. Include the key results, their clinical relevance, and any validation performed.
Abstract
4a. Structure
– Provide a structured abstract that includes the following headings: (1) introduction and importance, (2) presentation of case, (3) clinical discussion, and (4) conclusion.
4b. Introduction and Importance
– Describe what is known currently on this topic, what is important, unique or educational about the case, and what this adds to the surgical literature.
4c. Presentation of Case
– Detail the presenting complaint(s), clinical and demographic details, and the patient’s main ideas, concerns, and expectations.
– Detail the clinical findings, investigations performed, main differentials, and subsequent diagnosis.
– Describe the rationale for choosing the intervention.
– Describe the outcome.
4d. Clinical Discussion
– Discuss the clinical findings in relation to what is currently known.
4e. Conclusion
– Describe the relevance and impact of the report.
– Detail the main take away lessons or potential implications for clinical practice (minimum of three).
Artificial Intelligence (AI)
(Some journals may prefer this in the methods and/or acknowledgements section, and it should also be declared in the cover letter.)
5. Declaration of AI Use
– Declare whether any AI was used in the research and manuscript development.
– If no, proceed to item 6. If yes, proceed to item 5a.
5a. Purpose and Scope of AI Use
– Precisely state why AI was employed (e.g. development of research questions, language drafting, statistical analysis/summarisation, image annotation).
– State whether generative AI was utilised and, if so, how.
– Clarify the stage(s) of the reporting workflow affected (planning, writing, revisions, figure creation).
– Confirm that the author(s) take responsibility for the integrity of the content affected/generated.
5b. AI Tool(s) and Configuration
– Name each system (vendor, model, major version/date).
– State the date(s) on which it was used.
– Specify relevant parameters (e.g. prompt length, plug-ins, fine-tuning, temperature).
– Declare whether the tool operated locally (on-premises) or via a cloud API, and any integrations with other systems.
5c. Data Inputs and Safeguards
– Describe the categories of data provided to the AI (patient text, de-identified images, literature abstracts).
– Confirm that all inputs were de-identified and compliant with GDPR/HIPAA.
– Note any institutional approvals or data-sharing agreements obtained.
5d. Human Oversight and Verification
– Identify the supervising author(s) who reviewed every AI output.
– Detail the process for fact-checking and clinical accuracy checks.
– State whether any AI-generated text/figures were edited or discarded.
– Acknowledge the limitations of AI and its use.
5e. Bias, Ethics and Regulatory Compliance
– Outline steps taken to detect and mitigate algorithmic bias (e.g. cross-checking against under-represented populations).
– Affirm adherence to relevant ethical frameworks.
– Disclose any conflicts of interest or financial ties to AI vendors.
5f. Reproducibility and Transparency
– Provide the exact prompts or code snippets used (as supplementary material if lengthy).
– Supply version-controlled logs or model cards where possible.
– If applicable, state the repository, hyperlink, or digital object identifier (DOI) where AI-generated artefacts can be accessed, enabling attempts at independent replication of the query/input.

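Items 5b and 5f ask for tool versions, dates of use, and the exact prompts employed. As a purely illustrative sketch of one way to keep such a record (the field names, function, and tool name below are hypothetical, not mandated by SCARE 2025), an author could maintain a small version-controlled JSON log alongside the manuscript:

```python
import json

def ai_use_log(tool, version, date_used, purpose, deployment, prompts):
    """Serialise a minimal AI-use disclosure as JSON.

    Field names are illustrative only; they map loosely onto
    SCARE 2025 items 5a (purpose), 5b (tool, version, date,
    deployment) and 5f (exact prompts).
    """
    entry = {
        "tool": tool,              # vendor/model name (item 5b)
        "version": version,        # major version or release date (item 5b)
        "date_used": date_used,    # dd/mm/yyyy, matching the SCARE date convention
        "purpose": purpose,        # why AI was employed (item 5a)
        "deployment": deployment,  # "local" or "cloud API" (item 5b)
        "prompts": prompts,        # exact prompts, enabling replication (item 5f)
    }
    return json.dumps(entry, indent=2)

# Example disclosure for a language-polishing use of a hypothetical tool
log = ai_use_log(
    tool="ExampleLM",
    version="4.0 (2025-03)",
    date_used="18/05/2025",
    purpose="language polishing of the discussion section",
    deployment="cloud API",
    prompts=["Improve the grammar of the following paragraph "
             "without changing its meaning: ..."],
)
print(log)
```

Committing such a log to the same repository as the manuscript gives reviewers a timestamped, diff-able record of AI involvement, and it can be referenced from the 5f disclosure via a repository link or DOI.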
Introduction
6a. Background
– Describe the area of focus and the relevant background contextual knowledge.
6b. Rationale
– Describe why the case is different to what is already known in the literature.
– Describe why it is important to report this case. Is the case rare or interesting for the specific healthcare setting, population or country?
6c. Guidelines and Literature
– Give reference to relevant surgical literature and current standards of care, including any specific guidelines or reports (e.g. government, national, international).

Guideline Citation (Item 7)
– At the end of the introduction, include reference to the SCARE 2025 publication by stating: ‘This case report has been reported in line with the SCARE checklist [include citation]’.

Timeline (Item 8)
– Summarise the sequence of events leading up to the patient’s presentation.
– Report any delays from presentation to diagnosis and/or intervention.
– Use tables or figures to illustrate the timeline of events if needed.
– Use standardised units of time (hh:mm) and dates (dd/mm/yyyy).

Patient Information
9a. Demographic Details
– Include de-identified demographic information (e.g. age, sex, ethnicity, occupation).
– Where relevant, include other useful information (e.g. body mass index, hand dominance, income, level of education, marital status).
9b. Presentation
– Describe the patient’s presenting complaint(s).
– Include a collateral account of the history if relevant.
– Describe how the patient presented (e.g. self-presentation, ambulance, or referred by family physician or other hospital clinicians).
– Describe where the patient presented (e.g. outpatient clinic, type of hospital, etc.).
9c. Past Medical and Surgical History
– Include any previous interactions (e.g. prior admissions to hospital), medical or surgical interventions, and relevant outcomes.
9d. Drug History and Allergies
– Specify any acute, repeat, and discontinued medications.
– Specify any contraindications to re-starting regular medicines (e.g. increased bleeding risk).
– Specify any allergies and/or adverse reactions.
9e. Family History, Social History, and Review of Systems
Family History
– Include health information regarding first-degree relatives, specifying any inheritable conditions.
Social History
– Indicate any smoking, alcohol, and recreational drug use.
– Indicate the level of social independence, the presence of any carers, driving status, and type of accommodation.
Review of Systems
– Provide any other information outside of the focused history (e.g. headaches, blurred vision, palpitations, abdominal pain, joint pain).
Clinical Findings (Item 10)
– Describe the general and significant clinical findings based on initial inspection and physical examination.

Diagnostic Assessment & Interpretation
11a. Diagnostic Assessment
– Bedside (e.g. urinalysis, electrocardiography, echocardiography).
– Laboratory (e.g. biochemistry, haematology, immunology, microbiology, histopathology).
– Imaging (e.g. ultrasound, X-ray, CT/MRI/PET).
– Invasive (e.g. endoscopy, biopsy).
11b. Diagnostic Challenges
– Where applicable, describe what was challenging about the diagnoses (e.g. access, financial, cultural).
– Describe how these challenges were overcome.
11c. Diagnostic Reasoning
– Describe the differential diagnoses, why they were considered (e.g. given the initial presentation or after assessment and investigation), and why and how they were excluded.
11d. Prognostic Characteristics
– Include where applicable (e.g. tumour staging) and how this was performed.

Intervention
12a. Pre-Operative Patient Optimisation
– Lifestyle (e.g. weight loss).
– Medical (e.g. medication review, treating any relevant pre-existing medical concerns).
– Procedural (e.g. nil by mouth, enema).
– Other (e.g. psychological support).

12b. Surgical Interventions
– Describe the type(s) of intervention(s) used (e.g. pharmacological, surgical, physiotherapy, psychological, preventative).
– Describe any concurrent treatments (e.g. antibiotics, analgesia, antiemetics, venous thromboembolism prophylaxis).
– Medical devices should have the manufacturer and model specifically mentioned.

12c. Specific Details Regarding the Intervention
– Describe the rationale behind the treatment offered, how it was performed, and time to intervention.
– For surgery, include details on the intervention (e.g. anaesthesia, patient position, skin preparation used such as chlorhexidine or shaving, use of other relevant equipment, sutures, devices, surgical stage).
– For surgery, include any post-operative instructions (e.g. how long to keep an abdominal drain in for, when to remove sutures or staples).
– The degree of novelty for a surgical technique/device should be mentioned (e.g. ‘first in human’).
– For pharmacological therapies, include information on the formulation, dosage, strength, route, and duration.

12d. Operator Details and Setting of Intervention
Operator Details
– Where applicable, include operator experience and position on the learning curve, prior relevant training, and specialisation (e.g. ‘junior trainee with 3 years of surgical specialty training’).
Setting of Intervention
– Specify the setting in which the intervention was performed (e.g. district general hospital, major trauma centre).
– Specify the level of experience that the centre has with performing the intervention.
– Specify whether the procedure was performed in collaboration with another speciality (e.g. a hybrid procedure).

12e. Deviation from Initial Management Plan
– State if there were any changes in the planned intervention(s).
– Provide an explanation for these changes alongside the rationale (e.g. delays to intervention, a laparoscopic procedure converted to open due to operative difficulties).

Follow-Up and Outcomes
13a. Specific Details Regarding the Follow-Up
– When (e.g. how long after discharge in months or years, frequency, maximum follow-up length at time of submission).
– Where (e.g. home via video consultation, primary care, secondary care).
– With whom (e.g. appointment with the original operating surgeon).
– How (e.g. telephone consultation, virtual or digital follow-up, clinical examination, blood tests, imaging).
– Any specific long-term surveillance requirements (e.g. imaging surveillance for endovascular aneurysm repair or clinical exam/ultrasound of regional lymph nodes for skin cancer).
– Any specific post-operative instructions (e.g. postoperative medications, targeted physiotherapy, psychological therapy).

13b. Intervention Adherence and Compliance
– Where relevant, detail how well the patient adhered to and tolerated the advice provided (e.g. avoiding heavy lifting after abdominal surgery, or tolerance of chemotherapy and pharmacological agents).
– Explain how adherence and tolerance were measured.
– Explain whether these results will have an impact on the long-term applicability of the intervention in clinical practice.

13c. Outcomes
– Expected versus attained clinical outcome as assessed by the clinician. Reference the literature used to inform expected outcomes.
– When appropriate, include patient-reported measures (e.g. questionnaires including quality-of-life scales).
– Detail when the outcomes were recorded (e.g. at how many months or years post-operatively).

13d. Complications and Adverse Events
– Precautionary measures taken to prevent complications (e.g. antibiotic or venous thromboembolism prophylaxis).
– All complications and adverse or unanticipated events should be described in detail and ideally categorised in accordance with the Clavien-Dindo classification (e.g. blood loss, length of operative time, wound complications, re-exploration or revision surgery).
– If relevant, state whether the complication was reported to the relevant national agency or pharmaceutical company.
– Specify the duration of time between completion of the intervention and discharge, and whether this was within the expected timeframe (if not, why not).
– Where applicable, the 30-day post-operative and long-term morbidity/mortality may need to be specified.
– Where applicable, specify whether any complications or adverse outcomes were discussed locally (e.g. during team or morbidity and mortality meetings).
– State if there were no complications or adverse outcomes.
Discussion
14a. Summary of Results
– Provide a clear summary of the key findings of the report.
– Provide a rationale for the conclusions drawn.
14b. Relevant Literature
– Include a brief discussion of the relevant literature and, if appropriate, similar published cases.
14c. Future Implications
– Describe the future implications for clinical practice and guidelines.
14d. Take Away Lessons
– Outline the key clinical lessons from this case report.
– Discuss any differences in approach to diagnosis, investigation, or patient management which the authors might adopt in future cases, based on their experience of the current report.

Strengths and Limitations
15a. Strengths
– Describe the key strengths of the case.
– Detail any multidisciplinary or cross-specialty relevance.

15b. Weaknesses and Limitations
– Describe the relevant weaknesses or limitations of the case.
– If applicable, describe how these challenges were overcome.
– For novel techniques or devices, outline any contraindications and alternatives, potential risks, and possible complications if applied to a larger population.

Patient Perspective (Item 16)
– Where appropriate, the patient should be given the opportunity to share their perspective on the intervention(s) they received (e.g. sharing quotes from a consented and anonymised interview).

Informed Consent (Item 17)
– The authors must provide evidence of consent, where applicable, and if requested by the journal.
– Consent should be provided for both the original intervention or procedure and publication of the current case report.
– State the method of consent at the end of the article (e.g. verbal, written, digital/virtual).
– If not provided by the patient, explain why (e.g. death of patient and consent provided by next of kin). If the patient or family members were untraceable, then document the tracing efforts undertaken.

Additional Information (Item 18)
– Please state any author contributions, acknowledgements, conflicts of interest, sources of funding, and, where required, institutional review board or ethical committee approval.
– Disclose whether the case has been presented at a conference or regional meeting.
– Disclose whether this case is under consideration at any other journal.

Clinical Images and Videos (Item 19)
– Where relevant and available, include clinical images to help demonstrate the case pre-, peri-, and post-intervention (e.g. radiological, histopathological, patient photographs, intra-operative images).
– Where relevant, ensure images are adequately annotated.
– Where relevant and available, a link (e.g. Google Drive, YouTube) to the narrated operative video can be included to highlight specific techniques or operative findings.
– Ensure all media files are appropriately captioned and indicate points of interest to allow for easy interpretation.

