Revised Preferred Reporting of Case Series in Surgery (PROCESS) Guideline: An update for the age of Artificial Intelligence

Riaz A. Agha1 ORCiD, Ginimol Mathew2 ORCiD, Rasha Rashid3 ORCiD, Ahmed Kerwan4 ORCiD, Ahmed Al-Jabir5 ORCiD, Catrin Sohrabi2 ORCiD, Thomas Franchi6 ORCiD, Maria Nicola7 ORCiD, Maliha Agha1 ORCiD; PROCESS Group

  1. Premier Science, London, UK
  2. Royal Free London NHS Foundation Trust, London, UK
  3. Imperial College School of Medicine, London, UK
  4. Harvard T.H. Chan School of Public Health, Boston, USA
  5. University College London Hospital, London, UK
  6. Wellington Regional Hospital, Te Whatu Ora Capital Coast and Hutt Valley, Wellington, New Zealand
  7. Imperial College London, London, UK

Correspondence to: Riaz Agha, Premier Science, riaz@premierscience.com

Download the PROCESS 2025 checklist

DOI: https://doi.org/10.70389/PJS.100080

PROCESS Group Contributors

  1. Achilleas Thoma, McMaster University, Canada
  2. Alessandro Coppola, Sapienza University of Rome, Italy
  3. Andrew J Beamish, Swansea Bay University Health Board, Swansea University, UK
  4. Ashraf Noureldin, Almana Hospital, Khobar, Saudi Arabia
  5. Ashwini Rao, Manipal Academy of Higher Education Manipal, India
  6. Baskaran Vasudevan, MIOT Hospital, Chennai, India
  7. Ben Challacombe, Guy’s and St Thomas’ Hospitals, UK
  8. C S Pramesh, Tata Memorial Hospital, Homi Bhabha National Institute and National Cancer Grid, India
  9. Duilio Pagano, IRCCS-ISMETT – UPMC Italy, Italy
  10. Frederick Heaton Millham, Harvard Medical School, USA
  11. Gaurav Roy, Cactus Communications Pvt Ltd, India
  12. Huseyin Kadioglu, Saglik Bilimleri Universitesi, Turkiye
  13. Iain James Nixon, NHS Lothian, UK
  14. Indraneil Mukherjee, Staten Island University Hospital Northwell Health, USA
  15. James Anthony McCaul, Queen Elizabeth University Hospital Glasgow and Institute for Cancer Therapeutics University of Bradford, UK
  16. Joerg Albrecht, Cook County Health, USA
  17. Juan Gomez Rivas, Hospital Clinico San Carlos, Madrid, Spain
  18. K Veena L Karanth, District Hospital Udupi, India
  19. Kandiah Raveendran, Fatimah Hospital, Malaysia
  20. M Hammad Ather, Aga Khan University, Pakistan
  21. Mangesh A. Thorat, Centre for Cancer Screening, Prevention and Early Diagnosis, Wolfson Institute of Population Health, Queen Mary University of London, London, UK; Breast Services, Homerton University Hospital, London, UK
  22. Mohammad Bashashati, Dell Medical School, UT Austin, USA
  23. Mushtaq Chalkoo, Government Medical College, Srinagar, Kashmir, India
  24. Oliver J. Muensterer, Dr. von Hauner Children’s Hospital, LMU Medical Center, Munich, Germany
  25. Patrick Bradley, Nottingham University Hospital, UK
  26. Prabudh Goel, All India Institute of Medical Sciences, New Delhi, India
  27. Prathamesh Pai, P D Hinduja Hospital, Khar, India
  28. Priya Shinde, Homerton University Hospital, UK
  29. Priya Ranganathan, Tata Memorial Centre, India
  30. Raafat Yahia Afifi Mohamed, Cairo University, Egypt
  31. Richard David Rosin, University of the West Indies Barbados, Barbados
  32. Roberto Cammarata, Fondazione Policlinico Campus Biomedico, Italy
  33. Roberto Coppola, Campus Bio Medico University, Italy
  34. Rolf Wynn, UiT The Arctic University of Norway, Norway
  35. Salim Surani, Texas A&M University, USA
  36. Salvatore Giordano, University of Turku, Finland
  37. Samuele Massarut, Centro di Riferimento Oncologico Aviano IRCCS, Italy
  38. Shahzad G. Raja, Harefield Hospital, UK
  39. Somprakas Basu, All India Institute of Medical Sciences Rishikesh, India
  40. Syed Ather Enam, Aga Khan University, Pakistan
  41. Teo Nan Zun, Changi General Hospital, Singapore
  42. Todd Manning, Bendigo Health and Monash University, Australia
  43. Veeru Kasivisvanathan, University College London, UK
  44. Vincenzo La Vaccara, Fondazione Policlinico Campus Bio-Medico di Roma, Italy
  45. Zubing Mei, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, China
Premier Journal of Science

Additional information

  • Ethical approval: N/a
  • Consent: N/a
  • Funding: None
  • Conflicts of interest: The authors have no financial, consultative, institutional, or other relationships that might lead to bias or a conflict of interest.
  • Author contribution: R.A.A.: conceptualisation and study design, supervision of the Delphi process, data interpretation, manuscript drafting and critical revision, approval of the final manuscript. A.K., A.A.-J., C.S., T.F., G.M., M.N., R.R., M.A.: participation in study design, generation of Delphi survey materials, data collection and analysis, contribution to drafting of new checklist items, manuscript writing and revision, approval of the final manuscript.
  • Guarantor: Riaz A Agha
  • Provenance and peer-review:
    Unsolicited and externally peer-reviewed
  • Data availability statement: The Delphi survey data that informed this guideline (individual expert ratings and comments) are confidential and not publicly available, in accordance with the consensus process protocol. All relevant aggregated results are reported in this article.

Keywords: PROCESS guideline update, artificial intelligence in surgery, delphi consensus process, AI transparency and ethics, AI reporting standards

Peer-review
Received: 20 May 2025
Revised: 22 May 2025
Accepted: 23 May 2025
Published: 23 May 2025

Abstract

Introduction: Artificial intelligence (AI) is rapidly transforming healthcare and scientific publishing. Reporting guidelines must be updated to take this advance into account. The PROCESS Guideline 2025 update adds a new AI-focused domain to promote transparency, reproducibility, and ethical integrity in surgical case series involving AI.

Methods: A Delphi consensus exercise was conducted to update the PROCESS guideline. A panel of 49 surgical and scientific experts was invited to rate proposed new items. In round 1, participants scored each item on a 9-point Likert scale and provided feedback. Items not meeting consensus were revised or discarded.

Results: The first-round response rate was 92% (45/49). Ratings were analysed for agreement levels, and consensus was reached on all six proposed AI-related items. A revised PROCESS checklist incorporating these new AI-related items is presented. Authors are now expected to disclose AI involvement not only in patient care but also in manuscript preparation, as exemplified by this paper.

Conclusion: The PROCESS 2025 guideline provides an up-to-date framework for surgical case series in the era of AI. Through a robust consensus process, we have added specific reporting criteria for AI to ensure that any use of artificial intelligence in a case series is clearly documented, explained and discussed including with respect to bias and ethics. This update will help maintain the quality, transparency, and clinical relevance of case series, ultimately improving their educational value and trustworthiness for the surgical community.

Highlights
  • The PROCESS 2025 update introduces a new Artificial Intelligence (AI) domain (checklist items 5a–5f) to ensure transparency in surgical case series where AI is involved.
  • The revised guideline was developed via a one-round Delphi consensus exercise among 49 international experts, with 92% (45/49) participating and showing strong agreement on all new AI-related items.
  • Six new checklist items cover identification of AI use, detailed reporting of AI methods, data and validation, bias mitigation, and ethical considerations in case series.
  • In line with emerging publication standards, the authors used a generative AI tool for language editing of this manuscript and have transparently declared this use, exemplifying the new recommendations for AI disclosure.

Introduction

The concept of AI dates back to Turing’s seminal question “Can machines think?” in 1950.1 The official birth of AI as a field is traced to the 1956 Dartmouth conference led by John McCarthy, which coined the term “artificial intelligence” and conjectured that aspects of learning and intelligence could be simulated by machines.2 In the decades since, AI has transitioned from theory to real-world applications. Notably, the global AI market value was estimated at approximately $638 billion in 2024,3 and its impact on the world’s economy is projected to reach $15.7 trillion by 2030.4 This explosive growth (illustrated in Figure 1) is driven by breakthroughs in machine learning, big data, cloud computing and computational power.

Figure 1: Projected growth of the global artificial-intelligence market. Source: PwC Global AI Study, 2024.

In medicine and surgery, AI applications are increasingly prevalent. Early successes of medical AI have been seen in diagnostic specialties – for example, AI-driven image analysis in radiology and pathology has achieved impressive accuracy, often exceeding human performance in detecting subtle findings.5 In surgical disciplines, AI is being explored for enhancing preoperative planning, intraoperative guidance (such as robotic surgery and real-time decision support), and postoperative outcome prediction.5 These technologies promise to augment the surgeon’s capabilities and personalise patient care. However, with this promise comes new responsibility: clinicians and researchers must ensure that when AI is involved in patient management, it is reported transparently and with sufficient detail to appraise its validity and safety. 

The Preferred Reporting of Case Series in Surgery (PROCESS) guideline – originally introduced in 20166 and last updated in 20237 – required urgent further revision to address AI-related reporting.8 Developed by Agha et al., the original PROCESS checklist aimed to improve the clarity, consistency, and educational value of case series in surgery.6 Subsequent updates in 2018,9 2020,10 and 20237 expanded and refined the criteria in response to feedback and evolving best practices. These guidelines have significantly improved reporting quality in surgical case series publications, although adherence by authors and journals has varied.11 Prior research in this area found significant deficiencies in reporting amongst 92 case series that met inclusion criteria.12 These included: failure to use standardised definitions (57%), missing or selective data (66%), lack of transparency or incomplete reporting (70%), failure to consider alternative study designs (11%), and other issues (52%).12

Despite the growing presence of AI in healthcare, a gap exists in the PROCESS 2023 checklist – there were no specific items addressing how to report the use of AI in a case series. Omission of such details could lead to under-reporting of critical information. In other study designs, the need for AI-specific reporting guidelines has been recognised; for instance, the CONSORT-AI and SPIRIT-AI extensions have been published to guide reporting of clinical trials and protocols involving AI.8 To ensure that surgical case series keep pace with these developments, an update to the PROCESS guidelines was imperative. This 2025 update focuses on integrating AI-related reporting standards into the established PROCESS structure.

In this paper, we describe the methods and outcomes of the PROCESS 2025 guideline update. We introduce a new domain of checklist items dedicated to AI and elaborate on their rationale. We also discuss the importance of these additions in the context of transparency, bias mitigation, and reproducibility, which are crucial for maintaining trust in both case series and AI systems. Notably, in alignment with the principle of transparency, we also document our own use of AI during the preparation of this manuscript, as recommended by emerging editorial policies.13,14 The updated PROCESS 2025 guideline will help authors of case series provide clear and accountable descriptions when AI is part of patient care or part of the report-generation process. Ultimately, this will enhance the value of case series as scholarly contributions in an era where AI is becoming an integral part of healthcare.

Materials and Methods

Guideline development approach

The PROCESS 2025 update was developed through a Delphi consensus process, consistent with the approach used in prior PROCESS updates.6,16 The PROCESS Group steering committee held an initial meeting to brainstorm updates to the PROCESS guideline, at which the senior author (RA) proposed AI as an important, timely and critical addition. Relevant AI-specific items were then drafted, edited and approved for presentation to a Delphi panel of experts. Invitations were sent by email to 49 experts in surgery, medicine and related fields. Invitees were provided with a summary of the proposed new items (focused on AI reporting) and asked to participate in the consensus exercise.

In Round 1, panelists rated each proposed checklist item on a 1–9 Likert scale (where 1 = strongly disagree and 9 = strongly agree) to indicate their agreement that the item should be included in the updated guideline. Participants could also provide free-text comments suggesting modifications or justifications. We included six candidate items (labelled 5a to 5f) in the domain of “Artificial Intelligence”, drafted on the basis of a preliminary literature review of AI reporting recommendations and input from the guideline authors. After Round 1, responses were analysed for consensus. An item was defined as achieving consensus for inclusion if ≥70% of respondents rated it 7–9 (agree) and <15% rated it 1–3 (disagree). This threshold was established a priori, in line with common Delphi methodology.6 Items that met consensus were provisionally accepted.

Data collection and analysis

The Delphi round was conducted via an online survey platform (Google Forms). Responses were collected anonymously, with panelists identified only by a study ID for tracking response rates. Quantitative data from Likert ratings were exported to Microsoft Excel for calculation of descriptive statistics. For each item, we computed the percentage of respondents who rated it in the high agreement range (7–9), moderate agreement range (4–6), and low agreement range (1–3). These are presented as consensus metrics. Table 1 summarises the score distribution for each new item (5a–5f) in the final round of Delphi.
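The binning and a priori consensus rule described above can be sketched in a few lines of code. The ratings used here are illustrative only, since the individual panel ratings are confidential under the consensus process protocol:

```python
def consensus_metrics(ratings):
    """Bin 9-point Likert ratings and apply the a priori consensus rule:
    include an item if >=70% of respondents rate it 7-9 (agree)
    and <15% rate it 1-3 (disagree)."""
    n = len(ratings)
    disagree = sum(1 for r in ratings if 1 <= r <= 3) / n
    neutral = sum(1 for r in ratings if 4 <= r <= 6) / n
    agree = sum(1 for r in ratings if 7 <= r <= 9) / n
    included = agree >= 0.70 and disagree < 0.15
    return {"disagree": disagree, "neutral": neutral,
            "agree": agree, "included": included}

# Hypothetical panel of 45: 41 agree, 2 neutral, 2 disagree
example = [8] * 41 + [5] * 2 + [2] * 2
m = consensus_metrics(example)
print(f"agree {m['agree']:.1%}, included: {m['included']}")
```

Note that both conditions must hold: an item with strong agreement can still fail if the disagreement band reaches 15%.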

Throughout the process, participants were encouraged to be critical and ensure each item added real value to the checklist. The high response rate (45 of 49 invited experts, i.e. 92%) and detailed comments provided indicate robust engagement from the expert panel. All data collected in the Delphi surveys were handled confidentially and were used solely for the purposes of this guideline development.

Integration into the PROCESS checklist

After the Delphi process, the steering committee finalised the phrasing of each new item (5a–5f) based on the panel’s preferred wording. The new domain was inserted into the PROCESS checklist as Section 5, entitled “Artificial Intelligence”, following the highlights section (Section 4) and preceding the previous Introduction section (which is now renumbered as Section 6 in the 2025 checklist). This renumbering was done to maintain logical flow: the checklist now first addresses Title, Keywords, Abstract and Highlights (Sections 1–4), then the presence of any AI element (Section 5), then the Introduction of the case (Section 6), and so forth. The rest of the PROCESS 2023 items were retained with minimal or no changes, aside from renumbering (e.g. what was item 5a “Introduction” in PROCESS 2023 is now item 6a in PROCESS 2025, etc.).

The final PROCESS 2025 checklist thus contains a total of 49 items (up from 43 items in PROCESS 2023) spanning all domains of a surgical case series. Table 2 provides the verbatim wording of the six new AI-focused items (5a–5f). These items are intended to be used by authors when preparing case series: if an AI tool or algorithm was involved in the case in any manner, the author should address each of these points in the appropriate section of their report. If no AI was involved, these items would simply be marked “not applicable.” In the revised checklist document (available as supplementary material and on the PROCESS website), the new items are highlighted for ease of adoption by authors and journal editors.

Results

Response rate

Forty-five of the 49 invited experts participated in the Delphi consensus exercise, a participation rate of 92%. Their characteristics by specialty and country are shown in Figures 2 and 3 below.

Characteristics of participants

Figure 2. A bar chart showing the specialties in which responding participants practise.
Figure 3. A bar chart showing the countries that responding participants are from.

Delphi consensus outcomes

Table 1 below shows the Delphi consensus scores for the new AI-related checklist items (Section 5, “Artificial Intelligence”). Each value represents the percentage of Delphi panel participants giving a score in that range on the 9-point Likert scale for the item during the final round. Consensus for inclusion was defined as ≥70% of respondents scoring 7–9. All items exceeded this threshold by a wide margin (Figures 4–10).

Table 1: Delphi consensus scores for new AI-related checklist items
Item | Summary of item | 1–3 (Disagree) [%] | 4–6 (Neutral) [%] | 7–9 (Agree) [%]
5 | AI usage declaration | 0 | 2.2% (1/45) | 97.8% (44/45)
5a | Purpose and Scope of AI Use | 0 | 2.2% (1/45) | 97.8% (44/45)
5b | AI Tool(s) and Configuration | 6.7% (3/45) | 11.1% (5/45) | 82.2% (37/45)
5c | Data Inputs and Safeguards | 6.7% (3/45) | 6.7% (3/45) | 86.7% (39/45)
5d | Human Oversight and Verification | 4.4% (2/45) | 4.4% (2/45) | 91.1% (41/45)
5e | Bias, Ethics and Regulatory Compliance | 6.7% (3/45) | 13.3% (6/45) | 80.0% (36/45)
5f | Reproducibility and Transparency | 8.9% (4/45) | 13.3% (6/45) | 77.8% (35/45)

Following consensus, the six AI items were formally added to the PROCESS checklist. Free-text comments from contributors led to minor changes, such as stating whether the AI was integrated with any other systems (added to item 5b), acknowledging the limitations of AI use (added to item 5d), and enabling attempts at independent replication of the query/input (added to item 5f). The wording of each item, as finalised, is shown in Table 2. Briefly, these items require authors to: 5a) state the purpose and scope of any AI use in the case; 5b) provide details of the AI tool(s) and configuration (vendor, model, version, parameters); 5c) describe the data inputs provided to the AI and the safeguards applied; 5d) report the human oversight and verification applied to AI outputs; 5e) discuss bias, ethical and regulatory considerations related to the AI’s use; and 5f) supply prompts, logs or other artefacts supporting reproducibility and transparency.

No changes were made to the core content of other sections of the checklist (Title, Abstract, Patient Information, etc.) aside from renumbering due to the insertion of the new section. One minor addition was an explanatory note in the checklist introduction: authors are advised that if AI was not involved in their case, they may skip Section 5, but if AI contributed to diagnosis, management, or even manuscript preparation, the relevant items in Section 5 should be addressed. This ensures that the checklist remains adaptable to all case series, whether or not AI is a factor.

Figure 4: Delphi consensus results graph for new AI-related checklist item 5.
Figure 5: Delphi consensus results graph for new AI-related checklist item 5a: Purpose and Scope of AI Use.
Figure 6: Delphi consensus results graph for new AI-related checklist item 5b: AI Tool(s) and Configuration.
Figure 7: Delphi consensus results graph for new AI-related checklist item 5c: Data Inputs and Safeguards.
Figure 8: Delphi consensus results graph for new AI-related checklist item 5d: Human Oversight and Verification.
Figure 9: Delphi consensus results graph for new AI-related checklist item 5e: Bias, Ethics and Regulatory Compliance.
Figure 10: Delphi consensus results graph for new AI-related checklist item 5f: Reproducibility and Transparency.

Table 2 below shows the new PROCESS 2025 checklist items (Section 5: Artificial Intelligence). Each item should be addressed in the case series if applicable. “AI” refers to any artificial intelligence or machine-learning system relevant to the case. These items are intended to ensure transparency and reproducibility when AI is part of a surgical case series.

Table 2. New PROCESS 2025 checklist items
Item (AI Domain) | Checklist item description

5. AI usage declaration
– Declaration of whether any AI was used in the research and manuscript development. If no, proceed to item 6; if yes, proceed to item 5a.

5a. Purpose and Scope of AI Use
– Precisely state why AI was employed (e.g. development of research questions, language drafting, statistical analysis/summarisation, image annotation).
– Was generative AI utilised and, if so, how?
– Clarify the stage(s) of the reporting workflow affected (planning, writing, revisions, figure creation).
– Confirm that the author(s) take responsibility for the integrity of the content affected/generated.

5b. AI Tool(s) and Configuration
– Name each system (vendor, model, major version/date).
– State the date it was used.
– Specify relevant parameters (e.g. prompt length, plug-ins, fine-tuning, temperature).
– Declare whether the tool operated locally on-premises or via a cloud API, and any integrations with other systems.

5c. Data Inputs and Safeguards
– Describe categories of data provided to the AI (patient text, de-identified images, literature abstracts).
– Confirm that all inputs were de-identified and compliant with GDPR/HIPAA.
– Note any institutional approvals or data-sharing agreements obtained.

5d. Human Oversight and Verification
– Identify the supervising author(s) who reviewed every AI output.
– Detail the process for fact-checking and clinical accuracy checks.
– State whether any AI-generated text/figures were edited or discarded.
– Acknowledge the limitations of AI and its use.

5e. Bias, Ethics and Regulatory Compliance
– Outline steps taken to detect and mitigate algorithmic bias (e.g. cross-checking against under-represented populations).
– Affirm adherence to relevant ethical frameworks.
– Disclose any conflicts of interest or financial ties to AI vendors.

5f. Reproducibility and Transparency
– Provide the exact prompts or code snippets (as supplementary material if lengthy).
– Supply version-controlled logs or model cards where possible.
– If applicable, state the repository, hyperlink or digital object identifier (DOI) where AI-generated artefacts can be accessed, enabling attempts at independent replication of the query/input.

The above items (5a–5f) now form an integral part of the PROCESS 2025 checklist. An author writing up a case series is expected to incorporate this information into the relevant sections of their manuscript. For example, item 5a would typically be covered in the Introduction of a report (where the setting and tools of care are described), whereas items 5b–5d might appear in the Methods or Results sections (detailing what AI was used and its performance), and items 5e–5f are likely to be addressed in the Discussion section (reflecting on biases and ethical considerations). By structuring the reporting in this way, readers of the case series will gain a clear understanding of what AI was used, why it was used, how it functioned, and what its limitations are in the context of the case. This level of detail is crucial for interpreting the case’s findings, especially as AI algorithms can significantly influence clinical outcomes.

Discussion

The PROCESS 2025 guideline represents a proactive evolution of surgical case series standards in response to the growing influence of AI in healthcare. Compared to the PROCESS 2023 update, which primarily refined existing sections, the defining feature of PROCESS 2025 is the introduction of an entirely new domain dedicated to artificial intelligence. This addition marks a significant broadening of the checklist’s scope – acknowledging that “case series” may now involve not only human clinicians and patients, but also AI tools as part of the diagnostic or therapeutic narrative. By explicitly addressing AI, the updated guidelines aim to enhance transparency and reproducibility in case series. This aligns with broader efforts in medical research to improve reporting of AI.8

Transparency in reporting AI is the overarching theme of the new domain. Just as PROCESS champions transparency in clinical reporting, we recognise that AI algorithms must not become “black boxes” in case descriptions. Items 5 and 5a ensure that authors explicitly declare the use of AI and its purpose, preventing scenarios where AI’s involvement might be obscured or assumed. This is analogous to disclosing a diagnostic test or a surgical device – readers deserve to know if AI was behind a key decision or outcome. Transparency is also reinforced by items 5b (tool identification and configuration) and 5c (data inputs and safeguards), which compel authors to provide enough technical detail for readers to grasp what the AI tool actually is. These items promote reproducibility: a future researcher or clinician reading the case series should be able to identify the same AI tool, understand its training context, and thereby judge whether the case’s insights are transferable or credible in other settings.

The emphasis on bias mitigation and ethical considerations (items 5e and 5f) addresses increasing concerns about AI in medicine. AI systems, especially those based on machine learning, can inadvertently carry biases from their training data. If not reported, such biases could lead to misinterpretation of a case – for example, an AI diagnostic tool might perform poorly on certain demographic groups, which would be highly relevant if the case patient belongs to that group. By asking authors to discuss AI biases and limitations, PROCESS 2025 aligns with the ethical principle of “do no harm” in publishing. It forces a moment of reflection: the case author must consider what AI might have missed or where it might be wrong. This practice can help mitigate over-reliance on AI and encourages authors to validate AI outputs with clinical judgment. In a broader sense, it contributes to the literature on AI by documenting real-world challenges and failures, not just successes, thereby preventing publication bias in favour of positive AI results.

Future directions for PROCESS and AI in surgical case series may include further refinements as the technology evolves. We expect that as more case series are published under the PROCESS 2025 guideline, a body of examples will accumulate, illustrating how authors have implemented these items. We will monitor the uptake of the AI domain – for instance, tracking whether authors encounter difficulties in obtaining certain information about proprietary AI tools. If so, this might spur collaborations between clinicians and AI developers to improve transparency (e.g., requiring companies to provide model details when their AI is used in published case series). Additionally, while our current items focus on AI in patient care, future updates might consider AI used in writing or reviewing case series. In fact, the academic community is actively discussing standards for disclosing AI assistance in manuscript preparation. In this PROCESS update, we touch on that aspect by encouraging disclosure (item 5 requires declaration of any AI use in manuscript development). It is plausible that a formal guideline for reporting the use of generative AI in scientific writing will emerge; until then, we have set a precedent by openly stating our use of an AI language model for editing this paper.

It is worth reflecting on the limitations of our guideline update process. First, our Delphi panel, while diverse, was limited to 49 invitees with 45 responders (92%). Important perspectives, such as patients or regulators, were not directly represented. Patients especially might have views on how they want AI usage reported in case series (perhaps desiring even more clarity on consent and privacy). In future guideline efforts, including patient representatives as well as other perspectives could be valuable. Second, the AI domain items are somewhat general and meant to apply across all types of AI. AI in surgery can range from simple diagnostic apps to complex autonomous robots; not every item will fit perfectly to every scenario. We attempted to strike a balance with broad wording, but there may be cases that require interpretation of how to apply an item. We will rely on the judgment of authors, reviewers, and editors to implement these guidelines sensibly on a case-by-case basis. Third, as with any consensus-based guideline, there is a degree of subjectivity in what was included and the language in which it is expressed. It is possible that some readers will feel an important AI-related item is missing. We welcome feedback from the surgical community, as the PROCESS guideline is meant to be iterative – future revisions (beyond 2025) can certainly expand or adjust the AI domain as needed.

One immediate challenge is dissemination and training. Introducing six new items means authors must be educated about them. We plan to disseminate the PROCESS 2025 checklist through the EQUATOR Network website, the Premier Science Journals, and presentations at surgical conferences. Additionally, we will encourage journals to require PROCESS 2025 adherence in their case series submissions, as endorsement by journals greatly drives usage. Experience from previous PROCESS iterations showed that when journal editors mandate the checklist and when authors see the benefit (in improved clarity of their reporting), compliance increases. We anticipate a similar positive impact: clearer reporting of case series involving AI, which in turn will make it easier for readers to learn from those cases or even reproduce aspects of them (for instance, using the same AI tool on a similar patient). Ultimately, better reported case series can feed into higher-level evidence; a well-documented case of AI successfully detecting a rare complication could spur larger studies or inspire others to utilise that AI tool.

During the preparation of this guideline manuscript, we made use of generative AI as a writing aid. Specifically, the tool was used in the later stages to assist with polishing language. No content generation (ideas or drafting of sections) was delegated to AI; it was employed similarly to a grammar/style assistant under close human oversight. We mention this to practice what we preach: transparency about AI usage. As journals and publishers, as well as COPE and WAME, increasingly require disclosure of AI assistance,13-15 we demonstrate that such disclosure is feasible and can be done without undermining the credibility of the work. The final content was rigorously verified by all authors to eliminate any potential AI-introduced errors (such as incorrect references or “hallucinated” facts). We found that using AI in this limited capacity did improve efficiency in editing, but human expertise remained essential for the substance and accuracy of the guideline. This experience underscores a broader point: AI can be a valuable tool in medical writing and research, but it must be applied responsibly and transparently.

Conclusion

In surgery, AI tools are increasingly used for diagnostics, decision support, and robotics.5 In response to the rapid expansion of artificial intelligence in healthcare, we have updated the PROCESS guideline through a structured Delphi consensus to produce PROCESS 2025, a comprehensive reporting guideline for surgical case series in the age of AI. The addition of the new AI-focused domain (items 5a–5f) fills a critical gap, ensuring that any use of AI in a case is transparently reported, with details of its implementation, validation, and ethical considerations. This update preserves the familiar structure of the PROCESS checklist while integrating modern considerations, enabling authors to produce case series that are both up to date and rigorous. By following PROCESS 2025, clinicians and researchers will improve the clarity and reliability of case series, facilitating better knowledge sharing and ultimately enhancing patient care. As surgical practice increasingly intersects with advanced technologies, PROCESS 2025 will help maintain the integrity and educational value of case series, ensuring they remain a cornerstone of the surgical literature in the years to come.

References
  1. Turing AM. Computing machinery and intelligence. Mind. 1950;59(236):433-460.
    https://doi.org/10.1093/mind/LIX.236.433
  2. Artificial Intelligence (AI) coined at Dartmouth. Dartmouth College [cited 2025 May 18]. Available from: https://home.dartmouth.edu/about/artificial-intelligence-ai-coined-dartmouth
  3. Artificial Intelligence (AI) market size to hit USD 3,680.47 bn by 2034. Precedence Research [cited 2025 May 18]. Available from: https://www.precedenceresearch.com/artificial-intelligence-market
  4. PricewaterhouseCoopers. PwC’s global Artificial Intelligence study: sizing the prize [cited 2025 May 18]. Available from: https://www.pwc.com/gx/en/issues/artificial-intelligence/publications/artificial-intelligence-study.html
  5. McCartney J. AI is poised to “revolutionize” surgery. American College of Surgeons [cited 2025 May 19]. Available from: https://www.facs.org/for-medical-professionals/news-publications/news-and-articles/bulletin/2023/june-2023-volume-108-issue-6/ai-is-poised-to-revolutionize-surgery/
  6. Agha RA, Fowler AJ, Rajmohan S, Barai I, Orgill DP; PROCESS Group. Preferred reporting of case series in surgery; the PROCESS guidelines. International Journal of Surgery. 2016;36(Pt A):319-323.
    https://doi.org/10.1016/j.ijsu.2016.11.038 
  7. Mathew G, Sohrabi C, Franchi T, Nicola M, Kerwan A, Agha R; PROCESS Group. Preferred Reporting Of Case Series in Surgery (PROCESS) 2023 guidelines. International Journal of Surgery. 2023;109(12):3760-3769.
    https://doi.org/10.1097/JS9.0000000000000940
  8. Liu X, Rivera SC, Moher D, Calvert MJ, Denniston AK. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. BMJ. 2020;370:m3164.
    https://doi.org/10.1136/bmj.m3164
  9. Agha RA, Borrelli MR, Farwana R, Koshy K, Fowler A, Orgill DP; PROCESS Group. The PROCESS 2018 statement: updating consensus Preferred Reporting of CasE Series in Surgery (PROCESS) guidelines. International Journal of Surgery. 2018;60:279-282.
    https://doi.org/10.1016/j.ijsu.2018.10.031
  10. Agha RA, Sohrabi C, Mathew G, Franchi T, Kerwan A, O’Neill N; PROCESS Group. The PROCESS 2020 guideline: updating consensus Preferred Reporting Of CasE Series in Surgery (PROCESS) guidelines. International Journal of Surgery. 2020;84:231-235.
  11. Agha RA, Borrelli MR, Farwana R, Kusu-Orkar T, Millip MC, Thavayogan R, Garner J, Darhouse N, Orgill DP. Impact of the PROCESS guideline on the reporting of surgical case series: a before and after study. International Journal of Surgery. 2017;45:92-97.
    https://doi.org/10.1016/j.ijsu.2017.07.079
  12. Agha R, Fowler A, Lee S, Gundogan B, Whitehurst B, Sagoo H, Jeong KJL, Altman D, Orgill D. A systematic review of the methodological and reporting quality of case series in surgery. British Journal of Surgery. 2016;103(10):1253-1258.
    https://doi.org/10.1002/bjs.10235
  13. Science journals set new authorship guidelines for AI-generated text. U.S. Department of Health and Human Services; 2023 [cited 2025 May 18]. Available from: https://factor.niehs.nih.gov/2023/3/feature/2-artificial-intelligence-ethics
  14. COPE Council. COPE position statement: authorship and AI tools. Committee on Publication Ethics; 2023 [cited 2025 May 19]. Available from: https://doi.org/10.24318/cCVRZBms
  15. Zielinski C, Winker MA, Aggarwal R, Ferris LE, Heinemann M, Lapeña JF, et al. Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. World Association of Medical Editors; 2023 May 31 [cited 2025 May 19]. Available from: https://wame.org/page3.php?id=106
    https://doi.org/10.25100/cm.v54i3.5868
  16. Nasa P, Jain R, Juneja D. Delphi methodology in healthcare research: How to decide its appropriateness. World J Methodol. 2021 Jul 20; 11(4):116-29.
    https://doi.org/10.5662/wjm.v11.i4.116

Appendix – PROCESS 2025 revised checklist
PROCESS 2025 Guideline Checklist

For each item, authors should record the page number on which it is reported.

Title
Item 1
- The phrase ‘case series’ is included
- The focus of the research study is mentioned (e.g. patient population, setting, diagnosis, intervention, outcome etc.)

Key Words
Item 2
- Include three to six keywords that identify what is covered in the case series (e.g. patient population, setting, diagnosis, intervention, outcome etc.)
- Include ‘case series’ as one of the keywords
- Include the surgical subspecialty the case series pertains to as a keyword

Abstract
Item 3a – Introduction; briefly describe:
- Background
- Scientific rationale for this study
- Overarching theme of the case series
- Aims and objectives

Item 3b – Methods; briefly describe:
- Sample size
- Timeframe of research
- Characteristics of study design (e.g. prospective/retrospective, single-/multi-centre, informal/formal, consecutive/non-consecutive, exposure-/outcome-based sampling, clinical/population-based etc.)
- Inclusion and exclusion criteria

Item 3c – Results; briefly describe:
- Outcomes of the intervention/management strategy
- Analysis – narrative or statistical (report any statistical testing, although mostly inappropriate in case series studies)

Item 3d – Conclusion; briefly describe:
- Key findings and take-home messages
- Impact on future clinical practice
- Direction of future research

Item 3e – Present a structured abstract:
- Informal case series – introduction, case presentations (brief description of each case) and discussion/conclusion
- Formal case series – introduction, methods, results and discussion/conclusion

Highlights
Item 4
- Convey the key findings of the research study in 3 to 5 bullet points

Artificial Intelligence (AI)
(Some journals may prefer this in the methods and/or acknowledgments section, and it should also be declared in the cover letter.)

Item 5 – Declaration of whether any AI was used in the research and manuscript development:
- If no, proceed to item 6
- If yes, proceed to item 5a

Item 5a – Purpose and scope of AI use:
- Precisely state why AI was employed (e.g. development of research questions, language drafting, statistical analysis/summarisation, image annotation etc.); was generative AI utilised and, if so, how?
- Clarify the stage(s) of the reporting workflow affected (planning, writing, revisions, figure creation)
- Confirm that the author(s) take responsibility for the integrity of the content affected/generated

Item 5b – AI tool(s) and configuration:
- Name each system (vendor, model, major version/date) and state the date it was used
- Specify relevant parameters (e.g. prompt length, plug-ins, fine-tuning, temperature)
- Declare whether the tool operated locally on-premises or via a cloud API, and any integrations with other systems

Item 5c – Data inputs and safeguards:
- Describe the categories of data provided to the AI (patient text, de-identified images, literature abstracts)
- Confirm that all inputs were de-identified and compliant with GDPR/HIPAA
- Note any institutional approvals or data-sharing agreements obtained

Item 5d – Human oversight and verification:
- Identify the supervising author(s) who reviewed every AI output
- Detail the process for fact-checking and clinical accuracy checks
- State whether any AI-generated text/figures were edited or discarded
- Acknowledge the limitations of AI and its use

Item 5e – Bias, ethics and regulatory compliance:
- Outline steps taken to detect and mitigate algorithmic bias (e.g. cross-checking against under-represented populations)
- Affirm adherence to relevant ethical frameworks
- Disclose any conflicts of interest or financial ties to AI vendors

Item 5f – Reproducibility and transparency:
- Provide the exact prompts or code snippets (as supplementary material if lengthy)
- Supply version-controlled logs or model cards where possible
- If applicable, state the repository, hyperlink or digital object identifier (DOI) where AI-generated artefacts can be accessed, enabling attempts at independent replication of the query/input

Introduction
Item 6 – comprehensively describe:
- Relevant background and scientific rationale for the case series, with reference to key scientific literature
- Overarching theme (e.g. common patient population, setting, diagnosis, intervention, outcome etc.)
- Aims and objectives
- At the end of the introduction, refer to the PROCESS 2025 publication by stating: ‘This case series has been reported in line with the PROCESS guidelines [include citation]’

Methods
Item 7a – Participants; comprehensively describe:
- Relevant participant characteristics (e.g. demographics, comorbidities, ASA score, severity of surgery, urgency of surgery, smoking status, tumour staging etc.) and, if relevant, exposure(s) of the participants (e.g. COVID-19)
- Subsequent inclusion and exclusion criteria with clear definitions
- Approach to selecting patients (e.g. consecutive/non-consecutive, exposure-/outcome-based, formal/informal etc.)
- Methods used to ensure de-identification of patient information

Item 7b – Recruitment; comprehensively describe:
- Sources of recruitment (e.g. physician referral, electronic health record etc.)
- Any monetary incentivisation of patients for recruitment and retention should be declared; clarify the nature of any incentives provided

Item 7c – Pre-intervention patient optimisation:
- Lifestyle (e.g. weight loss, nutritional support, exercise, smoking cessation etc.)
- Medication review (e.g. anticoagulation, oral hypoglycemics, insulin, oral contraceptive pill etc.)
- Pre-surgical stabilisation/preparation (e.g. treating hypothermia/-volemia/-tension, ICU care, nil by mouth, bowel preparation etc.)
- Other (e.g. psychological support, pre-operative education/counselling etc.)

Item 7d – Interventions; comprehensively describe:
- Type of intervention (e.g. pharmacological, surgical, physiotherapy, psychological etc.)
- Aim of intervention (preventative/therapeutic)
- Concurrent treatments (e.g. antibiotics, analgesia, antiemetics, venous thromboembolism prophylaxis etc.)

Item 7e – Intervention specifics; comprehensively describe:
- Rationale for the treatment offered
- Techniques involved in the administration of the intervention
- Time to intervention
- For pharmacological therapies, include details such as formulation, dosage, strength, route and duration
- For surgical interventions, include details on anaesthesia, patient positioning, preparation used, equipment needed, devices, sutures, surgical stage etc.
- Degree of novelty of surgical technique/device (e.g. ‘first in human’ or ‘first in this context’)
- Manufacturer and model of any medical devices used

Item 7f – Operator details; comprehensively describe:
- Relevant training, specialisation and operator experience (e.g. average number of the relevant procedures performed annually; independent or requiring direct/indirect supervision etc.)
- Learning curve for the technique
- Requirement for additional training
- Collaboration with other specialities (e.g. hybrid cardiac surgery)

Item 7g – Quality control; comprehensively describe:
- Measures taken to reduce inter- or intra-operator/operation variation, ensure quality and maintain consistency between cases (e.g. independent observers, lymph node counts, standard surgical technique etc.)
- Any specific disparities between cases

Item 7h – Post-operative care and follow-up; comprehensively describe:
- Post-operative care (e.g. patient education, post-operative medications, early mobilisation, targeted physiotherapy, early enteral nutrition, early removal of catheters/drains, psychological therapy etc.)
- Follow-up timeframes (e.g. first follow-up post-discharge, follow-up duration at the time of submission etc.) and frequency
- Follow-up setting (e.g. home via phone/video consultation, primary care, secondary care etc.)
- Follow-up method (e.g. history, clinical examination, blood tests, imaging etc.)
- Follow-up personnel (e.g. operating surgeon)
- Any specific long-term surveillance requirements (e.g. imaging surveillance of endovascular aneurysm repair, clinical/ultrasound examination of regional lymph nodes for skin cancer etc.)
- State if any participants were lost to follow-up and why

Item 7i – Analysis:
- Narrative or statistical (report any statistical testing, although mostly inappropriate in case series studies)

Results
Item 8a – Participants; comprehensively describe:
- Number of patients involved
- Patient characteristics (e.g. demographics, comorbidities, ASA score, severity of surgery, urgency of surgery, smoking status, tumour staging etc.) and, if relevant, exposure(s) of the participants (e.g. COVID-19)
- Include a table showing baseline patient characteristics

Item 8b – Deviation from the initial management plan; comprehensively describe:
- Any changes to the planned intervention, with rationale
- If appropriate, include a suitable schematic diagram

Item 8c – Outcomes and follow-up; comprehensively describe:
- Expected versus attained clinician-assessed outcome, with reference to the scientific literature used to inform expected outcomes (e.g. core outcome set)
- If appropriate, include patient-reported outcomes (e.g. quality of life)
- Use of validated outcome measures
- Time of outcome occurrence
- Percentage of patients lost to follow-up, with rationale

Item 8d – Intervention adherence and compliance; comprehensively describe:
- Assessment of the patient’s adherence to, and tolerability of, the intervention and post-operative instructions (e.g. avoiding heavy lifting/strenuous activity, tolerance of chemotherapy/pharmacological agents etc.)
- Impact on the long-term applicability of the intervention in clinical practice

Item 8e – Complications and adverse events; comprehensively describe:
- Precautionary measures taken to prevent complications (e.g. antibiotic/venous thromboembolism prophylaxis)
- Complications and adverse events (e.g. blood loss, wound infection, deep vein thrombosis, pulmonary embolism etc.), categorised in accordance with the Clavien-Dindo classification
- Timing of adverse events
- Mitigation for adverse events (e.g. blood transfusion, wound care, re-exploration/revision surgery etc.)
- If relevant, whether complications or adverse events were discussed locally (e.g. at morbidity and mortality meetings)
- If appropriate, whether complications or adverse events were reported to the relevant national agency or pharmaceutical company
- Specify time to discharge following completion of the intervention and whether this was within the expected timeframe (if not, why not)
- Where applicable, specify the 30-day post-operative and long-term morbidity/mortality
- State if there were no complications or adverse events

Discussion
Item 9a – Key results; comprehensively describe:
- Key results
- Include a table showing key results

Item 9b – Scientific context and implications; comprehensively describe:
- Relevant literature and, if appropriate, similar published studies
- Implications for clinical practice and guidelines (e.g. NICE)
- Comparison to the current gold standard of care
- Relevant hypothesis generation

Item 9c – Strengths; comprehensively describe:
- Strengths of the study
- Any multidisciplinary or cross-speciality relevance

Item 9d – Weaknesses and limitations; comprehensively describe:
- Weaknesses and limitations of the study, with their potential impact on the results and their interpretation
- Deviations from protocol, with reasons
- For novel techniques or devices, outline any contraindications/alternatives and potential risks/complications if applied to a larger population

Item 9e – Directions for future research; comprehensively describe:
- Impact on future research and clinical practice
- Questions that have arisen as a result of the study
- Alternative study design(s) best suited to address these questions

Item 9f – Cost; comprehensively describe:
- Economic implication(s)
- Justification of the cost if the intervention is more expensive than the current gold standard of care
- Any cheaper alternatives

Conclusions
Item 10a – Key conclusions:
- Outline the key conclusions from this study

Item 10b – Rationale:
- Explain the rationale behind those conclusions

Item 10c – Future work; briefly describe:
- Any questions that have arisen from the study
- Any differences in approach to patient diagnosis or management which the authors might adopt in future similar studies

Patient and/or Carer Perspective
Item 11
- Where appropriate, the patient(s)/carer(s) should be given the opportunity to share their perspective on the intervention(s) they received (e.g. sharing quotes from a consented, anonymised interview or questionnaire)

Informed Consent
Item 12
- The authors must provide evidence of consent, where applicable, and if requested by the journal
- State the method of consent at the end of the article (e.g. verbal or written)
- If consent was not provided by the patients, explain why (e.g. death of the patient, with consent provided by the next of kin); if the patients or family members were untraceable, document the tracing efforts undertaken

Additional Information
Item 13a
- State any conflicts of interest

Item 13b
- State any sources of funding (e.g. grant details) and the role of the funder

Item 13c – Other relevant disclosures:
- State any author contributions and acknowledgments
- If appropriate, give details of institutional review board and ethics committee approval
- Disclose whether the case has been presented at a conference or regional meeting

Clinical Images and Videos
Item 14
- Where relevant and available, include clinical images to help demonstrate the cases pre-, peri- and post-intervention (e.g. radiological, histopathological, patient photographs, intraoperative images etc.)
- Where relevant and available, include a link (e.g. Google Drive, YouTube etc.) to a narrated operative video to highlight specific techniques or operative findings
- Ensure all media files are appropriately captioned and indicate points of interest to allow for easy interpretation

Referencing the Checklist
Item 15
- Include reference to the PROCESS 2025 publication by stating: ‘This case series has been reported in line with the PROCESS Guideline’ at the end of the methods section and include the citation in the references section
