Riaz A. Agha1, Ginimol Mathew2, Rasha Rashid3, Ahmed Kerwan4, Ahmed Al-Jabir5, Catrin Sohrabi2, Thomas Franchi6, Maria Nicola7, Maliha Agha1; TITAN Group

1. Premier Science, London, UK
2. Royal Free London NHS Foundation Trust, London, UK
3. Imperial College School of Medicine, London, UK
4. Harvard T.H. Chan School of Public Health, Boston, USA
5. University College London Hospital, London, UK
6. Wellington Regional Hospital, Te Whatu Ora Capital Coast and Hutt Valley, Wellington, New Zealand
7. Imperial College London, London, UK
Correspondence to: Riaz Agha, Premier Science, riaz@premierscience.com


DOI: https://doi.org/10.70389/PJS.100082
TITAN Group Contributors
- Achilleas Thoma, McMaster University, Canada
- Alessandro Coppola, Sapienza University of Rome, Italy
- Andrew J Beamish, Swansea Bay University Health Board, Swansea University, UK
- Ashraf Noureldin, Almana Hospital, Khobar, Saudi Arabia
- Ashwini Rao, Manipal Academy of Higher Education, Manipal, India
- Baskaran Vasudevan, MIOT Hospital, Chennai, India
- Ben Challacombe, Guy’s and St Thomas’ Hospitals, UK
- C S Pramesh, Tata Memorial Hospital, Homi Bhabha National Institute and National Cancer Grid, India
- Duilio Pagano, IRCCS-ISMETT – UPMC Italy, Italy
- Frederick Heaton Millham, Harvard Medical School, USA
- Gaurav Roy, Cactus Communications Pvt Ltd, India
- Huseyin Kadioglu, Saglik Bilimleri Universitesi, Turkiye
- Iain James Nixon, NHS Lothian, UK
- Indraneil Mukherjee, Staten Island University Hospital Northwell Health, USA
- James Anthony McCaul, Queen Elizabeth University Hospital Glasgow and Institute for Cancer Therapeutics, University of Bradford, UK
- James Ngu, Changi General Hospital, Singapore
- Joerg Albrecht, Cook County Health, USA
- Juan Gomez Rivas, Hospital Clinico San Carlos, Madrid, Spain
- K Veena L Karanth, District Hospital Udupi, India
- Kandiah Raveendran, Fatimah Hospital, Malaysia
- M Hammad Ather, Aga Khan University, Pakistan
- Mangesh A. Thorat, Centre for Cancer Screening, Prevention and Early Diagnosis, Wolfson Institute of Population Health, Queen Mary University of London, London, UK; Breast Services, Homerton University Hospital, London, UK
- Mohammad Bashashati, Dell Medical School, UT Austin, USA
- Mushtaq Chalkoo, Government Medical College, Srinagar, Kashmir, India
- Oliver J. Muensterer, Dr. von Hauner Children’s Hospital, LMU Medical Center, Munich, Germany
- Patrick Bradley, Nottingham University Hospital, UK
- Prabudh Goel, All India Institute of Medical Sciences, New Delhi, India
- Prathamesh Pai, P D Hinduja Hospital, Khar, India
- Priya Shinde, Homerton University Hospital, UK
- Priya Ranganathan, Tata Memorial Centre, India
- Raafat Yahia Afifi Mohamed, Cairo University, Egypt
- Richard David Rosin, University of the West Indies Barbados, Barbados
- Roberto Cammarata, Fondazione Policlinico Campus Biomedico, Italy
- Roberto Coppola, Campus Bio Medico University, Italy
- Rolf Wynn, UiT The Arctic University of Norway, Norway
- Salim Surani, Texas A&M University, USA
- Salvatore Giordano, University of Turku, Finland
- Samuele Massarut, Centro di Riferimento Oncologico Aviano IRCCS, Italy
- Shahzad G. Raja, Harefield Hospital, UK
- Somprakas Basu, All India Institute of Medical Sciences Rishikesh, India
- Syed Ather Enam, Aga Khan University, Pakistan
- Teo Nan Zun, Changi General Hospital, Singapore
- Todd Manning, Bendigo Health and Monash University, Australia
- Veeru Kasivisvanathan, University College London, UK
- Vincenzo La Vaccara, Fondazione Policlinico Campus Bio-Medico di Roma, Italy
- Zubing Mei, Shuguang Hospital, Shanghai University of Traditional Chinese Medicine, China

Additional information
- Ethical approval: N/A
- Consent: N/A
- Funding: None
- Conflicts of interest: The authors have no financial, consultative, institutional, or other relationships that might lead to bias or a conflict of interest.
- Author contribution: R.A.A.: conceptualisation and study design, supervision of the Delphi process, data interpretation, manuscript drafting and critical revision, and approval of the final manuscript. A.K., A.A.-J., C.S., T.F., G.M., M.N., R.R., M.A. and R.A.A.: participation in study design, generation of Delphi survey materials, data collection and analysis, contribution to drafting of new checklist items, manuscript writing and revision, and approval of the final manuscript.
- Guarantor: Riaz A Agha
- Provenance and peer-review: Unsolicited and externally peer-reviewed
- Data availability statement: The Delphi survey data that informed this guideline (individual expert ratings and comments) are confidential and not publicly available, in accordance with the consensus process protocol. All relevant aggregated results are reported in this article.
Keywords: AI transparency guidelines, AI use in research, Delphi consensus exercise, SCARE/PROCESS/STROCSS updates, scholarly publishing.
Peer-review
Received: 22 May 2025
Revised: 23 May 2025
Accepted: 23 May 2025
Published: 23 May 2025
Abstract
The use of Artificial Intelligence (AI) in research and in the literature is increasing, and the need for transparency is clear. Here we present a guideline for transparently reporting the use of AI in any manuscript. The guideline items cover: declaration; purpose and scope; AI tools and configuration; data inputs and safeguards; human oversight and verification; bias, ethics and regulatory compliance; and reproducibility and transparency. These items were confirmed in a recent Delphi consensus exercise with high participation and agreement. The guideline will evolve over time as technology, systems and behaviour evolve.
Introduction
Artificial intelligence (AI) is increasingly being used in research and in the development of the scholarly literature.1-3 With this comes the need for transparency in the reporting of its use. It is now incumbent on editors, journals, publishers and the wider scholarly publishing community to ensure that authors declare this use in a transparent and comprehensive way. The recent updates to the SCARE, PROCESS and STROCSS guidelines have moved us significantly in this direction and, of course, as AI and its use evolve, so will the guidelines.4-6 These guidelines were updated through a Delphi consensus exercise, and the papers went through peer review, AI review, editorial review and subsequent refinement.
Here we provide a short guideline that allows for the declaration of AI use in other article types, such as review articles, other experimental study types, editorials, letters and so on, to ensure transparency in their reporting too. The guideline items cover: declaration; purpose and scope; AI tools and configuration; data inputs and safeguards; human oversight and verification; bias, ethics and regulatory compliance; and reproducibility and transparency.
Methods
The guideline development group responsible for the recent SCARE, PROCESS and STROCSS guideline updates reconvened to develop this general-purpose AI use guideline. Here we use the same items that were approved through the SCARE, PROCESS and STROCSS guideline development process (Table 1). Given that these items have already been through a Delphi consensus exercise among 49 participants, with a response rate of over 90% and strong agreement, we felt it unnecessary to repeat the exercise.
Table 1: The TITAN Guideline items (TITAN Guideline Checklist 2025)

| Topic | Item | Description | Page number |
| --- | --- | --- | --- |
| Artificial Intelligence (AI) (some journals may prefer this in the methods and/or acknowledgments section, and it should also be declared in the cover letter) | 1 | Declaration of whether any AI was used in the research and manuscript development. State no, if that is the case. If yes, proceed to item 1a. | |
| | 1a | Purpose and Scope of AI Use – Precisely state why AI was employed (e.g. development of research questions, language drafting, statistical analysis/summarisation, image annotation, etc). – State whether generative AI was utilised and, if so, how. – Clarify the stage(s) of the reporting workflow affected (planning, writing, revisions, figure creation). – Confirm that the author(s) take responsibility for the integrity of the content affected/generated. | |
| | 1b | AI Tool(s) and Configuration – Name each system (vendor, model, major version/date). – State the date(s) of use. – Specify relevant parameters (e.g. prompt length, plug-ins, fine-tuning, temperature). – Declare whether the tool operated locally on-premises or via a cloud API, and any integrations with other systems. | |
| | 1c | Data Inputs and Safeguards – Describe the categories of data provided to the AI (patient text, de-identified images, literature abstracts). – Confirm that all inputs were de-identified and compliant with GDPR/HIPAA. – Note any institutional approvals or data-sharing agreements obtained. | |
| | 1d | Human Oversight and Verification – Identify the supervising author(s) who reviewed every AI output. – Detail the process for fact-checking and clinical accuracy checks. – State whether any AI-generated text/figures were edited or discarded. – Acknowledge the limitations of AI and its use. | |
| | 1e | Bias, Ethics and Regulatory Compliance – Outline steps taken to detect and mitigate algorithmic bias (e.g. cross-checking against under-represented populations). – Affirm adherence to relevant ethical frameworks. – Disclose any conflicts of interest or financial ties to AI vendors. | |
| | 1f | Reproducibility and Transparency – Provide the exact prompts or code snippets (as supplementary material if lengthy). – Supply version-controlled logs or model cards where possible. – If applicable, state the repository, hyperlink or digital object identifier (DOI) where AI-generated artefacts can be accessed, enabling attempts at independent replication of the query/input. | |
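By way of illustration, items 1b and 1f can often be met with a simple, version-controlled log kept alongside the manuscript sources and cited or deposited as supplementary material. The Python sketch below shows one hypothetical way of recording such a log; the field names, vendor, model and file paths are assumptions made for this example only and are not prescribed by the guideline.

```python
# Illustrative sketch only: a machine-readable AI-use log that an author could keep
# under version control and point to when completing TITAN items 1b and 1f.
# Every field name, vendor, model and file path below is a hypothetical example,
# not a schema or tool endorsed by the TITAN guideline.
import json
from datetime import datetime, timezone

ai_use_log = {
    "declaration": "yes",                                    # item 1
    "purpose_and_scope": [                                   # item 1a
        "language drafting of the discussion section",
        "summarisation of included abstracts",
    ],
    "tools": [                                               # item 1b
        {
            "vendor": "ExampleVendor",                       # hypothetical vendor
            "model": "example-model",                        # hypothetical model name
            "version": "2025-01-15",
            "date_used": "2025-02-03",
            "parameters": {"temperature": 0.2, "plugins": []},
            "deployment": "cloud API",
        }
    ],
    "data_inputs": ["de-identified literature abstracts"],   # item 1c
    "human_oversight": "All outputs reviewed and edited by the supervising author",  # item 1d
    "prompts_file": "supplementary/prompts.txt",             # item 1f: exact prompts kept separately
    "logged_at": datetime.now(timezone.utc).isoformat(),
}

# Write the log next to the manuscript sources so it can be version-controlled
# and cited (or deposited with a DOI) as described in item 1f.
with open("ai_use_log.json", "w", encoding="utf-8") as fh:
    json.dump(ai_use_log, fh, indent=2)
```

A plain-text or tabular declaration serves the same purpose; the point is that the tool configuration, inputs and prompts are recorded in a form that readers can inspect and attempt to reproduce.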
Conclusion
The authors commend these items to the scholarly community to aid with transparency in the reporting of AI use (TITAN). We will monitor the development of AI use in research and the scholarly literature to ensure these guidelines remain up to date.
References
- Science journals set new authorship guidelines for AI-generated text [Internet]. U.S. Department of Health and Human Services; [cited 2025 May 18]. Available from: https://factor.niehs.nih.gov/2023/3/feature/2-artificial-intelligence-ethics
- COPE Council. COPE position - Authorship and AI - English. Committee on Publication Ethics; 2023 [cited 2025 May 19]. Available from: https://doi.org/10.24318/cCVRZBms
- Zielinski C, Winker MA, Aggarwal R, Ferris LE, Heinemann M, Lapeña JF, et al. Chatbots, generative AI, and scholarly manuscripts: WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. World Association of Medical Editors; 2023 May 31 [cited 2025 May 19]. Available from: https://wame.org/page3.php?id=106. https://doi.org/10.25100/cm.v54i3.5868
- Kerwan A, Al-Jabir A, Mathew G, Sohrabi C, Rashid R, Franchi T, Nicola M, Agha M, Agha RA. Revised Surgical CAse REport (SCARE) guideline: An update for the age of Artificial Intelligence. Premier Journal of Science 2025;10:100079.
- Agha RA, Mathew G, Rashid R, Kerwan A, Al-Jabir A, Sohrabi C, Franchi T, Nicola M, Agha M. Revised Preferred Reporting of Case Series in Surgery (PROCESS) Guideline: An update for the age of Artificial Intelligence. Premier Journal of Science 2025;10:100080.
- Agha RA, Mathew G, Rashid R, Kerwan A, Al-Jabir A, Sohrabi C, Franchi T, Nicola M, Agha M. Revised Strengthening the reporting of cohort, cross-sectional and case-control studies in surgery (STROCSS) Guideline: An update for the age of Artificial Intelligence. Premier Journal of Science 2025;10:100081.