Optimizing Dissemination to Target Users: The CHART Approach

Aaron Yu1, Daniel Fry2, Amy Boyle3, Gordon Guyatt4, Jeremy Ng4,5,6 and Bright Huo7
1. Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada
2. Department of Molecular Genetics, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
3. Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Ontario, Canada
4. Department of Health Research Methods, Evidence, and Impact, Faculty of Health Sciences, McMaster University, Hamilton, Canada
5. Institute of General Practice and Interprofessional Care, University Hospital Tübingen, Tübingen, Germany
6. Robert Bosch Center for Integrative Medicine and Health, Bosch Health Campus, Stuttgart, Germany
7. School of Public Health, Faculty of Health, University of Technology Sydney, Sydney, Australia
Correspondence to: Aaron Yu, fryd1@mcmaster.ca

Premier Journal of Science

Additional information

  • Ethical approval: N/A
  • Consent: N/A
  • Funding: No industry funding
  • Conflicts of interest: Bright Huo, Gordon Guyatt, and Jeremy Ng were collaborators on publishing the CHART guideline. The authors have no other conflicts to declare.
  • Author contribution: Aaron Yu, Daniel Fry, Amy Boyle, Gordon Guyatt, Jeremy Ng and Bright Huo – Conceptualization, Writing – original draft, review and editing
  • Guarantor: Aaron Yu
  • Provenance and peer-review: Unsolicited and externally peer-reviewed
  • Data availability statement: N/A

Keywords: Author-targeted outreach, Chatbot Assessment Reporting Tool (CHART), Generative AI health advice studies, PubMed metadata extraction, Reporting guideline dissemination.

Peer Review
Received: 9 November 2025
Last revised: 12 December 2025
Accepted: 14 December 2025
Version accepted: 2
Published: 30 January 2026

Reporting guidelines are essential tools for improving the reporting, and thus facilitating the assessment, of health research.1–3 Through formats such as flowcharts, plain text, or checklists, they show researchers how to clearly express what was done and what was found, so that a study’s contents can meaningfully contribute to research.3–5 Reporting guidelines help researchers ensure that all the details readers need to evaluate the methodology of their work are included.3,4 They also ensure that readers and reviewers can easily understand the procedural details of a study, facilitating interpretation of its findings.6,7 The development of reporting guidelines is a strenuous and complex process requiring perspectives from multiple stakeholders.5 Developers first conduct a comprehensive scoping review, followed by Delphi consensus and subsequent panel consensus meetings.5 For these efforts to be justified and to translate into improved reporting standards, the widespread dissemination and adoption of new reporting guidelines is paramount.5 Because current reporting guidelines suffer from inadequate uptake, we detail here our methodology for increasing awareness of our published guideline through retrospective emailing.

Despite the public accessibility of reporting guidelines and a shared duty to patients and fellow researchers for transparency and trust, even well-accepted reporting guidelines suffer from inadequate uptake. A survey of surgical journals in 2012 reported adherence to the Consolidated Standards of Reporting Trials (CONSORT) and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) of 30% and 10%, respectively.8 Similarly, a 2020 analysis of 200 randomly selected health research articles revealed that only 39% of articles properly adhered to the relevant reporting guideline.9 In a 2024 systematic review evaluating use of and adherence to PRISMA, only 30.18% of studies reported using PRISMA, with an average adherence of 43%.10 A 2024 retrospective evaluation of heart failure randomized controlled trials from 2016 to 2020 found adherence of 69.7% based on checklist items.11 Overall, these patterns of inadequate adherence to reporting guidelines reveal a systemic gap in healthcare research that undermines the values of trust and transparency between researchers, patients, and colleagues. Limited user awareness of applicable reporting guidelines may explain this poor adherence. It is therefore crucial to improve awareness of reporting guidelines to increase adherence and, in turn, the transparency of published studies.

Reporting guideline authors may make efforts to facilitate dissemination of their work among target users. We developed the Chatbot Assessment Reporting Tool (CHART), a reporting guideline providing guidance for chatbot health advice (CHA) studies.12 CHART consists of an abstract checklist, a full-text checklist, and a methodological diagram detailing characteristics of the generative AI model used in the study.12 CHART is unique as it is the first reporting guideline intended for studies in which generative AI models summarize clinical evidence or provide health advice.13 As more studies evaluate the performance of generative AI in providing health advice and supporting clinical decision-making, CHART guides the reporting of CHA study methods.12 To improve awareness among target users, CHART was published in several journals on August 1, 2025.12,14–18 However, co-publication alone may not adequately reach the target audience.8,9

To complement our dissemination through co-publication, we directly identified potential target users of the reporting guideline by contacting authors of published CHA studies. We searched PubMed for relevant studies on June 29, 2025, using a previously described search strategy and inclusion and exclusion criteria for CHA studies (Supplemental Digital Content (SDC) Text 1).19 We entered the compiled list of 4,194 article PubMed identifiers (PMIDs) into RedEye, a pipeline designed for the extraction and cleaning of metadata from the PubMed database.19 Within the pipeline, an R script extracted each article’s public author email address, date of publication, institutional affiliation, and digital object identifier (DOI) into a comma-separated values (CSV) file. The R script then called a separate Python script within the RedEye pipeline, which identified the most recent metadata for each email address and cleaned the CSV file by removing records without a corresponding email address (n = 3) and duplicate email addresses (n = 14), retaining the most recent publication for each address. Finally, a Python script flagged improperly formatted email addresses (e.g., an invalid domain name or a nonexistent address; n = 1); these were corrected manually unless the corrected address was already present, in which case the incorrect entry was removed. The final cleaned file contained a list of 1,652 target users of CHART.
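The cleaning steps described above (dropping records without an email address, deduplicating addresses by most recent publication date, and flagging malformed addresses for manual review) can be sketched as follows. This is an illustrative reimplementation, not the RedEye code itself; the record field names and sample data are assumptions.

```python
import re
from datetime import date

# Hypothetical records as extracted from PubMed metadata (field names assumed).
records = [
    {"email": "a@uni.edu", "pub_date": date(2024, 3, 1), "doi": "10.1/x1"},
    {"email": "a@uni.edu", "pub_date": date(2025, 1, 15), "doi": "10.1/x2"},
    {"email": "", "pub_date": date(2023, 6, 9), "doi": "10.1/x3"},
    {"email": "bad@@domain", "pub_date": date(2024, 11, 2), "doi": "10.1/x4"},
]

# Loose structural check only; real validation (e.g. MX lookup) is out of scope.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def clean(records):
    # 1. Drop records with no email address.
    with_email = [r for r in records if r["email"]]
    # 2. Keep only the most recent record per email address.
    latest = {}
    for r in with_email:
        prev = latest.get(r["email"])
        if prev is None or r["pub_date"] > prev["pub_date"]:
            latest[r["email"]] = r
    # 3. Flag malformed addresses for manual review rather than silently dropping them.
    valid, flagged = [], []
    for r in latest.values():
        (valid if EMAIL_RE.match(r["email"]) else flagged).append(r)
    return valid, flagged

valid, flagged = clean(records)
print(len(valid), len(flagged))  # -> 1 1
```

Separating "flag" from "drop" mirrors the manual-correction step the pipeline describes: malformed addresses are surfaced for a human decision instead of being discarded automatically.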

We grouped researchers by time zone based on institutional affiliation and contacted each via a single email on August 21, 2025. We sent links to the full text of CHART, details on how to implement the CHART checklist and flow diagram, and an invitation to provide feedback on the usability of the reporting guideline; the full content of the email is available in SDC Text 2. This outreach was intended to increase awareness of CHART among target users and to gather feedback for potential future improvements to the tool. Ethics approval was not required: all email addresses were publicly accessible and listed for corresponding authors, no data were collected, and recipients were not obligated to participate. To date, feedback has primarily consisted of questions regarding which study content the tool is most applicable to and how to incorporate it.
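The time-zone batching described above could be sketched as below; the affiliation-city-to-offset lookup is a toy mapping introduced purely for illustration (a real pipeline would need a proper gazetteer or time-zone database).

```python
from collections import defaultdict

# Toy lookup table (assumption); maps an affiliation city to a UTC offset label.
TZ_BY_CITY = {
    "Toronto": "UTC-5",
    "Hamilton": "UTC-5",
    "Tübingen": "UTC+1",
    "Sydney": "UTC+10",
}

def batch_by_timezone(contacts):
    """Group contact emails into send batches by the time zone
    inferred from each contact's affiliation city."""
    batches = defaultdict(list)
    for email, city in contacts:
        batches[TZ_BY_CITY.get(city, "unknown")].append(email)
    return dict(batches)

contacts = [
    ("a@x.ca", "Toronto"),
    ("b@y.de", "Tübingen"),
    ("c@z.au", "Sydney"),
    ("d@w.ca", "Hamilton"),
]
print(batch_by_timezone(contacts))
# -> {'UTC-5': ['a@x.ca', 'd@w.ca'], 'UTC+1': ['b@y.de'], 'UTC+10': ['c@z.au']}
```

Batching by time zone lets each group be emailed during its local working hours, which is presumably the motivation for the grouping step.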

The described methodology facilitates the direct identification of target users of our reporting guideline. By directly contacting the researchers most likely to benefit from CHART, we hope to increase reporting guideline uptake and collect feedback on the guideline's usability, which will inform future updates. By answering questions regarding the use of CHART, we also increase the likelihood of proper adherence to it. In combination with a co-publication strategy, it is our hope that optimizing exposure by directly reaching target users will improve the uptake of the CHART reporting guideline. With increased adherence to rigorous reporting frameworks, readers will be better able to interpret the findings of CHA studies as we move toward testing generative AI applications in more clinical settings.

References
  1. Plint AC, Moher D, Morrison A, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Medical Journal of Australia. 2006;185(5):263–267. https://doi.org/10.5694/j.1326-5377.2006.tb00557.x
  2. Turner L, Shamseer L, Altman DG, et al. Consolidated standards of reporting trials (CONSORT) and the completeness of reporting of randomised controlled trials (RCTs) published in medical journals. Cochrane Database of Systematic Reviews. 2012;2013(1). https://doi.org/10.1002/14651858.MR000030.pub2
  3. Altman DG, Simera I. A history of the evolution of guidelines for reporting medical research: the long road to the EQUATOR Network. J R Soc Med. 2016;109(2):67–77. https://doi.org/10.1177/0141076815625599
  4. Simera I, Moher D, Hirst A, Hoey J, Schulz KF, Altman DG. Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network. BMC Med. 2010;8(1):24. https://doi.org/10.1186/1741-7015-8-24
  5. Moher D, Schulz KF, Simera I, Altman DG. Guidance for Developers of Health Research Reporting Guidelines. PLoS Med. 2010;7(2):e1000217. https://doi.org/10.1371/journal.pmed.1000217
  6. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care. 2007;19(6):349–357. https://doi.org/10.1093/intqhc/mzm042
  7. Page MJ, Moher D, Bossuyt PM, et al. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ. 2021;372:n160. https://doi.org/10.1136/bmj.n160
  8. Shantikumar S, Wigley J, Hameed W, Handa A. A survey of instructions to authors in surgical journals on reporting by CONSORT and PRISMA. The Annals of The Royal College of Surgeons of England. 2012;94(7):468–471. https://doi.org/10.1308/003588412X13373405386619
  9. Caulley L, Catalá-López F, Whelan J, et al. Reporting guidelines of health research studies are frequently used inappropriately. J Clin Epidemiol. 2020;122:87–94. https://doi.org/10.1016/j.jclinepi.2020.03.006
  10. Ivaldi D, Burgos M, Oltra G, Liquitay CE, Garegnani L. Adherence to PRISMA 2020 statement assessed through the expanded checklist in systematic reviews of interventions: A meta-epidemiological study. Cochrane Evidence Synthesis and Methods. 2024;2(5):e12074. https://doi.org/10.1002/CESM.12074
  11. Jalloh MB, Bot VA, Borjaille CZ, et al. Reporting quality of heart failure randomized controlled trials 2000-2020: Temporal trends in adherence to CONSORT criteria. Eur J Heart Fail. 2024;26(6):1369–1380. https://doi.org/10.1002/EJHF.3229
  12. Reporting guidelines for chatbot health advice studies: explanation and elaboration for the Chatbot Assessment Reporting Tool (CHART). BMJ. 2025;390:e083305. https://doi.org/10.1136/bmj-2024-083305
  13. Huo B, Collins G, Chartash D, et al. Reporting guideline for chatbot health advice studies: The CHART statement. Artif Intell Med. 2025;168:103222. https://doi.org/10.1016/j.artmed.2025.103222
  14. Huo B. Reporting Guideline for Chatbot Health Advice Studies. The Annals of Family Medicine. Published online August 1, 2025:250386. https://doi.org/10.1370/afm.250386
  15. Huo B, Collins GS, Chartash D, et al. Reporting Guideline for Chatbot Health Advice Studies. JAMA Netw Open. 2025;8(8):e2530220. https://doi.org/10.1001/jamanetworkopen.2025.30220
  16. Huo B, Collins G, Chartash D, et al. Reporting guideline for Chatbot Health Advice studies: the CHART statement. BMC Med. 2025;23(1):447. https://doi.org/10.1186/s12916-025-04274-w
  17. Huo B, Collins G, Chartash D, et al. Reporting guideline for chatbot health advice studies: the Chatbot Assessment Reporting Tool (CHART) statement. British Journal of Surgery. 2025;112(8). https://doi.org/10.1093/bjs/znaf142
  18. Huo B, Boyle A, Marfo N, et al. Large Language Models for Chatbot Health Advice Studies: A Systematic Review. JAMA Netw Open. 2025;8(2):e2457879. https://doi.org/10.1001/jamanetworkopen.2024.57879
  19. Fry D, Al-Khafaji W. RedEye Pipeline [Software]. (V1.0). Toronto: Daniel Fry (2025). [07/08/25] Retrieved from https://github.com/Inebriateduck/RedEye_Pipeline. https://doi.org/10.5281/zenodo.16996504

Supplemental Digital Content

SDC Text 1: Full search syntax

Search: (((generative ai[Text Word]) OR (generative ai[MeSH Terms]) OR (generative artificial intelligence[Text Word]) OR (generative artificial intelligence[MeSH Terms])OR (AI-based chatbot[Text Word]) OR (AI-based chatbot[MeSH Terms]) OR (chatgpt[Text Word]) OR (chatgpt[MeSH Terms]) OR (large language model[Text Word]) OR (large language model[MeSH Terms]) OR (natural language processing[Text Word])) OR (natural language processing[MeSH Terms]) OR (LLM[Text Word]) OR (LLM[MeSH Terms]) OR (NLP[Text Word]) OR (NLP[MeSH Terms])) AND ((chatbot[Text Word]) OR (chatbot[MeSH Terms]))) OR ((chatgpt[Text Word]) OR (chatgpt[MeSH Terms]) OR (bing chat[Text Word]) OR (bing chat[MeSH Terms]) OR (google bard[Text Word]) OR (google bard[MeSH Terms]) OR(gpt-4[Text Word]) OR (gpt-4[MeSH Terms]) OR ([Text Word]) OR ([MeSH Terms])) AND ((clinical[text word] OR expert[text word] OR patient[text word] OR surgical[text word] OR medical[text word] OR health[text word] OR screening[text word] OR “health prevention”[text word] OR diagnos*[text word] OR “differential diagnosis”[text word] OR treatment[text word] OR management[text word]) AND (advice*[text word] OR “decision making”[text word] OR knowledge*[text word] OR questions*[text word] OR recommendations*[text word] OR “decision support”[text word] OR assessment*[text word] OR information*[text word]))
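A PubMed query like the one above can be submitted programmatically through NCBI's E-utilities `esearch` endpoint, which returns matching PMIDs. A minimal sketch of building such a request follows (URL construction only, no network call; the retmax value is an arbitrary illustration):

```python
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(term, retmax=10000):
    """Build an E-utilities esearch URL that returns PMIDs for a PubMed query."""
    params = {
        "db": "pubmed",      # search the PubMed database
        "term": term,        # the full search string, URL-encoded by urlencode
        "retmax": retmax,    # maximum number of PMIDs to return
        "retmode": "json",   # JSON response instead of the default XML
    }
    return ESEARCH + "?" + urlencode(params)

url = build_esearch_url("(chatgpt[Text Word]) AND (health[Text Word])")
print(url)
```

Fetching the resulting URL (e.g., with `urllib.request`) would return a JSON document whose ID list can be fed into a metadata-extraction step such as the pipeline described above.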

SDC Text 2: Email Correspondence

Dear colleague,

You are receiving this email because you were involved in a publication evaluating the accuracy of an LLM in summarizing clinical evidence and/or providing health advice – i.e., a chatbot health advice (CHA) study.

Examples would include asking ChatGPT to diagnose a patient with diabetes, generate a differential for chest pain, or suggest therapy options for a patient with GERD, etc. 

We recently published the Chatbot Assessment Reporting Tool (CHART), which provides guidance for comprehensive reporting of CHA studies:

You can use the CHART checklist and flow diagram to plan any upcoming CHA studies, or to guide reporting during manuscript writing! Remember to include both documents in your manuscript submission. 

We welcome feedback on the usability of the CHART reporting guideline. Together, we will improve the transparency of CHA study methods to enable readers to better interpret their findings as the field evolves toward applying LLMs and generative AI models in clinical pathways to improve patient care. 

In collaboration,

The CHART Collaborative
