Ubaid Tanzim1, Imaad Ali Khan2, Matthew Abikenari3, Rawaha Husam Al-Deen2,4, Lukon Miah5, Mohammed Blaaza6, Mohammed Bilal Aziz7, Yaseen Mukadam8 and Ahmed Kerwan9
1. Internal Medicine, University College London Hospitals National Health Service (NHS) Foundation Trust, London, England
2. Internal Medicine, Mid and South Essex NHS Foundation Trust, Basildon, England
3. Neurosurgery, Stanford University School of Medicine, Stanford, CA, USA
4. Urology, King’s College London, London, England
5. Radiology, University Hospitals Bristol and Weston NHS Foundation Trust, Bristol, England
6. Radiology, Imperial College Healthcare NHS Trust, London, England
7. Internal Medicine, East Lancashire Hospitals NHS Trust, Burnley, England
8. Cardiology, Royal Brompton Hospital, Guy’s and St Thomas’ NHS Foundation Trust, London, England
9. Public Health, Harvard T.H. Chan School of Public Health, Harvard University, Cambridge, MA, USA
Correspondence to: Ubaid Tanzim, ubaid.tanzim2@nhs.net

Additional information
- Ethical approval: N/a
- Consent: N/a
- Funding: No industry funding
- Conflicts of interest: N/a
- Author contribution: Ubaid Tanzim, Imaad Ali Khan, Matthew Abikenari, Rawaha Husam Al-Deen, Lukon Miah, Mohammed Blaaza, Mohammed Bilal Aziz, Yaseen Mukadam and Ahmed Kerwan – Conceptualization, Writing – original draft, review and editing
- Guarantor: Ubaid Tanzim
- Provenance and peer-review: Unsolicited and externally peer-reviewed
- Data availability statement: N/a
Keywords: Artificial intelligence, Health literacy, Large language models, Patient education, Digital health.
Peer Review
Received: 15 June 2025
Last revised: 30 July 2025
Accepted: 31 July 2025
Version accepted: 3
Published: 28 August 2025
Plain Language Summary Infographic

Abstract
Limited health literacy represents a critical barrier to effective health care, contributing to poor disease management, increased hospitalization rates, and persistent health disparities, particularly among patients with chronic conditions. This comprehensive review synthesizes current evidence on artificial intelligence (AI)-driven interventions designed to enhance health literacy, examining their theoretical foundations, implementation strategies, and potential impact on patient outcomes. We analyzed AI applications, including large language model (LLM) chatbots, personalized health information systems, and multimodal educational tools, while evaluating theoretical frameworks guiding their development and implementation within health care settings. AI technologies demonstrate significant promise in advancing health literacy through simplified medical communication, personalized content delivery, and accessible round-the-clock health guidance.
LLMs effectively debunk health misinformation and provide contextual explanations, while machine learning algorithms enable personalization of educational content to individual patient needs. However, challenges persist regarding accuracy, bias mitigation, privacy protection, and equitable access. AI-supported approaches represent a transformative opportunity to address persistent health literacy barriers. Successful implementation requires careful attention to ethical considerations, human oversight, and integration with existing health care workflows to ensure both effectiveness and safety. Future research priorities include randomized controlled trials, longitudinal outcome studies, and systematic bias auditing to establish evidence-based best practices.
Introduction
Health literacy, defined as an individual’s capacity to access, understand, and utilize health information to make informed decisions, represents a critical determinant of health outcomes across populations. The magnitude of health literacy challenges is substantial: current estimates indicate that only approximately 12% of adults in the United States possess proficient health literacy skills, with similar patterns observed across developed nations.1 This widespread deficiency has profound implications for health care delivery, patient safety, and health equity.2 Limited health literacy is particularly problematic for individuals managing chronic conditions, who must navigate complex treatment regimens, understand intricate self-management instructions, and make ongoing lifestyle modifications.3,4 Research consistently demonstrates associations between inadequate health literacy and poorer disease management, increased emergency department utilization, higher hospitalization rates, and elevated health care costs.5 These challenges are compounded by traditional patient education approaches, which often rely on generic materials and one-way communication methods that fail to accommodate diverse learning preferences and literacy levels.6
The emergence of artificial intelligence (AI) technologies, particularly large language models (LLMs), conversational agents, and machine learning (ML) personalization algorithms, presents unprecedented opportunities to transform health literacy interventions.7 These technologies offer novel approaches to delivering health information that is more accessible, personalized, and engaging than conventional methods.5,7 By simplifying complex medical terminology, providing tailored educational content, and offering real-time interactive support, AI systems have the potential to bridge critical gaps in health knowledge and communication. This comprehensive review synthesizes current evidence regarding AI-driven tools and interventions designed to enhance health literacy among the general public and patients managing chronic conditions. We examine the theoretical frameworks informing their development, analyze implementation strategies for health care settings, and discuss future directions for research and practice. Our analysis focuses on how these emerging technologies can support the three levels of health literacy—functional, interactive, and critical—while addressing persistent barriers to effective patient education. The findings are summarized in Tables 1–4 and visualized in Figure 1.
| Table 1: AI tool types and their contributions to health literacy. | |||
| AI Tool Type | Primary Functionality | Health Literacy Level Targeted | Challenges |
| LLM Chatbots | Real-time Q&A, simplified explanations, emotional support | Functional and Interactive | Accuracy, trust, regulatory compliance |
| Personalized Education Systems | Tailored education via ML using patient data and preferences | Functional, Interactive and Critical | Depth of advice, data privacy |
| Generative AI for Content Creation | Rapid creation of written health materials (e.g., lessons, infographics) | Functional | Need for expert validation |
| Multimedia and Voice-Based Education Tools | Conversion of instructions to videos/voice content; multilingual support | Functional and Interactive | Accessibility, infrastructure |
| AI-Based Misinformation Filtering | Detection of false claims, redirection to credible health sources | Critical | False negatives/positives, algorithm bias |
| Legend: Functional health literacy involves the ability to read and comprehend health information. Interactive health literacy includes communication and application skills to act on advice. Critical health literacy refers to the ability to critically evaluate and use information for decision-making. LLM = Large Language Model; ML = Machine Learning. | |||
| Table 2: Theoretical frameworks informing AI-driven health literacy tools. | ||
| Framework | Key Concept | AI Application |
| Nutbeam’s Model of Health Literacy | Functional, interactive, and critical health literacy | LLMs simplify language (functional), encourage dialogue (interactive), and help debunk myths (critical) |
| eHealth Literacy Model (Norman and Skinner) | Digital and media literacy required for eHealth tools | Chatbots designed with usability; some provide source links to build media literacy |
| Health Belief Model | Perceived severity, benefit, and self-efficacy drive behavior | AI health coaches increase perceived benefits and self-efficacy |
| Social Cognitive Theory | Behavior learned via interaction, feedback, and reinforcement | Chatbots reinforce behavior with interactive guidance and feedback |
| TAM/UTAUT | Usefulness, ease of use, trust drive adoption | Chatbot design optimized for empathy, perceived usefulness, and trustworthiness |
| Legend: TAM = Technology Acceptance Model; UTAUT = Unified Theory of Acceptance and Use of Technology; LLM = Large Language Model. These frameworks support human-centered AI design by accounting for digital literacy, behavior change, trust, and perceived usefulness. | ||
| Table 3: Implementation strategies for AI health literacy tools and associated risk mitigation. | ||
| Implementation Domain | Examples/Actions | Risks Addressed |
| Clinical Workflow Integration | Embedding chatbots in EHR portals; clinician-reviewed outputs | Disruption of clinical flow; unsafe unsupervised AI use |
| Staff Training and Endorsement | Training providers to understand and recommend AI tools; digital therapeutics adoption | Low provider trust; underutilization |
| Accessibility and Digital Equity | Offering SMS/chat-based AI; providing devices for access; multilingual outputs | Widening of health disparities |
| Ethical Oversight | Policies ensuring alignment with guidelines; informed consent for AI interaction | Privacy breaches; misinformation; lack of transparency |
| Postdeployment Monitoring | Evaluating readability, satisfaction, and correcting chatbot errors over time | Loss of relevance; persistent inaccuracies |
| Legend: EHR = Electronic Health Record. Risks addressed include disparities in access, misinformation, provider skepticism, and safety issues related to unsupervised AI use. | ||
| Table 4: Summary of key evidence on AI tools for health literacy enhancement. | ||||||
| Study | Year | AI Tool Type | Study Design | Sample | Key Findings | Limitations |
| Alanezi | 2024 | ChatGPT-based assistant | Pilot study | Cancer patients (n = NR) | Improved disease knowledge and self-management over 2 weeks; valued jargon-free explanations | Small sample, short duration |
| Alanezi | 2024 | ChatGPT-3.5 | Qualitative study | Mental health patients | Enhanced mental health literacy and self-care behaviors | No quantitative outcomes |
| Bragazzi and Garbarino | 2023 | ChatGPT/Bard comparison | Comparative analysis | Sleep health myths | Both tools effectively debunked misinformation; Bard slightly outperformed ChatGPT-4 | Limited to one health domain |
| Zaretsky et al. | 2024 | GPT-4 | Observational | Hospital discharge summaries | Improved readability but occasionally omitted crucial details | Accuracy concerns noted |
| Willms et al. | 2022 | ChatGPT | Feasibility study | Physical activity app content | Successfully generated educational content requiring minor expert edits | Content needed human review |
| Mondal et al. | 2020 | ChatGPT | Cross-sectional | Lifestyle disease queries | Provided accurate personalized recommendations but lacked depth | Limited clinical nuance |
| Stein and Brooks | 2017 | Lark AI coach | Longitudinal observational | Overweight adults (n = 70,000) | Achieved outcomes comparable to in-person programs | Self-selected sample |
| Shiraishi et al. | 2024 | ChatGPT | Content generation | Surgical consent forms | Successfully simplified to 8th-grade reading level | Single procedure type |
| Zaleski et al. | 2024 | AI chatbot | Mixed methods | Exercise recommendations | Content accurate but at university reading level | Literacy mismatch identified |
| Legend: NR = Not Reported; all studies were conducted between 2017 and 2024, with the majority (78%) published after 2023, reflecting the recent emergence of advanced AI tools in health care. | ||||||

Methods
This comprehensive narrative review synthesized literature from 2020 to 2025, examining AI applications in health literacy. We searched PubMed, Google Scholar, and gray literature sources using terms including “artificial intelligence,” “health literacy,” “patient education,” “large language models,” and “digital health.” English-language articles focusing on AI tools for health literacy enhancement were included. Given the narrative nature of this review, formal quality assessment tools were not applied; however, we prioritized peer-reviewed studies and reports from established health care organizations. The rapidly evolving nature of AI technology necessitated the inclusion of recent gray literature to capture emerging developments.
AI-Powered Tools and Interventions for Health Literacy
Large Language Model Chatbots for Health Information
The development of sophisticated LLMs, including OpenAI’s GPT series, has catalyzed the emergence of advanced health chatbots capable of interactive dialogue and contextual question-answering. These conversational agents function as virtual health educators, providing patients with immediate, round-the-clock access to health information and guidance. Recent empirical studies have demonstrated the potential of LLM-based chatbots in chronic disease management. Alanezi8 conducted pilot studies examining ChatGPT-based assistants for cancer patients and mental health support,9 finding significant improvements in disease-related knowledge and self-management behaviors. Participants valued the system’s ability to provide jargon-free explanations and unlimited questioning opportunities, which helped overcome traditional barriers related to appointment time constraints and accessibility.
The application of LLM chatbots extends beyond individual patient support to broader public health education. Bragazzi and Garbarino10 assessed ChatGPT’s capacity to debunk common health myths, specifically examining sleep health misinformation. Their study demonstrated that the AI effectively refuted false claims while providing evidence-based advice in accessible language. Comparative analyses showed that Google’s Bard slightly outperformed ChatGPT-4 in identifying misinformation and delivering practical health guidance, highlighting the rapid evolution of LLM capabilities. Specialized health care AI companies are developing domain-specific conversational agents tailored to particular health conditions. Hippocratic AI has launched virtual nursing assistants powered by LLMs to support patients with chronic conditions, providing medication reminders, answering common management questions, and offering lifestyle coaching.11 The Mayo Clinic has partnered with AI startups to deploy human-like avatars that teach patients cognitive techniques for chronic pain management, while other programs under development aim to support smoking cessation through personalized video-based counseling.12
Despite their promise, LLM-driven health chatbots face several critical challenges. Ensuring information accuracy and safety remains paramount, as LLMs may occasionally produce incorrect or “hallucinated” responses despite their fluency. One study examining GPT-4’s ability to simplify hospital discharge instructions found that while AI-generated summaries were more readable, they occasionally omitted crucial details or introduced inaccuracies.13 This underscores the necessity for human oversight or hybrid approaches, such as retrieval-augmented generation using trusted medical databases, to ensure factual correctness.14 Additional concerns include maintaining patient trust and appropriate utilization, with systems needing to transparently disclose their AI-based nature and encourage patients to consult health care professionals for complex issues.15 As shown in Table 1, these various AI tool types each target different levels of health literacy with specific functionalities and challenges.
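The retrieval-augmented generation approach mentioned above can be illustrated with a minimal sketch. The corpus entries, scoring heuristic, and prompt template here are illustrative assumptions, not taken from any cited system; a real deployment would use a curated medical knowledge base and embedding-based retrieval rather than keyword overlap.

```python
# Minimal sketch of retrieval-augmented generation (RAG) for patient education.
# VETTED_CORPUS and the keyword-overlap scoring are illustrative assumptions.

VETTED_CORPUS = {
    "metformin": "Metformin is taken with meals to reduce stomach upset.",
    "hba1c": "HbA1c reflects average blood glucose over roughly three months.",
    "hypoglycemia": "Hypoglycemia symptoms include shakiness, sweating, and confusion.",
}

def retrieve(question: str, corpus: dict, top_k: int = 1) -> list[str]:
    """Rank vetted passages by crude keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split()))
        + (kv[0] in question.lower()),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def grounded_prompt(question: str, passages: list[str]) -> str:
    """Build an LLM prompt constrained to the retrieved, trusted passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the vetted passages below; "
        "if they are insufficient, advise consulting a clinician.\n"
        f"Passages:\n{context}\nQuestion: {question}"
    )

passages = retrieve("Why do I take metformin with food?", VETTED_CORPUS)
print(grounded_prompt("Why do I take metformin with food?", passages))
```

The design point is that the model is asked to answer only from retrieved, trusted text, which constrains hallucination without removing the need for human oversight.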
Personalized Health Information and Education Systems
ML algorithms enable unprecedented personalization of health education content by analyzing user characteristics such as age, health conditions, language proficiency, and behavioral patterns.16 This approach recognizes the inherent diversity in patient preferences and literacy levels, addressing the limitations of one-size-fits-all educational materials. Guni et al.16 have outlined a comprehensive framework for AI-based personalization of health education, wherein multiple data sources, including electronic health records, patient demographics, and content characteristics, are integrated through ML algorithms to match patients with appropriate educational resources. This model evaluates both user characteristics (reading level, cultural background, learning objectives) and content features (complexity, format, credibility) to predict which materials will be most effective for individual patients.
Early prototypes of personalized education platforms have demonstrated encouraging results in enhancing patient engagement. An ML-based system integrating electronic health record data and learning preferences to deliver customized diabetes education modules resulted in higher satisfaction and confidence levels compared to standard educational pamphlets.16 Personalization extends to producing adaptive communication approaches, including automatic adjustment of reading levels and format transformation to suit individual users. LLMs have successfully simplified patient education materials and informed consent documents to lower reading grade levels while preserving essential information. One study demonstrated that ChatGPT-4 could simplify surgical consent documents to an eighth-grade reading level without compromising accuracy.17 This capability is particularly important given the complexity of medical information, as further explored by Heerschap.18
Real-time coaching applications exemplify the practical implementation of AI personalization. Mobile health applications such as Lark utilize conversational AI coaches to provide tailored diet and exercise guidance through text-message-style interactions, adjusting recommendations based on user-logged data and progress.19 Studies have shown that users of such AI coaching applications achieve health outcomes comparable to in-person educational programs, demonstrating the potential for personalized, context-aware education to facilitate behavioral change. Mondal et al.20 found that ChatGPT could provide personalized lifestyle recommendations for conditions such as hypertension and diabetes with reasonable accuracy, though their study noted that while responses were generally correct, they sometimes lacked depth or nuance.
Additional AI Applications in Health Education
Beyond conversational and personalized content tools, diverse AI applications are being explored to enhance health literacy. Generative AI for content creation represents an emerging area, with researchers utilizing LLMs to draft health education materials including articles, quizzes, and infographics. Willms et al. conducted a feasibility study using ChatGPT to generate lesson content for a physical activity mobile application, finding that the AI could produce substantial readable content covering topics such as parental support for youth exercise. While expert review identified the need for editing to correct minor inaccuracies and adjust tone, the study concluded that generative AI offers “remarkable opportunities for rapid content creation” when coupled with human expert oversight.
Multimedia education represents another frontier for AI applications. Platforms are beginning to utilize AI to auto-generate personalized health videos, transforming written medical instructions into animated content with voiceovers in patients’ preferred languages. AI-driven voice assistants provide additional channels for health information dissemination, particularly benefiting individuals with limited vision or literacy who can interact through speaking and listening. AI also plays an increasingly important role in misinformation filtering and directing users to credible information sources. In an era of widespread health misinformation, AI algorithms can identify reliable content by analyzing source credibility and detecting conflicting claims, subsequently warning users about dubious health advice online. Liu and Xiao propose AI-based content filtering as a key component in combating the “infodemic,” with AI deployment to flag misinformation and direct users to vetted sources, thereby improving critical health literacy.
Theoretical Models and Frameworks Informing AI Health Literacy Tools
The design and deployment of AI-based health literacy interventions draw upon interdisciplinary theories from health education, communication, and technology adoption. These theoretical frameworks provide crucial insights into how individuals learn and engage with health information, guiding AI developers toward creating effective and user-centered tools. Nutbeam’s conceptual model of health literacy, which delineates functional, interactive, and critical health literacy as ascending levels of capability, provides a foundational framework for AI intervention design. Functional literacy involves basic reading and comprehension of health information; interactive literacy encompasses advanced skills for engaging in dialogue and applying new information; critical literacy adds the ability to critically appraise information and exert greater control over health decisions.
AI interventions can be systematically mapped to these literacy levels. LLM chatbots primarily support functional literacy by explaining medical terminology and instructions in simplified language. Simultaneously, interactive chatbots encourage users to ask questions and discuss concerns, potentially strengthening interactive literacy through simulated health conversations. Tools that filter misinformation and provide evidence-based reasoning contribute to critical health literacy by helping users evaluate information quality. The eHealth literacy framework, as defined by Norman and Skinner, represents a composite of traditional literacy, health literacy, information literacy, scientific literacy, media literacy, and computer literacy. This framework has informed AI health tool development by emphasizing the importance of user interface design and digital inclusion, recognizing that complex systems will not benefit those with limited digital skills. Many developers consequently follow user-centered design principles, simplifying interfaces and employing intuitive conversational styles.
Behavioral change theories also inform AI-driven health literacy tools. The health belief model suggests that adherence to health recommendations is influenced by perceived severity, perceived benefits, and self-efficacy. AI health coaches leverage this model by emphasizing the benefits of suggested behaviors and providing reassurance to enhance self-efficacy. Social Cognitive Theory’s emphasis on learning through interaction and feedback aligns with the interactive nature of chatbots that continuously respond to user input and reinforce positive behaviors. Technology adoption frameworks such as the Technology Acceptance Model (TAM) and Unified Theory of Acceptance and Use of Technology (UTAUT) identify factors influencing technology adoption, including perceived usefulness, ease of use, trust, and social influence. Developers consider these factors by ensuring chatbot responses are perceived as helpful and trustworthy, programming AI to acknowledge uncertainty, and using empathetic, nonjudgmental communication styles.
Implementation Strategies in Health Care Settings
Translating AI health literacy tools from research concepts to clinical practice requires comprehensive implementation planning that addresses workflow integration, stakeholder engagement, and sustainability considerations.
Integration into Clinical Workflows
Effective implementation involves embedding AI tools within platforms that patients and providers routinely use. Health systems are integrating chatbots into patient portals and electronic health record systems, enabling patients to access educational assistance alongside clinical information. For instance, diabetic patients viewing blood glucose trends might receive proactive AI-generated explanations or dietary improvement suggestions, providing contextual education at the point of need. Integration strategies increasingly emphasize human-in-the-loop approaches, wherein AI outputs undergo clinical review before patient delivery. This model balances efficiency with safety while increasing provider buy-in, as clinicians maintain control over patient education quality. Nurses or health educators may supervise AI chat interactions, intervening when incorrect advice is provided and utilizing AI as an adjunct rather than a standalone expert.
Health Care Staff Training and Engagement
Successful implementation requires comprehensive training of clinicians and staff regarding AI tool capabilities and limitations. When health care providers understand how AI chatbots function and their appropriate use cases, they are more likely to refer patients appropriately. Some institutions treat validated health applications as digital therapeutics that can be formally recommended, improving patient uptake through professional endorsement. Periodic review of chatbot transcripts and analytics provides insights into common patient misconceptions and questions, informing further educational efforts within clinical settings. Evidence-based benefits demonstrated through research studies help providers feel confident in recommending AI tools to patients.
Accessibility and Equity Considerations
Implementation strategies must address the digital divide, recognizing that those who might benefit most from health literacy support—older adults, low-income populations, rural communities, and individuals with limited English proficiency—are also at highest risk of being excluded from digital interventions. Mitigation strategies include offering AI services through multiple channels, such as simple SMS texting or phone hotlines with voice AI, rather than exclusively through smartphone applications. Some chronic disease management programs provide patients with basic mobile devices preloaded with health chatbot applications to ensure that a lack of personal technology is not a barrier to access.
Policy and Ethical Oversight
Health care organizations are developing comprehensive policies to ensure ethical AI use. Many institutions have established guidelines requiring that AI-provided information be evidence-based and aligned with clinical guidelines. Hospital AI oversight committees evaluate new tools for bias, accuracy, and privacy compliance before approving patient use. Recent regulatory developments have established frameworks for AI deployment in health care settings. The EU AI Act 202421 classifies AI systems used for health purposes as high-risk applications, requiring conformity assessments, transparency obligations, and human oversight mechanisms. Health care organizations implementing AI tools must ensure compliance with requirements for data governance, bias testing, and continuous monitoring of system performance. In the United States, the Office of the National Coordinator for Health Information Technology’s Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) final rule, effective December 2024,22 establishes requirements for AI and predictive decision support interventions integrated with electronic health records. The rule mandates disclosure of AI involvement in clinical decisions and requires maintaining intervention validity over time.
Health Insurance Portability and Accountability Act (HIPAA) guidance on LLMs issued by the U.S. Department of Health and Human Services (HHS) Office for Civil Rights23 clarifies that general-purpose LLMs like ChatGPT are not HIPAA-compliant for processing protected health information unless implemented within secure, business-associate-agreement-covered environments. Health care organizations must implement LLMs through secure application programming interfaces (APIs), on-premises deployments, or HIPAA-compliant cloud services. Data protection pathways include de-identification of patient data before AI processing, federated learning approaches that keep data local, and encryption of all data in transit and at rest.
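The de-identification pathway mentioned above can be sketched as a pre-processing step that strips common identifiers before any text reaches an external AI service. The regex patterns and placeholder tags below are illustrative assumptions only; a compliant system would use a validated de-identification pipeline (e.g., one meeting the HIPAA Safe Harbor criteria), not this short pattern list.

```python
import re

# Illustrative identifier patterns (an assumption, not an exhaustive PHI list);
# production systems use validated de-identification tooling.
PHI_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),
]

def deidentify(note: str) -> str:
    """Replace matching identifiers with placeholder tags before AI processing."""
    for pattern, tag in PHI_PATTERNS:
        note = pattern.sub(tag, note)
    return note

note = "Seen 04/12/2024, MRN: 884421. Call 020-555-0101 or j.doe@example.com."
print(deidentify(note))  # identifiers replaced; clinical content left intact
```

Redaction of this kind is one layer only; it complements, rather than replaces, secure API channels, business associate agreements, and encryption in transit and at rest.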
Monitoring and Continuous Improvement
Postdeployment monitoring tracks performance metrics including patient satisfaction, comprehension levels, and health outcomes. Zaleski et al.24 evaluated an AI chatbot’s exercise recommendations, finding content accurate but pitched at a university reading level, highlighting the need for ongoing adjustment to match target audience literacy levels. Iterative improvement approaches treat AI tools as evolving services rather than static products. Frequently asked questions that chatbots cannot answer can be identified and addressed through updates, while patient feedback informs continuous refinement of language and content complexity. A practical framework for implementing these strategies is provided in the Clinical Decision Algorithm (see Supplementary Material).
Discussion and Future Directions
The current landscape of AI applications for health literacy demonstrates significant innovation while maintaining appropriate caution regarding implementation challenges. The literature indicates clear benefits: AI technologies make medical information more accessible by simplifying terminology and delivering content through engaging conversational interfaces. Personalization capabilities can align health messages with individual patient contexts, potentially improving treatment adherence and lifestyle modification. AI tools help bridge health care access gaps, particularly for rural or underserved communities with limited health care provider availability. By processing vast medical knowledge rapidly, AI systems can maintain current information and provide diagnostic support, indirectly enhancing health literacy through clearer explanations of health status and treatment recommendations.
However, several significant concerns warrant careful consideration in the development and implementation of AI-based health literacy tools. AI models inherently reflect the characteristics of their training data. If these datasets are biased or lack representation of diverse populations, the resulting tools may reinforce or exacerbate existing health disparities. Evidence suggests that some algorithms underestimate health risks for minority groups due to underrepresentation in training data. This highlights the risk that AI-driven education may not be universally effective or appropriate, necessitating careful calibration and the inclusion of diverse data to provide equitable and accurate guidance across populations. Misinformation represents another critical concern. While AI systems can effectively counter falsehoods, they may also produce convincingly phrased but factually incorrect responses when not properly constrained. McMahon’s25 documentation of ChatGPT providing unsafe abortion advice underscores the dangers of unchecked AI reliance, reinforcing the importance of hybrid models where AI complements—rather than replaces—human health educators.
Ethical and legal implications further complicate the path to widespread AI integration. Ensuring strong protection of patient data privacy—especially when AI tools use personal health information to personalize content—requires robust encryption, on-device processing where possible, and strict adherence to health care data regulations. The “black box” nature of many AI models, where decision-making processes are not transparent, may reduce trust and create challenges in clinical oversight and verification. The limitations of current evidence and future research priorities are detailed in Box 1. Figure 1 provides a visual representation of the complete AI-powered health literacy enhancement pathway, illustrating how patient characteristics guide tool selection and implementation to achieve improved outcomes.
Box 1: Study limitations and future research agenda.
Limitations of Current Evidence
Methodological limitations:
- English-language literature only
- Narrative synthesis without systematic quality assessment
- Absence of large-scale randomized controlled trials (RCTs)
- Limited long-term outcome data
- Predominance of pilot and feasibility studies
Evidence gaps:
- Insufficient data on sustained health literacy improvements
- Limited evidence on clinical outcome impacts
- Lack of comparative effectiveness studies
- Minimal data on cost-effectiveness
- Underrepresentation of diverse populations
Technical limitations:
- Rapidly evolving AI capabilities outpacing research
- Inconsistent outcome measures across studies
- Variable AI tool quality and validation
- Limited standardization of interventions
Future Research Priorities
Immediate priorities (1–2 years):
1. Conduct adequately powered RCTs comparing AI-supported versus standard health education
2. Develop standardized outcome measures for AI-enhanced health literacy
3. Implement systematic bias auditing protocols for AI tools
4. Establish minimum quality standards for health-focused AI applications
Medium-term priorities (3–5 years):
1. Longitudinal cohort studies examining sustained literacy gains and behavior change
2. Health economic evaluations of AI implementation costs versus benefits
3. Development of AI tools specifically designed for underserved populations
4. Integration studies examining AI tools within existing care pathways
Long-term priorities (5+ years):
1. Population-level impact assessments of AI on health disparities
2. Comparative effectiveness research across different AI modalities
3. Development of adaptive AI systems that evolve with user needs
4. Ethical framework refinement based on real-world implementation data
Conclusion
AI technologies, particularly LLM-powered conversational agents and ML-based personalization systems, demonstrate significant promise for advancing health literacy through scalable, accessible, and patient-centered education. Current literature highlights innovative applications including virtual health assistants and adaptive learning tools, which have shown early success in enhancing patient understanding, engagement, and self-management capabilities. When grounded in established frameworks from health communication, literacy theory, and behavioral change science, these interventions can make health education both technologically sophisticated and pedagogically sound. As health care systems begin integrating such tools into routine practice, strategic implementation approaches, ongoing evaluation, and ethical oversight will be vital for ensuring effectiveness and safety.
With continued research and thoughtful design, AI-supported approaches can play a transformative role in addressing persistent communication barriers, particularly for individuals managing chronic conditions, ultimately contributing to improved patient empowerment and clinical outcomes. The path forward requires careful balance between innovation and caution, ensuring that technological advances serve to enhance rather than replace the fundamental human elements of health care communication and education.
References
- U.S. Department of Health and Human Services. Health literacy reports and publications. HHS.gov; 2019. Available from: https://www.hhs.gov/surgeongeneral/reports-and-publications/health-literacy/index.html
- Shahid R, Shoker M, Chu LM, Frehlick R, Ward H, Pahwa P. Impact of low health literacy on patients’ health outcomes: a multicenter cohort study. BMC Health Serv Res. 2022;22(1):1148. https://doi.org/10.1186/s12913-022-08527-9
- Dinh TT, Bonner A. Exploring the relationships between health literacy, social support, self-efficacy and self-management in adults with multiple chronic diseases. BMC Health Serv Res. 2023;23(1):923. https://doi.org/10.1186/s12913-023-09900-y
- Magi CE, Bambi S, Rasero L, Longobucco Y, Aoufy KE, Amato C, et al. Health literacy and self-care in patients with chronic illness: a systematic review and meta-analysis protocol. Healthcare. 2024;12(7):762. https://doi.org/10.3390/healthcare12070762
- Handa A, Syeda H, Zalloum R, Slavic L, Allen-Meares P. The role of AI in health literacy: benefits, concerns, and call to action. Illinois: University of Illinois College of Medicine; 2022. Available from: https://chicago.medicine.uic.edu/news-stories/ai-in-health-literacy/
- Krontoft A. How do patients prefer to receive patient education material about treatment, diagnosis and procedures?—A survey study of patients preferences regarding forms of patient education materials. Open J Nurs. 2021;11(10):809–27. https://doi.org/10.4236/ojn.2021.1110067
- Clark M, Bailey S. Chatbots in health care: connecting patients to information. Canadian Agency for Drugs and Technologies in Health; 2024. Available from: https://www.ncbi.nlm.nih.gov/books/NBK602381/
- Alanezi F. Examining the role of ChatGPT in promoting health behaviors and lifestyle changes among cancer patients. Nutr Health. 2025;31(2):739–48. https://doi.org/10.1177/02601060241244563
- Alanezi F. Assessing the effectiveness of ChatGPT in delivering mental health support: a qualitative study. J Multidiscip Healthc. 2024;17:461–71. https://doi.org/10.2147/JMDH.S448381
- Bragazzi NL, Garbarino S. Assessing the accuracy of generative conversational artificial intelligence in debunking sleep health myths: comparative study with expert analysis. SSRN; 2023. https://doi.org/10.2139/ssrn.4673743
- Webster P. Six ways large language models are changing healthcare. Nat Med. 2023;29(12):2969–71. https://doi.org/10.1038/s41591-023-02700-1
- Perrone M. As AI nurses reshape hospital care, human nurses push back. AP News; 2025. Available from: https://apnews.com/article/artificial-intelligence-ai-nurses-hospitals-health-care-3e41c0a2768a3b4c5e002270cc2abe23
- Zaretsky J, Kim JM, Baskharoun S, Zhao Y, Austrian J, Aphinyanaphongs Y, et al. Generative artificial intelligence to transform inpatient discharge summaries to patient-friendly language and format. JAMA Netw Open. 2024;7(3):e240357. https://doi.org/10.1001/jamanetworkopen.2024.0357
- Li C, Zhao Y, Bai Y, Zhao B, Tola YO, Chan CW, et al. Unveiling the potential of large language models in transforming chronic disease management: mixed methods systematic review. J Med Internet Res. 2025;27:e70535. https://doi.org/10.2196/70535
- Coghlan S, Leins K, Sheldrick S, Cheong M, Gooding P, D’Alfonso S, et al. To chat or bot to chat: ethical issues with using chatbots in mental health. Digit Health. 2023;9:20552076231183542. https://doi.org/10.1177/20552076231183542
- Guni A, Normahani P, Davies A, Jaffer U. Harnessing machine learning to personalize web-based health care content. J Med Internet Res. 2021;23(10):e25497. https://doi.org/10.2196/25497
- Shiraishi M, Tomioka Y, Miyakuni A, Moriwaki Y, Yang R, Oba J, et al. Generating informed consent documents related to blepharoplasty using ChatGPT. Ophthalmic Plast Reconstr Surg. 2024;40(3):316–20. https://doi.org/10.1097/IOP.0000000000002616
- Heerschap C. Use of artificial intelligence in wound care education. Wounds Int. 2023;14(2):12–15.
- Stein N, Brooks K. A fully automated conversational artificial intelligence for weight loss: longitudinal observational study among overweight and obese adults. JMIR Diabetes. 2017;2(2):e8590. https://doi.org/10.2196/diabetes.8590
- Mondal H, Dash I, Mondal S, Behera JK. ChatGPT in answering queries related to lifestyle-related diseases and disorders. Cureus. 2023;15(11):e48531. https://doi.org/10.7759/cureus.48531
- European Commission. Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). OJEU. 2024;L1689:1–144.
- Office of the National Coordinator for Health Information Technology. Health data, technology, and interoperability: certification program updates, algorithm transparency, and information sharing (HTI-1) final rule. Federal Register; 2024.
- U.S. Department of Health and Human Services Office for Civil Rights. Use of online tracking technologies by HIPAA covered entities and business associates. HHS.gov; 2024. Available from: https://www.hhs.gov/hipaa/for-professionals/privacy/guidance/hipaa-online-tracking/index.html
- Zaleski AL, Berkowsky R, Craig KJ, Pescatello LS. Comprehensiveness, accuracy, and readability of exercise recommendations provided by an AI-based chatbot: mixed methods study. JMIR Med Educ. 2024;10(1):e51308. https://doi.org/10.2196/51308
- McMahon HV, McMahon BD. Automating untruths: ChatGPT, self-managed medication abortion, and the threat of misinformation in a post-Roe world. Front Digit Health. 2024;6:1287186. https://doi.org/10.3389/fdgth.2024.1287186
Supplementary Material: Clinical Decision Algorithm
The Clinical Decision Algorithm for Selecting AI Tools for Patient Health Literacy provides a step-by-step framework for health care providers to match patients with appropriate AI-based health literacy interventions. This practical tool guides clinicians through patient assessment, tool selection, oversight determination, and implementation monitoring to ensure safe and effective deployment of AI technologies in patient education.
Clinical Decision Algorithm: Selecting AI Tools for Patient Health Literacy
Step 1: Assess Patient Characteristics
Health Literacy Level
[ ] Functional (Can read basic health information)
[ ] Interactive (Can communicate and apply health advice)
[ ] Critical (Can analyze and make informed decisions)
Digital Access and Skills
[ ] High (Smartphone/computer, comfortable with apps)
[ ] Medium (Basic phone, can text/call)
[ ] Low (Limited/no digital access)
Primary Language
[ ] English proficient
[ ] Requires translation support
Step 2: Match Tool to Patient Profile
| Patient Profile | Recommended AI Tool | Example Implementation |
| Low literacy + Low digital access | Voice-based AI via phone hotline | Simple SMS reminders with voice callback option |
| Low literacy + High digital access | Multimedia AI education apps | Video-based content with simple language |
| Functional literacy + Medium digital | Basic chatbot with simplified text | WhatsApp or SMS-based Q&A bot |
| Interactive literacy + High digital | Advanced LLM chatbot | Portal-integrated ChatGPT-style assistant |
| Critical literacy + High digital | AI-powered research assistant | Tool that provides sources and evidence levels |
| Non-English speaker | Multilingual AI translator + above | Any tool with real-time translation |
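The Step 2 matching logic above can be sketched as a simple lookup. This is an illustrative sketch only, not a validated clinical instrument: the profile labels, the fallback to human-delivered education, and the `recommend_tool` function are all assumptions introduced for the example.

```python
# Hypothetical sketch of the Step 2 profile-to-tool matrix.
# Profile labels and fallback behavior are illustrative assumptions.

TOOL_MATRIX = {
    ("low", "low"): "Voice-based AI via phone hotline",
    ("low", "high"): "Multimedia AI education apps",
    ("functional", "medium"): "Basic chatbot with simplified text",
    ("interactive", "high"): "Advanced LLM chatbot",
    ("critical", "high"): "AI-powered research assistant",
}


def recommend_tool(literacy: str, digital_access: str,
                   english_proficient: bool = True) -> str:
    """Return the recommended AI tool for a patient profile.

    Falls back to human-delivered education when no profile matches,
    and layers translation support for non-English speakers.
    """
    tool = TOOL_MATRIX.get((literacy.lower(), digital_access.lower()),
                           "Standard human-delivered education")
    if not english_proficient:
        tool = f"Multilingual AI translator + {tool}"
    return tool


print(recommend_tool("interactive", "high"))
# Advanced LLM chatbot
```

In practice such a mapping would be maintained and audited by the care team rather than hard-coded; the dictionary form simply makes the tool-selection rationale explicit and easy to document in the patient record.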
Step 3: Determine Oversight Requirements
High Oversight Needed (Human Review Required)
[ ] Complex medical conditions
[ ] High-risk medications
[ ] Mental health concerns
[ ] Vulnerable populations (elderly, minors)
Medium Oversight (Periodic Review)
[ ] Chronic disease management
[ ] Lifestyle modifications
[ ] General health education
Low Oversight (Automated Acceptable)
[ ] Basic health information
[ ] Appointment reminders
[ ] Medication schedules
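The oversight tiers in Step 3 amount to a "strictest trigger wins" rule, which can be sketched as follows. The condition categories are condensed assumptions for the example, not exhaustive clinical criteria.

```python
# Illustrative sketch of the Step 3 oversight tiers; the category strings
# are assumptions condensed from the checklists above, not clinical criteria.

HIGH_OVERSIGHT = {"complex medical condition", "high-risk medication",
                  "mental health concern", "vulnerable population"}
MEDIUM_OVERSIGHT = {"chronic disease management", "lifestyle modification",
                    "general health education"}


def oversight_level(factors: set) -> str:
    """Return the strictest oversight tier triggered by any patient factor."""
    if factors & HIGH_OVERSIGHT:      # any high-risk factor dominates
        return "high"                 # human review required
    if factors & MEDIUM_OVERSIGHT:
        return "medium"               # periodic review
    return "low"                      # automated delivery acceptable


print(oversight_level({"lifestyle modification", "high-risk medication"}))
# high
```

The point of the sketch is that oversight should be determined by the most sensitive factor present, not an average: a patient receiving lifestyle education who also takes a high-risk medication still requires human review.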
Step 4: Implementation Checklist
Preimplementation
[ ] Verify AI tool HIPAA/General Data Protection Regulation (GDPR) compliance
[ ] Obtain patient informed consent
[ ] Train relevant staff on tool capabilities/limitations
[ ] Establish escalation protocols
During Implementation
[ ] Monitor initial patient interactions
[ ] Collect feedback on comprehension
[ ] Adjust language complexity as needed
[ ] Document any errors or concerns
Postimplementation
[ ] Review chat logs monthly
[ ] Assess health literacy improvements
[ ] Update content based on frequently asked questions (FAQs)
[ ] Report outcomes to care team
Step 5: Red Flags Requiring Immediate Human Intervention
[ ] Patient expresses self-harm ideation
[ ] Acute symptoms reported
[ ] AI provides contradictory advice
[ ] Patient confusion or distress
[ ] Technical malfunction
Documentation: Record tool selection rationale, oversight level, and any modifications in the patient’s electronic health record (EHR).
Appendix S1: Search Strategy Framework and Study Selection Approach
Search Strategy Framework
Database Sources
PubMed/MEDLINE: Primary database for peer-reviewed health literature
Google Scholar: Supplementary database for broader academic coverage
Gray literature sources: WHO, NHS, Centers for Disease Control and Prevention (CDC), and major health care organization reports
Search Terms Used
Core concept combinations:
(“artificial intelligence” OR “machine learning” OR “large language model” OR “chatbot” OR “conversational agent”) AND
(“health literacy” OR “patient education” OR “health communication” OR “health information”)
Additional terms incorporated:
“digital health”
“patient engagement”
“health behavior”
“chronic disease management”
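The Boolean combinations above can be assembled programmatically, which makes the term lists auditable and easy to adapt, as the Implementation Notes below describe. This is a minimal sketch; the helper names (`or_block`, `build_query`) are assumptions for the example, and real database syntax (e.g., PubMed field tags) would need to be layered on.

```python
# Sketch of the core Boolean search string described above, assembled
# from the stated term lists. Helper names are illustrative assumptions.

AI_TERMS = ["artificial intelligence", "machine learning",
            "large language model", "chatbot", "conversational agent"]
LITERACY_TERMS = ["health literacy", "patient education",
                  "health communication", "health information"]
EXTRA_TERMS = ["digital health", "patient engagement",
               "health behavior", "chronic disease management"]


def or_block(terms):
    """Join quoted phrases with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"


def build_query(extra=None):
    """Combine the two core concept blocks, optionally ANDing extra terms."""
    query = f"{or_block(AI_TERMS)} AND {or_block(LITERACY_TERMS)}"
    if extra:
        query += f" AND {or_block(extra)}"
    return query


print(build_query(["digital health"]))
```

Keeping the term lists as data rather than a hand-typed string supports the reproducibility aims stated later: a replication can extend `AI_TERMS` or `EXTRA_TERMS` and regenerate the exact query.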
Date Parameters
Search period: January 2020 to December 2024
Rationale: Focus on recent AI developments, particularly post-2020 when LLMs became prominent
Language and Publication Criteria
Language: English-language publications only
Publication types: Peer-reviewed articles, systematic reviews, government reports, institutional white papers
Exclusion: Conference abstracts, opinion pieces without empirical data, nonhealth applications
Study Selection Approach
Conceptual Selection Framework
Initial Search Execution
↓
Title and Abstract Screening
(Relevance to AI + health literacy)
↓
Full-Text Review
(Meeting inclusion criteria)
↓
Final Inclusion in Narrative Synthesis
Inclusion Criteria Applied
1. Focus on AI tools designed for or evaluated in health literacy enhancement
2. Discussion of patient education, health communication, or health behavior change
3. English-language publication between 2020 and 2024
4. Empirical research, case studies, or authoritative institutional reports
Exclusion Criteria Applied
1. AI systems for clinical decision support only (without patient education component)
2. Studies focused solely on health care provider tools
3. Opinion pieces without supporting evidence or case examples
4. Non-English publications
5. Publications outside the specified date range
Study Classification System
Quality Assessment Framework
Good Quality:
Well-designed methodology with clear objectives
Adequate study design for research question
Clear outcome measures and results reporting
Peer-reviewed publication from established journal
Moderate Quality:
Adequate study design with some methodological limitations
Reasonable approach to research question
Some limitations in outcome measurement or analysis
Generally clear reporting with minor gaps
Literature-Type Classification
Peer-reviewed: Published in academic journals with peer review
Institutional: Reports from established health care or research organizations
Regulatory: Official guidance documents from government agencies
Implementation Notes
Search Execution
This framework was applied flexibly given the narrative review approach, with the understanding that:
Not all database features (e.g., systematic Boolean operators) were used uniformly
Search terms were adapted based on initial results and relevant literature identified
Citation tracking and reference list review supplemented database searches
Study Selection
Given the rapidly evolving nature of AI technology and the narrative scope of this review:
Recent publications were prioritized to capture current developments
Gray literature was included to represent emerging practices and policy developments
Quality assessment focused on relevance and methodological appropriateness rather than formal risk-of-bias tools
Reproducibility Considerations
To enhance reproducibility of this narrative approach:
Core search terms and databases are specified above
Inclusion/exclusion criteria are explicitly stated
Quality assessment framework is transparent
Emphasis placed on recent, high-quality sources
Methodological Transparency Statement
This supplementary appendix provides the framework used for literature identification and selection in this narrative review. While we did not conduct a formal systematic review with quantitative synthesis, this framework ensures transparency in our approach to identifying relevant literature on AI applications in health literacy. The narrative approach was chosen due to the heterogeneity of studies in this emerging field and the need to capture diverse types of evidence including case studies, pilot projects, and policy developments.
Future researchers seeking to replicate or extend this work can use this framework as a starting point, adapting the search strategy and selection criteria as appropriate for their specific research questions and methodological approach.
Cite this article as:
Tanzim U, Khan IA, Abikenari M, Al-Deen RH, Miah L, Blaaza M, Aziz MB, Mukadam Y and Kerwan A. Transforming Patient-Provider Communication: The Role of Artificial Intelligence in Advancing Health Literacy – A Comprehensive Review. Premier Journal of Science 2025;13:100095