Investigative Ophthalmology & Visual Science
June 2024, Volume 65, Issue 7
Open Access
ARVO Annual Meeting Abstract  |   June 2024
Assessing Readability of Patient Education Materials: A Comparative Study of ASRS Resources and AI-Generated Content by Popular Large Language Models (ChatGPT 4.0 and Google Bard)
Author Affiliations & Notes
  • Michael Shi
    West Virginia University Department of Ophthalmology, Morgantown, West Virginia, United States
  • Jovana Hanna
    West Virginia University School of Medicine, Morgantown, West Virginia, United States
  • Christine Clavell
    West Virginia University Department of Ophthalmology, Morgantown, West Virginia, United States
  • Kevin Eid
    University of Utah Health John A Moran Eye Center, Salt Lake City, Utah, United States
  • Alen Eid
    West Virginia University Department of Ophthalmology, Morgantown, West Virginia, United States
  • Ghassan Ghorayeb
    West Virginia University Department of Ophthalmology, Morgantown, West Virginia, United States
  • John Nguyen
    West Virginia University Department of Ophthalmology, Morgantown, West Virginia, United States
  • Footnotes
    Commercial Relationships   Michael Shi None; Jovana Hanna None; Christine Clavell None; Kevin Eid None; Alen Eid None; Ghassan Ghorayeb None; John Nguyen None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2024, Vol.65, 5646.
      Michael Shi, Jovana Hanna, Christine Clavell, Kevin Eid, Alen Eid, Ghassan Ghorayeb, John Nguyen; Assessing Readability of Patient Education Materials: A Comparative Study of ASRS Resources and AI-Generated Content by Popular Large Language Models (ChatGPT 4.0 and Google Bard). Invest. Ophthalmol. Vis. Sci. 2024;65(7):5646.

      © ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose : The readability of online patient education materials is important in ensuring patient understanding. This study evaluated the readability of patient education materials from the American Society of Retina Specialists (ASRS) website against patient education content generated by large language model (LLM) systems, OpenAI’s ChatGPT 4.0 and Google’s Bard.

Methods : Fifteen patient education topics related to diagnoses and procedures were selected from the ASRS website. ChatGPT 4.0 and Google Bard were then prompted to produce educational content on the same topics while keeping similar word counts and content. The generated text was analyzed with the open-source textstat Python library using several readability metrics: Flesch Reading Ease, Gunning Fog Index, Flesch-Kincaid Grade Level, Coleman-Liau Index, SMOG Index, Automated Readability Index, and Linsear Write. Average scores were calculated for each source: the ASRS website, ChatGPT 4.0, ChatGPT 4.0 (adjusted for a 6th-grade reading level), Google Bard, and Google Bard (adjusted for a 6th-grade reading level).
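The Flesch Reading Ease score central to the analysis above can be illustrated with a minimal sketch of the standard formula, 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). The syllable-counting heuristic below is a simplification for illustration only; the study itself used the textstat library, whose internal implementation differs.

```python
import re


def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels, with a small
    # correction for a trailing silent "e". Real libraries are more careful.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)


def flesch_reading_ease(text: str) -> float:
    # Split into sentences on terminal punctuation and into words on letters.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))


sample = "The retina senses light. It sends signals to the brain."
print(round(flesch_reading_ease(sample), 1))
```

Higher scores indicate easier text; scores in the 80s correspond roughly to a 6th-grade reading level, which is why the prompted LLM outputs averaging around 80 represent a meaningful improvement over the ASRS baseline of 42.6.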

Results : The ASRS patient education materials produced an average Flesch Reading Ease (FRE) score of 42.6, suggesting relatively complex readability. ChatGPT 4.0 and Google Bard showed similar readability, with average scores of 39.2 and 46.2, respectively; however, when prompted to adjust to a 6th-grade reading level, their FRE scores improved significantly, averaging 80.3 and 78.5, respectively. The other readability metrics showed consistent improvement as well, with AI-generated content outperforming the ASRS materials, although the extent varied between educational topics.

Conclusions : This analysis highlights the efficacy of advanced language models in creating patient education materials accessible to a broader audience. AI-generated patient education materials, particularly when tailored to simpler language, can improve the accessibility of health-related information for the general population. Future investigations should examine how these AI tools can be integrated into healthcare communication workflows to maximize patient education and deliver content to patients efficiently.

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.
