Investigative Ophthalmology & Visual Science
June 2024
Volume 65, Issue 7
Open Access
ARVO Annual Meeting Abstract  |   June 2024
Multimodal Large Language Models in Vision and Ophthalmology
Author Affiliations & Notes
  • Zhiyong Lu
    National Institutes of Health, Bethesda, Maryland, United States
  • Footnotes
    Commercial Relationships: Zhiyong Lu, None
    Support: NIH Intramural Research Program, National Library of Medicine
Investigative Ophthalmology & Visual Science June 2024, Vol. 65, 3876.

Zhiyong Lu; Multimodal Large Language Models in Vision and Ophthalmology. Invest. Ophthalmol. Vis. Sci. 2024;65(7):3876.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Presentation Description: The advent of multimodal large language models (LLMs) signals a new era in healthcare diagnostics and decision-making. This work explores the transformative potential of a multimodal approach to clinical analysis that leverages the capabilities of LLMs. By fusing natural language understanding with computer vision, we present a unified LLM-based framework that provides comprehensive and interpretable insights into medical images and patient records. Showcasing real-world applications in ophthalmology, we underscore the contributions of multimodal LLMs in clinical settings. We conclude by addressing future directions and challenges, advocating for responsible AI integration and the advancement of patient-centric healthcare through multimodal technology.
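
The fusion of image and text inputs described above is typically realized by passing both modalities to a vision-capable LLM in a single prompt. The following Python snippet is a purely illustrative sketch of that pattern, not the framework presented in this talk: it sends a fundus photograph together with free-text patient context to a multimodal chat model via the OpenAI Python client. The file name, model choice, and patient details are placeholders.

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a retinal image (e.g., a color fundus photograph) for the API.
with open("fundus.jpg", "rb") as f:  # placeholder file name
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Fuse free-text patient context with the image in one multimodal prompt.
response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Patient record (illustrative): 62-year-old with "
                        "type 2 diabetes, HbA1c 8.9%. Describe any signs of "
                        "diabetic retinopathy in this fundus photograph."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```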

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.
