Investigative Ophthalmology & Visual Science
June 2020, Volume 61, Issue 7
ARVO Annual Meeting Abstract  |   June 2020
Deep transfer learning classification algorithms built on fundus photos generalize with variable accuracy across devices.
Author Affiliations & Notes
  • Ashley Kras
    Retina, Mass. Eye and Ear, Caulfield, Victoria, Australia
    Royal Victorian Eye and Ear Hospital, Victoria, Australia
  • John B Miller
    Retina, Mass. Eye and Ear, Caulfield, Victoria, Australia
  • Kun-Hsing Yu
    Bioinformatics, Harvard Medical School, Massachusetts, United States
    Brigham and Women's Hospital, Massachusetts, United States
  • Footnotes
    Commercial Relationships   Ashley Kras, None; John Miller, None; Kun-Hsing Yu, None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2020, Vol.61, 1652.
Abstract

Purpose : As deep learning applications to ophthalmic imaging increase in sophistication and clinical relevance, little is known about the extent to which algorithm performance generalizes predictably. This study set out to determine how well the accuracy of a fundus photo classifier built on one dataset is replicated in a second dataset captured on a different device.

Methods : 25,000 high-quality fundus photos were manually selected from the UK Biobank (UKBB) (Topcon 3D OCT-1000, field angle 45°). A simple deep transfer learning model based on the VGG architecture was built to classify images into right vs left eyes. This unmodified model was then validated on two smaller samples (n=430) of fundus photos (Optos® California, field angle 200°) from Mass. Eye and Ear Infirmary (MEEI); the first sample was cropped to the posterior pole (MEEI-a) to approximate the region captured by the UKBB images, and the second sample (the same images) was cropped to the circular fundus edge (MEEI-b). The same process was then repeated in reverse: a model constructed on MEEI images was deployed on UKBB images.
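The abstract gives no implementation details beyond "a simple deep transfer learning model based on VGG architecture," so the following is only a minimal sketch of how such a laterality classifier might be assembled in Keras. The directory layout (fundus_photos/train/<left|right>), the 224×224 input size, the frozen-backbone/dense-head design, and all training settings are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' code) of a VGG-based transfer learning
# model for right-vs-left eye classification. Paths, input size, and
# hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # standard VGG16 input resolution (assumption)


def build_laterality_model() -> tf.keras.Model:
    # ImageNet-pretrained VGG16 backbone without its classification head.
    base = tf.keras.applications.VGG16(
        weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,)
    )
    base.trainable = False  # freeze convolutional features: transfer learning

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # predicted probability of "right eye"
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4),
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC(name="auroc")],
    )
    return model


if __name__ == "__main__":
    # Hypothetical folder of fundus photos sorted into left/ and right/ subfolders.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "fundus_photos/train",
        image_size=IMG_SIZE,
        batch_size=32,
        label_mode="binary",
    )
    train_ds = train_ds.map(
        lambda x, y: (tf.keras.applications.vgg16.preprocess_input(x), y)
    )
    model = build_laterality_model()
    model.fit(train_ds, epochs=5)
```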

Results : The UKBB laterality classification model (LCM) achieved an AUROC of 0.997. When evaluated on datasets MEEI-a and MEEI-b, the resulting AUROCs were 0.944 and 0.778, respectively. The LCM subsequently built on MEEI-a achieved an AUROC of 0.991; when evaluated on the MEEI-b and UKBB datasets, performance dropped to AUROCs of 0.545 and 0.713, respectively.
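As a companion to the training sketch above, the cross-device evaluation could be scored along the following lines. The directory name fundus_photos/meei_a and the use of scikit-learn's roc_auc_score are assumptions for illustration, not the authors' evaluation code.

```python
# Sketch of the external-validation step: score a trained laterality model
# on a second dataset and report AUROC. Paths are placeholders.
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score

IMG_SIZE = (224, 224)


def evaluate_auroc(model: tf.keras.Model, directory: str) -> float:
    ds = tf.keras.utils.image_dataset_from_directory(
        directory,
        image_size=IMG_SIZE,
        batch_size=32,
        label_mode="binary",
        shuffle=False,  # keep labels and predictions aligned
    )
    ds = ds.map(lambda x, y: (tf.keras.applications.vgg16.preprocess_input(x), y))
    y_true = np.concatenate([y.numpy() for _, y in ds])
    y_score = model.predict(ds).ravel()
    return float(roc_auc_score(y_true, y_score))


# Hypothetical usage: external validation on the posterior-pole crops
# print(evaluate_auroc(model, "fundus_photos/meei_a"))
```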

Conclusions : Simple and accurate algorithms generalize variably across devices and image settings. This finding highlights the importance of validation studies prior to deployment for clinical use.

This is a 2020 ARVO Annual Meeting abstract.
