Investigative Ophthalmology & Visual Science
Volume 63, Issue 7
Open Access
ARVO Annual Meeting Abstract  |  June 2022
Detection and Localization of Retinal Breaks in Ultra-Widefield Fundus Photography using a YOLO v3 Architecture-Based Deep Learning Model
Author Affiliations & Notes
  • Chang Ki Yoon
    Ophthalmology, Seoul National University Hospital, Jongno-gu, Seoul, Korea (the Republic of)
  • Richul Oh
    Ophthalmology, Seoul National University Hospital, Jongno-gu, Seoul, Korea (the Republic of)
  • Hyeong Gon Yu
    Ophthalmology, Seoul National University Hospital, Jongno-gu, Seoul, Korea (the Republic of)
    Seoul National University College of Medicine, Seoul, Korea (the Republic of)
  • Footnotes
    Commercial Relationships   Chang Ki Yoon None; Richul Oh None; Hyeong Gon Yu None
    Support  None
Investigative Ophthalmology & Visual Science June 2022, Vol.63, 181 – F0028.
Abstract

Purpose: We aimed to develop a deep-learning model for detecting and localizing
retinal breaks in ultra-widefield fundus (UWF) images.

Methods: We retrospectively enrolled treatment-naive patients who were diagnosed with a
retinal break or rhegmatogenous retinal detachment and had UWF images available. The
model was developed on a YOLO v3 architecture backbone with transfer learning. Model
performance was evaluated in terms of per-image classification and per-object detection.
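
As a rough illustration of the transfer-learning setup described above (not the authors' code), the sketch below shows the usual PyTorch pattern: reuse pretrained backbone weights, freeze the backbone, and train only a YOLO-style detection head for the single "retinal break" class. The Backbone and Detector classes and the weight-file name are simplified placeholders, not the actual Darknet-53/YOLO v3 implementation used in the study.

import torch
import torch.nn as nn

# Minimal transfer-learning skeleton (illustrative only): a detector with a
# Darknet-style backbone and a YOLO-style head, where the backbone reuses
# pretrained weights and only the detection head is trained at first.

class Backbone(nn.Module):                       # tiny stand-in for Darknet-53
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return self.features(x)

class Detector(nn.Module):
    def __init__(self, num_classes=1, num_anchors=3):
        super().__init__()
        self.backbone = Backbone()
        # YOLO-style head: per anchor, 4 box offsets + 1 objectness + class scores
        self.head = nn.Conv2d(64, num_anchors * (5 + num_classes), 1)

    def forward(self, x):
        return self.head(self.backbone(x))

model = Detector(num_classes=1)                  # single class: retinal break

# Transfer learning: load pretrained backbone weights (file name is hypothetical),
# then freeze the backbone and optimize only the detection head.
# state = torch.load("darknet53_pretrained.pt")
# model.backbone.load_state_dict(state, strict=False)
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

In the actual YOLO v3 architecture, the backbone is the full Darknet-53 network with detection heads at three scales; the freeze-then-fine-tune step shown here is only meant to illustrate what "transfer learning" refers to in the Methods.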

Results: A total of 4,505 UWF images from 940 patients were used in the current
study. In per-image classification, the model showed an area under the receiver
operating characteristic curve (AUROC) of 0.957 on the test set. With the best
threshold from the validation set, the accuracy, sensitivity, and specificity were 0.9118,
0.9474, and 0.8535, respectively. With respect to per-object detection, the average
precision of the object detection model, considering every retinal break, was 0.840 (Figure).
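
For readers interested in how figures of this kind are typically computed, the following sketch (an assumption, not the authors' evaluation code) pools per-box confidences into a per-image score, picks an operating threshold on the validation set, and reports AUROC, accuracy, sensitivity, and specificity on the test set with scikit-learn. The max-confidence pooling rule and the Youden's J threshold criterion are both assumptions; the abstract only states that the "best threshold" came from the validation set.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def image_score(detections):
    # Per-image score: the highest box confidence in the image
    # (0.0 when the model predicts no box). This pooling rule is an assumption.
    return max((conf for _, conf in detections), default=0.0)

def pick_threshold(y_val, s_val):
    # Operating point chosen on the validation set via Youden's J
    # (maximize sensitivity + specificity - 1); one common choice, assumed here.
    fpr, tpr, thresholds = roc_curve(y_val, s_val)
    return thresholds[np.argmax(tpr - fpr)]

def evaluate(y_test, s_test, threshold):
    # Per-image classification metrics at the fixed validation threshold.
    y_test = np.asarray(y_test)
    y_pred = (np.asarray(s_test) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_test == 1))
    tn = np.sum((y_pred == 0) & (y_test == 0))
    fp = np.sum((y_pred == 1) & (y_test == 0))
    fn = np.sum((y_pred == 0) & (y_test == 1))
    return {
        "auroc": roc_auc_score(y_test, s_test),
        "accuracy": (tp + tn) / len(y_test),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }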

Conclusions: The UWF image-based deep-learning model evaluated in the current
study performed well in diagnosing and locating retinal breaks. Owing to its fast
detection speed, we conclude that this model can be generalized to the real-time
detection of retinal breaks.
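
A simple way to sanity-check whether a detector of this kind runs at interactive speed on one's own hardware is to time repeated forward passes. The snippet below is illustrative only: it reuses the hypothetical Detector skeleton from the earlier sketch, assumes a 416x416 network input, and produces numbers that are not from the study.

import time
import torch

# Rough CPU latency check; reuses the Detector class defined in the Methods sketch.
model = Detector(num_classes=1).eval()
dummy = torch.randn(1, 3, 416, 416)

with torch.no_grad():
    for _ in range(10):                      # warm-up iterations
        model(dummy)
    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(dummy)
    elapsed = time.perf_counter() - start

print(f"mean forward-pass time: {1000 * elapsed / runs:.1f} ms per image")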

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.
