Abstract
Purpose:
We aimed to develop a deep-learning model for detecting and localizing retinal breaks in ultra-wide-field (UWF) fundus images.
Methods:
We retrospectively enrolled treatment-naive patients who were diagnosed with a retinal break or rhegmatogenous retinal detachment and had UWF images available. The model was developed using transfer learning on a YOLO v3 backbone. Model performance was evaluated by per-image classification and per-object detection.
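The abstract does not specify the implementation; as a minimal sketch of the described approach, assuming PyTorch and the publicly released ultralytics YOLOv3 weights as the transfer-learning starting point (the authors' exact framework, weights, and fine-tuning setup are not stated):

```python
import torch

# Load a COCO-pretrained YOLOv3 via torch.hub as an assumed starting point
# for transfer learning; the authors' actual weights are not stated.
model = torch.hub.load('ultralytics/yolov3', 'yolov3', pretrained=True)

# Run inference on a UWF fundus image (hypothetical file path); the model
# returns detections as (x1, y1, x2, y2, confidence, class) rows.
results = model('uwf_image.jpg')
detections = results.xyxy[0]  # tensor of per-object detections
print(detections)
```

Fine-tuning would then replace the COCO detection head with a single "retinal break" class and continue training on the annotated UWF images.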
Results:
A total of 4,505 UWF images from 940 patients were used in the current study. In per-image classification, the model achieved an area under the receiver operating characteristic curve (AUROC) of 0.957 on the test set. At the best threshold determined from the validation set, the accuracy, sensitivity, and specificity were 0.9118, 0.9474, and 0.8535, respectively. For per-object detection, the average precision over all retinal breaks was 0.840.
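For context, the per-image AUROC and threshold-based operating point reported above are commonly computed as follows; a hedged sketch using scikit-learn, with y_true and y_score as hypothetical placeholder arrays and Youden's J as an assumed threshold criterion (the abstract does not state which criterion the authors used):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical per-image labels (1 = retinal break present) and model scores.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.91, 0.23, 0.78, 0.66, 0.41, 0.12, 0.88, 0.55])

auroc = roc_auc_score(y_true, y_score)

# Pick the operating threshold on a validation set, here via Youden's J
# statistic (sensitivity + specificity - 1).
fpr, tpr, thresholds = roc_curve(y_true, y_score)
best_thr = thresholds[np.argmax(tpr - fpr)]

# Apply the threshold and derive the confusion-matrix metrics.
y_pred = (y_score >= best_thr).astype(int)
tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

accuracy = (tp + tn) / len(y_true)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUROC={auroc:.3f} acc={accuracy:.3f} "
      f"sens={sensitivity:.3f} spec={specificity:.3f}")
```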
Conclusions:
The UWF image-based deep-learning model evaluated in the current study performed well in detecting and localizing retinal breaks. Owing to its fast detection speed, we conclude that this model can be applied to the real-time detection of retinal breaks.
This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.