Abstract
Purpose:
Ultra-widefield fluorescein angiography (UWFA) is used to assess retinal vascular and choroidal abnormalities in retinal disease. During acquisition, a series of angiographic images is obtained. These images vary widely in quality due to multiple factors (e.g., patient-related and imaging-related), and this variability can limit image utility and delay care. The purpose of this study was to evaluate the feasibility of a deep learning model for automated classification of UWFA image quality.
Methods:
The dataset comprised 5658 UWFA images obtained during routine retinal care. Ground truth image quality was assessed by expert review and classified into one of four categories (ungradable, poor, good, or best) based on key factors such as contrast, field of view, media opacity, and obscuration from external features. A randomized set of 3543 images was used to train the model; the initial testing set comprised 615 images, and the validation set included 1500 images.
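The abstract does not specify the model architecture or training pipeline. As a rough illustration only, the sketch below shows how a four-class quality classifier with the reported 3543/615/1500 split could be set up in PyTorch; the directory layout, ResNet-50 backbone, input size, and hyperparameters are assumptions, not the authors' method.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

# Four quality grades named in the abstract.
GRADES = ["ungradable", "poor", "good", "best"]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                 # input size is an assumption
    transforms.Grayscale(num_output_channels=3),   # angiographic frames are monochrome; backbone expects 3 channels
    transforms.ToTensor(),
])

# Hypothetical layout: uwfa/<grade>/<image>.png, one folder per grade.
dataset = datasets.ImageFolder("uwfa", transform=preprocess)

# Split sizes reported in the abstract (assumes exactly 5658 images on disk).
train_set, test_set, val_set = random_split(dataset, [3543, 615, 1500])

# ImageNet-pretrained backbone with a four-way quality head (architecture assumed).
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(GRADES))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One pass over the training set for illustration.
model.train()
for images, labels in DataLoader(train_set, batch_size=16, shuffle=True):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()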
Results:
On expert review of the 5658 images, 153 (2.7%) were graded as best, 1514 (26.8%) as good, 1682 (29.7%) as poor, and 2309 (40.8%) as ungradable. In the testing set, the classifier distinguished gradable (best, good, or poor) from ungradable images with an overall accuracy of 87.1%, a sensitivity of 92.7%, and a specificity of 82.1%. The receiver operating characteristic (ROC) curve for this two-class (gradable vs. ungradable) task had an area under the curve (AUC) of 0.945.
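To illustrate how the binary gradable-versus-ungradable metrics above are defined, the following sketch computes accuracy, sensitivity, specificity, and ROC AUC with scikit-learn. The labels and scores are synthetic placeholders, not the study data.

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)

# Placeholder data standing in for a 615-image test set:
# y_true = 1 for gradable (best/good/poor collapsed), 0 for ungradable.
y_true = rng.integers(0, 2, size=615)
y_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, size=615), 0.0, 1.0)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # proportion of gradable images correctly identified
specificity = tn / (tn + fp)   # proportion of ungradable images correctly identified
auc = roc_auc_score(y_true, y_score)   # area under the ROC curve

print(f"accuracy={accuracy:.3f}  sensitivity={sensitivity:.3f}  "
      f"specificity={specificity:.3f}  AUC={auc:.3f}")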
Conclusions:
A deep learning model successfully performed automatic classification of UWFA image quality. This approach may greatly reduce the manual image grading workload and provide near-instantaneous feedback on image quality during acquisition.
This is a 2020 ARVO Annual Meeting abstract.