Abstract
Purpose:
To evaluate the retinal detachment identification performance of an AI model for Ultra-Wide Field (UWF) fundus cameras using real clinical data.
Methods:
A retinal detachment identification model (AI model) was trained on UWF retinal detachment and normal images collected at the Tsukazaki Hospital Department of Ophthalmology. To evaluate its performance, an evaluation dataset approximating a real-world deployment scenario, with an underlying disease prevalence of 1% (30/3,000), was created. It consisted of 30 UWF images of 30 eyes that underwent retinal detachment surgery (retinal detachment cases) and 2,970 UWF images of 2,970 eyes judged normal at a health screening center. Model performance was assessed at five operating points, corresponding to 0 through 4 missed retinal detachment cases, and the positive predictive value (PPV) was calculated at each point.
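The sensitivity and PPV at each operating point follow directly from confusion-matrix counts. A minimal sketch of that calculation (the false-positive count used below is a hypothetical illustrative value, not a figure reported in this abstract):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true retinal detachment cases the model flags positive."""
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: fraction of positive calls that are true."""
    return tp / (tp + fp)

# Four misses among the 30 surgical cases: sensitivity = 26/30
print(f"sensitivity: {sensitivity(26, 4):.1%}")  # 86.7%

# Hypothetical false-positive count, chosen only to illustrate the PPV
# formula at 0 misses (TP = 30); false positives are not reported here.
print(f"PPV: {ppv(30, 2697):.1%}")
```

At a low-prevalence operating point, even a small false-positive rate over 2,970 normals dominates the denominator, which is why PPV stays low until the threshold is tightened.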
Results:
With zero misses (sensitivity 100.0%), the PPV was 1.1%; with one miss (sensitivity 96.7%), 2.1%; with two misses (sensitivity 93.3%), 2.3%; with three misses (sensitivity 90.0%), 4.2%; and with four misses (sensitivity 86.7%), 44.8%.
Conclusions:
For real-world deployment, the realistic operating sensitivity of the retinal detachment identification model developed here is considered to be 86.7%.
This abstract was presented at the 2024 ARVO Imaging in the Eye Conference, held in Seattle, WA, May 4, 2024.