Abstract
Purpose:
Deep learning techniques have attracted tremendous interest in ophthalmology, particularly the use of convolutional neural networks (CNNs) for automated detection of ocular diseases from fundus photography. Even traits that were previously thought not to be visible in fundus images, such as patient sex, can be detected by CNNs. Despite this success, the lack of transparency stemming from the 'black box' nature of deep learning remains a roadblock to widespread adoption in medicine, and interpretations produced by post hoc explainability methods remain superficial and inadequate.
Methods:
We propose a novel methodology to overcome these limitations and translate what the model has learned. To demonstrate its efficacy, we use the classification of patient sex as a proof of concept. In Phase 1, we fine-tune a pre-trained InceptionResNet-v2 model on 4746 fundus images for sex classification. In Phase 2, we use post hoc interpretation tools, including the Grad-CAM and activation maximization algorithms, to visualize the model's decisions. Importantly, we use these visualizations exclusively as "inspiration" to develop exploratory hypotheses, which are then tested on an unseen "exploration dataset" via t-tests. Hypotheses showing significant sex differences in Phase 2 are short-listed for verification. In Phase 3, the short-listed hypotheses are re-tested on an unseen "verification set" using t-tests with p-values adjusted by the Benjamini-Hochberg method.
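The Phase 3 verification step combines two standard statistical tools: per-hypothesis two-sample t-tests and a Benjamini-Hochberg false-discovery-rate correction across hypotheses. A minimal sketch of that procedure is below; the data here are randomly generated stand-ins (the group shapes, means, and variances are illustrative assumptions, not the study's measurements).

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Hypothetical surrogate data: 5 candidate retinal features (rows)
# measured in 100 male and 100 female subjects. Values are illustrative only.
male = rng.normal(1.05, 0.20, size=(5, 100))
female = rng.normal(1.00, 0.20, size=(5, 100))

# One two-sample t-test per short-listed hypothesis
pvals = np.array([ttest_ind(m, f).pvalue for m, f in zip(male, female)])

def benjamini_hochberg(p):
    """Return Benjamini-Hochberg adjusted p-values, in the input order."""
    p = np.asarray(p, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Scale each sorted p-value by m / rank
    ranked = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest rank downward; cap at 1
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1].clip(max=1.0)
    out = np.empty_like(adjusted)
    out[order] = adjusted
    return out

adj = benjamini_hochberg(pvals)
verified = adj < 0.05  # hypotheses surviving FDR correction
```

The same adjustment is available as `statsmodels.stats.multitest.multipletests(pvals, method='fdr_bh')`; the pure-NumPy version above just keeps the sketch self-contained.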
Results:
In Phase 1, our model showed generalization AUCs of 79% and 68% on two test sets (both p < 0.001). In Phase 2, 14 exploratory hypotheses regarding sex differences in the retinal vasculature, the optic disc, and the brightness of the peripapillary area were articulated, of which 9 showed significant differences. In Phase 3, 5 hypotheses were verified, showing significantly greater length, node count, and branch count of the retinal vasculature and greater retinal area covered by vessels in the superior temporal quadrant in males, and a significantly brighter peripapillary area in females.
Conclusions:
Our research demonstrates that AI can be used to uncover retinal characteristics that were previously unidentified. This approach can be extended to other areas of clinical and fundamental significance. Finally, translating our findings into a training paradigm based on AI-discovered retinal features can benefit ophthalmologists and expand their diagnostic toolkit.
This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.