Marion Ronit Munk, Thomas Kurmann, Pablo Marquez-Neila, Martin Sebastian Zinkernagel, Sebastian Wolf, Raphael Sznitman; Extraction of Patient Specific Information from Fundus Images in the Wild. Invest. Ophthalmol. Vis. Sci. 2019;60(9):4803.
Fundus imaging is used for the diagnosis and treatment planning of a broad range of retinal pathologies. Among these is diabetic retinopathy, where previous studies have shown that fundus images contain patient-specific information such as gender, age and cardiovascular risk factors. Given the large spectrum of diseases typically found in a clinical setting, we hypothesize that a machine learning model can extract a patient's gender from fundus images acquired over a large and heterogeneous clinical population.
N=16196 patients (8180 female, 8016 male; mean age 57.87 years, range 45 to 74) with a total of N=135690 fundus images were included. All images were included irrespective of image quality, number of channels or device manufacturer. The dataset was divided into a training set of N=121960 images and a test set of N=13730 images. We trained an ensemble of 10 deep learning classifiers to classify every fundus image as female or male. Each classifier is based on a dilated residual network architecture pre-trained on ImageNet. We minimized the cross-entropy loss with L2 regularization using stochastic gradient descent with early stopping. Area under the receiver operating characteristic curve (AUC), specificity, sensitivity and accuracy were assessed.
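The abstract does not state how the 10 ensemble members are combined at test time; a common choice, sketched below under that assumption, is to average each member's predicted probability for the positive class and threshold the mean. All names and probability values here are hypothetical, for illustration only.

```python
# Hedged sketch (not the study's code): combining a 10-member binary
# gender-classifier ensemble by averaging per-member P(female) scores.

def ensemble_predict(member_probs, threshold=0.5):
    """Average the members' P(female) outputs and threshold the mean."""
    avg = sum(member_probs) / len(member_probs)
    return avg, ("female" if avg >= threshold else "male")

# Hypothetical outputs of 10 ensemble members for one fundus image
probs = [0.71, 0.65, 0.80, 0.55, 0.60, 0.72, 0.68, 0.75, 0.58, 0.66]
score, label = ensemble_predict(probs)
# score = 0.67, label = "female"
```

Averaging probabilities (rather than majority voting) preserves each member's confidence and yields a continuous score suitable for computing an AUC.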
Our method achieved an AUC of 0.829, with a sensitivity of 0.687, a specificity of 0.802 and an accuracy of 0.74. We randomly sampled 260 predictions and manually binned them into four classes: 145 correct predictions with visible fovea and optic disc; 61 incorrect predictions with visible fovea and optic disc; 25 correct predictions with non-visible fovea and optic disc; and 29 incorrect predictions with non-visible fovea and optic disc. This suggests that better-than-random gender prediction is only possible when the fovea and optic disc are visible in the fundus image.
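For readers unfamiliar with the reported metrics, the sketch below shows how sensitivity, specificity and accuracy follow from a binary confusion matrix with "female" as the positive class. The counts are hypothetical, chosen only so that the sensitivity and specificity match the values reported above; they are not the study's actual confusion matrix.

```python
# Illustrative computation (not the study's code) of sensitivity,
# specificity and accuracy from a binary confusion matrix.

def binary_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts reproducing the reported sensitivity/specificity
sens, spec, acc = binary_metrics(tp=687, fp=198, tn=802, fn=313)
# sens = 0.687, spec = 0.802, acc = 0.7445
```

Note that accuracy depends on the class balance of the evaluation set, which is why the abstract reports sensitivity and specificity alongside it.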
Deep learning methods can classify a patient's gender from fundus images over a broad spectrum of patients and pathologies. This appears to hold even when including images irrespective of image quality, occlusion or other degradation. At the same time, it highlights the difficulties of using fundus images acquired in daily clinical practice.
This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.