Christian Wojek, Keyur Ranipa, Abhishek Rawat, Thomas Milde, Alexander Freytag; Image Quality Assessment of Fundus Images Using Deep Convolutional Neural Networks with Extremely Few Parameters. Invest. Ophthalmol. Vis. Sci. 2017;58(8):689.
Current state-of-the-art remote screening solutions classify fundus images as healthy or diseased, either by human readers or by algorithms. It is tacitly assumed that images are recorded at optimal image quality (IQ). This assumption is heavily violated in the field, particularly for hand-held fundus cameras, causing unreliable clinical findings. Here we describe a deep convolutional neural network (CNN) that predicts the IQ of a fundus image at acquisition time. Thus, non-experts can obtain feedback on image quality and iterate until IQ allows for reliable clinical findings.
We trained CNNs for the task of IQ prediction based on 4,262 good and 178 poor-quality images taken with a VISUSCOUT® 100 (ZEISS, Jena, Germany). The most common error types caused by non-optimal usage of hand-held fundus cameras are light leakage, motion blur, overexposure, and underexposure. Given the unbalanced sample bins, we additionally simulated poor IQ by taking good data and randomly adding one of the four sources of error, as shown in Fig. 1. For every good image, we simulated two random versions per error type, resulting in another 34,096 images of poor quality. We used 70% of the images, chosen at random, for training, 10% for model validation, and 20% for testing. We trained both a large and an extremely small CNN, with 875,040 and 3,104 parameters respectively, for the task of IQ prediction.
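The augmentation scheme above (two simulated degraded versions per error type for every good image) could in principle be sketched as follows. This is a minimal illustration, not the authors' implementation: the specific degradation functions, parameter values, and the grayscale representation are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def underexpose(img, factor=0.3):
    """Darken the whole image (simulated underexposure; factor is assumed)."""
    return np.clip(img * factor, 0.0, 1.0)

def overexpose(img, factor=2.5):
    """Brighten until highlights saturate (simulated overexposure)."""
    return np.clip(img * factor, 0.0, 1.0)

def motion_blur(img, length=15):
    """Horizontal box blur approximating handshake during acquisition."""
    kernel = np.ones(length) / length
    return np.apply_along_axis(np.convolve, 1, img, kernel, mode="same")

def light_leak(img, strength=0.8):
    """Add a bright off-centre Gaussian blob (simulated light leakage)."""
    h, w = img.shape
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    y, x = np.ogrid[:h, :w]
    sigma = 0.25 * max(h, w)
    blob = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(img + strength * blob, 0.0, 1.0)

def simulate_poor_quality(img, n_per_type=2):
    """For one good image, generate n_per_type degraded copies per error type,
    mirroring the 4,262 x 4 x 2 = 34,096 simulated poor-quality images."""
    degradations = [underexpose, overexpose, motion_blur, light_leak]
    return [f(img) for f in degradations for _ in range(n_per_type)]
```

With randomized parameters per call, each degraded copy differs, which matches the abstract's "two random versions per error type."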
Our results are shown in Fig. 2. Both networks clearly learned to solve the task of automated IQ prediction, achieving AUC scores of >99.9%. Not surprisingly, a network trained on very little data (light red line) is not able to reach an acceptable accuracy level. In Fig. 2b, we additionally show qualitative impressions. All shown images were rated as "good" by experts. The top row contains the images with the lowest scores (i.e., reject) given by our network. As all of these images suffer from strong light leakage, this behavior is understandable. However, their scores are still larger than those of the majority of poor images.
We present a solution for automated IQ assessment of fundus images taken with the hand-held fundus camera VISUSCOUT 100. Our results surpass 99.8% AUC even with a tiny CNN, which directly allows for in-field applications on limited hardware.
This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.
Fig. 1: A well-captured fundus image and four simulated images with failure cases that result in poor IQ.
Fig. 2: Quantitative and qualitative results for automated IQ prediction.