Bruno Lay, Ronan Danno, Gwenole Quellec, Etienne Decenciere, Ali Erginay, Pascale Massin, Alexandre Le Guilcher, Mathieu Lamard, Beatrice Cochener, Robin Alais; RetinOpTIC - Automatic Evaluation of Diabetic Retinopathy. Invest. Ophthalmol. Vis. Sci. 2019;60(9):5312. doi: https://doi.org/.
The RetinOpTIC project performs mass screening of color fundus images and assesses image quality and Diabetic Retinopathy (DR) grade. Algorithm performance is evaluated on the Messidor-2 image database.
Based on artificial intelligence (AI), referable DR is detected using convolutional neural networks (CNNs). The solution first assesses the quality of the photograph automatically, and then the DR grade.

About 10% of images acquired in e-medicine networks are considered ungradable for quality reasons. Automatically detecting these cases, either to re-acquire them when possible or to spare readers a useless analysis, is an important step toward improving network performance. Within the project, an AI solution was developed to automatically determine whether the macula and surrounding vessels are visible, and to assess the global sharpness of the image, its local sharpness, and the density of the vessel network.

Once image quality is acceptable, a set of CNNs produces a single referral decision. Unlike competing AI solutions, the CNNs are jointly trained so that they complement one another. The proposed set of CNNs was trained on more than 80,000 images from a well-established consortium of hospitals. Thanks to a proposed heatmap generation method, the patterns each CNN detects can be overlaid on images for pathology visualization.
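The fusion of several complementary CNN scores into one referral decision can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the abstract does not specify the fusion rule, so simple score averaging with a referral threshold is assumed here, and the scores and threshold are invented for the example.

```python
import numpy as np

def ensemble_referral(scores, threshold=0.5):
    """Fuse per-CNN referable-DR probabilities into one decision.

    scores: iterable of probabilities in [0, 1], one per CNN in the
    ensemble (stand-ins for the jointly trained, complementary CNNs).
    Returns the fused score and the binary referral decision.
    """
    scores = np.asarray(scores, dtype=float)
    fused = scores.mean()  # assumed fusion rule; the paper's exact rule is not given
    return fused, bool(fused >= threshold)

# Example with three hypothetical CNN scores for one fundus image:
fused, refer = ensemble_referral([0.92, 0.85, 0.78])
print(fused, refer)  # → 0.85 True
```

In practice a fused score rather than a hard vote also makes it straightforward to trade sensitivity against specificity by moving the threshold along the ROC curve.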
The quality criterion was evaluated on 6,098 images annotated by two experts; it reached 96.4%. Referable DR is detected with an area under the ROC curve of 0.988 on the Messidor-2 database, using the University of Iowa’s reference standard (sensitivity = 99.0%, specificity = 87.0%). These results surpass previously reported systems evaluated under the same conditions. Notably, each co-trained CNN specializes in one lesion type or category; the system can therefore produce lesion-specific heatmaps, whereas previously reported CNN heatmaps do not allow lesion differentiation.
The proposed jointly trained CNN ensemble improves fully automatic detection of referable DR and adds an assessment of image quality. It produces more accurate predictions in less than a second. The algorithms will soon be CE marked, providing a fully automated system for use in hospitals, private practices, and mass screening networks.
This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.