Shuo Li, Xiaodan Sui, Yu Wang, Tongtong Che, Wanzhen Jiao, Bojun Zhao, Yuanjie Zheng; Screening for Glaucoma from Fundus Images via Multitask Deep Learning. Invest. Ophthalmol. Vis. Sci. 2020;61(7):4530.
An assessment of structural changes at the optic nerve head (ONH) is commonly used in screening for glaucoma. It is typically carried out through segmentation of the optic disc (OD) and optic cup (OC) in fundus images, followed by calculation of the cup-to-disc ratio (CDR). However, this process is plagued by high labor costs and inter/intra-observer variation. We develop an automated and objective glaucoma-screening tool by leveraging cutting-edge multitask deep learning. It assesses structural changes at the ONH implicitly, so that explicit image segmentation and CDR calculation are not needed.
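For reference, the conventional pipeline the abstract contrasts against can be sketched in a few lines. The binary-mask representation and the vertical-diameter definition of CDR used below are illustrative assumptions, not details taken from the abstract:

```python
def vertical_diameter(mask):
    """Vertical extent (in rows) of a binary mask given as a list of 0/1 rows."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    return (max(rows) - min(rows) + 1) if rows else 0

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical CDR: cup diameter divided by disc diameter."""
    disc = vertical_diameter(disc_mask)
    return vertical_diameter(cup_mask) / disc if disc else 0.0

# Toy 8x8 masks: the disc spans rows 1-6, the cup rows 3-4.
disc = [[0] * 8 for _ in range(8)]
for r in range(1, 7):
    disc[r] = [0, 1, 1, 1, 1, 1, 1, 0]
cup = [[0] * 8 for _ in range(8)]
for r in range(3, 5):
    cup[r] = [0, 0, 1, 1, 1, 1, 0, 0]

cdr = cup_to_disc_ratio(cup, disc)  # 2 / 6; a larger CDR suggests glaucoma
```

In practice both masks come from manual or automated segmentation, which is exactly the labor-intensive, variable step the proposed framework avoids at test time.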
We employ the ORIGA-light dataset, which contains 650 fundus images with 168 glaucomatous eyes and 482 healthy eyes. We propose a multitask deep learning framework (shown in Figure 1) that contains four modules: (1) a convolutional neural network (CNN) for extracting deep features expressing the complicated pattern of the OD and OC; (2) a CNN for OD and OC segmentation as the first auxiliary task, outputting OD and OC labels; (3) a CNN for regressing the CDR as the second auxiliary task, outputting a CDR value; and (4) a CNN for classifying glaucomatous versus healthy subjects as the main task. The three tasks run in parallel during training, supervised by the expert's OD and OC segmentations, CDR values, and diagnoses, respectively. The two auxiliary tasks are discarded during testing, so the corresponding manual labels and CDR values are no longer needed. Their involvement in training, however, helps to improve the main task.
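The training objective implied by this setup is the main classification loss plus weighted auxiliary losses. The following minimal sketch makes that concrete; the specific loss choices (cross-entropy for classification/segmentation, mean squared error for CDR regression) and the weights `w_seg`, `w_cdr` are assumptions for illustration, as the abstract does not specify them:

```python
import math

def bce(p, y):
    """Binary cross-entropy for one predicted probability p and label y in {0, 1}."""
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def multitask_loss(seg_probs, seg_labels, cdr_pred, cdr_true,
                   cls_prob, cls_label, w_seg=1.0, w_cdr=1.0):
    """Main glaucoma-classification loss plus weighted auxiliary losses.
    All three terms are used during training; at test time only the
    classification head is evaluated."""
    loss_seg = sum(bce(p, y) for p, y in zip(seg_probs, seg_labels)) / len(seg_labels)
    loss_cdr = (cdr_pred - cdr_true) ** 2   # squared error on the CDR value
    loss_cls = bce(cls_prob, cls_label)     # glaucoma vs. healthy
    return loss_cls + w_seg * loss_seg + w_cdr * loss_cdr
```

Because the auxiliary terms only shape the shared feature extractor during training, dropping them at test time leaves a classifier that needs no segmentation labels or CDR annotations.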
Two-fold cross-validation is used in our experiment. The AUC (area under the curve)/best accuracy for distinguishing glaucomatous from healthy subjects using the expert's CDR is 0.823/0.72. In contrast, our framework achieves an AUC/best accuracy of 0.844/0.77. The auxiliary tasks are validated to be helpful: when they are eliminated from training, the AUC/best accuracy drops to 0.811/0.71.
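The AUC values above can be computed directly from held-out classifier scores. The pairwise-comparison formulation below is a standard equivalent of the area under the ROC curve (the probability that a random positive scores above a random negative, with ties counting 0.5); it is offered as a reference implementation, not the authors' evaluation code:

```python
def auc(scores, labels):
    """AUC via pairwise comparison of positive and negative scores."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: one positive (0.7) is ranked below one negative (0.8),
# so 3 of the 4 positive/negative pairs are correctly ordered.
example = auc([0.9, 0.8, 0.7, 0.2], [1, 0, 1, 0])  # 0.75
```

Under two-fold cross-validation, scores from each held-out fold would be pooled (or the per-fold AUCs averaged) before reporting.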
Our multitask deep learning framework offers an automated tool for accurately screening for glaucoma from fundus images. It involves the measurement of structural changes at the ONH during training but not during testing. A key advantage is that diagnosis (at test time) requires no explicit ONH structural information, yet considers it implicitly.
This is a 2020 ARVO Annual Meeting abstract.
Figure 1: The architecture of our framework.