Xiaodan Sui, Yanyun Jiang, Yanhui Ding, Yingjie Peng, Wanzhen Jiao, Bojun Zhao, Yuanjie Zheng; Human Grading of Diabetic Retinopathy Improves Deep Learning Based Automatic Segmentation of Microaneurysms from Fundus Image. Invest. Ophthalmol. Vis. Sci. 2020;61(7):2037.
Microaneurysms are the earliest clinically visible changes of diabetic retinopathy (DR), and their accurate detection from fundus images plays an important role in reliable early diagnosis of DR. We propose a novel deep convolutional neural network (DCNN) that segments microaneurysms from fundus images under supervision from not only the commonly used pixel-level microaneurysm annotations but also human grading of DR, in order to show that including the latter improves segmentation accuracy.
A typical deep-learning-based segmentation method requires pixel-level annotations of the structures to be segmented. Collecting high-quality pixel-level annotations is expensive and time-consuming, whereas manual disease grading is relatively easy to obtain. We develop a novel DCNN that segments microaneurysms from fundus images with supervision from both pixel-level microaneurysm annotations and expert DR grading. It consists of two sub-networks, one for microaneurysm segmentation and one for DR grading. The grading sub-network predicts a pseudo lesion map, which is then used to fine-tune the microaneurysm segmentation together with the pixel-level human annotations of microaneurysms. We train our DCNN on fundus images from the IDRiD and Kaggle DR Detection datasets. The IDRiD dataset was annotated by several experienced ophthalmologists and consists of 81 fundus images with pixel-level microaneurysm annotations, acquired with a digital fundus camera with a 50-degree field of view (FOV). The Kaggle DR Detection dataset provided 2443 normal images and 2443 images with grade 1 annotations.
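The abstract does not specify how the pseudo lesion map and the pixel-level annotations are combined during fine-tuning. The sketch below illustrates one plausible formulation, assuming a pixel-wise binary cross-entropy loss and a hypothetical weight `lam` for the pseudo-map term; both choices are assumptions, not the authors' stated method.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy, averaged over the map."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def joint_loss(seg_pred, pixel_annot, pseudo_map, lam=0.5):
    """Segmentation loss supervised by both the expert pixel annotations and
    the pseudo lesion map from the grading sub-network.
    `lam` is a hypothetical weight for the pseudo-map term."""
    return bce(seg_pred, pixel_annot) + lam * bce(seg_pred, pseudo_map)

# Toy 4x4 maps standing in for real fundus-sized outputs.
seg_pred = np.full((4, 4), 0.5)          # uninformative segmentation output
annot = np.zeros((4, 4)); annot[1, 1] = 1.0    # expert pixel annotation
pseudo = np.zeros((4, 4)); pseudo[1, 1] = 0.8  # grading sub-network's pseudo map
loss = joint_loss(seg_pred, annot, pseudo)
```

Setting `lam=0` recovers the baseline supervised only by pixel-level annotations, which is the comparison network evaluated in the results.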
We train and evaluate the proposed DCNN using 5-fold cross-validation. The precision-recall AUC (AUC-PR) values show that the proposed model is more accurate than a DCNN supervised only by pixel-level annotations: it achieves an AUC-PR of 0.488 in microaneurysm segmentation, compared with 0.453 for the network trained without DR-grading supervision.
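For readers unfamiliar with the metric, average precision is one common estimator of the area under the precision-recall curve. A minimal numpy sketch on toy per-pixel scores (the labels and scores below are illustrative, not data from the study):

```python
import numpy as np

def average_precision(y_true, y_score):
    """Average precision over ranked predictions: the mean of the
    precision values at each true-positive rank."""
    order = np.argsort(-y_score)          # rank pixels by descending score
    y = y_true[order]
    tp = np.cumsum(y)                      # true positives at each rank
    precision = tp / np.arange(1, len(y) + 1)
    return float(np.sum(precision * y) / np.sum(y))

# Toy flattened pixel labels (1 = microaneurysm) and predicted probabilities.
y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.25, 0.2, 0.9, 0.3])
ap = average_precision(y_true, y_score)
```

AUC-PR is preferred over ROC AUC here because microaneurysm pixels are a tiny fraction of a fundus image, and precision-recall curves are more informative under such heavy class imbalance.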
The proposed DCNN model improves microaneurysm segmentation from fundus images thanks to the additional supervision from expert DR grading. This is of great importance in clinical practice, since DR grading requires far less expert effort than pixel-level microaneurysm annotation.
This is a 2020 ARVO Annual Meeting abstract.