Ryo Kawasaki, Liangzhi Li, Yuta Nakashima, Hajime Nagahara, Takayoshi Ohkubo, Kohji Nishida; A fully automated grading system for the retinal arteriovenous crossing signs using deep neural network. Invest. Ophthalmol. Vis. Sci. 2020;61(7):1930.
Grading of retinal arteriovenous crossing signs, such as by Scheie's classification, has traditionally been performed by subjective inspection by ophthalmologists. Low reproducibility is a critical issue that makes it difficult to use this grading for both clinical and research purposes. We built a deep neural network (DNN)-based pipeline for a fully automated grading system for retinal arteriovenous crossing signs.
(I) Image patch extraction: From the fundus image database of the Ohasama study, Japan, we accumulated image patches around 4,240 arteriovenous crossing points using a simple DNN-based model. (II) Annotation: Each image patch was reviewed by a retina specialist and annotated to exclude bifurcations, and was assigned a severity score: (0) none, (1) mild, (2) moderate, or (3) severe. Among the image patches, 2,507 were correctly detected, of which 1,146, 800, 452, and 56 had severity scores 0 to 3, respectively. (III) Vessel segmentation: We applied vessel segmentation, and then classified vessels into arterioles and venules. Arteriovenous crossing point candidates were detected from the segmented vessels and verified with a DNN-based model. (IV) Severity estimation: This was formulated as a DNN-based regression model, considering the high correlation among the severity scores. We trained our vessel segmentation model based on ResNet-50. Our dataset was divided into training, validation, and test splits (consisting of 100 image patches); the training split was used for training, and we chose the model that yielded the highest performance on the validation split.
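Step (IV) treats grading as regression over the ordinal severity scale rather than as independent classification. A minimal sketch of how a continuous regression output might be mapped back to the discrete 0-3 grades (rounding and clamping are an illustrative assumption; the abstract does not specify this post-processing):

```python
def regression_to_severity(y_pred: float) -> int:
    """Map a continuous regression output to a discrete severity grade.

    Grades: 0 = none, 1 = mild, 2 = moderate, 3 = severe.
    Rounding to the nearest grade and clamping to [0, 3] is an
    illustrative choice, not necessarily the authors' method.
    """
    return max(0, min(3, round(y_pred)))
```

For example, a raw prediction of 1.6 maps to grade 2, while an out-of-range prediction of 3.4 is clamped to grade 3. A regression formulation penalizes predictions further from the true grade more heavily, which exploits the ordinal correlation among the severity scores noted above.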
Our grading pipeline of patch extraction, vessel segmentation, and crossing point verification identified crossing points with a precision of 98.1% and a recall of 89.7%. Among correctly detected crossing points, the kappa value for the agreement between the retina specialist's grading and the estimated score was 0.61, with an accuracy of 0.77.
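The agreement metrics reported here can be computed from a confusion matrix between the specialist's grades and the model's predictions. A self-contained sketch of Cohen's kappa and overall accuracy (the 2x2 matrix in the example is hypothetical, not the study's data):

```python
def kappa_and_accuracy(cm):
    """Cohen's kappa and accuracy from a square confusion matrix.

    cm[i][j] = number of samples the specialist graded i and the
    model graded j.
    """
    n = len(cm)
    total = sum(sum(row) for row in cm)
    # observed agreement: fraction of samples on the diagonal
    p_o = sum(cm[i][i] for i in range(n)) / total
    # expected chance agreement, from the marginal frequencies
    row_sums = [sum(cm[i]) for i in range(n)]
    col_sums = [sum(cm[i][j] for i in range(n)) for j in range(n)]
    p_e = sum(row_sums[i] * col_sums[i] for i in range(n)) / total ** 2
    return (p_o - p_e) / (1 - p_e), p_o

# Hypothetical 2-class example: kappa = 0.7, accuracy = 0.85
kappa, acc = kappa_and_accuracy([[40, 5], [10, 45]])
```

Kappa corrects the raw agreement for agreement expected by chance, which is why it is lower than the accuracy (0.61 vs. 0.77 in the study).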
By combining DNN-based models, we built a system that reproduces a retina specialist's subjective grading without handcrafted feature extraction. This proof-of-concept study demonstrates that a DNN-based model can be trained to support retinal screening for cardiovascular risk assessment.
This is a 2020 ARVO Annual Meeting abstract.