Kaito Narimoto, Naoki Okumura, Shohei Yamada, Kengo Okamura, Ayaka Izumi, Theofilos Tourtas, Friedrich E Kruse, Noriko Koizumi; Deep neural network by transfer learning for the analysis of guttae in the patients with Fuchs endothelial corneal dystrophy. Invest. Ophthalmol. Vis. Sci. 2021;62(8):817.
Transfer learning (TL) is a method in deep learning in which knowledge is transferred from one model to another. It allows artificial intelligence (AI) models to be generated 1) from limited amounts of data and 2) in a shortened time. We previously generated AI for analyzing guttae in a Fuchs endothelial corneal dystrophy (FECD) model mouse (Yamada, S, et al., ARVO, 2020). Here we report the use of TL to generate an AI for analyzing human guttae from that mouse-model AI.
Of the corneal endothelial images obtained from 20 patients with FECD via contact specular microscopy, 26 focused images were selected, and the guttae areas were manually annotated as ground truth. Our previously reported AI for analyzing mouse guttae, which was generated from FECD mouse-model data (n=2538), was then used to predict the area of human guttae. Next, training/testing was performed to adapt the mouse-guttae AI to human guttae via application of U-Net, a fully convolutional network architecture for biomedical image segmentation. The TL AI was then evaluated by predicting the guttae areas of the human subjects and comparing them with the ground truth.
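The mouse-to-human adaptation step can be illustrated with a toy sketch. This is not the authors' U-Net pipeline: a small logistic-regression "segmenter" stands in for the network, weights trained on a large source task (the mouse data) are reused as the starting point for a few fine-tuning steps on a small, shifted target task (the 26 human images). All data here are synthetic and the dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, X, y):
    """Cross-entropy loss of a logistic model, a stand-in for the segmentation loss."""
    p = sigmoid(X @ w)
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def train(w0, X, y, lr=0.5, steps=200):
    """Plain gradient descent starting from w0 (the 'pretrained' weights for TL)."""
    w = w0.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (p - y)) / len(y)
    return w

# "Source" task: plentiful data, standing in for the 2538 mouse images.
w_true_src = np.array([2.0, -1.0, 0.5])
X_src = rng.normal(size=(2000, 3))
y_src = (X_src @ w_true_src > 0).astype(float)
w_src = train(np.zeros(3), X_src, y_src)

# "Target" task: related but shifted, only 26 samples (the human images).
w_true_tgt = w_true_src + np.array([0.3, 0.2, -0.2])
X_tgt = rng.normal(size=(26, 3))
y_tgt = (X_tgt @ w_true_tgt > 0).astype(float)

loss_before = logistic_loss(w_src, X_tgt, y_tgt)   # source model applied as-is
w_tl = train(w_src, X_tgt, y_tgt, steps=50)        # fine-tune from source weights
loss_after = logistic_loss(w_tl, X_tgt, y_tgt)
```

The key point is the initialization: `train` is started from `w_src` rather than from zeros, so the few target samples only have to correct the source model instead of learning from scratch, which mirrors why TL needed far less human data than the original mouse training.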
The AI for mouse guttae severely underestimated the guttae of human subjects, and the Pearson correlation coefficient showed no significant correlation between the guttae area predicted by AI and the ground truth (r= 0.21, p= 0.384). The sensitivity, specificity, and F-measure of the mouse-model AI were 84.6%, 99.8%, and 88.8%, respectively, for analyzing mouse guttae, but 5.8%, 99.9%, and 10.9%, respectively, for analyzing human guttae. However, the AI for human subjects, which was generated by TL from the mouse-model AI, successfully recognized human guttae, with sensitivity, specificity, and F-measure of 86.6%, 94.7%, and 81.5%, respectively. The Pearson correlation coefficient showed that the guttae area predicted by the TL AI for human subjects was strongly associated with the manually annotated ground truth (r= 0.96, p= 1.60×10⁻¹¹). Bland-Altman analysis showed that the mean systematic error of the TL AI was -2.25±10.7%.
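The metrics reported above all derive from pixel-wise confusion counts between the predicted and annotated masks, plus a correlation over per-image areas. A minimal sketch in pure Python (the masks and area values below are toy data, not the study's images):

```python
import math

def confusion(pred, truth):
    """Pixel-wise confusion counts for flattened binary masks (1 = guttae)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return tp, tn, fp, fn

def segmentation_metrics(pred, truth):
    """Sensitivity (recall), specificity, and F-measure of a predicted mask."""
    tp, tn, fp, fn = confusion(pred, truth)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f_measure

def pearson_r(xs, ys):
    """Pearson correlation, e.g. predicted vs annotated guttae area per image."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: flattened 4x4 masks.
truth = [1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
pred  = [1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
sens, spec, f1 = segmentation_metrics(pred, truth)
```

The Bland-Altman systematic error quoted above is simply the mean ± standard deviation of the per-image differences between predicted and annotated areas, computable from the same per-image area lists passed to `pearson_r`.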
TL allowed the AI for analyzing mouse guttae to be adapted into an AI for analyzing human-subject guttae. Our findings suggest that TL will be applicable in multiple fields of ophthalmology; e.g., the generation of AI for device A from data sets obtained by devices B, C, and D.
This is a 2021 ARVO Annual Meeting abstract.