Volume 59, Issue 9
Open Access
ARVO Annual Meeting Abstract  |   July 2018
Deep Learning-Based Automated Classification of Multi-Categorical Abnormalities from Optical Coherence Tomography Images
Author Affiliations & Notes
  • Wei Lu
    Eye Center, Wuhan University, Wuhan, China
  • Yan Tong
    Eye Center, Wuhan University, Wuhan, China
  • Yue Yu
    Wuhan Hai Xing Tong Technology Limited Company, Wuhan, China
  • Bin Wang
    Wuhan Hai Xing Tong Technology Limited Company, Wuhan, China
  • Qinqin Deng
    Eye Center, Wuhan University, Wuhan, China
  • Xinlan Lei
    Eye Center, Wuhan University, Wuhan, China
  • Yin Shen
    Eye Center, Wuhan University, Wuhan, China
  • Footnotes
    Commercial Relationships   Wei Lu, None; Yan Tong, None; Yue Yu, None; Bin Wang, None; Qinqin Deng, None; Xinlan Lei, None; Yin Shen, None
    Support  None
Wei Lu, Yan Tong, Yue Yu, Bin Wang, Qinqin Deng, Xinlan Lei, Yin Shen; Deep Learning-Based Automated Classification of Multi-Categorical Abnormalities from Optical Coherence Tomography Images. Invest. Ophthalmol. Vis. Sci. 2018;59(9):1723.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose : Deep learning can discover intricate structure in large data sets without the need to specify rules explicitly, and it has dramatically improved the state of the art in image recognition. This study aims to apply deep learning to create algorithms for the automated classification of various abnormalities from optical coherence tomography (OCT) images, enabling intelligent diagnosis.

Methods : We assessed 65,219 OCT images in total and selected 30,219 of them as input data to train, validate, and test a 102-layer residual network. The input data were classified by 9 licensed ophthalmologists into normal retina, macular edema, epiretinal membrane, macular hole, retinal detachment, and ocular media opacity. Each abnormal category, together with the normal images, was fed into the network to obtain a corresponding diagnostic model. We then estimated each model's performance by the accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) obtained from the test sets.
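
The abstract itself contains no code; the following PyTorch sketch only illustrates one plausible per-category training setup (one abnormality versus normal images). The directory layout, image size, hyperparameters, and the choice of torchvision's stock 101-layer ResNet (the closest standard depth to the reported 102 layers) are assumptions, not the authors' actual implementation.

    # Hypothetical sketch: binary OCT classifier (one abnormality vs. normal),
    # approximating the reported ~100-layer residual network with ResNet-101.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Assumed directory layout: data/train/{normal,abnormal}/*.png
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.Grayscale(num_output_channels=3),  # OCT scans are grayscale
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("data/train", transform=transform)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    model = models.resnet101(weights=None)         # 101-layer residual network
    model.fc = nn.Linear(model.fc.in_features, 2)  # normal vs. one abnormality

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

    for epoch in range(10):  # assumed epoch count
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()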

Results : A total of 30,219 eligible images were analyzed, resulting in 5 diagnostic models. The accuracies and specificities of these 5 models were all over 0.96, and the AUCs almost all approached 0.99. These outcomes surpassed those of comparable studies and indicate extremely high reliability of the models in discriminating the corresponding diseases from normal images. The sensitivities of the epiretinal membrane and macular edema models were 0.791 and 0.857, respectively, while those of the other models were all above 0.94. Insufficient data in these two categories may account for this outcome, which could be improved with more data in future work.
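
For concreteness, the four reported metrics can be computed from a test set's true labels and predicted scores as in the sketch below; the toy arrays and the 0.5 decision threshold are illustrative assumptions, not data from the study.

    # Hypothetical sketch: accuracy, sensitivity, specificity, and AUC
    # for one binary diagnostic model on a test set.
    import numpy as np
    from sklearn.metrics import confusion_matrix, roc_auc_score

    y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # 1 = abnormal, 0 = normal
    y_score = np.array([0.1, 0.4, 0.9, 0.8, 0.3, 0.2, 0.7, 0.05])
    y_pred = (y_score >= 0.5).astype(int)          # assumed decision threshold

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    auc = roc_auc_score(y_true, y_score)
    print(accuracy, sensitivity, specificity, auc)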

Conclusions : These deep learning-based models are able to discriminate 5 different abnormalities on OCT scans with high reliability. Compared with current ophthalmologic assessment, they have great potential to increase diagnostic efficiency and improve patient outcomes. Future work will focus on establishing a single model that classifies multiple intraocular diseases, and more abnormality categories will be incorporated to broaden the scope of intelligent diagnosis.

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.
