ARVO Annual Meeting Abstract  |   June 2020
DcardNet: Multi-Depth Diabetic Retinopathy Classification based on Structural and Angiographic Optical Coherence Tomography
Author Affiliations & Notes
  • Pengxiao Zang
    Oregon Health & Science University, Portland, Oregon, United States
  • Liqin Gao
    Oregon Health & Science University, Portland, Oregon, United States
    Beijing Tongren Hospital, Capital Medical University, Beijing, China
  • Tristan Hormel
    Oregon Health & Science University, Portland, Oregon, United States
  • Jie Wang
    Oregon Health & Science University, Portland, Oregon, United States
  • Qisheng You
    Oregon Health & Science University, Portland, Oregon, United States
  • Thomas S Hwang
    Oregon Health & Science University, Portland, Oregon, United States
  • Yali Jia
    Oregon Health & Science University, Portland, Oregon, United States
  • Footnotes
    Commercial Relationships   Pengxiao Zang, None; Liqin Gao, None; Tristan Hormel, None; Jie Wang, None; Qisheng You, None; Thomas Hwang, None; Yali Jia, Optovue, Inc (F), Optovue, Inc (P)
  • Footnotes
    Support  National Institutes of Health (R01 EY027833, R01 EY024544, P30 EY010572); unrestricted departmental funding grant and William & Mary Greve Special Scholar Award from Research to Prevent Blindness (New York, NY).
Investigative Ophthalmology & Visual Science June 2020, Vol.61, 1147. doi:
Abstract

Purpose : Automated diabetic retinopathy (DR) classification using raw optical coherence tomography (OCT) and OCT angiography (OCTA) data has not been proposed, owing to the small number of available datasets, the large variation between cases, and the small differences between DR grades. In this study, we propose a convolutional neural network (CNN)-based method that addresses these challenges and achieves a multi-depth DR classification framework.

Methods : In this study, 303 eyes from 250 participants, including healthy volunteers and patients with diabetes (no DR, non-proliferative DR (NPDR), or proliferative DR (PDR)), were scanned by a spectral-domain OCT/OCTA system using a 3×3-mm scan pattern at the macula. Trained retina specialists graded disease severity on the Early Treatment Diabetic Retinopathy Study (ETDRS) scale using 7-field color photographs. We defined referable DR as a severity worse than ETDRS level 20, which coincides with moderate NPDR according to the International Clinical Diabetic Retinopathy Scale. The framework produces classifications at two depths corresponding to frequently used clinical categories (Fig. 1). To improve interpretability and identify which regions contribute to the diagnosis, class activation maps (CAMs) were also generated for each DR class. Six en face projections from OCT or OCTA were combined as the network input (Fig. 2). A new CNN architecture based on dense and continuous connection with adaptive rate dropout (DcardNet) was used in this study.
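The abstract does not include implementation details, but the following minimal PyTorch sketch illustrates the general idea of stacking six en face projections as a 6-channel input and passing them through a densely connected CNN trunk with dropout and two classification heads (referable vs. non-referable DR, and no DR / NPDR / PDR). The layer sizes, channel counts, shared trunk, and fixed dropout rate are illustrative assumptions, not the authors' DcardNet specification; in particular, the adaptive-rate dropout scheme is only stood in for by a fixed-rate Dropout2d.

```python
# Minimal sketch (not the authors' code): six en face OCT/OCTA projections
# stacked as a 6-channel image, fed to a dense-connection block with dropout
# and two classification heads. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each conv layer sees the concatenation of all earlier feature maps."""
    def __init__(self, in_ch: int, growth: int = 16, n_layers: int = 4,
                 drop_rate: float = 0.2):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.Dropout2d(drop_rate),  # placeholder for adaptive-rate dropout
            ))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class TwoDepthClassifier(nn.Module):
    """Shared trunk with two heads: referable vs. non-referable DR (depth 1)
    and no DR / NPDR / PDR (depth 2)."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1),  # 6 en face projections
            DenseBlock(32),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        feat = 32 + 16 * 4  # channels after the dense block
        self.head_depth1 = nn.Linear(feat, 2)
        self.head_depth2 = nn.Linear(feat, 3)

    def forward(self, x):  # x: (batch, 6, H, W)
        z = self.trunk(x)
        return self.head_depth1(z), self.head_depth2(z)

model = TwoDepthClassifier()
dummy = torch.randn(1, 6, 304, 304)  # 3×3-mm macular scans are often 304×304 pixels
logits_depth1, logits_depth2 = model(dummy)
```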

Results : We assessed the network's performance with 10-fold cross-validation, holding out 10% of the data in each fold. The overall accuracies at the two depths were 95.7% and 85.0%. At the first depth, the sensitivities for referable DR and non-referable DR were 91.0% and 98.0%, respectively. At the second depth, the sensitivities for no DR, NPDR, and PDR were 87.1%, 85.4%, and 82.5%, respectively. We also compared the overall accuracies of three input patterns: OCT, OCTA, and OCT+OCTA. At the first depth, the overall accuracies of the three input patterns were the same. At the second depth, the overall accuracies for OCT-, OCTA-, and OCT+OCTA-based inputs were 69.0%, 83.3%, and 85.0%, respectively. In addition, we examined the network's attention across cases through CAMs (Fig. 2); its focus fell mainly on the anatomic and vascular pathologies near the fovea.
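As an illustration of how such metrics could be aggregated, the hedged sketch below computes overall accuracy and per-class sensitivity (recall) across 10 cross-validation folds with NumPy and scikit-learn. The `train_and_predict` callback, the stratified fold split, and the label encoding are assumptions made for the example; the abstract does not describe the exact cross-validation bookkeeping.

```python
# Illustrative only: aggregating overall accuracy and per-class sensitivity
# (recall) over 10 cross-validation folds. Not taken from the study's code.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

def evaluate_folds(X, y, train_and_predict, n_splits=10, seed=0):
    """X, y: NumPy arrays of inputs and integer class labels.
    train_and_predict(X_tr, y_tr, X_te) -> predicted labels for X_te."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    y_true_all, y_pred_all = [], []
    for tr_idx, te_idx in skf.split(X, y):
        y_pred = train_and_predict(X[tr_idx], y[tr_idx], X[te_idx])
        y_true_all.append(y[te_idx])
        y_pred_all.append(np.asarray(y_pred))
    y_true = np.concatenate(y_true_all)
    y_pred = np.concatenate(y_pred_all)
    cm = confusion_matrix(y_true, y_pred)
    sensitivity = cm.diagonal() / cm.sum(axis=1)   # per-class recall
    accuracy = cm.diagonal().sum() / cm.sum()      # overall accuracy
    return accuracy, sensitivity
```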

Conclusions : The proposed DcardNet can accurately classify DR at two depths and generate CAMs.
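The CAM generation step could look like the sketch below for a network whose classification heads sit on globally average-pooled feature maps, as in the architecture sketch above. The specific CAM formulation used for DcardNet is not given in the abstract, so the function name, normalization, and upsampling choice here are assumptions.

```python
# Hedged sketch of class activation map (CAM) generation for a model with a
# global-average-pooling trunk and linear heads; not the authors' exact method.
import torch
import torch.nn.functional as F

def compute_cam(feature_maps: torch.Tensor, head_weight: torch.Tensor,
                class_idx: int, out_size: tuple) -> torch.Tensor:
    """feature_maps: (C, h, w) activations from the last conv layer.
    head_weight: (n_classes, C) weight matrix of the classification head."""
    w = head_weight[class_idx]                        # (C,) weights for one DR class
    cam = torch.einsum('c,chw->hw', w, feature_maps)  # weighted sum over channels
    cam = torch.relu(cam)                             # keep positively contributing regions
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
    # Upsample to the en face image size so the map can be overlaid on the scan.
    return F.interpolate(cam[None, None], size=out_size,
                         mode='bilinear', align_corners=False)[0, 0]
```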

This is a 2020 ARVO Annual Meeting abstract.
