Investigative Ophthalmology & Visual Science
June 2020
Volume 61, Issue 7
ARVO Annual Meeting Abstract | June 2020
Adversarial deep learning enables high-resolution reconstruction of wide-field OCT angiography from low transverse sampling
Author Affiliations & Notes
  • Jianlong Yang
    Cixi Institute of Biomedical Engineering, CAS, Ningbo, China
  • Ting Zhou
    Cixi Institute of Biomedical Engineering, CAS, Ningbo, China
  • Kang Zhou
    Shanghai Tech Univ., China
    Cixi Institute of Biomedical Engineering, CAS, Ningbo, China
  • Shenghua Gao
    Shanghai Tech Univ., China
  • Jiang Liu
    Southern University of Science and Technology, China
  • Footnotes
    Commercial Relationships   Jianlong Yang, None; Ting Zhou, None; Kang Zhou, None; Shenghua Gao, None; Jiang Liu, None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2020, Vol. 61, 4569.

      Jianlong Yang, Ting Zhou, Kang Zhou, Shenghua Gao, Jiang Liu; Adversarial deep learning enables high-resolution reconstruction of wide-field OCT angiography from low transverse sampling. Invest. Ophthalmol. Vis. Sci. 2020;61(7):4569.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : Because of its low transverse sampling, current wide-field OCT angiography (OCTA) has poor lateral resolution, which leads to inaccurate observation and quantification of vascular biomarkers. We propose a deep-learning-based approach for high-resolution reconstruction of wide-field OCTA that can be applied directly to data from commercial OCT systems. It achieves capillary-level visualization of the retinal vasculature over a large field of view (FOV) and matches the resolution of angiograms acquired with high transverse sampling.

Methods : We propose to learn the characteristics of high-resolution capillaries and micro-vessels using only the information provided by the 3×3 mm² FOV centered on the fovea, and to transfer these characteristics to the 8×8 mm² FOV. Our baseline method is cycle-consistent adversarial learning. However, we found that it learned not only the resolution-related features but also the spatial distribution of the retinal vasculature, which is undesirable in this application. We therefore split both the source- and target-domain OCTA images into 1×1 mm² patches before feeding them into the adversarial network, and found that this efficiently eliminates the contamination from the spatial mapping; a minimal code sketch of this patch-based training scheme is given below. We used a ZEISS CIRRUS system for data acquisition. The OCTA data from 40 eyes of 20 healthy participants were randomly split into 38 training and 2 testing data sets.
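To make the patch-based cycle-consistent training concrete, the sketch below outlines the idea in PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the network architectures, the pixel size of a 1×1 mm² patch, the loss weighting, and all function names (e.g. split_into_patches, generator_loss) are placeholders introduced here for clarity.

```python
# Illustrative sketch of patch-based cycle-consistent adversarial learning
# for OCTA resolution transfer. All sizes and names are placeholders.
import torch
import torch.nn as nn


def split_into_patches(angiogram: torch.Tensor, patch_px: int) -> torch.Tensor:
    """Tile an en-face angiogram of shape (1, H, W) into non-overlapping
    patch_px x patch_px patches (roughly 1x1 mm^2 regions), returned as a
    batch of shape (N, 1, patch_px, patch_px)."""
    c = angiogram.shape[0]
    patches = angiogram.unfold(1, patch_px, patch_px).unfold(2, patch_px, patch_px)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, patch_px, patch_px)


class SmallGenerator(nn.Module):
    """Toy stand-in for the generator mapping low-sampling patches to the
    high-sampling appearance (and vice versa for the reverse generator)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class SmallDiscriminator(nn.Module):
    """Toy PatchGAN-style critic that judges whether a patch looks like a
    densely sampled (high-resolution) angiogram patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)


# Two generators (low->high, high->low) and two discriminators, as in CycleGAN.
G_lh, G_hl = SmallGenerator(), SmallGenerator()
D_high, D_low = SmallDiscriminator(), SmallDiscriminator()
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()


def generator_loss(low_patches, high_patches, lambda_cyc=10.0):
    """Adversarial + cycle-consistency loss on a batch of unpaired patches."""
    fake_high = G_lh(low_patches)
    fake_low = G_hl(high_patches)
    pred_high, pred_low = D_high(fake_high), D_low(fake_low)
    # Adversarial terms: generated patches should fool the critics.
    loss_adv = adv_loss(pred_high, torch.ones_like(pred_high)) + \
               adv_loss(pred_low, torch.ones_like(pred_low))
    # Cycle terms: mapping forth and back should recover the input patches.
    loss_cyc = cyc_loss(G_hl(fake_high), low_patches) + \
               cyc_loss(G_lh(fake_low), high_patches)
    return loss_adv + lambda_cyc * loss_cyc


if __name__ == "__main__":
    # Example with random data; a 64-pixel patch size is an arbitrary choice.
    low = split_into_patches(torch.rand(1, 512, 512), patch_px=64)
    high = split_into_patches(torch.rand(1, 512, 512), patch_px=64)
    print(generator_loss(low[:8], high[:8]).item())
```

In this scheme, both the 3×3 mm² (high-sampling) and 8×8 mm² (low-sampling) angiograms would be tiled into small patches before being passed to the generators and discriminators, so the networks see only local vascular texture and cannot learn, and then impose, the global spatial layout of the vasculature.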

Results : Figure 1 demonstrates the high-resolution reconstruction of wide-field OCTA. (a) is the original retinal OCTA image. (b) is the deep-learning-reconstructed high-resolution OCTA image. (c), (d), (e), and (f) are zoomed-in views of the colored boxes in (a). A significant improvement in resolution can be observed across the entire FOV.

Conclusions : The proposed method effectively resolves the trade-off between resolution and FOV. It will benefit the accurate quantification of vascular biomarkers over a large FOV, assisting diagnosis and treatment.

This is a 2020 ARVO Annual Meeting abstract.

 

Figure 1. (a) Original retinal OCTA image. (b) Deep-learning-reconstructed high-resolution OCTA image. (c), (d), (e), and (f) are zoomed-in views of the colored boxes in (a), each showing the original angiogram (left) and the reconstructed image (right) for comparison.

