June 2022
Volume 63, Issue 7
Open Access
ARVO Annual Meeting Abstract
Artificial Intelligence-assisted Projection-resolved Optical Coherence Tomographic Angiography (aiPR-OCTA)
Author Affiliations & Notes
  • Jie Wang
    Oregon Health & Science University, Portland, Oregon, United States
  • Tristan Hormel
    Oregon Health & Science University, Portland, Oregon, United States
  • Yali Jia
    Oregon Health & Science University, Portland, Oregon, United States
  • Footnotes
    Commercial Relationships   Jie Wang Optovue Inc., Code P (Patent); Tristan Hormel None; Yali Jia Optovue Inc., Code F (Financial Support), Optovue Inc., Code P (Patent), Optos, Code P (Patent)
  • Footnotes
    Support  National Institutes of Health (R01 EY027833, R01 EY024544, P30 EY010572); Unrestricted Departmental Funding Grant and William & Mary Greve Special Scholar Award from Research to Prevent Blindness (New York, NY)
Investigative Ophthalmology & Visual Science June 2022, Vol.63, 2910 – F0063.

      Jie Wang, Tristan Hormel, Yali Jia; Artificial Intelligence-assisted Projection-resolved Optical Coherence Tomographic Angiography (aiPR-OCTA). Invest. Ophthalmol. Vis. Sci. 2022;63(7):2910 – F0063.

      © ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose : To improve voxel-wise projection-resolved optical coherence tomographic angiography (PR-OCTA) using artificial intelligence.

Methods : A total of 4600 OCTA scans from 708 eyes, comprising 3224 AMD scans and 1376 DR scans, were acquired over the 3×3-mm central macular area. The projection-resolved ground truth was generated by allowing graders to adjust parameters within the rules-based PR-OCTA algorithm to independently optimize the appearance of flow signal in the inner retina, outer retina, choroid, and the area below large vessels. This enabled graders to ensure that residual artifacts were removed while real flow signal from posterior vessels, including pathological choroidal neovascularization, was preserved. The model consists of a combined convolutional neural network and sequence-to-sequence network that produces the PR-OCTA volume from volumetric structural OCT and uncorrected OCTA inputs. The performance of the proposed aiPR-OCTA algorithm was evaluated on 126 normal eyes by quantifying vessel density (VD), flow signal-to-noise ratio (fSNR), vessel connectivity (VC), structural similarity between the vascular pattern in an en face angiogram and that formed by projection over all anterior layers, and the residual artifacts in the outer retina.
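For context on the baseline that the model replaces, projection-resolved OCTA operates voxel-wise along each A-scan, retaining flow signal only at axial peaks of the reflectance-normalized OCTA signal so that projection tails cast onto deeper layers are suppressed. A minimal toy sketch of that per-A-scan idea follows; the function name, normalization, and peak rule are illustrative simplifications, not the published rbPR-OCTA or aiPR-OCTA implementation.

```python
import numpy as np

def pr_octa_ascan(flow, reflectance, eps=1e-6):
    """Toy per-A-scan projection resolution (hypothetical simplification).

    Keeps a flow voxel only where the reflectance-normalized flow signal
    is a strict local maximum along depth; tail voxels projected onto
    deeper, brighter layers are zeroed out.
    """
    norm = flow / (reflectance + eps)          # normalize flow by reflectance
    keep = np.zeros_like(flow, dtype=bool)
    # interior voxels: strict local peak along the axial (depth) axis
    keep[1:-1] = (norm[1:-1] > norm[:-2]) & (norm[1:-1] > norm[2:])
    return np.where(keep, flow, 0.0)

# A depth profile with a superficial vessel (index 1), its projection
# tail (indices 2-3), and a true deeper vessel (index 4):
ascan = np.array([0.0, 5.0, 2.0, 1.0, 3.0, 0.5])
resolved = pr_octa_ascan(ascan, np.ones_like(ascan))
# → keeps only the two axial peaks: [0., 5., 0., 0., 3., 0.]
```

The abstract's contribution is to learn this voxel-wise mapping with a CNN plus sequence-to-sequence network over structural OCT and uncorrected OCTA volumes, rather than hand-tuning such rules.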

Results : Compared to the previous reflectance-based PR (rbPR-OCTA) algorithm, the aiPR-OCTA algorithm removed more projection artifacts and preserved more flow signal. The aiPR-OCTA algorithm removed large residual vessel patterns present in rbPR-OCTA (Fig. 1, cases 1 & 2, B2) while preserving true anatomic detail at the capillary scale (Fig. 1, cases 1 & 2, B3). The large vessel shadows caused by overprocessing in rbPR-OCTA (Fig. 1, cases 1 & 2, C2) were also filled with capillary flow in aiPR-OCTA (Fig. 1, cases 1 & 2, C3). Quantitative assessment showed that aiPR-OCTA increased both VD and VC (Table 1), consistent with aiPR-OCTA preserving more flow signal than rbPR-OCTA. Finally, aiPR-OCTA suppressed more background artifacts: fSNR improved and the remaining artifacts in the outer retina were reduced (Table 1).
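The reported metrics can be operationalized in several ways; the sketch below shows one plausible reading of two of them, with vessel density as the vessel-pixel fraction of a thresholded en face angiogram and fSNR as mean vessel signal over background variability. The threshold and exact definitions are assumptions for illustration, not the study's formulas.

```python
import numpy as np

def vessel_density(enface, thresh):
    """Fraction of en face pixels classified as vessel (simple threshold)."""
    return float((enface > thresh).mean())

def flow_snr(enface, vessel_mask):
    """Mean flow signal inside vessels over background standard deviation."""
    signal = enface[vessel_mask].mean()
    noise = enface[~vessel_mask].std()
    return float(signal / noise)

# Tiny synthetic en face angiogram: bright vessel pixels on a noisy background.
enface = np.array([[10.0, 10.0, 1.0],
                   [ 0.0, 10.0, 1.0],
                   [ 0.0,  1.0, 0.0]])
mask = enface > 5
vd = vessel_density(enface, 5)    # → 3/9 vessel pixels
snr = flow_snr(enface, mask)      # → 10 / 0.5 = 20.0
```

Under these definitions, removing projection tails lowers the background standard deviation (raising fSNR) while preserving true capillary flow keeps VD from dropping, matching the direction of the improvements reported in Table 1.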

Conclusions : The proposed aiPR-OCTA algorithm removes more projection artifacts and preserves more flow signal than previous approaches. This voxel-wise aiPR-OCTA algorithm could enable reliable vascular quantification in deeper anatomic slabs.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.

 

 
