Investigative Ophthalmology & Visual Science
June 2020
Volume 61, Issue 7
ARVO Annual Meeting Abstract  |   June 2020
Hard attention deep neural network for automated retinal vessel segmentation
Author Affiliations & Notes
  • Dongyi Wang
    Bioengineering, University of Maryland, College Park, College Park, Maryland, United States
  • Ayman Haytham
    Aureus University School of Medicine, Oranjestad, Aruba
  • Osamah Saeedi
    Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, Maryland, United States
  • Yang Tao
    Bioengineering, University of Maryland, College Park, College Park, Maryland, United States
  • Footnotes
    Commercial Relationships   Dongyi Wang, None; Ayman Haytham, None; Osamah Saeedi, None; Yang Tao, None
  • Footnotes
    Support  National Institutes of Health/National Eye Institute Career Development Award (K23 EY025014).
Investigative Ophthalmology & Visual Science June 2020, Vol.61, 2020. doi:
Abstract

Purpose : Whereas deep network–based retinal blood vessel segmentation methods have shown excellent segmentation results, they still exhibit deficiencies in segmenting intricate vascular regions. In this study, we propose a new network design, called hard attention net (HAnet), which dynamically focuses the network’s attention on vessels that are “hard” to segment and effectively delineates these intricate areas by employing two decoders that segment “hard” and “easy” regions independently. Our proposed model shows superior vessel segmentation performance versus state-of-the-art deep learning models.

Methods : HAnet is equipped with an encoder and three decoder networks. The encoder extracts high-level image features. The first decoder, together with the encoder, forms a U-net structure and maps the image features to a coarse segmentation result. Based on the coarse result and a bi-level threshold, two masks are generated to define the regions that are “hard” or “easy” to segment; these masks serve as the target outputs for the other two decoder networks. During training, an attention gate and a difference map are implemented to focus the network's attention on regions that are “hard” to segment. Finally, a shallow U-net structure fuses the original input with the three decoder outputs to generate a refined segmentation output. The model was evaluated on four public fundus photography datasets (DRIVE, STARE, CHASE_DB1, HRF), two color scanning laser ophthalmoscopy image datasets (IOSTAR, RC-SLO), and two self-collected datasets using indocyanine green angiography (ICGA) and erythrocyte mediated angiography (EMA).
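
To make the data flow concrete, the following is a minimal PyTorch sketch of the hard/easy routing idea described above. It is an illustrative assumption rather than the authors' implementation: the encoder and decoders are collapsed to single convolution blocks (no down/upsampling), the attention gate and difference-map supervision are omitted, and the bi-level threshold values (0.2/0.8), channel sizes, and all module names are placeholders.

```python
# Illustrative sketch only: module names, channel sizes, and the bi-level
# threshold values are assumptions, not the authors' HAnet implementation.
import torch
import torch.nn as nn


def hard_easy_masks(coarse_prob, low=0.2, high=0.8):
    """Split a coarse vessel probability map into 'hard' and 'easy' regions.
    Pixels with ambiguous probabilities between the two thresholds are
    treated as 'hard'; confidently classified pixels are 'easy'.
    The thresholds are assumed values for illustration."""
    hard = ((coarse_prob > low) & (coarse_prob < high)).float()
    easy = 1.0 - hard
    return hard, easy


class ConvBlock(nn.Module):
    """Two 3x3 convolutions standing in for a full encoder/decoder path."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class HAnetSketch(nn.Module):
    """Encoder + coarse decoder + separate 'hard'/'easy' decoders + fusion."""
    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.encoder = ConvBlock(in_ch, feat)
        self.coarse_decoder = nn.Conv2d(feat, 1, 1)  # coarse segmentation head
        self.hard_decoder = nn.Conv2d(feat, 1, 1)    # specializes on hard regions
        self.easy_decoder = nn.Conv2d(feat, 1, 1)    # specializes on easy regions
        # shallow fusion over the raw image plus the three decoder outputs
        self.fusion = nn.Sequential(ConvBlock(in_ch + 3, feat),
                                    nn.Conv2d(feat, 1, 1))

    def forward(self, x):
        feats = self.encoder(x)
        coarse = torch.sigmoid(self.coarse_decoder(feats))
        hard_mask, easy_mask = hard_easy_masks(coarse)
        # each auxiliary decoder is restricted to its own region of the image
        hard_out = torch.sigmoid(self.hard_decoder(feats)) * hard_mask
        easy_out = torch.sigmoid(self.easy_decoder(feats)) * easy_mask
        fused = self.fusion(torch.cat([x, coarse, hard_out, easy_out], dim=1))
        return torch.sigmoid(fused), coarse, hard_out, easy_out
```

In the full model described above, each decoder would be a proper U-net expansion path supervised on its own masked target, and the final fusion stage corresponds to the shallow U-net that combines the original input with the three decoder outputs.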

Results : Fig. 1 shows visualization results for the first DRIVE image: (a) the raw image, (b) the HAnet output, (c) the “hard” (red) and “easy” (green) regions defined by HAnet, and (d) the generated attention map. Quantitative results on the various retinal image datasets are shown in Fig. 2. On the public datasets, HAnet achieved superior or comparable statistics relative to other recent deep learning studies. On the self-collected datasets, HAnet outperforms the baseline U-net.

Conclusions : Our results show that HAnet, which treats “hard” and “easy” vessels separately within the network, achieves better or comparable retinal vessel segmentation results (accuracy, AUC) than current leading methods. The novel design can also potentially accelerate the understanding of deep learning models for vessel segmentation.

This is a 2020 ARVO Annual Meeting abstract.