Open Access
ARVO Annual Meeting Abstract  |   June 2022
Mobile-RetinaNet: A Computationally Efficient DeepNet for Retinal Fundus Image Segmentation for Use in Low-resource Settings
Author Affiliations & Notes
  • Ranit Karmakar
    Electrical and Computer Engineering, Michigan Technological University, Houghton, Michigan, United States
  • Saeid Nooshabadi
    Electrical and Computer Engineering, Michigan Technological University, Houghton, Michigan, United States
  • Allen Eghrari
    Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States
  • Footnotes
    Commercial Relationships: Ranit Karmakar, None; Saeid Nooshabadi, None; Allen Eghrari, None
    Support: None
Investigative Ophthalmology & Visual Science June 2022, Vol.63, 2064 – F0053. doi:
Abstract

Purpose: Retinal fundus photography is used by physicians to detect and track eye diseases such as glaucoma and diabetic retinopathy (DR). Manual segmentation of fundus images is time-consuming and may introduce observational bias. This work presents a computer-aided model that automatically segments the retinal blood vessels and the optic disc (OD) in retinal fundus images. Accurate automatic detection of these image features reduces manual effort while producing consistent, near-instantaneous results in clinical settings.

Methods: The model makes efficient use of bottleneck residual blocks in a U-Net-like encoder-decoder convolutional neural network (CNN) architecture, requiring significantly fewer floating-point operations (FLOPs) to achieve the desired accuracy. It was trained and tested on two publicly available retinal datasets: Digital Retinal Images for Vessel Extraction (DRIVE) and the Child Heart and Health Study in England (CHASE). The model's performance is compared with the prior art using the widely used accuracy, sensitivity, specificity, and area under the curve (AUC) metrics. For OD segmentation, we propose a fully automatic pipeline that first localizes the OD with classical image processing and then applies our network for semantic segmentation.
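
The abstract does not spell out the block internals, but the figure names an inverted residual convolution building block; its standard MobileNetV2-style form is sketched below in PyTorch. This is a minimal illustration, not the authors' code: the class name, the expansion factor of 6, and the channel arguments are assumptions.

    import torch
    import torch.nn as nn

    class InvertedResidual(nn.Module):
        # 1x1 expand -> 3x3 depthwise -> 1x1 linear project,
        # with a skip connection when the shapes allow it.
        def __init__(self, in_ch, out_ch, expansion=6, stride=1):
            super().__init__()
            hidden = in_ch * expansion
            self.use_skip = stride == 1 and in_ch == out_ch
            self.block = nn.Sequential(
                # pointwise expansion
                nn.Conv2d(in_ch, hidden, 1, bias=False),
                nn.BatchNorm2d(hidden),
                nn.ReLU6(inplace=True),
                # depthwise 3x3 convolution (groups == channels)
                nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                          groups=hidden, bias=False),
                nn.BatchNorm2d(hidden),
                nn.ReLU6(inplace=True),
                # linear bottleneck projection (no activation)
                nn.Conv2d(hidden, out_ch, 1, bias=False),
                nn.BatchNorm2d(out_ch),
            )

        def forward(self, x):
            out = self.block(x)
            return x + out if self.use_skip else out

The depthwise 3x3 convolution is what makes such blocks cheap: it is applied per channel, so its cost grows linearly rather than quadratically in channel count, which is where most of the FLOP savings come from.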

Results: For retinal vessel segmentation, we achieved an AUC of 0.968 on the DRIVE dataset and 0.985 on the CHASE dataset, compared with state-of-the-art scores of 0.986 and 0.991, respectively. In exchange for this small degradation in performance, our model needs 2.5 times fewer parameters and 4.5 times fewer FLOPs. For OD segmentation, we achieved AUC scores of 0.950 and 0.981 on the DRIVE and CHASE datasets, respectively.
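
For context on how such ratios are obtained, a minimal PyTorch sketch for counting trainable parameters follows; the helper name is illustrative, and FLOPs are typically measured separately with a profiling tool.

    import torch.nn as nn

    def count_trainable_parameters(model: nn.Module) -> int:
        # Total trainable parameters, used for ratios such as
        # params(baseline) / params(efficient model).
        return sum(p.numel() for p in model.parameters() if p.requires_grad)

    # Example: a single 3x3 convolution, 16 -> 32 channels
    layer = nn.Conv2d(16, 32, kernel_size=3)
    print(count_trainable_parameters(layer))  # 16*32*3*3 + 32 = 4640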

Conclusions: While deep learning models can be resource-intensive, we successfully developed a model that achieves very high efficiency for medical image segmentation tasks without losing much accuracy.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.

 


(a) Mobile-RetinaNet architecture, (b) Inverted residual convolution building block, (c) Output convolution block



 

(Left) Dice score versus FLOPs count for each model; the size of each circle is given by the ratio of Dice score to FLOPs count. (Right) Dice score versus number of parameters; the size of each circle is given by the ratio of Dice score to parameter count. The bigger the circle, the better the efficiency trade-off.

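The Dice score used in these comparisons is the standard overlap metric between a predicted and a ground-truth binary mask, 2|A ∩ B| / (|A| + |B|); a minimal NumPy sketch follows (the function name and smoothing epsilon are illustrative, not from the paper).

    import numpy as np

    def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
        # Dice coefficient: 2 * |pred AND target| / (|pred| + |target|)
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)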

