ARVO Imaging in the Eye Conference Abstract | August 2021
Volume 62, Issue 11 | Open Access
Deep Learning Based Ocular Disease Classification using Retinal Fundus Images
Author Affiliations & Notes
  • Aman Shrivastava
    AIRA Matrix Pvt. Ltd., Thane, India
  • Ravi Kamble
    AIRA Matrix Pvt. Ltd., Thane, India
  • Sucheta Kulkarni
    H. V. Desai Eye Hospital, Pune, India
  • Shivangi Singh
    H. V. Desai Eye Hospital, Pune, India
  • Atul Hegde
    H. V. Desai Eye Hospital, Pune, India
  • Rashmi Kashikar
    H. V. Desai Eye Hospital, Pune, India
  • Taraprasad Das
    L V Prasad Eye Institute (LVPEI), Hyderabad, India
  • Footnotes
    Commercial Relationships: Aman Shrivastava, None; Ravi Kamble, None; Sucheta Kulkarni, None; Shivangi Singh, None; Atul Hegde, None; Rashmi Kashikar, None; Taraprasad Das, None
    Support: None
Investigative Ophthalmology & Visual Science, August 2021, Vol. 62, 39.

Purpose : The use of fundus images for screening different eye diseases is of significant clinical importance. Early detection and diagnosis of ocular pathologies enable efficient management of potentially blinding eye diseases. Performance in medical image analysis has steadily improved with deep learning models, which automatically learn task-relevant features instead of relying on handcrafted algorithms. This study proposes a deep learning-based automated screening system capable of detecting and diagnosing diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD), and a few other pathologies.

Methods : We used 3264/350/427 images from seven public datasets (IDRiD, MESSIDOR, REFUGE-2, STARE, ISBI-2019, OPTOS, and RIDD) to train/validate/test our model. The image distribution was approximately 6.95% age-related macular degeneration (AMD), 63.69% diabetic retinopathy (DR), 5.26% glaucoma, 8.82% other retinal diseases, and 15.28% normal retina. For pre-processing, images were cropped and contrast-enhanced with the CLAHE algorithm. We also applied data augmentation, including image rotation and vertical and horizontal flips. The method uses an ensemble of EfficientNet-B4 and EfficientNet-B7 convolutional neural networks (CNNs), fine-tuned for multiple-disease classification (Figure 1 shows the network architecture). Overall accuracy, specificity, sensitivity, and Cohen's kappa of the deep learning models were evaluated against ground-truth labels provided by an expert.
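The augmentation and ensemble steps above can be sketched as follows: rotation/flip augmentations as named in the abstract, and soft voting over the two networks' class probabilities. This is a minimal illustration, not the authors' code; the class ordering, probability values, and function names are assumptions.

```python
import numpy as np

# Disease classes reported in the study (ordering here is an assumption).
CLASSES = ["normal", "DR", "glaucoma", "AMD", "other"]

def augment(img: np.ndarray) -> list:
    """Rotation and horizontal/vertical flip augmentations, as named in the
    abstract (exact rotation angles are an assumption)."""
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img)]

def ensemble_predict(probs_b4, probs_b7):
    """Soft-voting ensemble: average the per-class probability outputs of the
    two CNNs (EfficientNet-B4 and B7 in the study) and take the argmax."""
    avg = (np.asarray(probs_b4, dtype=float) + np.asarray(probs_b7, dtype=float)) / 2.0
    return CLASSES[int(np.argmax(avg))], avg
```

Averaging probabilities (rather than hard majority voting) lets the larger B7 model temper the B4 model's less confident predictions, which is a common way to combine two CNNs of different capacity.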

Results : The Cohen's kappa score was 97.6% (Figure 2(a) shows the confusion matrix), with an accuracy of 97.0% for multiple-disease classification. The sensitivity was as follows: normal retina 99.0%, glaucoma 100.0%, DR 99.1%, AMD 88.3%, and others 100.0%. The model was also applied to a secondary validation dataset from a tertiary eye care facility in India to examine its generalizability. There, the Cohen's kappa score was 93.7%, with an accuracy of 96.0% for multiple-disease classification (Figure 2(b)). The sensitivity was as follows: normal retina 90.0%, glaucoma 100.0%, DR 94.2%, and AMD 100.0%.
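The metrics above all derive from a confusion matrix. The sketch below (pure Python; the 2×2 matrix in the test is illustrative, not the study's data) shows how overall accuracy, per-class sensitivity (recall), and Cohen's kappa are computed from one.

```python
def accuracy(cm):
    """Overall accuracy: sum of the diagonal over the total count."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def per_class_sensitivity(cm):
    """Sensitivity (recall) per class: diagonal entry over its row total,
    i.e. true positives over all actual members of that class."""
    return [row[i] / sum(row) for i, row in enumerate(cm)]

def cohens_kappa(cm):
    """Cohen's kappa: agreement corrected for chance,
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = sum(sum(row) for row in cm)
    p_o = sum(cm[i][i] for i in range(len(cm))) / n  # observed agreement
    col_totals = [sum(row[j] for row in cm) for j in range(len(cm))]
    # chance agreement: product of marginals, summed over classes
    p_e = sum(sum(cm[i]) * col_totals[i] for i in range(len(cm))) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Kappa is the natural companion to accuracy here because the class distribution is heavily skewed toward DR (~64%), so chance-corrected agreement is more informative than raw accuracy alone.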

Conclusions : Our deep learning approach with an ensemble of CNN models showed reliable performance in detecting multiple diseases in retinal fundus images. Our findings underscore the effectiveness of deep learning models in categorizing retinal imaging characteristics and raise the possibility of their clinical use.

This is a 2021 Imaging in the Eye Conference abstract.



