ARVO Annual Meeting Abstract  |  June 2023
Volume 64, Issue 8  |  Open Access
Cascaded Defending and Detecting of Adversarial Attacks Against Deep Learning System in Ophthalmic Imaging
Author Affiliations & Notes
  • Wei Yan Ng
    Singapore National Eye Centre, Singapore
  • Yanyu Xu
    Agency for Science, Technology and Research, Singapore
  • Xinxing Xu
    Agency for Science, Technology and Research, Singapore
  • Daniel SW Ting
    Singapore National Eye Centre, Singapore
  • Footnotes
    Commercial Relationships: Wei Yan Ng, None; Yanyu Xu, None; Xinxing Xu, None; Daniel Ting, None
    Support: None
Investigative Ophthalmology & Visual Science, June 2023, Vol. 64, 215.
Abstract

Purpose : Deep learning systems (DLS) for ophthalmic imaging are progressively being integrated into healthcare systems and could achieve widespread utility in the future. However, they are potentially vulnerable to adversarial attacks (AA), which can cause significant performance degradation and clinical risk. The purpose of this study was to assess the impact of AA on a macular Optical Coherence Tomography (OCT) imaging DLS and the efficacy of a new anti-AA model with cascaded defense and detection.

Methods : We conducted a retrospective study of a DLS using 108,312 retinal OCT images covering three diseases: choroidal neovascularisation (CNV), diabetic macular edema (DME) and drusen. Six common and highly efficacious types of adversarial attack, including white-box and black-box as well as targeted and non-targeted attacks, were applied separately to the original classifier to induce misclassification. These attacks were then applied and evaluated against our proposed anti-AA DLS, which comprises two components: an AA defense model based on a modified High-Level Representation Guided Denoiser with image restoration capabilities, and a separate novel AA detection model. A minimal code sketch of this cascade is given below.
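
To make the cascade concrete, the following is a minimal PyTorch sketch of a non-targeted projected gradient descent (PGD) attack and a defend-and-detect inference flow of the kind described above. It is an illustration under stated assumptions, not the study's implementation: the names pgd_attack, cascaded_inference, classifier, denoiser and detector are hypothetical placeholders, and the study's actual attacks (e.g. Auto-PGD, BPDA) and HGD-based denoiser are substantially more involved.

    # Hypothetical sketch; none of these models or names come from the abstract.
    import torch
    import torch.nn.functional as F

    def pgd_attack(classifier, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
        """Non-targeted PGD: take signed gradient steps that increase the
        classification loss, projecting back into an L-inf ball of radius eps."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(classifier(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            # Project back to the eps-ball around x, then to the valid pixel range.
            x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
        return x_adv

    @torch.no_grad()
    def cascaded_inference(x, denoiser, detector, classifier, threshold=0.5):
        """Cascade: restore the (possibly attacked) OCT image with a denoiser,
        flag suspected attacks with a detector, and classify the restored image."""
        x_restored = denoiser(x)                  # HGD-style image restoration
        attack_prob = torch.sigmoid(detector(x))  # probability the input is adversarial
        logits = classifier(x_restored)
        return logits.argmax(dim=1), attack_prob > threshold

In this sketch the denoiser and detector run on every input, so clean images pass through the restoration step unchanged in intent, which mirrors the attack-free image restoration capability reported in the results.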

Results : The area under the curve (AUC) of the original classifier decreased significantly under non-targeted white-box attacks, declining from 0.9979-0.9997 to 0.0174-0.9123, with auto-projected gradient descent (Auto-PGD) attacks achieving the greatest reduction. Non-targeted black-box attacks failed to induce significant AUC changes. Targeted white-box attacks degraded performance to AUCs of 0.3917 to 0.9417, with backward pass differentiable approximation (BPDA) attacks inducing the greatest misclassification; targeted black-box attacks likewise had minimal impact on AUC. The AA defense model provided robust restoration of AUC performance, ranging from 0.9951 to 0.9997 for non-targeted attacks and 0.9930 to 0.9997 for targeted attacks. This was accompanied by attack-free image restoration capability and robust AA detection performance, with AUC ranging from 0.9227 to 1.0000.

Conclusions : Our study demonstrates a new approach to developing an end-to-end anti-AA DLS that addresses potential AA risk and secures clinical pathways for integrating DLS into healthcare systems. Clinical utility is enhanced through the combination of attack detection, result normalisation and image restoration.

This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.
