Abstract
Purpose:
Deep learning systems (DLS) for ophthalmic imaging are progressively being integrated into healthcare systems and could achieve widespread utility in the future. However, they may be vulnerable to adversarial attacks (AA), which can cause significant performance degradation and pose clinical risk. The purpose of this study is to assess the impact of AA on a macular Optical Coherence Tomography (OCT) imaging DLS and the efficacy of a new anti-AA model with cascaded defense and detection.
Methods:
We conducted a retrospective study of a DLS using 108,312 retinal OCT images covering three diseases: choroidal neovascularisation (CNV), diabetic macular edema (DME) and drusen. Six common and highly efficacious types of adversarial attack, including white-box and black-box as well as targeted and non-targeted attacks, were applied separately to the original classifier to induce misclassification. These attacks were then applied to and evaluated against our proposed anti-AA DLS, which comprises two components: an AA defense model with image restoration capability, based on a modified High-Level Representation Guided Denoiser, and a separate novel AA detection model.
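For readers unfamiliar with the attack mechanics, the sketch below illustrates a generic non-targeted projected gradient descent (PGD) attack, the family from which the auto-PGD attack evaluated here derives. The classifier, perturbation budget and step schedule are illustrative assumptions and do not reflect the study's actual configuration; auto-PGD additionally adapts the step size during the attack.

# Minimal sketch of a non-targeted L-infinity PGD attack (illustrative only;
# epsilon, alpha and steps are assumed values, not the study's settings).
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Craft epsilon-bounded adversarial images against a trained classifier."""
    adv = images.clone().detach()
    # Random start inside the epsilon ball around the clean images.
    adv = torch.clamp(adv + torch.empty_like(adv).uniform_(-eps, eps), 0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss, then project back into the epsilon ball.
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, images - eps), images + eps)
        adv = torch.clamp(adv, 0, 1)
    return adv.detach()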
Results:
The area under the curve (AUC) of the original classifier decreased significantly under non-targeted white-box attacks, falling from 0.9979-0.9997 to 0.0174-0.9123, with auto-projected gradient descent attacks achieving the greatest reduction. Non-targeted black-box attacks failed to induce significant AUC changes. Targeted white-box attacks degraded performance, with AUC ranging from 0.3917 to 0.9417 and backward pass differentiable approximation attacks inducing the greatest misclassification; targeted black-box attacks likewise had minimal impact on AUC. The AA defense model provided robust restoration of AUC performance, ranging from 0.9951 to 0.9997 for non-targeted attacks and 0.9930 to 0.9997 for targeted attacks. This was accompanied by attack-free image restoration capability and robust AA detection performance, with AUC ranging from 0.9227 to 1.0000.
Conclusions:
Our study demonstrates a new approach to developing an end-to-end anti-AA DLS to address potential AA risk and secure clinical pathways for DLS integration with healthcare systems. Clinical utility is enhanced through the combination of attack detection, result normalisation and image restoration.
This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.