ARVO Imaging in the Eye Conference Abstract  |  June 2023
Volume 64, Issue 9
Open Access
Deep Learning based Adversarial Disturbances in Fundus Image Analysis
Author Affiliations & Notes
  • Mohammad Eslami
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Lakshmi Sritan Motati
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
    Computer Science, Thomas Jefferson High School for Science and Technology, Alexandria, Virginia, United States
  • Rohan Kalahasty
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
    Computer Science, Thomas Jefferson High School for Science and Technology, Alexandria, Virginia, United States
  • Saber Kazeminasab Hashemabad
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Min Shi
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Yan Luo
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Yu Tian
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Nazlee Zebardast
    Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Mengyu Wang
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Tobias Elze
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Footnotes
    Commercial Relationships   Mohammad Eslami, Genentech Inc (F); Lakshmi Sritan Motati, None; Rohan Kalahasty, None; Saber Kazeminasab Hashemabad, None; Min Shi, None; Yan Luo, None; Yu Tian, None; Nazlee Zebardast, None; Mengyu Wang, Genentech Inc (F); Tobias Elze, Genentech Inc (F)
    Support   NIH R01 EY030575, NIH P30 EY003790
Investigative Ophthalmology & Visual Science June 2023, Vol.64, PB002. doi:
Mohammad Eslami, Lakshmi Sritan Motati, Rohan Kalahasty, Saber Kazeminasab Hashemabad, Min Shi, Yan Luo, Yu Tian, Nazlee Zebardast, Mengyu Wang, Tobias Elze; Deep Learning based Adversarial Disturbances in Fundus Image Analysis. Invest. Ophthalmol. Vis. Sci. 2023;64(9):PB002.

Abstract

Purpose : Deep Learning (DL) has shown promising results in medical image analysis. Patient measurements are increasingly acquired remotely and then transferred to automated diagnostic pipelines such as clinical decision support systems, telemedicine, and e-health services. This process is vulnerable to intentional attacks that undermine the diagnostic procedure, which becomes a particular problem if the data manipulations are invisible to human observers. Here, we investigate this risk in fundus image analysis by adding invisible perturbations to the images.

Methods : We consider three adversarial attack techniques, FGSM, SMIA, and ASMA, to perturb the fundus images. In this study, without loss of generality, we select optic disc segmentation as the DL application, as it is beneficial for disc detection and glaucoma diagnosis. We train four state-of-the-art segmentation models: one general-purpose method (DeepLabV3+) and three frequently used in fundus analysis (UNet, AttNet, CeNet). The segmentation models are trained on 3710 fundus images and evaluated on 1282 test images (without subject overlap). Each test image is additionally perturbed by each adversary at three severity levels, yielding 11,538 perturbed images in which the perturbation artifacts are not visible. The performance of the segmentation models is assessed with the intersection over union (IoU) and false negative rate (FNR) metrics.
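
For illustration, the following is a minimal sketch of the simplest of the three attacks, FGSM (fast gradient sign method), written in PyTorch. The abstract does not specify the implementation, so the loss function, the epsilon budget, and the tensor shapes below are assumptions chosen for a binary disc-segmentation setting, not the study's actual code.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, mask, epsilon=2 / 255):
        # One-step FGSM: move each pixel along the sign of the loss gradient.
        # image: (N, C, H, W) float tensor in [0, 1]; mask: (N, 1, H, W) float {0, 1}.
        image = image.clone().detach().requires_grad_(True)
        logits = model(image)                       # raw segmentation logits
        loss = F.binary_cross_entropy_with_logits(logits, mask)
        loss.backward()
        # A small epsilon keeps the perturbation below the visibility threshold.
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

The single gradient step is what makes FGSM cheap; SMIA and ASMA refine the perturbation iteratively and are therefore typically stronger at the same perturbation budget.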

Results : Fig. 1 shows the results quantitatively; CeNet and DeepLabV3+ are slightly better than the other segmentation models on the unperturbed test set. The comparison between Figs. 1A/1C and 1B/1D shows that the performance of the segmentation models is reduced in the presence of adversarial disturbances (IoU 95% CI: 0.90-0.91 vs. 0.83-0.84, p < 0.05) and that the proportion of outliers is significantly larger (IoU < 0.5: 0.007 vs. 0.04). Fig. 2 shows randomly selected examples and illustrates that, for some perturbed images, the segmentation models completely fail to segment the disc.
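
The IoU and FNR values above follow their standard definitions. As a point of reference, a per-image computation over binary masks could look like the sketch below; thresholding the model outputs into a binary mask beforehand is an assumption, not part of the reported pipeline.

    import torch

    def iou_and_fnr(pred, target, eps=1e-7):
        # pred, target: boolean tensors of shape (H, W).
        tp = (pred & target).sum().item()    # true positives
        fp = (pred & ~target).sum().item()   # false positives
        fn = (~pred & target).sum().item()   # false negatives
        iou = tp / (tp + fp + fn + eps)      # intersection over union
        fnr = fn / (tp + fn + eps)           # false negative rate
        return iou, fnr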

Conclusions : Our evaluations show that invisible adversarial disturbances significantly reduce the performance of the DL models, while humans would still perform equally well. We show that even in black-box scenarios, where the deployed model is not available to the malicious actor, the model may completely fail to accomplish its task. We also show that the models differ in their vulnerability, and we suggest taking this aspect into account in future model evaluations.

This abstract was presented at the 2023 ARVO Imaging in the Eye Conference, held in New Orleans, LA, April 21-22, 2023.

 

 
