Investigative Ophthalmology & Visual Science, Volume 65, Issue 7
Open Access
ARVO Annual Meeting Abstract  |   June 2024
A New Deep Learning Technique Termed Fair Identity Scaling to Improve Model Equity for Diabetic Retinopathy Screening
Author Affiliations & Notes
  • Ava Kouhana
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Yan Luo
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Yu Tian
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Min Shi
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Leo A Kim
    Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Louis R. Pasquale
    Eye and Vision Research Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Meenakashi Gupta
    Eye and Vision Research Institute, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Tobias Elze
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Lucia Sobrin
    Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Mengyu Wang
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Footnotes
    Commercial Relationships   Ava Kouhana None; Yan Luo None; Yu Tian None; Min Shi None; Leo Kim Ingenia Therapeutics, Code C (Consultant/Contractor), CureVac AG, Code F (Financial Support); Louis Pasquale Twenty Twenty and Character Bio, Code C (Consultant/Contractor); Meenakashi Gupta None; Tobias Elze Genentech, Code F (Financial Support); Lucia Sobrin None; Mengyu Wang Genentech, Code F (Financial Support)
  • Footnotes
    Support  This work was supported by NIH R00 EY028631, NIH R21 EY035298, NIH R01 EY030575, NIH P30 EY003790, Research to Prevent Blindness International Research Collaborators Award, Alcon Young Investigator Grant, and Grimshaw-Gudewicz Grant.
Investigative Ophthalmology & Visual Science, June 2024, Vol. 65, 5633.
Ava Kouhana, Yan Luo, Yu Tian, Min Shi, Leo A Kim, Louis R. Pasquale, Meenakashi Gupta, Tobias Elze, Lucia Sobrin, Mengyu Wang; A New Deep Learning Technique Termed Fair Identity Scaling to Improve Model Equity for Diabetic Retinopathy Screening. Invest. Ophthalmol. Vis. Sci. 2024;65(7):5633.
      © ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose : To propose a fair identity scaling (FIS) technique that improves the fairness of artificial intelligence models for diabetic retinopathy (DR) screening.

Methods : We developed fair identity scaling (FIS), which integrates learnable group weights with individual loss information during training to improve model equity. A VGG-derived 3D network was chosen as the backbone model to illustrate the impact of FIS on performance equity. We compared FIS with adversarial (Adv) training, a widely used technique for improving model equity. We used 10,000 3D OCT scans from 10,000 patients tested at Massachusetts Eye and Ear between 2015 and 2023; 6,000, 1,000, and 3,000 subjects were used for training, validation, and testing, respectively. 9.1% of patients were identified as having vision-threatening DR (moderate and severe non-proliferative DR [NPDR] plus proliferative DR [PDR]). Overall and group-wise areas under the receiver operating characteristic curve (AUC) were used for model assessment. A new fairness metric, performance-scaled disparity (PSD), is introduced to assess model equity: mean PSD is the standard deviation of group performance divided by overall performance, and max PSD is the maximum absolute difference in group performance divided by overall performance.
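The core idea described above, rescaling each sample's loss by a learnable weight for its identity group blended with an individual-loss signal, can be sketched as follows. This is an illustrative sketch under stated assumptions, not the authors' implementation: the class name, the softmax parameterization of group weights, and the mixing coefficient `alpha` are all choices made here for clarity.

```python
import torch
import torch.nn as nn

class FairIdentityScaling(nn.Module):
    """Illustrative sketch of fair identity scaling (FIS): per-sample losses
    are rescaled by a learnable weight for each identity group, blended with
    an individual-loss signal. Names and the mixing rule are assumptions."""

    def __init__(self, num_groups: int, alpha: float = 0.5):
        super().__init__()
        # one learnable logit per identity group (e.g., race or gender category)
        self.group_logits = nn.Parameter(torch.zeros(num_groups))
        self.alpha = alpha  # blend between group-level and individual weighting

    def forward(self, per_sample_loss: torch.Tensor,
                group_ids: torch.Tensor) -> torch.Tensor:
        n_groups = self.group_logits.numel()
        # softmax keeps group weights positive; scaling by n_groups makes the
        # uniform (zero-logit) solution assign weight 1 to every group
        group_w = torch.softmax(self.group_logits, dim=0)[group_ids] * n_groups
        # individual signal: up-weight currently harder samples (higher loss)
        detached = per_sample_loss.detach()
        indiv_w = detached / (detached.mean() + 1e-8)
        weights = self.alpha * group_w + (1.0 - self.alpha) * indiv_w
        return (weights * per_sample_loss).mean()
```

With zero-initialized logits and equal per-sample losses this reduces to the plain mean loss; during training the group logits receive gradients that can up-weight under-served groups.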

Results : The average age was 64.1 ± 17.0 years; 57.1% of patients were female. 78.6%, 13.7%, and 7.7% of patients were White, Black, and Asian, respectively; 3.8% were Hispanic and 96.2% non-Hispanic. For gender, the overall AUC on OCT B-scans for DR detection with FIS increased from 0.92 to 0.94; for males specifically, the AUC increased from 0.91 to 0.93. For race, the overall AUC with FIS increased from 0.92 to 0.93, with the AUC for Black patients increasing from 0.85 to 0.89. For Hispanic ethnicity, the overall AUC with FIS increased from 0.92 to 0.93. Notably, the mean and max PSDs across race groups decreased from 5.09 to 3.14 and from 11.72 to 7.46, respectively. All differences mentioned were statistically significant (p < 0.001). Our FIS model consistently outperformed the adversarial training approach.
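The PSD metric defined in the Methods can be computed directly from group-wise and overall AUCs. A minimal sketch follows; reporting the values as percentages is an assumption based on the magnitudes in the Results (e.g., a max PSD of 11.72 for AUCs near 0.9).

```python
import numpy as np

def performance_scaled_disparity(group_perf, overall_perf):
    """Performance-scaled disparity (PSD), as defined in the abstract.
    Mean PSD: std of group performance divided by overall performance.
    Max PSD: largest absolute gap between any two groups divided by
    overall performance. Percentage scaling is an assumption."""
    g = np.asarray(group_perf, dtype=float)
    mean_psd = g.std() / overall_perf * 100.0
    max_psd = (g.max() - g.min()) / overall_perf * 100.0
    return mean_psd, max_psd
```

For example, hypothetical group AUCs of 0.93, 0.89, and 0.94 with an overall AUC of 0.93 give a mean PSD of about 2.3 and a max PSD of about 5.4; lower values indicate more equitable performance.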

Conclusions : The fair identity scaling method mitigates group disparities while preserving overall performance.

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.

Figure: Methods comparison using OCT scans for DR detection across different identity groups.
