Abstract
Purpose:
Artificial intelligence (AI) algorithms can learn and perpetuate racial biases from patterns in medical images if those images contain information relevant to self-reported race or ethnicity. Recent studies have shown that convolutional neural networks (CNNs) can be trained to classify medical images as being from Black or White patients, even when those images were not previously thought to contain information relevant to self-reported race. Herein, we evaluate whether grayscale retinal vessel maps (RVMs) of patients screened for retinopathy of prematurity (ROP) similarly contain the potential for racial bias.
Methods:
A total of 4095 retinal fundus images (RFIs) were collected from 245 Black and White infants (as labeled by parental self-report). A U-Net generated RVMs from the RFIs; the RVMs were subsequently thresholded, binarized, or skeletonized (Figure). CNNs were then trained to predict self-reported race from color RFIs, raw RVMs, and thresholded, binarized, or skeletonized RVMs. Area under the precision-recall curve (AUC-PR) was evaluated at the image and subject levels.
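The three RVM transformations described above can be sketched as simple array operations. This is a minimal illustration, not the authors' code: the function names, the 0.5 threshold, and the use of scikit-image for skeletonization are all assumptions.

```python
import numpy as np

def threshold_rvm(rvm, t=0.5):
    """Zero out weak vessel responses, keeping graded brightness above t.
    (Hypothetical threshold value; the study's exact cutoff is not given here.)"""
    return np.where(rvm >= t, rvm, 0.0)

def binarize_rvm(rvm, t=0.5):
    """Collapse vessel brightness to {0, 1}, nullifying intensity differences."""
    return (rvm >= t).astype(np.uint8)

def skeletonize_rvm(binary_rvm):
    """Thin vessels to 1-px centerlines, normalizing vessel widths.
    Assumes scikit-image is available; import is deferred for that reason."""
    from skimage.morphology import skeletonize
    return skeletonize(binary_rvm.astype(bool))
```

Each transformation removes one more potential cue (brightness, then width), which is what lets the Results section attribute the remaining signal to vessel geometry rather than pigmentation or intensity.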
Results:
CNNs predicted self-reported race from RFIs near perfectly (image-level AUC-PR: 0.999, subject-level AUC-PR: 1.000). Raw RVMs were almost as informative as color RFIs (image-level AUC-PR: 0.938, subject-level AUC-PR: 0.995). Ultimately, CNNs were able to detect whether RFIs or RVMs were from self-reported Black or White infants, regardless of whether images contained color, vessel segmentation brightness differences were nullified, or vessel segmentation widths were normalized.
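The two evaluation levels reported above can be sketched as follows. This is an illustrative implementation, not the authors' evaluation code: `average_precision` computes the standard average-precision estimator of AUC-PR, and `subject_mean_scores` is one plausible (assumed) way to aggregate image-level predictions to the subject level.

```python
import numpy as np

def average_precision(y_true, scores):
    """AUC-PR estimated as average precision: mean of precision at the
    rank of each positive, with examples sorted by descending score."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(y_true)[order]
    ranks = np.arange(1, len(y) + 1)
    precision_at_rank = np.cumsum(y) / ranks
    return float((precision_at_rank * y).sum() / y.sum())

def subject_mean_scores(scores, subject_ids):
    """Aggregate image-level scores to one score per subject by averaging
    (hypothetical aggregation rule; the study's exact rule is not stated here)."""
    sums, counts = {}, {}
    for s, sid in zip(scores, subject_ids):
        sums[sid] = sums.get(sid, 0.0) + s
        counts[sid] = counts.get(sid, 0) + 1
    return {sid: sums[sid] / counts[sid] for sid in sums}
```

Image-level AUC-PR treats each of the 4095 RFIs independently, while subject-level AUC-PR scores each of the 245 infants once, so the two metrics can differ when a subject's images receive mixed predictions.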
Conclusions:
Both color RFIs and grayscale RVMs contain information relevant to the race of patients. These results suggest that biomarker-based strategies to remove information relevant to race or ethnicity (such as skin or fundus pigmentation) may not be effective, and that the potential for racial bias exists even in images that do not appear to contain relevant information.
This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.