Abstract
Presentation Description:
Artificial intelligence models have shown promise in performing many medical imaging tasks. However, our ability to explain what signals these models have learned is severely lacking. Explanations are needed to increase doctors' trust in AI-based models, especially in domains where AI prediction capabilities surpass those of humans. Moreover, such explanations could enable novel scientific discovery by uncovering signals in the data that are not yet known to experts. We present an approach for generating hypotheses by leveraging a generative model (StylEx, based on StyleGAN) to produce visual attributes associated with a prediction, followed by interdisciplinary expert review. We demonstrate results on retinal fundus photographs and external eye photographs, showing examples of both possibly novel attributes and confounders learned by the model. In contrast to some previous approaches that combine all associated changes into a single visual attribute, our approach can separate distinct visual attributes for independent investigation. We hope that our approach can help others check their models for confounders and potentially learn previously unknown associations.
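The core idea can be illustrated with a minimal sketch of the coordinate-ranking step behind StylEx-style attribute discovery: toggle one style coordinate of the generator at a time and measure how much the classifier's prediction shifts, keeping the coordinates with the largest effect as candidate visual attributes. Everything below (the stub generator and classifier, the perturbation range, and the function names) is an illustrative assumption, not the authors' implementation; in the real pipeline the StyleGAN is trained jointly with the classifier so that its style space encodes classifier-relevant variation.

```python
# Minimal sketch of StylEx-style attribute ranking, assuming a pretrained
# generator G(style) -> image and classifier C(image) -> probability.
# Tiny stand-in modules are used here so the script runs end to end.
import torch
import torch.nn as nn

torch.manual_seed(0)
STYLE_DIM, IMG_PIXELS = 32, 64 * 64

class StubGenerator(nn.Module):
    """Stand-in for a StyleGAN generator: maps a style vector to an image."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(STYLE_DIM, IMG_PIXELS)
    def forward(self, style):
        return torch.sigmoid(self.fc(style))

class StubClassifier(nn.Module):
    """Stand-in for the diagnostic classifier being explained."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(IMG_PIXELS, 1)
    def forward(self, img):
        return torch.sigmoid(self.fc(img)).squeeze(-1)

@torch.no_grad()
def rank_style_coordinates(G, C, styles, low=-3.0, high=3.0, top_k=5):
    """Rank style coordinates by how much toggling each one, alone,
    shifts the classifier's output, averaged over a batch of styles."""
    effects = []
    for i in range(styles.shape[1]):
        lo_s, hi_s = styles.clone(), styles.clone()
        lo_s[:, i], hi_s[:, i] = low, high
        # A large mean |p(high) - p(low)| means this single coordinate,
        # varied independently, changes the prediction substantially.
        effect = (C(G(hi_s)) - C(G(lo_s))).abs().mean().item()
        effects.append((i, effect))
    effects.sort(key=lambda t: t[1], reverse=True)
    return effects[:top_k]

G, C = StubGenerator(), StubClassifier()
styles = torch.randn(16, STYLE_DIM)  # batch of latent style vectors
for idx, effect in rank_style_coordinates(G, C, styles):
    print(f"style coordinate {idx}: mean prediction shift {effect:.3f}")
```

Because each coordinate is perturbed on its own, each top-ranked coordinate can be rendered as its own counterfactual image pair and reviewed independently by domain experts, which is what allows separate attributes to be investigated one at a time rather than merged into a single composite change.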
This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.