Abstract
Purpose:
Retinal images are commonly used to diagnose and manage ocular diseases. The retina can also reveal clues about cardiovascular, neurological, and systemic health, though many of these clues are overlooked or presently unknown. In recent work, we proposed a methodology for extracting novel retinal characteristics from a deep-learning model trained to classify fundus images. Using this approach, we uncovered previously unidentified retinal features that differ between females and males, a patient trait that ophthalmologists do not currently recognize in these images. Here, we examine whether human observers can learn to recognize patient sex in fundoscopic images.
Methods:
We developed a training paradigm consisting of didactic and practical components. In the didactic component, participants viewed a short presentation describing retinal characteristics found to differ between males and females (a brighter peripapillary region in females; greater vascular prominence in the superior temporal quadrant in males), with visuals of how these might appear in fundoscopic images. In the practical component, participants completed up to three blocks of 50 trials of a sex-recognition task in which they chose the male image from male-female pairs in a two-alternative forced-choice (2-AFC) paradigm, with feedback highlighting the correct choice. A separate block of 200 2-AFC trials without feedback, using no images from the practice component, was used to assess sex-recognition performance. Participants also completed a novel object memory test (NOMT) to assess general object-recognition ability. Fifty-four participants completed the study (mean age = 34.2 years; 32 females; 23 ophthalmologists).
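As a purely illustrative sketch (not the study's actual experimental software), the logic of a 2-AFC block can be expressed in a few lines of Python: each trial presents a male-female pair, the observer chooses one image, and block accuracy is the proportion of trials on which the male image was chosen. The simulated observer and its p_correct parameter below are assumptions for illustration only.

```python
import random

def simulate_2afc_block(n_trials: int, p_correct: float, seed: int = 0) -> float:
    """Simulate one block of 2-AFC sex-recognition trials.

    Each trial shows a male-female pair; the simulated observer chooses
    the male image with probability p_correct (0.5 = pure guessing).
    Returns the block accuracy (proportion of correct choices).
    """
    rng = random.Random(seed)
    hits = sum(rng.random() < p_correct for _ in range(n_trials))
    return hits / n_trials

# A naive observer over the 200-trial test block lands near chance (0.5);
# an observer at the reported post-training level lands near 0.66.
print(simulate_2afc_block(200, 0.50))  # e.g., ~0.5
print(simulate_2afc_block(200, 0.66))  # e.g., ~0.66
```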
Results:
Pre-training sex-recognition accuracy was M = 0.52, consistent with chance-level performance, and did not differ between ophthalmologists and non-experts (p > 0.9). Post-training performance was significantly higher, M = 0.66 (d = 1.89, p << 0.01; ophthalmologists: M = 0.66, d = 2.38, p << 0.01). Performance on the NOMT was not related to improvement in fundus classification.
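To make the chance-level claim concrete, an exact binomial tail probability shows how unremarkable 0.52 accuracy is on a 200-trial 2-AFC block under pure guessing, and how unlikely 0.66 would be. This is a generic sanity check, not the statistical analysis reported above (effect sizes and p-values).

```python
from math import comb

def binom_tail(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): probability of at least k
    correct responses out of n trials under guessing probability p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Pre-training: 0.52 accuracy on 200 trials = 104 correct responses.
print(binom_tail(200, 104))  # ~0.31: fully consistent with guessing

# Post-training: 0.66 accuracy on 200 trials = 132 correct responses.
print(binom_tail(200, 132))  # on the order of 1e-6: effectively impossible by guessing
```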
Conclusions:
These results show that ophthalmologists do not spontaneously recognize patient sex in fundus images, yet they can be trained to do so. This approach could be extended to discover novel retinal signs of systemic and neurodegenerative disease and add them to the ophthalmologist's toolkit.
This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.