Abstract
Purpose:
Recognition of faces plays an important role in daily life but is difficult to quantify. People with face processing deficits report notable struggles in social engagement and peer relationships. We developed a face discrimination paradigm that can be self-administered and completed remotely by clinical populations. Additionally, we measured the perceptual summation of facial features to investigate their independence in face processing.
Methods:
Participants viewed 4 charts, each comprising 9 cells that contained 2 faces created from the Basel Face Database (Figure 1), and used a mouse to select the cells in which the face pairs had different identities. Each face was generated from 199 parameters whose coefficients were varied to alter facial features. In each chart, 3 randomly located cells contained identical faces; in the remaining 6 cells, the faces differed from their partner in 1 or 2 of the coefficients, with the size of the difference controlled by an adaptive FInD (Foraging Interactive D-prime) algorithm driven by signal detection responses. In a lab study, participants (N=14) each tested 50 parameters, and the 10 most salient were isolated. We then measured face discrimination thresholds for these parameters singly and in combination in a remote study with visually normal participants (N=8) and 2 participants with brain-based visual deficits (cerebral visual impairment).
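The abstract does not specify the FInD update rule, so the following is only a minimal Python sketch of one way a single chart response could be scored and used to set the next coefficient difference: a d-prime estimate from the selected cells, rescaled toward a target sensitivity. The function names, the target d-prime of 1.5, and the assumption that d-prime grows roughly linearly with coefficient difference are illustrative assumptions, not the published FInD algorithm.

```python
# Hypothetical sketch of a FInD-style chart update (illustrative, not the
# published algorithm): estimate d-prime from the participant's selections
# on one 9-cell chart, then rescale the tested coefficient difference.
from scipy.stats import norm


def dprime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection d-prime; adds 0.5 to each count (log-linear
    correction) so that perfect or zero rates do not give infinite z-scores."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1.0)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)


def next_magnitude(current_mag, hits, misses, false_alarms,
                   correct_rejections, target_dprime=1.5):
    """Scale the coefficient difference toward the level expected to yield
    target_dprime, assuming d-prime is roughly proportional to the
    coefficient difference (an assumed, illustrative scaling rule)."""
    d = max(dprime(hits, misses, false_alarms, correct_rejections), 0.1)
    return current_mag * target_dprime / d


# Example: 6 of the 9 cells contained different-identity pairs (signal) and
# 3 contained identical pairs (noise); the participant selected 4 signal
# cells and 1 noise cell.
mag = next_magnitude(current_mag=2.0, hits=4, misses=2,
                     false_alarms=1, correct_rejections=2)
print(f"next coefficient difference: {mag:.2f}")
```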
Results:
In controls, identification thresholds were measured in 2 minutes. There was no significant difference between lab and remote thresholds (t(14)=0.231, p=.821). Thresholds for combined parameters were significantly lower than for single parameters (t(7)=2.899, p<.05), by 1.4 units (Figure 2), consistent with probability summation rather than superadditivity. Recruitment of the clinical population is ongoing.
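As a point of reference for the summation comparison (an illustrative formulation; the exponent beta is assumed, not estimated in this study), probability summation across independent feature channels can be written as a Minkowski combination of single-feature sensitivities:

% Illustrative probability (Minkowski) summation rule; beta is an assumed
% summation exponent, not a value fitted to these data.
\[
  d'_{\mathrm{combined}} = \Bigl( \sum_{i} (d'_{i})^{\beta} \Bigr)^{1/\beta},
  \qquad 2 \le \beta \le 4 .
\]

Under such a rule, combining two equally discriminable features yields only a modest sensitivity gain (a factor of 2^{1/beta}), whereas superadditive interaction between features would produce a larger improvement than independent channels can account for.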
Conclusions:
We developed a task that can rapidly quantify face discrimination ability both in the lab and remotely, in subjects with typical vision or visual impairment. Because thresholds for combined parameters were consistently and significantly lower than those for single parameters, but not to the degree predicted by superadditivity, this probability summation indicates that these facial features are processed independently.
This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.