Abstract
Purpose:
Develop a calibrated item bank for measuring the effectiveness of treatments for improving facial perception ability in low vision patients.
Methods:
On each trial, patients viewed photographs, presented in virtual reality, of the same person's face from three different angles (left, center, and right). They were tasked with identifying: 1) the gender, 2) the face whose emotional expression differed from the other two, and 3) the emotional expression of that face (choices: angry, sad, or neutral). Images were selected from the Karolinska Directed Emotional Faces database and presented in an Oculus Rift DK2 head-mounted display. There were 41 patients and 64 trials per patient. Half of the trials were done with a magnification bubble (a localized region of magnification), while the other half were done without magnification.
Results:
Each trial (a set of three images) was treated as an item for purposes of Rasch analysis. Three separate Rasch analyses were performed on the 64 items, one for each facial perception task. The resulting item measures were only weakly correlated with each other, suggesting that different facial perception tasks require separate measures of item difficulty. Signal detection theory was used to measure sensitivity (d') for performance with and without the magnification bubble. Without magnification, d' was lowest (the task was hardest) for identifying the face with the "odd" expression, and highest for gender discrimination. Magnification improved performance most for the hardest task and least for the easiest task.
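The abstract does not specify how d' was computed. As one common convention for a yes/no discrimination, d' is the difference between the z-transformed hit rate and false-alarm rate; a minimal sketch follows (the function name, the example counts, and the log-linear correction are illustrative assumptions, not details from this study, and an odd-one-out task would in practice need an m-alternative variant of this formula):

```python
from statistics import NormalDist

def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).

    Applies the log-linear correction (add 0.5 to each cell) so that
    perfect rates of 0 or 1 do not send the z-transform to infinity.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: 28 hits / 4 misses vs. 6 false alarms / 26 correct rejections
print(d_prime(28, 4, 6, 26))
```

Under this convention, a d' of 0 corresponds to chance performance (equal hit and false-alarm rates), and larger values indicate easier discriminations, matching the abstract's use of d' as a task-difficulty index.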
Conclusions:
We applied Rasch analysis to construct three distinct measures of item difficulty for three different facial perception tasks: 1) gender discrimination, 2) identifying the face with the "odd" emotion, and 3) identifying the emotional expression of a face. These measures are useful for determining the effectiveness of treatments targeting improvements in facial perception. Using a magnification bubble improved facial perception ability in all three tasks; however, the easier the task was without magnification, the less improvement was seen with magnification.
This is an abstract that was submitted for the 2016 ARVO Annual Meeting, held in Seattle, Wash., May 1-5, 2016.