Abstract
Purpose:
Patients with central vision loss must use their peripheral vision for visual tasks, making many daily activities, including recognizing facial expressions, difficult. Since the ability to correctly recognize facial expressions is crucial for social interaction, knowledge of which visual information is most important for facial expression recognition is vital to the visual rehabilitation of these patients. In this study, we examined how performance for recognizing facial expressions depends on the spatial information along different orientations for people with central vision loss.
Methods:
Four observers with central vision loss (aged 70 to 86; acuity: 0.4 to 1.0 logMAR) and four older adults with normal vision (aged 66 to 70) viewed face images (140 for each of four categories) and categorized them into four facial expressions: angry, fearful, happy, and sad. Viewing distance and stimulus exposure duration were adjusted for each observer so that accuracy for recognizing facial expressions in unfiltered images was approximately matched across observers. An orientation filter (bandwidth = 23°) was applied to restrict the spatial information within the face images, with the center of the filter ranging from 0° (horizontal) to 150° in steps of 30°. Accuracy for recognizing facial expressions was measured for images filtered with each of these six filters, as well as for the unfiltered condition. We computed recognition accuracy as d’ to separate discriminability from response bias.
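The d’ measure separates discriminability from response bias by converting response rates to standard-normal z-scores. As an illustrative sketch only: the snippet below shows the basic yes/no form, d’ = z(hit rate) − z(false-alarm rate); the exact formulation used for the four-alternative categorization task in this study may differ, and the function name here is our own.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Basic signal-detection sensitivity index.

    d' = z(H) - z(FA), where z is the inverse standard normal CDF.
    Rates must lie strictly between 0 and 1 (in practice, extreme
    rates are usually adjusted before this conversion).
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Example: 84% hits and 16% false alarms give d' of about 2,
# while chance performance (equal rates) gives d' = 0.
print(round(d_prime(0.84, 0.16), 2))  # ≈ 1.99
print(d_prime(0.5, 0.5))              # 0.0
```

Unlike raw percent correct, this index is unaffected by an observer's overall tendency to favor one response, which is why it is preferred when comparing groups that may adopt different response criteria.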
Results:
Performance for recognizing facial expressions was highly similar between observers with central vision loss and older adults with normal vision. For all four facial expressions, recognition accuracy peaked between −30° and 30° filter orientations and declined systematically as the filter orientation approached 90° (vertical). The drop in accuracy as the filter orientation changed from 0° to 90° was largest for happy faces (change in d’ = 3.12) and smallest for fearful faces (change in d’ = 1.11).
Conclusions:
When stimulus visibility was matched (by increasing retinal image size and stimulus duration), people with central vision loss recognized facial expressions just as well as their normally sighted counterparts. Also, like people with normal vision, people with central vision loss rely primarily on spatial information around the horizontal orientation in face images for recognizing facial expressions.
Keywords: face perception • low vision • shape, form, contour, object perception