Abstract
Purpose:
Patients with central vision loss face daily challenges in performing various visual tasks. Categorizing facial expressions is one such essential daily activity. Knowing what visual information is crucial for facial expression categorization is important for understanding the functional performance of these patients. Here we asked how performance for categorizing facial expressions depends on spatial information along different orientations in patients with central vision loss.
Methods:
Eight observers with central vision loss and five age-matched normally sighted observers categorized face images into four expressions: angry, fearful, happy, and sad. An orientation filter (bandwidth = 23°) was applied to restrict the spatial information within the face images, with the center of the filter ranging from horizontal (0°) to 150° in steps of 30°. Unfiltered face images were also tested.
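The orientation filtering described above can be sketched in code. The following is an illustrative Fourier-domain implementation only; the function name, the Gaussian orientation profile, and the handling of the DC component are our assumptions, not details taken from the study:

```python
import numpy as np

def orientation_filter(image, center_deg, bandwidth_deg=23.0):
    """Restrict an image's spatial information to a band of orientations.

    The image is filtered in the Fourier domain with a wrapped Gaussian
    profile over orientation; spatial frequency content is left untouched.
    Orientation is periodic over 180 degrees.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Orientation (in degrees) of each frequency component
    theta = np.degrees(np.arctan2(fy, fx))
    # Angular distance to the filter center, wrapped to [0, 90]
    d = np.abs((theta - center_deg + 90.0) % 180.0 - 90.0)
    gain = np.exp(-0.5 * (d / bandwidth_deg) ** 2)
    gain[0, 0] = 1.0  # preserve the DC component (mean luminance)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * gain))
```

Because the gain depends only on orientation with a 180° period, the filter is Hermitian-symmetric and the output remains real-valued.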
Results:
When stimulus visibility was matched, observers with central vision loss categorized facial expressions just as well as their normally sighted counterparts and showed similar confusion and bias patterns. For all four expressions, performance (normalized d′), which was uncorrelated with any of the observers' visual characteristics, peaked between filter orientations of −30° and 30° and declined systematically as the filter orientation approached vertical (90°). Like normally sighted observers, observers with central vision loss relied mainly on the mouth and eye regions to categorize facial expressions.
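The sensitivity measure used above follows the standard signal-detection definition d′ = z(hit rate) − z(false-alarm rate), where z is the inverse of the standard normal cumulative distribution. The sketch below illustrates that generic formula; it does not reproduce the study's specific normalization:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(H) - z(F).

    In practice, rates of exactly 0 or 1 are clipped away from the
    extremes before the z-transform to keep d' finite.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)
```

For example, a hit rate of 0.84 with a false-alarm rate of 0.16 gives d′ of roughly 2, while equal hit and false-alarm rates give d′ = 0 (chance performance).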
Conclusions:
Similar to people with normal vision, people with central vision loss rely primarily on the spatial information around the horizontal orientation, in particular the regions around the mouth and eyes, for recognizing facial expressions.
Patients whose macular area is affected by eye diseases such as age-related macular degeneration (AMD) often have to use their peripheral vision to explore the world around them. Many essential daily activities, such as face recognition, are severely affected by this visual impairment.1–5 Because the ability to correctly categorize and interpret facial expressions is essential for social interaction, it is important to learn what visual information is crucial for facial expression categorization in patients with central vision loss. Not only will this information help us understand the functional performance of these patients, but it may also serve as a guide for their visual rehabilitation. For instance, if we know where the critical information resides in face images, advanced computer vision and image-processing technology could be used to selectively enhance the contrast of face images and increase the saliency of that information, which could help people with central vision loss categorize facial expressions.
To evaluate the perception of facial expressions, various tasks can be used, such as detection (detecting the presence of an expression in face images) and categorization (sorting face images into several predefined expression categories). Each facial expression contains many signatures. For normal vision, features around the mouth region have been shown to be important for both the detection6 and the categorization of facial expressions.7 For facial expression categorization, regions other than the mouth have been shown to be important as well. For instance, recent studies7,8 identified the eye region as being diagnostic for categorizing facial expressions. By examining how performance varied with information at the pixel level of face images, Yu and colleagues7 found that nearly half of the variance of observers' responses could be explained by the information content in the mouth and eye regions. Further, it has been shown that while low spatial frequency information in face images is sufficient to support facial expression categorization, high spatial frequency information is necessary for detecting an expression.9–11 For people with central vision loss, a previous study showed that these individuals, many of whom have reduced spatial resolution, experience difficulty in detecting the presence of facial expressions but not in categorizing them,12 a finding in accord with the notion that high spatial frequency information is useful for facial expression detection and low spatial frequency information for facial expression categorization.
Besides the dimension of spatial frequency, retinal images are also decomposed along the dimension of orientation during early visual processing.13 Some research has explored how the orientation of spatial information affects face identification in people with normal vision14,15 and in people with central vision loss.2 In these studies, observers were asked to perform a face identification task: reporting the identity of face images filtered along different bands of orientation. For people with normal vision and people with central vision loss alike, identification performance was best when face images contained information in the vicinity of the horizontal orientation, and declined gradually as the filter orientation deviated from horizontal, with the worst performance at the vertical orientation. To our knowledge, there is no a priori reason to assume that the same spatial information would be used for the two different tasks of face identification and facial expression categorization. It is therefore not surprising that studies have recently been performed to evaluate the dependency of facial expression categorization on the orientation of spatial information in people with normal vision.7,8 By testing four facial expressions (angry, fearful, happy, and sad), Yu and colleagues7 found that young adults with normal vision categorize facial expressions most effectively based on the spatial information around the horizontal orientation, which captures the primary changes of facial features across expressions, and that categorization performance declines systematically as the filter orientation approaches vertical. The link between horizontal information and successful categorization of facial expressions was also found for the disgust expression but not for the surprise expression.8
In the present study, we assessed the dependency of facial expression categorization on the orientation of spatial information for people with central vision loss. Based on prior findings,2,12 we expect that people with central vision loss rely primarily on the same spatial information that is used by people with normal vision. In addition, we examined confusion patterns among the different facial expressions across filter orientations, as well as the relationship between the information content at the pixel level and observers' responses, to gain a fuller understanding of the critical information for facial expression categorization in people with central vision loss. For comparison, we also tested a group of age-matched controls with normal vision.