Investigative Ophthalmology & Visual Science
Open Access | Low Vision | March 2019, Volume 60, Issue 4
Orientation Information in Encoding Facial Expressions for People With Central Vision Loss
Author Affiliations & Notes
  • Deyue Yu
    College of Optometry, The Ohio State University, Columbus, Ohio, United States
  • Susana T. L. Chung
    School of Optometry, University of California, Berkeley, California, United States
  • Correspondence: Deyue Yu, College of Optometry, The Ohio State University, 338 West 10th Avenue, Columbus, OH 43210, USA; [email protected]
Investigative Ophthalmology & Visual Science March 2019, Vol.60, 1175-1184. doi:https://doi.org/10.1167/iovs.18-25380
Abstract

Purpose: Patients with central vision loss face daily challenges in performing various visual tasks. Categorizing facial expressions is one of these essential daily activities. Knowing what visual information is crucial for facial expression categorization is important for understanding the functional performance of these patients. Here we asked how performance in categorizing facial expressions depends on spatial information along different orientations for patients with central vision loss.

Methods: Eight observers with central vision loss and five age-matched normally sighted observers categorized face images into four expressions: angry, fearful, happy, and sad. An orientation filter (bandwidth = 23°) was applied to restrict the spatial information within the face images, with the center of the filter ranging from horizontal (0°) to 150° in steps of 30°. Face images without filtering were also tested.

Results: When the stimulus visibility was matched, observers with central vision loss categorized facial expressions just as well as their normally sighted counterparts, and showed similar confusion and bias patterns. For all four expressions, performance (normalized d′), uncorrelated with any of the observers' visual characteristics, peaked between −30° and 30° filter orientations and declined systematically as the filter orientation approached vertical (90°). Like normally sighted observers, observers with central vision loss also relied mainly on mouth and eye regions to categorize facial expressions.

Conclusions: Similar to people with normal vision, people with central vision loss rely primarily on the spatial information around the horizontal orientation, in particular the regions around the mouth and eyes, for recognizing facial expressions.

Patients whose macular area is affected by eye diseases such as AMD often have to use their peripheral vision to explore the world around them. Many essential daily activities, such as face recognition, are severely affected by this visual impairment.1–5 Because the ability to correctly categorize and interpret facial expressions is essential for social interaction, it is important to know what visual information is crucial for facial expression categorization in patients with central vision loss. Not only will this information help us understand the functional performance of these patients, but it may also serve as a guide for their visual rehabilitation. For instance, if we know where the critical information resides in face images, then, with advanced computer vision and image-processing technology, we may selectively enhance the contrast of face images to increase the saliency of the critical information, which could help people with central vision loss categorize facial expressions.
To evaluate the perception of facial expressions, various tasks can be used, such as detection (detecting the presence of expressions in face images) and categorization (categorizing face images into several predefined categories according to expressions). Each facial expression contains many signatures. For normal vision, it has been shown that features around the mouth region are important for both the detection6 and the categorization of facial expressions.7 For facial expression categorization, regions other than the mouth have been shown to be important as well. For instance, recent studies7,8 identified the eye region as being diagnostic for categorizing facial expressions. By examining how performance varied with information at the pixel level of face images, Yu and colleagues7 found that nearly half of the variance of observers' responses could be explained by the information content in the mouth and eye regions. Further, it has been shown that while low spatial frequency information contained within face images is sufficient to support facial expression categorization, high spatial frequency information is necessary for detecting an expression.9–11 For people with central vision loss, a previous study showed that these individuals, many of whom have reduced spatial resolution, experience difficulty in detecting the presence of facial expressions but not in categorizing these expressions,12 a finding in accord with the notion that high spatial frequency information is useful for facial expression detection and low spatial frequency information for facial expression categorization. 
Besides the dimension of spatial frequency, retinal images are also decomposed along the dimension of orientation during early visual processing.13 Some research has explored how the orientation of spatial information affects face identification in people with normal vision14,15 and in people with central vision loss.2 In these studies, observers performed a face identification task: reporting the identity of face images filtered along different bands of orientation. For people with normal vision and people with central vision loss alike, identification performance was best when face images contained information in the vicinity of the horizontal orientation, and declined gradually as the filter orientation deviated from horizontal, with the worst performance at the vertical orientation. To our knowledge, there are no a priori reasons to assume that the same spatial information would be used for the two different tasks of face identification and facial expression categorization. It is therefore not surprising that studies have recently been performed to evaluate the dependency of facial expression categorization on the orientation of spatial information in people with normal vision.7,8 By testing four facial expressions (angry, fearful, happy, and sad), Yu and colleagues7 found that young adults with normal vision categorize facial expressions most effectively based on the spatial information around the horizontal orientation, which captures the primary changes of facial features across expressions, and that categorization performance declines systematically as the filter orientation approaches vertical. The link between horizontal information and the successful categorization of facial expressions was also found for the disgust expression but not for the surprise expression.8 
In the present study, we assessed the dependency of facial expression categorization on the orientation of spatial information for people with central vision loss. Based on prior findings,2,12 we expected that people with central vision loss rely primarily on the same spatial information that is used by people with normal vision. In addition, we examined confusion patterns among different facial expressions across filter orientations, and the relationship between the information content at the pixel level and observers' responses, to gain a fuller understanding of the critical information for facial expression categorization for people with central vision loss. For comparison, we also tested a group of age-matched controls with normal vision. 
Methods
Observers
Eight observers with long-standing central vision loss and five age-matched observers with normal or corrected-to-normal vision participated in the study. Table 1 shows the age, sex, diagnosis, time since diagnosis, and best-corrected distance visual acuity of the observers. None of the normally sighted observers had any known eye disease or prior history of eye disease. For each observer with central vision loss, we used a Rodenstock scanning laser ophthalmoscope (Model 101; Rodenstock, Munich, Germany) to map the central scotomas and to identify the location of the preferred retinal locus for fixation (fPRL) for each eye. This measurement was not performed for the normal vision group. The research followed the tenets of the Declaration of Helsinki and was approved by the Committee for Protection of Human Subjects at the University of California, Berkeley. Observers gave written informed consent before the commencement of data collection. All observers were tested binocularly in a dimly lit room, and completed the experiment in one session lasting 1 to 2 hours. 
Table 1
 
Summary of Observers' Characteristics (Age, Sex, Diagnosis, Time Since Diagnosis, Best-Corrected Distance Visual Acuity, and fPRL) and Viewing Conditions (Stimulus Duration, Viewing Distance and Image Size)
Apparatus and Stimuli
Custom-written MATLAB (version 7.7.0; Mathworks, Natick, MA, USA) software and Psychophysics Toolbox16,17 were used to control the experiment on a Macintosh computer (MacBook 5.1; Apple, Palo Alto, CA, USA). The face images were presented on a gamma-corrected SONY color graphic display (model: Multiscan E540; resolution: 1280 × 1024; dimensions: 39.3 cm × 29.4 cm; refresh rate: 75 Hz; SONY, Tokyo, Japan). 
Our stimuli and the psychophysical procedures for testing were similar to those described by Yu and colleagues.7 Four facial expressions, angry, fearful, happy, and sad, were tested. The source face stimuli were selected from the NimStim Set of Facial Expressions.18 To avoid the possible interfering effect of mouth openness, only closed-mouth versions were used in the present study. To ensure that each face image was presented to each observer without repetition, we generated more test faces by morphing between the images of two persons of the same sex and with the same facial expression (see Yu and colleagues7 for detailed morphing procedures). Ultimately, a total of 140 different face images were obtained for each facial expression. We then applied an orientation filter to restrict the information within these face images. The filter had a wrapped Gaussian distribution with an orientation bandwidth (σ) of 23°, with the center of the filter ranging from 0° (horizontal) to 150° in steps of 30°. For each filtered image, only information within the filter orientation ± the bandwidth was retained. All face images were rendered in gray scale, cropped to an oval shape, and normalized to equate the root mean square (RMS) contrast (0.12) and mean luminance (0.5). A gray background (29 cd/m2) was used. Figure 1 gives examples of the four facial expressions in the unfiltered (original) and the six filtered conditions. Accuracy for categorizing facial expressions was measured for each of the seven conditions. 
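For readers who want to reproduce this type of stimulus manipulation, the sketch below shows one way such an orientation filter could be applied in the Fourier domain. It is not the code used in the study; the single-term wrapped-Gaussian weighting, the image-versus-spectrum orientation convention, and the definition of RMS contrast as the standard deviation of the [0, 1] image are all assumptions made for illustration.

```python
import numpy as np

def orientation_filter(img, center_deg, sigma_deg=23.0,
                       mean_lum=0.5, rms_contrast=0.12):
    """Retain image structure near one orientation (a sketch, not the study's code).

    img: 2D array with values in [0, 1]. center_deg: orientation of the retained
    image structure (0 = horizontal, 90 = vertical). A wrapped Gaussian
    (sigma = 23 deg) weights the Fourier spectrum; image structure at
    orientation theta carries energy at spectrum orientation theta + 90 deg.
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Orientation of the image structure associated with each frequency component
    theta_img = (np.degrees(np.arctan2(fy, fx)) + 90.0) % 180.0

    # Wrapped (180-deg periodic) distance from the filter center
    d = (theta_img - center_deg + 90.0) % 180.0 - 90.0
    weight = np.exp(-0.5 * (d / sigma_deg) ** 2)
    weight[0, 0] = 1.0                       # keep the DC term (mean luminance)

    filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * weight))

    # Re-normalize to the study's mean luminance and RMS contrast
    filtered = filtered - filtered.mean()
    filtered = filtered * (rms_contrast / filtered.std()) + mean_lum
    return np.clip(filtered, 0.0, 1.0)

# Example: keep only near-horizontal structure of a grayscale face image `face`
# horizontal_face = orientation_filter(face, center_deg=0)
```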
Figure 1
 
Example stimuli of the four facial expressions, angry, fearful, happy, and sad, in the unfiltered (original) and the six filtered conditions. The spatial filter was applied along six orientations: −60° (120°), −30° (150°), 0° (horizontal), 30°, 60°, and 90° (vertical).
Procedures
Twenty trials were tested for each of the 28 conditions (four facial expressions × seven filtering conditions) for each observer. The resulting 560 trials were randomized and divided into four blocks of 140 trials each. At the beginning of the experiment, each observer was given a practice session in which a different set of face images was used. Based on performance in the practice session, an image exposure duration was selected for each observer (0.1 to 4.0 seconds; Table 1) to target a categorization accuracy (averaged across the four facial expressions) of approximately 0.8 for the unfiltered condition. The actual average categorization accuracy during testing ranged between 0.74 and 0.93. Statistics confirmed that performance for the unfiltered condition was similar between the two observer groups (F1,11 = 0.09; P = 0.77 from a repeated-measures ANOVA with facial expression as the within-subject factor and group as the between-subject factor). 
Before each trial, observers were instructed to fixate on a white dot centered on the screen. An experimenter pressed a mouse button to initiate the trial after ensuring that the observer was ready. A face image was then presented for a predefined, fixed amount of time (Table 1), followed by a white-noise post-mask for 500 ms. Observers were then shown a response screen consisting of four words in large print (angry, fearful, happy, and sad), and provided their response verbally. The experimenter entered the observer's response by clicking on the respective word on the screen. Viewing distance was 40 cm for the age-matched controls. For each observer with central vision loss, the viewing distance was adjusted based on visual and ergonomic preference (20 to 40 cm; Table 1). Appropriate near corrections were provided to all observers to compensate for the accommodative demand of near viewing. No additional low vision devices were used. Table 1 also lists the angular image size (image width). At a viewing distance of 40 cm, the angular subtense of the face images was 8° (horizontal extent) by 11.9° (vertical extent), which is similar to the angular size of a real-life face at a distance of 85 cm (a comfortable social distance). 
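As a rough check on this equivalence (assuming a physical face width of about 12 cm, a value not stated here), the visual angle subtended by a width \(w\) at distance \(d\) is \(\theta = 2\arctan\left(\frac{w/2}{d}\right)\); for example, \(2\arctan(6\ \mathrm{cm}/85\ \mathrm{cm}) \approx 8.1^\circ\), which matches the 8° horizontal extent of the stimuli. Conversely, on the display at 40 cm, an 8° image corresponds to a physical width of roughly \(2 \times 40\ \mathrm{cm} \times \tan(4^\circ) \approx 5.6\ \mathrm{cm}\).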
Results
Categorization Accuracy
Figure 2 plots the average categorization accuracy as a function of the orientation of the spatial filter for each facial expression and observer group. Individual data for observers with central vision loss are also plotted. Chance performance for the four-alternative forced-choice task is 0.25; any performance lower than chance may be due to sampling error or response bias. First, for the unfiltered condition, the happy expression outperformed the other three expressions (Ps ≤ 0.001). Second, performance changed with the orientation of the filters; in other words, there was an orientation tuning for performance. The tuning curves are roughly symmetrical around the horizontal orientation and are similar between the two groups of observers. Averaged across observers, the tuning curves are bell-shaped for the angry, happy, and sad expressions, and inverted-bell-shaped for the fearful expression. The difference between the pattern of categorization accuracy for the fearful expression and that of the other expressions can be accounted for by response biases (see results on d′ and Response Bias). In the following, we examine the stimulus-response confusion matrix and the energy content in the face stimuli for a more comprehensive evaluation. 
Figure 2
 
Proportion correct of categorizing facial expression is plotted as a function of orientation of filter. The black symbols and lines represent data from normally sighted controls (NV). The gray symbols and lines represent data from observers with central vision loss (CVL). Dashed lines denote the categorization performance for unfiltered images. For each set of data, performance for the vertical filter orientation is plotted twice at −90° and 90°. Error bars represent ± 1 SEM.
Confusion Matrices
A confusion matrix with data accumulated across observers was constructed for each condition and observer group (Figs. 3 and 4). In each matrix, the rows are targets presented (Y), and the columns are observer responses (X). Each cell contains the proportion of response X given a target Y. The diagonal elements represent the proportion of correct categorization of a given target. The rest of the matrix shows the pattern of confusions (false alarms and misses). False alarms are errors of reporting a facial expression as present when it was absent; misses are errors of reporting a facial expression as absent when it was present. For both groups, overall performance was best for unfiltered face images (proportion correct in the cells along the diagonal was quite high) and for recognizing the happy expression (proportion correct was generally high for the diagonal cell corresponding to the happy expression in each matrix). The confusion rate (values in the off-diagonal cells) between facial expressions increased as the filter orientation approached vertical, except when the images contained fearful expressions. For the fearful expression, categorization accuracy was lowest when the filter orientation was at or near horizontal (−30°, 0°, and 30°) and highest when filtering was along the vertical. As shown later, the high accuracy in categorizing the fearful expression for faces filtered at the vertical orientation should be discounted because of observers' bias. Both observer groups showed high miss rates in the ±60° filtered conditions for both the angry and sad expressions. Both groups also had more false alarms for the sad expression than for the other expressions at all but the 90° filter orientation. 
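The sketch below shows how a confusion matrix of this form can be tallied from trial-by-trial data; the variable names and the toy trial list are illustrative only.

```python
import numpy as np

EXPRESSIONS = ["angry", "fearful", "happy", "sad"]

def confusion_matrix(targets, responses):
    """Rows: target expression presented (Y); columns: observer response (X).

    Each cell holds the proportion of response X given target Y, so every
    row sums to 1 and the diagonal cells are the correct-categorization rates.
    """
    idx = {name: i for i, name in enumerate(EXPRESSIONS)}
    counts = np.zeros((len(EXPRESSIONS), len(EXPRESSIONS)))
    for target, response in zip(targets, responses):
        counts[idx[target], idx[response]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy example: six trials (targets presented and the responses given)
targets   = ["angry", "angry", "happy", "sad", "fearful", "sad"]
responses = ["angry", "sad",   "happy", "sad", "happy",   "angry"]
matrix = confusion_matrix(targets, responses)
```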
Figure 3
 
Confusion matrices for the central vision loss group. One matrix was constructed for each of the seven filtering conditions (unfiltered, −60°, −30°, 0°, 30°, 60°, and 90° filtered conditions) with the data averaged across all observers in the group. The average confusion matrices across the six filtered conditions (“6 Filtered”) and across all conditions (“6 Filtered + Unfiltered”) were also constructed. In each matrix, the rows are targets presented (Y), and the columns are observer responses (X). Each cell contains the proportion of response X given a target Y. The diagonal elements represent the proportion of correct categorization of a given target. The rest of the matrix shows the pattern of confusions (false alarms and misses). The value under each column stands for the total proportion of response X for a given condition.
Figure 4
 
Confusion matrices for the normal vision group. Details are the same as in Figure 3.
d′ and Response Bias
From the confusion matrices, we computed d-prime (d′) values to examine observers' ability to distinguish target-present from target-absent using the equation d′ = Φ⁻¹(1 − miss rate) − Φ⁻¹(false-alarm rate), where Φ⁻¹ is the inverse cumulative normal function. We further obtained a relative d′ (referred to as normalized d′ in this paper) by subtracting the d′ of the unfiltered condition from the d′ of each filtered condition. This normalization helps minimize the impact of possible faults of the source face stimuli. Our analysis also showed that the normalization removed the possible influence of viewing conditions (e.g., viewing distance) on the outcome measure. We also calculated the response bias, c = −[Φ⁻¹(1 − miss rate) + Φ⁻¹(false-alarm rate)]/2. A nonzero value of c indicates that the observer is biased toward one type of response regardless of the stimulus.19 These calculations were done for each facial expression, filtering condition, and observer. 
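These two quantities can be computed directly from the miss and false-alarm rates; a minimal sketch follows (rates of exactly 0 or 1 would need a standard correction before the inverse-normal transform, which is omitted here):

```python
from scipy.stats import norm

def d_prime_and_bias(miss_rate, false_alarm_rate):
    """d' = Phi^-1(1 - miss rate) - Phi^-1(false-alarm rate);
    c  = -(Phi^-1(1 - miss rate) + Phi^-1(false-alarm rate)) / 2,
    where Phi^-1 is the inverse cumulative normal (norm.ppf)."""
    z_hit = norm.ppf(1.0 - miss_rate)        # z-transform of the hit rate
    z_fa = norm.ppf(false_alarm_rate)        # z-transform of the false-alarm rate
    return z_hit - z_fa, -(z_hit + z_fa) / 2.0

# Example: miss rate 0.2, false-alarm rate 0.1  ->  d' ≈ 2.12, c ≈ 0.22
d_prime, c = d_prime_and_bias(0.2, 0.1)
```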
In Figure 5, the averaged normalized d′ was plotted as a function of the orientation of the spatial filter for each facial expression and observer group. A two-factor repeated-measures ANOVA (facial expression × orientation with group as the between-subject factor) was performed. The tuning curves are roughly symmetrical around the horizontal filter orientation, appear to be bell-shaped for all four facial expressions (F5,55 = 75.27, P < 0.0005), and are similar between the two observer groups (F1,11 = 0.02, P = 0.89). The change in shape of the tuning curve for fearful expression, from inverted-bell shape for categorization accuracy (Fig. 2) to bell-shaped for normalized d′ (Fig. 5), may be explained by observers' response biases. All tuning curves have flat peaks, as demonstrated by the similar performance across the three orientations near the horizontal (−30°, 0°, and 30°; F2,22 = 1.76, P = 0.20), while the slope of the tuning curve (the change in performance between the peak and the vertical orientation) varies across facial expressions (F15,165 = 11.18, P < 0.0005). 
Figure 5
 
Normalized d′ as a function of orientation of filter for each facial expression and observer group. Dashed lines denote a normalized d′ of zero. Data points plotted at −90° are the same as those at 90°. Error bars represent ± 1 SEM.
By fitting each of these tuning curves with a Gaussian function, we quantified the orientation selectivity for each facial expression and observer group using the tuning bandwidth (full-width at half-maximum). We compared these bandwidths for the two observer groups with those of the normally sighted young adults tested by Yu and colleagues.7 For each observer group, bootstrapping was performed with 1000 iterations, from which a 95% confidence interval was derived for each facial expression. As shown in Table 2, the tuning bandwidth is similar across the different observer groups for all four expressions. 
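One way to obtain such bandwidth estimates is sketched below: fit a Gaussian to the normalized d′ tuning curve, convert its σ to a full-width at half-maximum, and bootstrap over observers. The fit form (centered on horizontal), the starting values, and the resampling scheme are illustrative assumptions, not necessarily those used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

ORIENTATIONS = np.array([-60.0, -30.0, 0.0, 30.0, 60.0, 90.0])  # filter centers (deg)

def gaussian(x, amp, sigma, baseline):
    # Tuning curve assumed to be centered on the horizontal orientation (0 deg)
    return baseline + amp * np.exp(-0.5 * (x / sigma) ** 2)

def fwhm_from_fit(orientations, normalized_dprime):
    (amp, sigma, baseline), _ = curve_fit(
        gaussian, orientations, normalized_dprime, p0=[1.0, 40.0, -1.0])
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)   # FWHM of a Gaussian

def bootstrap_fwhm_ci(per_observer_dprime, n_iter=1000, seed=0):
    """per_observer_dprime: (n_observers, n_orientations) array of normalized d'.
    Resample observers with replacement, refit the group-average curve, and
    take the 2.5th and 97.5th percentiles of the resulting FWHM estimates."""
    rng = np.random.default_rng(seed)
    n_obs = per_observer_dprime.shape[0]
    fwhms = []
    for _ in range(n_iter):
        resample = per_observer_dprime[rng.integers(0, n_obs, n_obs)]
        fwhms.append(fwhm_from_fit(ORIENTATIONS, resample.mean(axis=0)))
    return np.percentile(fwhms, [2.5, 97.5])
```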
Table 2
 
Lower and Upper Values of the 95% Confidence Intervals of Tuning Bandwidths for the Four Facial Expressions and the Three Observer Groups: Observers With Central Vision Loss, Age-Matched Controls, and Normally Sighted Young Adults Tested by Yu and Colleagues7
Figure 6 plots response bias as a function of filter orientation for the four facial expressions and the two observer groups. For the unfiltered condition, no response bias was found for the happy and sad expressions for either observer group (Ps > 0.05). We observed small preferences toward the "absent" response (value > 0) for the angry expression in the central vision loss group (c = 0.36, t[7] = 6.61, P < 0.0005) and for the fearful expression in the age-matched control group (c = 0.56, t[4] = 5.14, P = 0.007). For the filtered conditions, response bias varied depending on observer group, facial expression, and filter orientation. Averaged across the four expressions, response bias was largest for the 90° (vertical) filtered condition in both observer groups. As shown in Figure 6, relative to the bias level in the unfiltered condition, observers tended to have a bias toward the "present" response for the fearful expression and toward "absent" for the angry, happy, and sad expressions in the 90° filtered condition, although the relative biases did not reach significance for the fearful and sad expressions in the central vision loss group. These results can account for the shapes of the tuning curves shown in Figure 2 (inverted-bell for the fearful expression and bell-shaped for the angry, happy, and sad expressions), and for the high false-alarm rates for the fearful expression and high miss rates for the other three expressions in Figures 3 and 4. 
Figure 6
 
Response bias, c, as a function of the orientation of filter for each facial expression and observer group. Dashed lines denote c of the unfiltered condition. A negative value indicates a preference toward responding "present," whereas a positive value indicates a bias toward responding "absent." Data points plotted at −90° are the same as those at 90°. Error bars represent ± 1 SEM.
Pixel Level Face Image Analysis
Following the analysis conducted by Yu and colleagues,7 we examined the information content of the face stimuli at the pixel level and evaluated its relationship with observers' responses at the group level. Specifically, we performed image subtraction between each pair of different facial expressions for all seven filtering conditions using average face images. Figure 7 shows examples of images (between the happy and angry expressions) after subtraction and the four defined facial regions (eye, nose, mouth, and the rest). To quantify the amount of information content, RMS contrast was computed for each facial region of the subtracted images. Linear regression was performed to model the confusion rate (the proportion of response X given a target Y, averaged across observers within each group) as a function of RMSeye (range: 0.020–0.083), RMSnose (range: 0.019–0.067), RMSmouth (range: 0.015–0.070), and RMSrest (range: 0.005–0.019). The data were compiled across all filtering conditions, and both the linear and two-way interaction terms were examined. The results shown in Table 3 suggest that observers' responses were mainly driven by the information contained within the mouth and eye regions, which accounted for 46% of the total variance in responses for observers with central vision loss and 31% for the age-matched controls. The results on sr² (the squared semipartial correlation) show that the mouth region makes a larger unique contribution to the total variation in observers' responses than the eye region. 
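The sketch below illustrates the gist of this analysis: compute region-wise RMS contrast of the difference image for each expression pair and filtering condition, then regress the group confusion rate on those values. The region boxes are placeholders (the actual boxes appear in Figure 7 but are not given numerically), only the linear terms are fitted, and the data structures are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical region boxes (row_start, row_end, col_start, col_end)
REGIONS = {"eye": (60, 110, 30, 170),
           "nose": (110, 150, 70, 130),
           "mouth": (150, 200, 50, 150)}

def region_rms(diff_img):
    """RMS contrast of each facial region of a subtracted image, plus 'rest'."""
    rest_mask = np.ones(diff_img.shape, dtype=bool)
    rms = {}
    for name, (r0, r1, c0, c1) in REGIONS.items():
        rms[name] = np.sqrt(np.mean(diff_img[r0:r1, c0:c1] ** 2))
        rest_mask[r0:r1, c0:c1] = False
    rms["rest"] = np.sqrt(np.mean(diff_img[rest_mask] ** 2))
    return rms

def fit_confusion_model(avg_faces, confusion_rates):
    """avg_faces[(expression, condition)]: average face image (2D array).
    confusion_rates[(target, response, condition)]: group confusion rate.
    Regress the confusion rate on the region RMS values of the difference
    image between the target and response expressions (linear terms only)."""
    X, y = [], []
    for (target, response, condition), rate in confusion_rates.items():
        if target == response:
            continue  # only between-expression confusions are modeled here
        diff = avg_faces[(target, condition)] - avg_faces[(response, condition)]
        rms = region_rms(diff)
        X.append([rms["eye"], rms["nose"], rms["mouth"], rms["rest"]])
        y.append(rate)
    X, y = np.array(X), np.array(y)
    model = LinearRegression().fit(X, y)
    return model.coef_, model.score(X, y)   # regression coefficients and R^2
```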
Figure 7
 
Examples of pixel-by-pixel image subtraction between happy and angry expressions for unfiltered, horizontal-filtered, and vertical-filtered conditions. The eye, mouth, and nose regions are defined by the three white boxes (from top to bottom), respectively. The remaining region is categorized as “the rest.” This analysis was performed for all pairs of the four facial expressions investigated in this study.
Table 3
 
Linear Regression Results for Three Observer Groups: Observers With Central Vision Loss, Age-Matched Controls, and Normally Sighted Young Adults Tested by Yu and Colleagues7
Relationship to Other Factors
Linear regression modeling was used to examine the possible effects of observers' characteristics (Table 1) on the individual normalized d′ values, considering all combinations of filter orientation and facial expression. Across observers, only age showed a significant association with the outcome measure (r = −0.14, P = 0.01), with older observers performing worse, but it accounted for only approximately 2% of the variance in facial expression categorization performance among observers. No other observer factors were significant predictors: performance was not correlated with diagnosis (presence or absence of central vision loss), time since diagnosis, best-corrected distance visual acuity, or the location of the fPRL (OD, OS, or the better eye). 
Discussion
Similar to face identity processing,20 facial expression processing is also strongly tuned to horizontal orientation.7 The current study assessed the dependency of facial expression categorization performance on the orientation of spatial information, and showed that the finding on horizontal tuning extends to older adults and people with central vision loss. After matching the baseline performance (categorization of unfiltered face images) by adjusting viewing duration and distance, we found little difference in performance (normalized d′) between the central vision loss group and the normal vision group. Previously, Yu and colleagues7 found that the curve of normalized d′ versus the orientation of the spatial filter was symmetrical bell-shaped with a flat peak for all four facial expressions for normally sighted young adults. This also appeared to be true for our central vision loss and age-matched control groups. In addition, tuning bandwidth was similar across observer groups for each of the four facial expressions (Table 2), that is, orientation selectivity for each expression was similar regardless of age and the presence of central vision loss. 
Pixel Level Face Image Analysis
Performance on categorizing facial expression is largely driven by stimulus images.21 Across all observer groups, the largest biases were observed at the vertical-filtered condition. As shown in Figure 1, the information carried by the vertical and its neighboring orientation channels primarily contains diagnostic features for fearful expression and varies minimally across expressions. This explains the strong “present” bias for fearful expression and “absent” bias for the other expressions. 
By analyzing the information content within the face stimuli at the pixel level, we identified the key information content (in terms of facial region) for facial expression categorization. Despite being older and having central vision loss, observers with central vision loss (at the group level), like their age-matched peers and young normally sighted people,7 relied mainly on the mouth and eye regions to categorize facial expressions. The negative coefficients (Table 3) indicate that larger energy changes in these two regions produced lower confusion rates between the two expressions. Interestingly, the mouth region always made a larger unique contribution than the eye region. 
The generalization of the above findings to real-life circumstances may depend on the quality of the source face stimuli, such as how well they reflect genuine emotions.22 With nongenuine emotion, a face image may contain exaggerated or unnatural features. For instance, if the test stimuli contain unnatural features in the mouth region, the importance of the mouth region in facial expression categorization may be overstated. 
Happy Expression
Previous research has shown that the happy expression often outperforms the other expressions. Goren and Wilson23 found that when face images were moved from the fovea to the periphery, facial expression categorization performance dropped for the angry, fearful, and sad expressions but not for the happy expression. They attributed the performance degradation to the lack of high spatial frequency information in peripheral vision, and suggested that the happy expression is impervious to the effect of eccentricity because of its particular saliency. The advantage for recognizing the happy expression in the periphery has also been shown in another study,24 although the basis of the advantage remains unclear. In the present study, among the four expressions, the happy expression again yielded the highest categorization performance for unfiltered faces and for most filtered faces (except when images were filtered along the vertical orientation; Fig. 2). Our findings indicate that the happy expression has prominent diagnostic features that are carried by a broad range of orientation channels. Even with substantial information removed (e.g., keeping only the information near the 30° orientation), the residual information can still be adequate for observers with or without central vision loss to accurately categorize the happy expression. 
Central Vision Loss
In face identification, the performance of people with central vision loss seems to suffer more from information removal than that of people with normal vision, even when the remaining information is along the horizontal orientation.2 In the present study, we did not find any major difference in facial expression categorization between the central vision loss and normal vision groups: the two groups behaved similarly in categorizing expressions for face images filtered along various orientations. A caveat is that we matched the performance accuracy between the central vision loss and age-matched control groups for the task of categorizing facial expressions in unfiltered images. 
In the present study, all observers performed the task using their best available vision (i.e., fovea for normal vision group and PRL for central vision loss group). We did not test the control group in the periphery. A prior study showed that when moving from normal fovea to normal periphery, the accuracy of face identification versus filter orientation function becomes flatter and more similar to the functions obtained in observers with central vision loss.2 It is likely that for facial expression categorization, our control group would also behave more similarly to the central vision loss group (e.g., similar degradation in baseline performance, requiring larger image size and/or longer stimulus duration) when using peripheral vision. Given the adaptation that might have occurred for observers with central vision loss,25 it is possible that these individuals might even outperform the age-matched control when tested at equivalent eccentricities. 
The information content of a face stimulus is determined by the whole spectrum of spatial frequencies. It is known that older adults tend to lose sensitivity in the high spatial frequency range.26 For people with central vision loss, this sensitivity is further reduced.27 Given their impaired high spatial frequency processing, we expect that people with central vision loss categorize facial expressions primarily by retrieving configural information (i.e., the spatial relations between facial features) through low spatial frequency channels, which fortunately seems sufficient.12 Although sensitivity can also be reduced in the lower spatial frequency range because of central vision loss,28 we did not find that information content was a weaker predictor of the confusion pattern for observers with central vision loss than for normally sighted people. Note that some of the observers with central vision loss performed the task at a shorter viewing distance, which shifted the spatial frequency distributions of the face images. This makes it necessary to use normalized d′, which helps remove the possible effect of individual variations in viewing conditions on the outcome measure. 
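To make the viewing-distance point concrete: a component with \(n\) cycles across an image of physical width \(s\) viewed at distance \(d\) subtends approximately \(\theta \approx (180/\pi)(s/d)\) degrees, so its retinal spatial frequency is \(f \approx n/\theta \propto d\) cycles per degree. Halving the viewing distance from 40 cm to 20 cm therefore roughly halves the cycles-per-degree content of the face images, which is the kind of between-observer shift that normalizing d′ to the unfiltered baseline is intended to discount.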
Compared with what we observed in the laboratory, facial expression categorization in real life may be much more challenging for individuals with central vision loss because they may not be able to come very close to the target face to retrieve useful information. In addition, faces are typically present in a cluttered environment containing other faces and objects, which makes them more difficult to recognize because of crowding (the increased difficulty in recognizing a target due to interference from nearby objects29). For faces, crowding can occur both externally (interference on face recognition from nearby faces or objects)30 and internally (impairment in recognizing a facial part from other parts of the face).31 Unfortunately, crowding is more prominent in peripheral vision, on which people with central vision loss heavily depend. Besides viewing distance and crowding, other factors, such as viewing angle and illumination, may also influence everyday performance in facial expression categorization. 
A practical implication of our findings is that one way to alleviate the difficulty of facial expression categorization for people with central vision loss may be to selectively enhance information along horizontal and nearby orientations, and around the mouth and eye regions. Given the recent advancements in the fields of computer vision and image processing, the endeavor to isolate face images in real-life scenes and to process the relevant information of face images could present a viable option to help people with central vision loss to recognize facial expressions. 
Acknowledgments
Supported by National Institutes of Health Grants EY012810 and EY025658. 
Disclosure: D. Yu, None; S.T.L. Chung, None 
References
1. Tejeria L, Harper R, Artes P, Dickinson C. Face recognition in age related macular degeneration: perceived disability, measured disability, and performance with a bioptic device. Br J Ophthalmol. 2002; 86: 1019–1026.
2. Yu D, Chung STL. Critical orientation for face identification in central vision loss. Optom Vis Sci. 2011; 88: 724.
3. Bullimore M, Bailey IL, Wacker RT. Face recognition in age-related maculopathy. Invest Ophthalmol Vis Sci. 1991; 32: 2020–2029.
4. Haymes SA, Johnston AW, Heyes AD. The development of the Melbourne low-vision ADL index: a measure of vision disability. Invest Ophthalmol Vis Sci. 2001; 42: 1215–1225.
5. Alexander MF, Maguire MG, Lietman TM, Snyder JR, Elman MJ, Fine SL. Assessment of visual function in patients with age-related macular degeneration and low visual acuity. Arch Ophthalmol. 1988; 106: 1543–1547.
6. Gosselin F, Schyns PG. Bubbles: a technique to reveal the use of information in recognition tasks. Vision Res. 2001; 41: 2261–2271.
7. Yu D, Chai A, Chung STL. Orientation information in encoding facial expressions. Vision Res. 2018; 150: 29–37.
8. Duncan J, Gosselin F, Cobarro C, Dugas G, Blais C, Fiset D. Orientations for the successful categorization of facial expressions and their link with facial features. J Vis. 2017; 17 (14): 7.
9. Schyns PG, Oliva A. Dr. Angry and Mr. Smile: when categorization flexibly modifies the perception of faces in rapid visual presentations. Cognition. 1999; 69: 243–265.
10. Calder AJ, Young AW, Keane J, Dean M. Configural information in facial expression perception. J Exp Psychol Hum Percept Perform. 2000; 26: 527.
11. Vuilleumier P, Armony JL, Driver J, Dolan RJ. Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nat Neurosci. 2003; 6: 624–631.
12. Boucart M, Dinon J-F, Despretz P, Desmettre T, Hladiuk K, Oliva A. Recognition of facial emotion in low vision: a flexible usage of facial features. Vis Neurosci. 2008; 25: 603–609.
13. Hubel DH, Wiesel TN. Receptive fields and functional architecture of monkey striate cortex. J Physiol. 1968; 195: 215–243.
14. Dakin SC, Watt RJ. Biological "bar codes" in human faces. J Vis. 2009; 9 (4): 2.
15. Goffaux V, Dakin S. Horizontal information drives the behavioral signatures of face processing. Front Psychol. 2010; 1: 143.
16. Brainard DH. The Psychophysics Toolbox. Spat Vis. 1997; 10: 433–436.
17. Pelli DG. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis. 1997; 10: 437–442.
18. Tottenham N, Tanaka JW, Leon AC, et al. The NimStim set of facial expressions: judgments from untrained research participants. Psychiatry Res. 2009; 168: 242–249.
19. Stanislaw H, Todorov N. Calculation of signal detection theory measures. Behav Res Methods Instrum Comput. 1999; 31: 137–149.
20. Goffaux V, Greenwood JA. The orientation selectivity of face identification. Sci Rep. 2016; 6: 34204.
21. Li R, Cottrell G. A new angle on the EMPATH model: spatial frequency orientation in recognition of facial expressions. Proceedings of the Cognitive Science Society. 2012; 34: 1894–1899.
22. Dawel A, Wright L, Irons J, et al. Perceived emotion genuineness: normative ratings for popular facial expression stimuli and the development of perceived-as-genuine and perceived-as-fake sets. Behav Res Methods. 2017; 49: 1539–1562.
23. Goren D, Wilson HR. Quantifying facial expression recognition across viewing conditions. Vision Res. 2006; 46: 1253–1262.
24. Calvo MG, Nummenmaa L, Avero P. Recognition advantage of happy faces in extrafoveal vision: featural and affective processing. Visual Cognition. 2010; 18: 1274–1297.
25. Legge GE, Chung STL. Low vision and plasticity: implications for rehabilitation. Annu Rev Vis Sci. 2016; 2: 321–343.
26. Owsley C, Sekuler R, Siemsen D. Contrast sensitivity throughout adulthood. Vision Res. 1983; 23: 689–699.
27. Kleiner RC, Enger C, Alexander MF, Fine SL. Contrast sensitivity in age-related macular degeneration. Arch Ophthalmol. 1988; 106: 55–57.
28. Sjöstrand J, Frisén L. Contrast sensitivity in macular disease: a preliminary report. Acta Ophthalmol. 1977; 55: 507–514.
29. Levi DM. Crowding—an essential bottleneck for object recognition: a mini-review. Vision Res. 2008; 48: 635–654.
30. Louie EG, Bressler DW, Whitney D. Holistic crowding: selective interference between configural representations of faces in crowded scenes. J Vis. 2007; 7 (2): 24.
31. Martelli M, Majaj NJ, Pelli DG. Are faces processed like words? A diagnostic test for recognition by parts. J Vis. 2005; 5 (1): 58–70.
Figure 1
 
Example stimuli of the four facial expressions, angry, fearful, happy, and sad, in the unfiltered (original) and the six filtered conditions. The spatial filter was applied along six orientations: −60° (120°), −30° (150°), 0° (horizontal), 30°, 60°, and 90° (vertical).
Figure 1
 
Example stimuli of the four facial expressions, angry, fearful, happy, and sad, in the unfiltered (original) and the six filtered conditions. The spatial filter was applied along six orientations: −60° (120°), −30° (150°), 0° (horizontal), 30°, 60°, and 90° (vertical).
Figure 2
 
Proportion correct of categorizing facial expression is plotted as a function of orientation of filter. The black symbols and lines represent data from normally sighted controls (NV). The gray symbols and lines represent data from observers with central vision loss (CVL). Dashed lines denote the categorization performance for unfiltered images. For each set of data, performance for the vertical filter orientation is plotted twice at −90° and 90°. Error bars represent ± 1 SEM.
Figure 2
 
Proportion correct of categorizing facial expression is plotted as a function of orientation of filter. The black symbols and lines represent data from normally sighted controls (NV). The gray symbols and lines represent data from observers with central vision loss (CVL). Dashed lines denote the categorization performance for unfiltered images. For each set of data, performance for the vertical filter orientation is plotted twice at −90° and 90°. Error bars represent ± 1 SEM.
Figure 3
 
Confusion matrices for the central vision loss group. One matrix was constructed for each of the seven filtering conditions (unfiltered, −60°, −30°, 0°, 30°, 60°, and 90° filtered conditions) with the data averaged across all observers in the group. The average confusion matrices across the six filtered conditions (“6 Filtered”) and across all conditions (“6 Filtered + Unfiltered”) were also constructed. In each matrix, the rows are targets presented (Y), and the columns are observer responses (X). Each cell contains the proportion of response X given a target Y. The diagonal elements represent the proportion of correct categorization of a given target. The rest of the matrix shows the pattern of confusions (false alarms and misses). The value under each column stands for the total proportion of response X for a given condition.
Figure 3
 
Confusion matrices for the central vision loss group. One matrix was constructed for each of the seven filtering conditions (unfiltered, −60°, −30°, 0°, 30°, 60°, and 90° filtered conditions) with the data averaged across all observers in the group. The average confusion matrices across the six filtered conditions (“6 Filtered”) and across all conditions (“6 Filtered + Unfiltered”) were also constructed. In each matrix, the rows are targets presented (Y), and the columns are observer responses (X). Each cell contains the proportion of response X given a target Y. The diagonal elements represent the proportion of correct categorization of a given target. The rest of the matrix shows the pattern of confusions (false alarms and misses). The value under each column stands for the total proportion of response X for a given condition.
Figure 4
 
Confusion matrices for the normal vision group. Details are the same as in Figure 3.
Figure 4
 
Confusion matrices for the normal vision group. Details are the same as in Figure 3.
Figure 5
 
Normalized d′ as a function of orientation of filter for each facial expression and observer group. Dashed lines denote a normalized d′ of zero. Data points plotted at −90° are the same as those at 90°. Error bars represent ± 1 SEM.
Figure 5
 
Normalized d′ as a function of orientation of filter for each facial expression and observer group. Dashed lines denote a normalized d′ of zero. Data points plotted at −90° are the same as those at 90°. Error bars represent ± 1 SEM.
Figure 6
 
Response bias, c, as a function of filter orientation for each facial expression and observer group. Dashed lines denote c for the unfiltered condition. A negative value indicates a bias toward responding "present," whereas a positive value indicates a bias toward responding "absent." Data points plotted at −90° are the same as those at 90°. Error bars represent ±1 SEM.
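For context, the criterion c plotted above follows the usual signal detection theory convention of z-transformed hit and false-alarm rates. The sketch below shows that convention only; the per-expression counting of hits and false alarms and the correction for extreme rates are illustrative assumptions, not the study's documented procedure.

```python
# Minimal sketch of the conventional signal-detection indices assumed here:
# sensitivity d' = z(H) - z(FA) and criterion c = -(z(H) + z(FA)) / 2,
# where H and FA are the hit and false-alarm rates for a given expression.
# The log-linear-style correction (eps) is an illustrative assumption.
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections,
                         eps=0.5):
    """Return (d', c) from response counts; eps avoids infinite z-scores."""
    h = (hits + eps) / (hits + misses + 2 * eps)
    fa = (false_alarms + eps) / (false_alarms + correct_rejections + 2 * eps)
    z_h, z_fa = norm.ppf(h), norm.ppf(fa)
    d_prime = z_h - z_fa
    c = -(z_h + z_fa) / 2.0   # negative c: bias toward responding "present"
    return d_prime, c

# Example with made-up counts: 40 "happy" trials (32 hits) and
# 120 non-"happy" trials (18 false alarms).
print(dprime_and_criterion(32, 8, 18, 102))
```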
Figure 7
 
Examples of pixel-by-pixel image subtraction between the happy and angry expressions for the unfiltered, horizontal-filtered, and vertical-filtered conditions. The eye, nose, and mouth regions are defined by the three white boxes (from top to bottom, respectively). The remaining region is categorized as "the rest." This analysis was performed for all pairs of the four facial expressions investigated in this study.
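The region-wise comparison behind Figure 7 can be illustrated with a short sketch: subtract two expression images of the same face pixel by pixel and sum the absolute differences inside each region of interest. The file names and box coordinates below are placeholders, not the stimuli or region boundaries used in the study.

```python
# Rough sketch of a pixel-by-pixel image-subtraction analysis: subtract two
# expression images and summarize the absolute differences by facial region.
import numpy as np
from PIL import Image

def load_gray(path):
    """Load an image as a float grayscale array in [0, 1]."""
    return np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0

happy = load_gray("face01_happy.png")   # placeholder file names
angry = load_gray("face01_angry.png")

diff = np.abs(happy - angry)            # pixel-by-pixel subtraction

# Placeholder (row_start, row_stop, col_start, col_stop) boxes for the
# eye, nose, and mouth regions; everything outside them is "the rest".
boxes = {"eyes": (60, 110, 40, 200),
         "nose": (110, 160, 90, 150),
         "mouth": (160, 210, 70, 170)}

mask = np.zeros_like(diff, dtype=bool)
for name, (r0, r1, c0, c1) in boxes.items():
    mask[r0:r1, c0:c1] = True
    print(f"{name}: total |difference| = {diff[r0:r1, c0:c1].sum():.1f}")
print(f"the rest: total |difference| = {diff[~mask].sum():.1f}")
```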
Table 1
 
Summary of Observers' Characteristics (Age, Sex, Diagnosis, Time Since Diagnosis, Best-Corrected Distance Visual Acuity, and fPRL) and Viewing Conditions (Stimulus Duration, Viewing Distance, and Image Size)
Table 2
 
Lower and Upper Values of the 95% Confidence Intervals of Tuning Bandwidths for the Four Facial Expressions and the Three Observer Groups: Observers With Central Vision Loss, Age-Matched Controls, and Normally Sighted Young Adults Tested by Yu and Colleagues7
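As a rough illustration of how tuning bandwidths and their 95% confidence intervals (as summarized in Table 2) can be obtained, the sketch below fits a Gaussian tuning function to normalized d′ across filter orientations and bootstraps the fit over observers. The Gaussian form, the full-width-at-half-maximum bandwidth definition, the orientation values, and the bootstrap scheme are assumptions for illustration and may differ from the authors' actual analysis.

```python
# Sketch: estimate an orientation-tuning bandwidth (FWHM of a Gaussian fit
# to normalized d') and bootstrap its 95% CI across observers.
import numpy as np
from scipy.optimize import curve_fit

ORIENTATIONS = np.array([-60., -30., 0., 30., 60., 90.])  # placeholder, deg

def gaussian(theta, amp, sigma, baseline):
    """Gaussian tuning curve centered on horizontal (0 deg)."""
    return baseline + amp * np.exp(-theta**2 / (2.0 * sigma**2))

def fwhm_bandwidth(dprimes):
    """Fit the tuning curve and return the full width at half maximum."""
    popt, _ = curve_fit(gaussian, ORIENTATIONS, dprimes,
                        p0=[1.0, 30.0, 0.2], maxfev=10000)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[1])

def bootstrap_ci(data, n_boot=2000, seed=0):
    """95% CI of the bandwidth; data is observers x orientations of d'."""
    rng = np.random.default_rng(seed)
    bws = [fwhm_bandwidth(data[rng.integers(0, len(data), len(data))].mean(0))
           for _ in range(n_boot)]
    return np.percentile(bws, [2.5, 97.5])

# Example with fabricated numbers for two hypothetical observers:
demo = np.array([[0.6, 0.9, 1.00, 0.85, 0.55, 0.35],
                 [0.5, 0.8, 0.95, 0.90, 0.60, 0.40]])
print(bootstrap_ci(demo, n_boot=200))
```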
Table 3
 
Linear Regression Results for Three Observer Groups: Observers With Central Vision Loss, Age-Matched Controls, and Normally Sighted Young Adults Tested by Yu and Colleagues7