Visual Psychophysics and Physiological Optics | September 2009
Quantitative Assessment of Perceived Visibility Enhancement with Image Processing for Single Face Images: A Preliminary Study
Author Affiliations
  • Ming Mei, Center for Vision Research, York University, Toronto, Ontario, Canada
  • Susan J. Leat, School of Optometry, University of Waterloo, Waterloo, Ontario, Canada
Investigative Ophthalmology & Visual Science, September 2009, Vol. 50, 4502–4508. doi: https://doi.org/10.1167/iovs.08-3079
Abstract

Purpose. To develop a method to quantitatively assess the visibility enhancement of single face images gained with digital filters for people with maculopathy, and to apply this method to obtain preliminary results of visibility enhancement with subjectively preferred filters.

Methods. Six subjects with normal vision and two with maculopathy were required to recognize seven facial expressions in single face images at display durations of 2 seconds, 1 second, and 0.73 second. On the basis of the results, four facial expressions (anger, disgust, fear, and sadness) and a display duration of 0.73 second were chosen for measuring the visibility enhancement of single face images with subjectively preferred digital filters. Nine subjects with maculopathy then viewed 30 images of the four facial expressions that were either unfiltered or filtered with subjectively preferred digital filters. Each subject was required to identify the facial expression in a four-alternative, forced-choice paradigm. The errors with the original and filtered images were calculated.

Results. The method with four facial expressions and a display duration of 0.73 second prevented a ceiling effect. The nine subjects with maculopathy made significantly fewer errors with the filtered images than with the original images (P = 0.004).

Conclusions. The developed method was effective for objective (quantitative) measurement of the enhancement in image visibility with digital filtering for people with maculopathy. There is a measurable improvement in facial expression recognition with subjectively preferred filters. The facial expression recognition task developed and validated in the present study is recommended for use in future studies of the enhancement of face images.

People with maculopathy develop central vision loss, with its attendant reduction in visual resolution. As a result, they have problems recognizing familiar faces and perceiving facial expressions. These are everyday tasks, required in face-to-face interaction and in viewing images, such as those on television and in pictures. Face perception is performed in a distinctively holistic manner 1 2 (i.e., faces are processed as wholes rather than as independent parts). Therefore, most of the optical and nonoptical magnification devices that give high magnification for reading are of limited usefulness for face perception, because it is difficult to synthesize enlarged parts of a face into a whole-face percept.
As an alternative (or in addition) to magnification, digital image processing has been applied to both text 3 4 5 and picture images 6 7 8 9 10 11 12 13 to improve reading and perceived image visibility for people with visual impairment. The theory is that if the lost spatial frequency information of an image can be enhanced, the corresponding perception will be enhanced; it follows that the optimal enhancement should be related to the extent of vision loss. Filters based on the ratio of the contrast sensitivity of the visually impaired person to that of a normal control have been applied to text. Two studies 3 4 reported significant improvement in reading speed, but another study 5 reported only slight improvements. Another method has been to use generic filters that enhance certain spatial information but are not directly related to the individual's contrast sensitivity loss. Using this approach, Leat and Mei 6 showed that digital filtering could improve perceived face visibility for people with maculopathies (atrophic age-related macular degeneration [ARMD], exudative ARMD, and juvenile macular dystrophy [JMD]). They divided the digital filters into two groups: generic and custom-devised. Generic filters are not based on an individual's specific vision loss and include filters such as contrast enhancement, Peli's adaptive enhancement, and high-pass filters. 9 Custom-devised filters are based on individual contrast loss at certain spatial frequencies. These are usually applied as band-pass filters and may be based on contrast sensitivity (CS) loss 5 9 or contrast matching (CM). 6 Leat and Mei 6 tested several different band-pass filters, the gains of which were based on either the CS or the CM function, and compared them against certain generic filters. They applied the filters to two categories of images (faces and general scenes) and found that, for people with maculopathy, the filters based on contrast matching gave perceived improvement, whereas those based on CS did not. They also found that some generic filters gave equal perceived improvement; Peli's adaptive enhancement and contrast enhancement were the two generic filters most frequently preferred by all subjects.
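To make the filtering approach concrete, the following is a minimal sketch of a generic band-pass enhancement implemented as a difference of Gaussians, one of the generic filter types named above. It is an illustration only, not the authors' implementation; the function name, sigma values, and gain are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bandpass_enhance(image, sigma_low=1.0, sigma_high=4.0, gain=2.0):
    """Amplify a band of spatial frequencies with a difference-of-Gaussian filter.

    The difference of two Gaussian blurs isolates the frequencies between the
    two implied cutoffs; `gain` controls how strongly that band is boosted.
    All parameter values here are illustrative, not those used in the study.
    """
    img = image.astype(float)
    band = gaussian_filter(img, sigma_low) - gaussian_filter(img, sigma_high)
    return np.clip(img + gain * band, 0, 255).astype(np.uint8)

# Demo on a synthetic grayscale image (a smooth grating plus noise).
y, x = np.mgrid[0:256, 0:256]
demo = 128 + 60 * np.sin(x / 20.0) + 10 * np.random.default_rng(0).standard_normal((256, 256))
enhanced = dog_bandpass_enhance(demo)
```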
It is clear, then, that digital filtering can enhance perceived visibility for both text and images. One remaining question is whether this perceived enhancement can be demonstrated quantitatively (objectively); that is, does it result in measurable improvement in performance? For text, reading speed has been shown to be a good parameter for measuring improvement objectively. 3 4 5 For single face images, however, as far as we know, only one study has addressed this question. Peli et al. 11 measured improved visibility using a recognition task with celebrity and unfamiliar faces. The original and filtered (using Peli's adaptive enhancement filter) images were shown to people with low vision. Subjects indicated on a scale of 1 to 6 their level of confidence in recognizing a face as a celebrity. The results showed that 11 of 21 subjects demonstrated enhanced recognition, and the study claimed a meaningful benefit of digital enhancement for some patients with central field loss. However, the method of Peli et al. has a possible limitation: the task of recognizing celebrity faces may work well for people who follow celebrities but may be less effective for those who are less familiar with the world of politics and entertainment. On the basis of these concerns, face identity recognition (FIR) may not be the best task for measuring the enhancement of the visibility of single face images.
Facial expression recognition (FER) is another key task, in addition to FIR. Both are important in daily life, but they have been shown to be different cognitive tasks, 14 activating different face-processing regions in the brain 15 16 17 18 19 20 21 22 and being processed by different pathways. 23 24 25 26 In prosopagnosia, for example, the patient cannot perform FIR but performs normally in FER. However, there is some evidence that the two tasks may be similar in terms of their basic visual requirements. Although FER and FIR have each been used in several studies in people with visual impairments, 27 28 29 30 31 few studies have used both. Bullimore et al. 29 used a composite measure that included accuracy in both FIR and FER in 13 subjects with age-related maculopathy and four visually normal subjects. This measure correlated best with word-reading acuity, rather than with contrast thresholds, grating acuity, or letter chart acuity. Although there appeared to be a relationship between FIR and FER, the FIR detection distances were shorter for all but one subject, suggesting that FIR is a more difficult task than FER. Another study in which both FIR and FER were used in people with ARMD 28 showed that both correlated well with distance and reading acuity. Considering the difficulties in measuring FIR, we chose FER to demonstrate the enhancement of visibility of face images with digital image enhancement.
The current preliminary study had two purposes: first, to develop a method using images of facial expressions to quantitatively assess the enhancement of visibility gained with digital filters (experiment 1); and second, to apply this method to measure the improvement in visibility resulting from the application of subjectively preferred filters in people with maculopathy (experiment 2). These filters were those that each individual subject had chosen, from a range of filters, as giving the best improvement in visibility for face images. 6 A third question was whether a difference in perceived enhancement between two digital filters can be demonstrated objectively. The first hypothesis was that FER is an effective task for demonstrating visibility enhancement with single face images filtered by preferred digital filters for people with maculopathy. The second hypothesis was that perceived improvements in visibility can be demonstrated objectively.
Methods
Experiment 1: Preliminary Study to Develop Methods
Materials.
A CD of full-face images with different expressions, JACFEE (Japanese and Caucasian Facial Expressions of Emotion), was used (Paul Ekman Group, LLC, San Francisco, CA; http://www.paulekman.com/researchproducts.html). JACFEE consists of 56 photographs of different people: half are men and half women, and half are Caucasian and half Japanese. The photographs are in color, and each illustrates one of seven emotions (anger, contempt, disgust, fear, happiness, surprise, and sadness). The photographs were taken against the same background, a light blue screen, and the people all wore a similar white shirt with a collar. There are eight images for each emotion, giving a total of 56 images; among the eight are two male and two female Caucasians and two male and two female Japanese. The image size was 1250 × 990 pixels. The vertical height of each face (from the hairline to the bottom of the chin) was approximately four fifths of the height of the monitor. The monitor height was 28.5 cm; thus, each face was approximately 22.8 cm high, which is close to the size of a real face. It must be noted that, in everyday life, people with normal vision would not usually view faces at a distance of 57 cm. However, people with low vision often use a large-screen computer or TV, a decreased viewing distance, or both.
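As a worked illustration of this geometry (not part of the study's software), the sketch below converts the reported dimensions (a 22.8-cm face viewed at 57 cm) into visual angles; it also shows why 57 cm is a convenient viewing distance.

```python
import math

# Geometry reported above: faces ~22.8 cm high, viewed at 57 cm.
viewing_distance_cm = 57.0
face_height_cm = 22.8

# Visual angle subtended by the face.
face_angle_deg = 2 * math.degrees(math.atan(face_height_cm / (2 * viewing_distance_cm)))
print(f"face subtends ~{face_angle_deg:.1f} deg")  # ~22.6 deg

# At 57 cm, 1 cm on the screen subtends almost exactly 1 deg of visual angle,
# which is why 57 cm is a common viewing distance in psychophysics.
one_cm_deg = 2 * math.degrees(math.atan(0.5 / viewing_distance_cm))
print(f"1 cm on screen = {one_cm_deg:.2f} deg")  # ~1.01 deg
```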
Instrument and Software.
A 21-inch (53.3 cm diagonal) monitor (Trinitron; Sony, Tokyo, Japan) with a resolution of 1280 × 1024 pixels was used to display each image. Images were displayed with an application (Imageshow) written in commercial software (MATLAB; The MathWorks, Natick, MA) by a software programmer from the Systems Design Engineering Department, University of Waterloo.
Subjects.
Six subjects with normal vision (visual acuity of 6/6 or better; aged 25–35 years) and two subjects with maculopathy (one with JMD, aged 59 years, and one with exudative ARMD, aged 78 years) participated in this preliminary study. The subjects with normal vision were not age matched, as the chief purpose at this stage was to ensure that a person with good vision could perform the task (i.e., that it was not too difficult and the expressions could be identified accurately) and to choose which expressions to use, so as to ensure a fairly consistent level of difficulty among the expressions. Subjects with both normal vision and maculopathy were included at this preliminary stage to ensure that the task was also not too easy. We wanted a level of difficulty such that those with normal vision would make a few errors and those with maculopathy would make more errors, so that there was a possible range for measuring improvement with the filters in the subjects with maculopathy. The two subjects with maculopathy wore their habitual glasses plus a +1.75-D distance correction lens for the viewing distance of 57 cm. All subjects gave written informed consent for participation in the study, which was approved by the Office of Research Ethics at the University of Waterloo. All subjects were treated in accordance with the Declaration of Helsinki.
Procedure.
The intended approach was to show the faces for a brief time and to use the percentage of correct identification of the emotion as the outcome measure. At this stage, two critical questions had to be answered. First, how many expressions would be ideal for the quantitative measurement? Since most of the subjects who would participate in the second, quantitative stage would be seniors, seven facial expressions might be too many for a subject to remember during the test. Second, how long should the image be displayed? Initial testing showed that the longer the display duration, the higher the accuracy of expression recognition. If the display time were too long, the task would be too easy, and any enhancement in recognition with the filtered images compared with the unfiltered images could not be shown (a ceiling effect). A critical duration therefore had to be determined to eliminate any ceiling effect, so that any enhancement effect with the filtered images could be measured.
The information that accompanied the JACFEE CD indicated that the expression of contempt was confusing, even to subjects with normal vision; it was therefore excluded from the beginning. The six remaining facial expressions were displayed for a short duration and viewed at a distance of 57 cm. To prevent afterimages, each face image was preceded by a 2-second plain gray image and followed by the same plain gray image, displayed for an unlimited time. In total, 48 face images were shown with all six expressions, in randomized order. Display durations of 2 seconds, 1 second, and 0.73 second were used (0.73 second was the shortest duration that the program could provide). No subject made any mistakes at the 2-second duration, a few mistakes were made at the 1-second duration, and more were made at the 0.73-second duration. At 1 and 2 seconds, none of the subjects (with normal vision or with maculopathy) made any errors with the expressions of happiness and surprise. The results for the 0.73-second duration are shown in Table 1. The normally sighted subjects made no errors with the expression of happiness and only one error with the expression of surprise. Two previous studies 32 33 also found that happiness and surprise were the easiest expressions to recognize; these two expressions were therefore excluded to avoid a ceiling effect. This left four facial expressions (anger, disgust, fear, and sadness), which is a relatively easy number for elderly individuals to remember at one time and has been used in previous studies (Bullimore MA, et al. IOVS 2000;41:ARVO Abstract 2510). 29 Next, the two subjects with maculopathy were tested with the four facial expressions, using 32 images displayed for 0.73 second. These results are also shown in Table 1. The older subjects with maculopathy made more mistakes than the younger normal subjects under the same experimental conditions. This result suggested that, at a duration of 0.73 second, older subjects with maculopathy would make more errors than those with normal vision and that a ceiling effect would not be encountered without the filters. Since subjects with normal vision also made errors, it seemed that there would also not be a ceiling effect when the filters were applied for the subjects with maculopathy and that there would be the possibility of measuring improvement with the filters. According to the subjective responses of the two subjects with maculopathy, a choice of four facial expressions was easy for seniors to undertake. From this preliminary study, an image duration of 0.73 second and the four facial expressions (anger, disgust, fear, and sadness; Fig. 1) were chosen for the subsequent quantitative measurement of image visibility.
Experiment 2: Objective Measurement of Improved Visibility with Filtered Images
Subjects.
Nine subjects with bilateral maculopathy participated in this experiment (Table 2). The better eye was used in the study. The subjects wore their habitual glasses. All the subjects except S4 were given a distance correction lens of +1.75 D for the viewing distance of 57 cm. Three had atrophic ARMD, four had exudative ARMD, and two had JMD. Note that S6 had a diagnosis of JMD, although he is now in the same age group as those with age-related maculopathy. The two subjects with maculopathy who participated in experiment 1 were not included. All subjects gave written informed consent for participation in the study, which was approved by the Office of Research Ethics at the University of Waterloo. All subjects were treated in accordance with the Declaration of Helsinki.
Selection of Digital Filters.
Six generic filters had been compared for their ability to improve the perceived visibility of face images for these subjects. 6 These were contrast enhancement, Peli's adaptive enhancement, edge enhancement, high-pass/unsharp masking, difference of Gaussian, and an equi-emphasis band-pass filter. Three custom-devised filters had also been evaluated: band-pass filters based on contrast sensitivity loss, on suprathreshold contrast matching, and on emphasis of the peak of the contrast sensitivity function. All the filters could be applied at different strengths and with different parameters. A total of four 2-hour visits had been required for that study. The first was to measure the CSF and contrast-matching functions, which were used to develop the custom-devised filters. The subjects then went through three visits to identify the preferred filters. The second visit was for filter ranking, in which different versions of each filter type were viewed side by side and the most preferred version of each type was chosen. At the third visit, the filters were rated against the original, unfiltered image, and the ratings were used to determine the optimum filters (the two filters that had the highest ratings of those included in that study; a rating of 100 indicated visibility equal to that of the unfiltered image). None of the filters was given a rating close to 200, which may mean that the perceived improvement in visibility did not amount to a perceived restoration of normal vision. 9 The two highest rated filters for each subject were chosen for evaluation in the present study and were designated filter 1 (the highest rated) and filter 2 (the second highest). For two subjects, the top two filters had been rated equally; in these cases, filter 1 or 2 was assigned arbitrarily. Table 3 shows these filters for each subject.
Procedure.
For each subject, the 32 images showing the four emotions were put in randomized order. Images 1 to 15 were processed with filter 1, images 16 to 30 with filter 2, and images 31 and 32 were left unprocessed to serve as controls. Each face image was shown twice, once in its original version and once in its processed version; the two unfiltered control images were also shown twice, to measure any practice effect. These 64 images were then put into a randomized order for display, with one rule: the same face (filtered and unfiltered) was never shown consecutively.
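A simple way to implement this constraint is rejection sampling: reshuffle until no two consecutive trials share a face. The sketch below is a hypothetical reconstruction (the study's actual randomization code is not described); the function name and trial representation are illustrative.

```python
import random

def shuffle_no_adjacent_repeats(trials, key, max_tries=10_000):
    """Shuffle `trials` so that no two consecutive items share the same `key`.

    Here `key` maps a trial to its face identity, so the filtered and
    unfiltered versions of the same face never appear back to back.
    Rejection sampling is fine for 64 trials over 32 distinct faces.
    """
    for _ in range(max_tries):
        order = trials[:]
        random.shuffle(order)
        if all(key(a) != key(b) for a, b in zip(order, order[1:])):
            return order
    raise RuntimeError("could not satisfy the no-adjacent-repeat constraint")

# Hypothetical trial list: 32 faces, each shown once original and once filtered.
trials = [(face_id, version) for face_id in range(32) for version in ("original", "filtered")]
display_order = shuffle_no_adjacent_repeats(trials, key=lambda t: t[0])
```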
The subjects were told that there were four expressions to recognize, and a list of the four possible expressions in large print was given to them. At the beginning, they were shown one separate example of each emotion to explain the purpose of the session; these examples were not used in the actual test. The image display was as in the preliminary study, with an image duration of 0.73 second.
Analysis.
Each facial expression response was marked as correct or incorrect, and the number of errors was recorded. A paired t-test between the test and retest of the control images was applied to assess any practice effect. To study the effect of enhancement on the test images, we applied a paired t-test across subjects. Scatterplots and correlation coefficients were used to assess the agreement between the magnitude of the improvement in performance and the filter ratings. For this purpose, the average rating of the two preferred filters in excess of 100 (the rating indicating visibility equal to that of the unfiltered image) was calculated.
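As a check on this analysis, the main comparison can be reproduced from the per-subject error counts later reported in Table 4. The sketch below uses scipy (our illustration, not the authors' code) and yields a P value close to the reported 0.004.

```python
from scipy import stats

# Per-subject error counts from Table 4 (subjects S1-S9).
errors_original = [9, 14, 4, 15, 11, 15, 15, 8, 12]
errors_filtered = [6, 6, 2, 11, 7, 8, 13, 8, 11]

# Two-tailed paired t-test across subjects, as described above.
t, p = stats.ttest_rel(errors_original, errors_filtered)
print(f"t = {t:.2f}, p = {p:.4f}")  # t ~ 3.90, p ~ 0.005 (reported as P = 0.004)
```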
To test whether there was any difference in enhancement between the first and second filters, the results were separated into first filter and second filter, and a paired t-test was applied, excluding those subjects who had rated filters 1 and 2 equally. Again, scatterplots and correlations were used to show agreement between the difference in the ratings (between filters 1 and 2) and the difference in errors made with the two filters. 
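The filter 1 versus filter 2 comparison can be reproduced in the same way from Table 5, excluding the two subjects (S7 and S9) who rated both filters equally; again, this is an illustrative reanalysis rather than the authors' code.

```python
from scipy import stats

# Errors with the first- and second-preferred filters (Table 5),
# excluding S7 and S9, who rated filters 1 and 2 equally.
errors_filter1 = [2, 2, 2, 4, 4, 4, 2]
errors_filter2 = [4, 4, 0, 7, 2, 4, 6]

t, p = stats.ttest_rel(errors_filter2, errors_filter1)
print(f"t = {t:.2f}, p = {p:.2f}")  # p ~ 0.31, matching the reported value
```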
Results
Repeatability
Across all nine subjects, the mean difference between the test and retest of the two unfiltered images was 0.44. A paired t-test comparing the test and retest responses showed no significant difference (P > 0.05), suggesting that there was no significant practice effect.
Overall Performance with Preferred Filters
The results are shown in Table 4. A two-tailed paired t-test showed that the two best filters significantly decreased the errors in facial expression recognition across the nine subjects with maculopathy (P = 0.004). Figure 2 is a scatterplot of the improvement in performance (errors without filters − errors with filters) against the average increase in rating (average rating for the two filters − 100). There was no significant correlation between these measures (r = 0.18).
Comparison between Two Best-Rated Filters
The ratings for perceived image enhancement with the first and second filters for the nine subjects are shown in Table 3. The numbers of errors in facial expression recognition with the first and second filters are shown separately in Table 5. Since subjects 7 and 9 rated the first and second filters equally, they were excluded from this statistical analysis. A paired t-test showed that the ratings of perceived enhancement with the first and second filters were significantly different (P = 0.034). Four subjects made more errors with the second filter, two made more with the first, and one performed the same with both. The highest rated filter resulted in fewer errors than did the second filter, but this difference did not reach significance (paired t-test, excluding subjects 7 and 9, P = 0.31). Thus, the higher rated first filters did not give significantly better performance than the second filters, even though there was a significant difference in perceived enhancement.
Figure 3 is a scatterplot of the difference in performance with the two filters (errors with filter 2 − errors with filter 1) against the difference in rating (rating for filter 1 − rating for filter 2). There was no significant correlation between these measures (r = 0.097).
Discussion
This preliminary study is the first to use FER to measure the visibility enhancement produced by digital filters. It demonstrates that FER can be used to measure improvements in performance and that subjectively preferred enhancements do give rise to improved performance.
From the literature, it is not clear which spatial frequency range is critical for FIR and FER. A functional magnetic resonance imaging (fMRI) study 34 showed that low-pass-filtered face images with expressions of fear increase amygdala activation compared with images containing only high spatial frequencies. Several studies have also indicated that low spatial frequency information is important for expression discrimination, whereas high spatial frequency information is important for judging emotional intensity. 34 35 36 FIR seems to require higher spatial frequency information, 11 29 37 38 39 since the finer details of a specific face need to be visualized. However, Rubin and Schuchard 30 did not find a strong relationship between contrast sensitivity and face recognition. People with later stages of maculopathy fail to perceive high and middle spatial frequency information. Mei and Leat 40 showed that most participants with maculopathy could not match contrast above 2 cyc/deg, which means that at these spatial frequencies even 100% contrast could not be perceived. For those spatial frequencies that could be detected (≤2 cyc/deg), contrast thresholds were higher than those of the controls. Thus, people with later stages of maculopathy have lost perception of middle and high spatial frequencies, and increasing the contrast at these frequencies cannot benefit visual perception. This may explain why the FIR study of Peli et al. 11 demonstrated improvements in only approximately 50% of the subjects. In contrast, the evidence appears to indicate that FER can be performed with lower spatial frequency information, which can be perceived by people with maculopathy, and therefore FER measures are able to demonstrate improvement. Another benefit of using FER is that it avoids the problem of a person's familiarity with particular faces.
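To illustrate what the 2-cyc/deg limit implies for a displayed image, the following sketch removes all spatial frequencies above a given cutoff in the Fourier domain, using this study's display geometry (1024 pixels over 28.5 cm viewed at 57 cm). The filter, its hard cutoff, and the function name are our own illustrative choices, not a method from the study.

```python
import numpy as np

def lowpass_cycles_per_degree(image, cutoff_cpd, px_per_deg):
    """Zero out all spatial frequencies above `cutoff_cpd` (cycles/degree)."""
    spectrum = np.fft.fft2(image.astype(float))
    fy = np.fft.fftfreq(image.shape[0])           # cycles per pixel, vertical
    fx = np.fft.fftfreq(image.shape[1])           # cycles per pixel, horizontal
    fyy, fxx = np.meshgrid(fy, fx, indexing="ij")
    radius_cpd = np.hypot(fyy, fxx) * px_per_deg  # radial frequency in cycles/deg
    return np.real(np.fft.ifft2(spectrum * (radius_cpd <= cutoff_cpd)))

# Display geometry from the Methods: 1024 px over 28.5 cm; at 57 cm,
# 1 cm on screen is ~1 deg, so roughly 1024 / 28.5 ~ 36 px per degree.
px_per_deg = 1024 / 28.5

# Keep only what a viewer limited to <= 2 cyc/deg could, in principle, resolve.
img = np.random.default_rng(1).random((256, 256))
visible = lowpass_cycles_per_degree(img, cutoff_cpd=2.0, px_per_deg=px_per_deg)
```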
FER is thus a better visual task than FIR for measuring visibility enhancement for single faces, but the selection of faces, expression types, and display duration is also critical in designing the method. The present study found that four facial expressions (anger, disgust, fear, and sadness) of both Caucasian and Japanese men and women 41 and a display duration of 0.73 second produced a difference in performance between older subjects with maculopathy and younger subjects with normal vision and left room for improvement in the subjects with maculopathy.
Experiment 2 also showed that the method developed is sensitive. In this study, the effect to be detected was an objective improvement in visibility as a result of digital image enhancement. Even with this relatively small sample of nine subjects, there was a significant effect of image processing on image visibility (P = 0.004); that is, fewer errors were made with the filtered images than with the unfiltered images. This result confirms the second hypothesis: subjectively preferred filters do result in actual improvement in performance as measured by FER, and the test that we have developed is sensitive enough to detect these differences.
As for the relationship between subjective preference for filters and measured improvement in performance, the present study suggests that there may not be a direct relation. Both when the two filters were considered together and when the first and second filters were compared, there was no correlation between the magnitude of the subjective ratings and the improvement in performance. Also, there was a significant difference between the perceived enhancement of the first and second filters, but no significant difference in the ability to detect facial expressions. This lack of a significant difference may be a question of sample size. With the results found in this study, a power calculation shows that a sample size of 44 would be required to find a significant difference in performance between the first and second filters (based on the data in Table 5, in which the mean difference was 1 and the SD of the differences was 2.38; paired t-test, α = 0.05, power = 80%). Thus, subjective rating may be the more sensitive method for determining differences in the effectiveness of different filters, but more research is needed to compare the two measures of a filter's effectiveness. It certainly appears that there is not a close association between subjective preference and performance, and therefore both may have to be included in future studies.
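The reported sample size can be reproduced approximately with a standard normal-approximation power calculation, as in the sketch below (an illustrative reconstruction; the authors' exact calculation method is not stated).

```python
from scipy.stats import norm

# Power calculation based on the Table 5 differences:
# mean difference = 1, SD of differences = 2.38, two-tailed alpha = 0.05, power = 80%.
mean_diff, sd_diff = 1.0, 2.38
alpha, power = 0.05, 0.80

effect_size = mean_diff / sd_diff   # standardized paired difference, ~0.42
z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_beta = norm.ppf(power)            # ~0.84

# Required number of pairs under the normal approximation.
n = ((z_alpha + z_beta) / effect_size) ** 2
print(f"n ~ {n:.0f} subjects")  # ~44, matching the reported estimate
```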
Subjective preference for filters is an efficient technique for comparing a range of different filters or versions of a filter. What we have shown is that these preferences do result in improved performance and that the method developed in this study can demonstrate the improvements objectively. However, the difference in preference between the two best filters was not reflected in a measurable difference in performance.
Conclusions
In summary, the method developed in this preliminary study proved effective for the objective (quantitative) measurement of the enhancement of image visibility with digital filtering for people with maculopathy. Even though the subject sample was small, the results show a measurable improvement in facial expression recognition with subjectively preferred filters, although the objective improvements were not directly proportional to the subjective ratings.
 
Table 1.
Number of Mistakes in Facial Expression Recognition at an Image Display Duration of 0.73 Second
Subject Anger Disgust Fear Sadness Happiness Surprise
Normal S1 0 1 0 2 0 0
Normal S2 1 1 1 1 0 0
Normal S3 0 1 0 0 0 0
Normal S4 1 0 1 1 0 0
Normal S5 0 0 1 0 0 1
Normal S6 1 1 0 1 0 0
Maculopathy S1 3 4 2 3 — —
Maculopathy S2 4 3 4 3 — —
—, not tested (the subjects with maculopathy were tested with only the four retained expressions).
Figure 1.
Examples of the four facial expressions finally chosen from JACFEE.
Table 2.
Subject Information for Experiment 2
Subject Sex Age (y) VA (logMAR) Diagnosis Eye
S1 Male 81 0.70 Atrophic ARMD Left
S2 Female 59 0.70 Exudative ARMD Left
S3 Female 79 0.51 Atrophic ARMD Right
S4 Female 20 1.51 JMD Right
S5 Female 83 1.11 Atrophic ARMD Right
S6 Male 64 0.92 JMD Right
S7 Female 85 1.33 Exudative ARMD Right
S8 Male 89 0.60 Exudative ARMD Left
S9 Female 81 1.12 Exudative ARMD Right
Table 3.
Preferred First and Second Filters for the Nine Subjects (perceived-enhancement ratings in parentheses)
Subject First Filter Second Filter
S1 BP-equi-emphasis (145) BP-3.6% (117.5)
S2 BP-CS peak emphasis (115) BP-equi-emphasis (107.5)
S3 DOG convolution (132.5) BP-3.6% (130)
S4 BP-CS (127.5) BP-27.9% (122.5)
S5 DOG convolution (150) Peli adaptive enhancement (135)
S6 DOG convolution (135) DOG FFT (127.5)
S7 BP-27.9% (175) BP-equi-emphasis (175)
S8 BP-3.6% (107.5) BP-27.9% (106.5)
S9 High-pass (125) BP-27.9% (125)
Table 4.
Errors of Facial Expression Identification with and without the Preferred Filter
Subject Errors with Original Images Errors with Filtered Images
S1 9 6
S2 14 6
S3 4 2
S4 15 11
S5 11 7
S6 15 8
S7 15 13
S8 8 8
S9 12 11
Mean 11.4 8.0
Figure 2.
Correlation between improvement in errors and improvement in ratings across nine subjects.
Table 5.
Comparison of Facial Expression Recognition with the First- and Second-Preferred Filters (values in parentheses, from S7 and S9, who rated both filters equally, were excluded from the means)
Subject Errors with First Filter Errors with Second Filter
S1 2 4
S2 2 4
S3 2 0
S4 4 7
S5 4 2
S6 4 4
S7 (7) (6)
S8 2 6
S9 (7) (4)
Mean 2.86 3.86
Figure 3.
Correlation between the difference in filter ratings (filter 1 − filter 2) and the difference in errors (errors with second filter − errors with first filter).
References
1. McKone E, Martini P, Nakayama K. Categorical perception of face identity in noise isolates configural processing. J Exp Psychol Hum Percept Perform. 2001;27:573–599.
2. Tanaka JW, Farah MJ. Parts and wholes in face recognition. Q J Exp Psychol A. 1993;46:225–245.
3. Lawton TA, Sebag J, Sadun AA, Castleman KR. Image enhancement improves reading performance in age-related macular degeneration patients. Vision Res. 1998;38:153–162.
4. Fine EM, Peli E. Enhancement of text for the visually impaired. J Opt Soc Am A Opt Image Sci Vis. 1995;12:1439–1447.
5. Lawton TB. Image enhancement filters significantly improve reading performance for low vision observers. Ophthalmic Physiol Opt. 1992;12:193–200.
6. Leat SJ, Mei M. Custom-devised and generic digital enhancement of images for people with maculopathy. Ophthalmic Physiol Opt. Published online March 6, 2009.
7. Wolffsohn JS, Mukhopadhyay D, Rubinstein M. Image enhancement of real-time television to benefit the visually impaired. Am J Ophthalmol. 2007;144:436–440.
8. Peli E. Recognition performance and perceived quality of video enhanced for the visually impaired. Ophthalmic Physiol Opt. 2005;25:543–555.
9. Leat SJ, Omoruyi G, Kennedy A, Jernigan E. Generic and customized digital image enhancement filters for the visually impaired. Vision Res. 2005;45:1991–2007.
10. Peli E, Kim J, Yitzhaky Y, Goldstein RB, Woods RL. Wideband enhancement of television images for people with visual impairments. J Opt Soc Am A Opt Image Sci Vis. 2004;21:937–950.
11. Peli E, Lee E, Trempe CL, Buzney S. Image enhancement for the visually impaired: the effects of enhancement on face recognition. J Opt Soc Am A Opt Image Sci Vis. 1994;11:1929–1939.
12. Peli E, Goldstein RB, Young GM, Trempe CL, Buzney SM. Image enhancement for the visually impaired: simulations and experimental results. Invest Ophthalmol Vis Sci. 1991;32:2337–2350.
13. Peli E. Limitations of image enhancement for the visually impaired. Optom Vis Sci. 1992;69:15–24.
14. Sergent J. Structural processing of faces. In: Ellis HD, Young AW, eds. Handbook of Research on Face Processing. Amsterdam: North Holland; 1989:54–91.
15. Glascher J, Tuscher O, Weiller C, Buchel C. Elevated responses to constant facial emotions in different faces in the human amygdala: an fMRI study of facial identity and expression. BMC Neurosci. 2004;5:45.
16. Williams MA, Morris AP, McGlone F, Abbott DF, Mattingley JB. Amygdala responses to fearful and happy facial expressions under conditions of binocular suppression. J Neurosci. 2004;24:2898–2904.
17. Winston JS, Vuilleumier P, Dolan RJ. Effects of low-spatial frequency components of fearful faces on fusiform cortex activity. Curr Biol. 2003;13:1824–1829.
18. Blair RJ, Morris JS, Frith CD, Perrett DI, Dolan RJ. Dissociable neural responses to facial expressions of sadness and anger. Brain. 1999;122:883–893.
19. Nakamura K, Kawashima R, Ito K, et al. Activation of the right inferior frontal cortex during assessment of facial emotion. J Neurophysiol. 1999;82:1610–1614.
20. Dolan RJ, Fletcher P, Morris J, Kapur N, Deakin JF, Frith CD. Neural activation during covert processing of positive emotional facial expressions. Neuroimage. 1996;4:194–200.
21. LeDoux J. Emotional networks and motor control: a fearful view. Prog Brain Res. 1996;107:437–446.
22. Haxby JV, Hoffman EA, Gobbini MI. The distributed human neural system for face perception. Trends Cogn Sci. 2000;4:223–233.
23. Young AW, Newcombe F, de Haan EH, Small M, Hay DC. Face perception after brain injury: selective impairments affecting identity and expression. Brain. 1993;116:941–959.
24. Sergent J, Villemure JG. Prosopagnosia in a right hemispherectomized patient. Brain. 1989;112:975–995.
25. Tranel D, Damasio AR, Damasio H. Intact recognition of facial expression, gender, and age in patients with impaired recognition of face identity. Neurology. 1988;38:690–696.
26. Bruyer R, Laterre C, Seron X, et al. A case of prosopagnosia with some preserved covert remembrance of familiar faces. Brain Cogn. 1983;2:257–284.
27. Boucart M, Dinon JF, Despretz P, Desmettre T, Hladiuk K, Oliva A. Recognition of facial emotion in low vision: a flexible usage of facial features. Vis Neurosci. 2008;25:603–609.
28. Tejeria L, Harper RA, Artes PH, Dickinson CM. Face recognition in age related macular degeneration: perceived disability, measured disability, and performance with a bioptic device. Br J Ophthalmol. 2002;86:1019–1026.
29. Bullimore MA, Bailey IL, Wacker RT. Face recognition in age-related maculopathy. Invest Ophthalmol Vis Sci. 1991;32:2020–2029.
30. Rubin G, Schuchard R. Does contrast sensitivity predict face recognition performance in low-vision observers? Noninvasive Assessment of the Visual System, Technical Digest. 1990;3:130–137.
31. Alexander MF, Maguire MG, et al. Assessment of visual function in patients with age-related macular degeneration and low visual acuity. Arch Ophthalmol. 1988;106(11):1543–1547.
32. Gosselin P, Kirouac G, Dore FY. Components and recognition of facial expression in the communication of emotion by actors. J Pers Soc Psychol. 1995;68:83–96.
33. Russell JA, Suzuki N, Ishida N. Canadian, Greek, and Japanese freely produced emotion labels for facial expressions. Psychol Bull. 1993:102–141.
34. Vuilleumier P, Armony JL, Driver J, Dolan RJ. Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nat Neurosci. 2003;6:624–631.
35. Calder AJ, Young AW, Keane J, Dean M. Configural information in facial expression perception. J Exp Psychol Hum Percept Perform. 2000;26:527–551.
36. Schyns PG, Oliva A. Dr. Angry and Mr. Smile: when categorization flexibly modifies the perception of faces in rapid visual presentations. Cognition. 1999;69:243–265.
37. Fiorentini A, Maffei L, Sandini G. The role of high spatial frequencies in face perception. Perception. 1983;12:195–201.
38. Owsley C, Sloane ME. Contrast sensitivity, acuity, and the perception of 'real-world' targets. Br J Ophthalmol. 1987;71:791–796.
39. Hayes TM, Morrone, et al. Recognition of positive and negative bandpass-filtered images. Perception. 1986;15:595–602.
40. Mei M, Leat SJ. Suprathreshold contrast matching in maculopathy. Invest Ophthalmol Vis Sci. 2007;48:3419–3424.
41. Biehl M, Matsumoto D, Ekman P, et al. Matsumoto and Ekman's Japanese and Caucasian Facial Expressions of Emotion (JACFEE): reliability data and cross-national differences. J Nonverb Behav. 1997;21:2–21.