May 2017, Volume 58, Issue 5
Open Access | Low Vision
Effects of Magnification on Emotion Perception in Patients With Age-Related Macular Degeneration
Author Affiliations & Notes
  • Aaron P. Johnson
    Department of Psychology, Concordia University, Montréal, Quebec, Canada
    CRIR/Centre de réadaptation MAB-Mackay du CIUSSS du Centre-Ouest-de-l'Île-de-Montréal, Montréal, Quebec, Canada
  • Heather Woods-Fry
    Department of Psychology, University of Ottawa, Ottawa, Ontario, Canada
  • Walter Wittich
    Department of Psychology, Concordia University, Montréal, Quebec, Canada
    CRIR/Centre de réadaptation MAB-Mackay du CIUSSS du Centre-Ouest-de-l'Île-de-Montréal, Montréal, Quebec, Canada
    Department of Psychology, University of Ottawa, Ottawa, Ontario, Canada
    École d'optométrie, Université de Montréal, Montréal, Quebec, Canada
    CRIR/Institut Nazareth et Louis-Braille du CISSS de la Montérégie-Centre, Longueuil, Quebec, Canada
  • Correspondence: Aaron P. Johnson, Department of Psychology, Concordia University, 7141 Sherbrooke West, PY-146 Montreal, Quebec H4B 1R6, Canada; aaron.johnson@concordia.ca
Investigative Ophthalmology & Visual Science May 2017, Vol.58, 2520-2526. doi:10.1167/iovs.16-21349
Abstract

Purpose: Individuals with low vision often experience difficulty performing tasks of daily living, such as face perception. This can impair social interactions, as they can no longer correctly perceive the emotions of others. The present study investigated the effects of magnification on face perception in participants with age-related macular degeneration (AMD), specifically their ability to detect and categorize emotions. We hypothesized that patients with AMD would be less accurate than healthy controls, but that magnification would raise their performance to that of controls.

Methods: Faces expressing a happy, angry, or neutral emotion were presented at their original size, doubled in size (equivalent of arm's-length distance), and halved in size (equivalent of viewing across the street). The ability to detect and to discriminate emotional content was compared between 20 AMD patients and 7 age-matched controls. Eye movements were recorded during both tasks.

Results: Regardless of stimulus size, individuals with AMD consistently performed with lower accuracy than controls in both the emotion detection and the categorization task. A 2-fold increase in image size did improve AMD participants' performance, but did not bring it to the level of controls in either task. Fixation positions in AMD participants were highly variable compared to those of controls.

Conclusions: The data suggest that magnification alone is not sufficient to improve emotion perception in individuals with low vision. Next steps should include an evaluation of the effects of viewing strategy.

Age-related declines in vision are becoming more common, with the most frequent cause of central vision loss being age-related macular degeneration (AMD).1 Central vision plays an important role in activities of daily living such as reading, driving, perceiving objects, and perceiving faces; thus, most individuals with AMD have difficulty with these types of tasks.2–4 While most research into the functional consequences of AMD has investigated its effect on reading performance,4 little research has focused on other tasks such as face perception. Therefore, in the current study, our goal was to investigate how individuals with AMD perform in a face perception task, and whether task performance could be improved through magnification, a technique commonly employed in rehabilitation centers to improve reading performance. 
When asked to identify all of the problems that they encounter in their daily lives, patients with AMD report having difficulty distinguishing faces across the street (60%), across a room (45%–54%), and at arm's length (30%) (Johnson AP, et al. IOVS 2012;53:ARVO E-Abstract 4386).5 Of the 619 individuals with low vision in the Montreal Barriers Study,6 44% indicated that they experienced moderate to severe problems recognizing faces.7 Elsewhere, researchers have reported that 57% of individuals with AMD find conversations difficult, with most participants attributing this to the difficulty of perceiving emotion in others.8 The effect of central vision loss on face perception is therefore an established problem. Most of the initial studies reported a strong correlation between difficulty in face recognition and loss of visual acuity and/or contrast sensitivity.3,9 In an attempt to explain this correlation, Peli and colleagues10 were among the first to report that there is a critical range of spatial frequencies that must be perceived in order to accomplish face recognition.11–13 To demonstrate this point, they presented images of the faces of different celebrities, with varying amounts of low-pass Gaussian blur, to normally sighted and AMD participants, asking them to identify each individual. When visual acuity was poor, whether due to the presence of AMD or to the reduction of the images' resolution with blur, task performance was poor as well. These data indicated that, in order to perceive a face, the image representing it must lie within a range of critical frequencies. When perception of these frequencies is lost, because of the presence of either AMD or blur, participants experience great difficulty in face perception.10,14 Research in healthy adults has also shown that, in order to optimally identify an individual face, participants ideally need enough spatial resolution to detect facial features such as the eyes and mouth.14,15 
Besides face identification, others have proposed the use of emotion recognition and categorization to study face perception.16–18 Facial emotion recognition is thought to use many of the same processes as face perception, as the viewer must concentrate on the eye and mouth regions of the face.16,19,20 These are the regions most frequently fixated in a face perception task by participants from Western cultures,21 and they provide the information required to identify a face.22 Boucart and colleagues23 introduced the concept of using an emotion recognition and detection task to test face perception in patients. Their study consisted of two experiments: in the first, participants were presented with a face on a screen and asked to indicate whether or not the face expressed an emotion; in the second, participants were again presented with a face, but this time asked to categorize the emotion depicted as "happy," "neutral," or "angry." The authors found that while participants could not detect the presence or absence of emotions, they were able to categorize them more successfully than by chance alone.23 The researchers explained this finding by claiming that in order to detect the presence of an emotion, participants use high spatial frequency information in the face, such as the contours around the mouth and eyes. Conversely, to categorize an expression as happy or sad, low spatial frequency information is required; the turning up (or down) of the mouth, which classifies the emotion, would be most visible in the low spatial frequency content of the face. Therefore, the authors argued that people affected by AMD experience difficulties when detecting the presence of emotions because they have poor high spatial frequency acuity due to the loss of foveal vision, but relatively intact low spatial frequency perception through their fully functioning periphery. 
If Boucart and colleagues23 were correct, then increasing the availability of low spatial frequency content by increasing the image size should make the categorization of emotions easier for individuals with AMD. In research involving young adults, Melmoth and colleagues24 demonstrated that fovea-like performance could be attained in the periphery by doubling the size and contrast of a face; face perception in the periphery was thus improved through stimulus magnification and increased contrast. Current rehabilitation techniques designed to improve reading performance in patients with AMD involve magnification of the image as a coping strategy aimed at improving functional vision.4,25 In the present study, our goal was to use a technique similar to that of Boucart and colleagues23 in order to investigate the effect of magnification on face perception in participants with AMD. Our first hypothesis was that individuals with AMD would be less accurate than controls across all conditions. Our second hypothesis was that increasing the size of the face would improve performance in both the emotion detection and the classification task. 
Methods
Participants
Twenty patients diagnosed with AMD were recruited from the MAB-Mackay Rehabilitation Centre, and either were receiving or had recently received vision rehabilitation services (see Table 1 for demographic breakdown). Their Early Treatment Diabetic Retinopathy Study (ETDRS) acuity in their better eye was at least 0.8 logMAR (20/126 or 6/38). Seven age-matched control participants with normal or corrected vision (acuities of at least 0.3 logMAR [20/40 or 6/12], with the use of standard refraction, glasses or contact lenses, if applicable) were recruited through participants' spouses or from individuals known to the research team. All participants provided informed consent, with the procedures of the experiment approved by the ethics review board at the Centre de recherche interdisciplinaire en réadaptation de Montréal métropolitaine, in accordance with the Declaration of Helsinki.26 
Table 1
 
Participant Characteristics and Descriptive Statistics of the Two Samples (AMD, Controls)
Materials
All stimuli were presented on a Windows PC (Dell, Round Rock, TX, USA) with a 21-inch Viewsonic G225f Monitor (screen resolution: 1280 by 1024 pixels; refresh rate: 100 Hz; Walnut, CA, USA), using a custom Mathworks MATLAB (version 2012b; The Mathworks, Natick, MA, USA) code implementing the Psychophysics toolbox extensions.27,28 Eye movements were monitored using a video-based eye tracker (S2; Mirametrix, Montreal, QC, Canada), with a 60-Hz binocular gaze sampling. Sample data (time stamp and current gaze X- and Y-coordinates) were recorded directly into the MATLAB program for offline analysis. 
For the emotion face stimuli, 360 images depicting male and female faces were taken from the NimStim Database (http://www.macbrain.org/resources.htm; in the public domain). The faces represented a mix of races, ages, and hair colors and styles. All faces expressed one of two emotions (happy, angry) or no emotion (neutral), with mouth open or closed. Images were converted to grayscale using the rgb2gray function in MATLAB to remove color cues. The original stimulus size of 506 by 625 pixels (16.9° by 20.8° of visual angle [dva]) was also manipulated, with images undergoing a 2-fold decrease (253 by 313 pixels, or 8.4 by 10.4 dva) and increase (1012 by 1250 pixels, or 33.7 by 43.7 dva) in size using the imresize function in MATLAB with nearest-neighbor interpolation. All sizes refer to the whole head (i.e., top of hair to bottom of chin), and proportionately reflect the size of a face during a conversation, when viewing across a room, and when viewing across a street. These size categories are often assessed by clinical screening questionnaires (e.g., the Visual Function Index VF-14).5,8 
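The study performed these manipulations with MATLAB's rgb2gray and imresize; a minimal Python/NumPy sketch of the equivalent operations (a random array stands in for an actual NimStim image) is:

```python
import math
import numpy as np

def rgb2gray(img):
    """Luminance conversion with the same ITU-R BT.601 weights MATLAB's rgb2gray uses."""
    return img[..., 0] * 0.2989 + img[..., 1] * 0.5870 + img[..., 2] * 0.1140

def imresize_nearest(img, scale):
    """Nearest-neighbor resize by a scalar factor, mimicking imresize(img, scale, 'nearest')."""
    h, w = img.shape[:2]
    new_h, new_w = math.ceil(h * scale), math.ceil(w * scale)
    rows = np.minimum((np.arange(new_h) / scale).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) / scale).astype(int), w - 1)
    return img[rows][:, cols]

# A random array stands in for a 625-row by 506-column NimStim face image
face = np.random.rand(625, 506, 3)
gray = rgb2gray(face)                 # 625 x 506, color cues removed
small = imresize_nearest(gray, 0.5)   # 313 x 253 (2-fold decrease)
large = imresize_nearest(gray, 2.0)   # 1250 x 1012 (2-fold increase)
```

The half- and double-size outputs reproduce the pixel dimensions reported above for the study's stimuli.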
Montreal Cognitive Assessment questionnaire (MoCA): To control for participants' cognitive health, participants completed the blind version of the MoCA.29 The MoCA shows good internal consistency (Cronbach α = 0.84)30 and high test–retest reliability (r = 0.79).31 All participants scored >18 (the cutoff requirement for the blind MoCA). 
Procedure
Participants were identified through the archives at the MAB-Mackay Rehabilitation Centre, contacted via telephone, and invited to take part in the study. When participants arrived for the testing session, informed written consent was obtained and they were seated 70 cm from the computer monitor, with distance maintained via a chin rest. This was followed by the Freiburg Visual Acuity test32 to assess acuity and contrast sensitivity on the day of testing (see Table 1). The eye tracker was calibrated binocularly: participants followed a moving circle on the computer screen, whose diameter started at 12 dva and was reduced over a period of 5 seconds. Participants were instructed to keep the circle at the center of their vision, and calibration was repeated until the average error was less than 1 dva and the maximum error was less than 1.5 dva. 
Before starting the combined emotion recognition/detection task, participants were presented with written and audio instructions, and were told to respond as accurately as possible. A total of 360 images, representing 10 male and female faces, were shown at three different image sizes. To start a trial, the participant fixated a cross (height and width of 5 dva), and then the researcher pressed the spacebar. Faces were presented on the screen over a uniform gray background in random order. Participants were required to verbalize whether or not the presented face contained an emotion. If not, a new trial began; if yes, a screen containing the text "Happy or Angry" and an audio recording of the same instructions cued participants to verbally indicate which emotion they thought was present. The researcher entered the response, after which the screen blanked to a uniform gray for 1 second before the start of the next trial. Throughout testing, participants' eye movements were recorded using a noninvasive eye tracker. Participants were allowed to take breaks at any time between tasks, but were recalibrated if they moved from the chair. Participants had the choice of receiving either monetary compensation of $10.00 per hour or an annual membership to an adapted transport service for persons with disabilities. 
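The two-stage trial logic above (detection first; categorization only after a "yes") can be sketched as follows. This is a hypothetical Python illustration, not the study's actual MATLAB code; the trial records and response encoding are assumptions:

```python
import random

# Hypothetical trial records: each of the three emotions appears at each
# of the three image sizes (0.25 = smallest, 1.0 = largest)
EMOTIONS = ("happy", "angry", "neutral")
SIZES = (0.25, 0.5, 1.0)
trials = [{"emotion": e, "size": s} for e in EMOTIONS for s in SIZES] * 5
random.shuffle(trials)  # faces were presented in random order

def score_trial(trial, said_emotion_present, named_emotion=None):
    """Two-stage scoring: detection first; categorization only on a 'yes' response."""
    emotion_present = trial["emotion"] != "neutral"
    detection_correct = said_emotion_present == emotion_present
    if not said_emotion_present:      # a 'no' response ends the trial
        return detection_correct, None
    return detection_correct, named_emotion == trial["emotion"]
```

For example, answering "no" to a neutral face scores a correct detection with no categorization stage, while answering "yes, happy" to an angry face scores a correct detection but an incorrect categorization.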
Data Analysis
In order to test the hypotheses, accuracy data (i.e., percent correct) for emotion detection and categorization were analyzed separately. Because the experimenter was encoding the participant's response, reaction time data were not analyzed. Due to the sample size, data were analyzed with descriptive statistics (means, standard deviations, 95% confidence intervals around the mean, and Glass's Δ as an effect size measure). Note that Glass's Δ is the appropriate effect size measure when comparing two groups with unequal sample sizes and variances, where one group is the control. Glass's Δ gives a range of effect sizes (min − max) reflecting the variance of each group. 
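Glass's Δ standardizes the mean difference by the control group's standard deviation alone, which is why it suits unequal group sizes and variances. A minimal Python sketch (the accuracy values below are illustrative, not study data):

```python
import numpy as np

def glass_delta(experimental, control):
    """Glass's Delta: mean difference standardized by the CONTROL group's SD,
    suitable when the two groups differ in sample size and variance."""
    experimental = np.asarray(experimental, dtype=float)
    control = np.asarray(control, dtype=float)
    return (experimental.mean() - control.mean()) / control.std(ddof=1)

# Illustrative accuracy values only; NOT the study's data
amd_accuracy = [0.55, 0.60, 0.50, 0.65, 0.58]
control_accuracy = [0.90, 0.85, 0.95]
delta = glass_delta(amd_accuracy, control_accuracy)  # ≈ -6.48
```

Because only the control SD enters the denominator, the measure is unaffected by the (larger) variance of the patient group.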
Eye Tracking Analysis
In an attempt to explain patterns within the data, and to verify Boucart et al.'s23 prediction that participants would focus on the eye and mouth regions, gaze position sample data were collected. Researchers typically illustrate results on face processing in the form of heat maps22 or by marking the image with a cross to indicate fixation location.34 Here we generated a heat map by convolving each fixation position with a kernel density estimation function whose σ equaled 0.5°. 
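A sketch of this heat-map construction in Python: each fixation contributes a Gaussian bump, and the summed map is normalized so the hottest location equals 1. The kernel σ is given here in pixels; converting the study's 0.5° to pixels would require the display's pixels-per-degree, which is assumed known:

```python
import numpy as np

def fixation_heatmap(fixations, shape, sigma_px):
    """Sum a Gaussian kernel centered on each fixation (a kernel density estimate).
    sigma_px is the kernel sigma in pixels; converting the study's 0.5 deg to
    pixels would require the display's pixels-per-degree, assumed known here."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros(shape)
    for fx, fy in fixations:  # fixation positions as (x, y) pixel coordinates
        heat += np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2.0 * sigma_px ** 2))
    if heat.max() > 0:
        heat /= heat.max()    # normalize: hottest (red) location = 1
    return heat

# Two nearby fixations produce a single hot spot between them
heat = fixation_heatmap([(30, 40), (32, 41)], shape=(80, 60), sigma_px=15)
```

Plotting `heat` with a red/yellow colormap yields maps like those in Figure 2, where hot colors mark frequently fixated facial regions.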
Results
Emotion Detection and Recognition Task
To examine whether AMD patients would perform worse than controls in the emotion detection and categorization tasks, the data were compared at each magnification level (see Fig. 1; Table 2 for details). For the 25%, 50%, and 100% image size conditions, AMD patients' mean performance in the detection task (F[1,25] = 6.83 × 10^17, P < 0.001, η2 = 1), as well as the categorization task (F[1,25] = 2.37 × 10^17, P < 0.001, η2 = 1), was below that of controls, supporting the first hypothesis. Our second hypothesis was that magnification would improve performance within individuals with AMD on both emotion detection and categorization (see Table 2 for details). Scores on the emotion detection task at 25% of the stimulus size were statistically significantly lower than at the 50% and 100% sizes (F[2,38] = 15.72, P < 0.001, η2 = 0.453; see Table 3 for post hoc comparisons). However, even this increase in the size of the faces from 25% (10.4 dva) to 50% (20.8 dva) and 100% (43.7 dva) failed to match the controls' detection accuracy at the 25% image size (10.4 dva; all P > 0.05). 
Figure 1
 
Accuracy in the emotion detection task across the three image sizes for participants with AMD and controls. Results for the emotion classification task were similar (see Table 2).
Table 2
 
Proportion of Emotion Detection and Categorization Scores as a Function of Image Magnification Across AMD and Control Participants. Glass's Δ Indicates the Range of Effect Sizes for Each Pairwise Comparison.
Table 3
 
Proportion of Emotion Detection and Categorization Scores Comparing Magnification Sizes for AMD Participants Only
However, when detecting emotions at 100% of the images' size, patients performed worse than when viewing images at 50% of their size. For the emotion categorization task, patients performed more poorly when viewing images at 25% of their size than at 50% or 100% (F[2,38] = 9.69, P < 0.001, η2 = 0.338), while performance at 100% was similar to that at 50%. Therefore, our second hypothesis that magnification improves performance in individuals with AMD was only partially supported, as magnification helped only at the smallest image size. 
Eye Gaze Fixations While Viewing Faces
To visualize the pattern of fixations while viewing the face during the emotion task, heat maps were calculated using the kernel density estimation function based on the initial starting coordinate of each fixation made on each face. Figure 2 shows the probability of fixating locations within each face, with hotter colors (red/yellow) indicating a higher likelihood of fixating a facial feature. In the age-matched control participants (example shown in Fig. 2A), a pattern of fixation typical of young individuals was observed,16,19,20 with fixations located primarily on the eye and mouth regions. However, a different picture, with two distinct strategies, was observed for individuals with AMD. Fourteen of the AMD participants showed the pattern of fixations seen in Figure 2B: fixation locations appear to be random, with no discernible pattern of fixations located over the eye and mouth. This pattern is similar to the sample-based eye movement patterns observed by others in AMD patients while viewing faces within an OCT/SLO.34 It goes against the prediction of Boucart and colleagues,23 who proposed that individuals with AMD would focus on the low spatial frequencies contained in the mouth region when performing the emotion categorization task. Six of the present participants with AMD, however, showed a pattern of eye movements that differed from this scattered approach (Fig. 2C). Here, the eye movements show a T-shaped pattern similar to that of our age-matched controls (Fig. 2A), but shifted away from the eye/mouth region. Mouth position (i.e., open or closed) had no effect on this pattern of eye movements. 
This result, whereby the individuals are demonstrating a more stable ocular motor control than the other participants, could be indicative of the participant using a new location on the retina to serve as a pseudofovea (or a preferred retinal location) for facial feature processing. 
Figure 2
 
Example heat maps of fixation position while viewing the face, generated by kernel density estimation, whereby hot colors represent a higher probability of an image region being fixated. (A) Age-matched control participant showing typical T-shape fixations around the eye and mouth regions, (B) two individuals with AMD showing no discernible fixation pattern, (C) two individuals with AMD showing a pattern of fixations similar to the control, but shifted away from the eye/mouth locations.
Discussion
The goal of the present study was to examine whether magnification helps patients with AMD improve their ability to detect and categorize emotion in facial stimuli. Our results indicate that participants with AMD performed more poorly than those in the control group in all conditions. More specifically, healthy observers were always more accurate at the detection and categorization of emotional faces, whether the image was kept at its original size (equivalent of viewing across the street), doubled in size (equivalent of viewing across a room), or quadrupled in size (equivalent of viewing at arm's length). In addition, magnification helped improve performance within participants affected by AMD only for small image sizes. The data nonetheless showed a trend whereby performance slowly improved with increasing magnification; one might therefore speculate that magnification of 8 or 16 times the original size would yield further improvement in this group. 
Supporting our first hypothesis, one of our main findings was a difference in performance between AMD patients and controls for both the emotion detection and categorization tasks: for all image size conditions, AMD patients' mean accuracy was below that of the controls. Consistent with previous research depicting clear distinctions between the visual capabilities of people affected by AMD and those of healthy controls,8,34,35 we were also able to demonstrate that AMD patients can perform both detection and categorization tasks with similar accuracy: participants detected the presence of an emotion and categorized it as angry, neutral, or happy equivalently well. We believe that this disparity between our result and that of Boucart et al.23 is methodological in nature. They argued that high spatial frequencies in central vision are crucial for detecting the presence of an expression, while low spatial frequencies allow for the categorization of emotions. However, their participants were required to perform only one of the two tasks, which is problematic because in order to categorize an emotion, one must be able to perceive it first. In the present study, both emotion detection and categorization were incorporated into a single task, ensuring that all participants performed both tasks in sequence. 
With regard to the second hypothesis, the results only partially supported the claim that magnification of facial stimuli would improve AMD patients' performance. Mean accuracy was compared for a 2-fold (i.e., from 25% to 50%) and a 4-fold (i.e., from 25% to 100%) increase in stimulus size. These increases were chosen in accordance with Melmoth and colleagues' study,24 which showed that face perception similar to that of the fovea in healthy young adults could be attained in the periphery with a 2-fold increase in stimulus size. Given that AMD patients are left with only peripheral vision, we tested whether doubling the image size would improve emotion detection and categorization performance; however, this was not the case. Johnson and Gurnsey37 noted that magnification of an image in the periphery might make it so large that it falls partly within the bounds of central vision. Hence, the increased accuracy reported by Melmoth et al.24 could be due to the recruitment of central vision rather than the increase in image size per se. AMD patients lacking central vision would not be able to make use of this larger image, as it could partially fall into their scotoma. 
Boucart and colleagues23 predicted that patients with AMD would focus on the faces' T-zone, and in particular the mouth region, when asked to classify the emotion, because the presence of the emotion would be detected within the low spatial frequencies as the mouth smiled in the happy condition. Previous research in young adults has shown that, when viewing a face, eye movements focus on the eye and mouth regions in order to perceive an emotion.16,19–22 When analyzing the eye tracking data from the current study, a large variance in the location of fixations was apparent in the low vision group, more so than in the control group, who consistently looked at the eye and mouth T-zone. Seiple, Rosen, and Garcia34 investigated the different scanning behaviors of AMD patients and controls, finding similar patterns, likely linked to the use of a pseudofovea, or preferred retinal locus. 
One limitation of our study was that we were unable to obtain a measurement of the size and density of each participant's scotoma, because the scanning laser ophthalmoscope (SLO) required to make this measurement was too costly and thus unavailable to this team. It would be interesting to combine a fundus picture (an image of the back of the eye) obtained from an SLO with a functional measure of the visual field, in order to assess the problems that an AMD patient encounters. Others have shown that measurements of field loss obtained with SLO microperimetry are predictive of face recognition problems in individuals with low vision.38 A further limitation is that, in comparison to other face processing studies, we used a total of only three facial emotions (happy, neutral, angry). Given that we were dealing with older adults who were visually impaired, it was a functional requirement of the testing procedure to limit the experiment to less than 60 minutes, to ensure that participants would not succumb to fatigue effects. 
These findings demonstrate that magnification helps only for small image sizes and does not improve the performance accuracy of AMD patients at larger image sizes. This finding is important because many current rehabilitation techniques for macular vision loss are centered on magnification.25 Martelli and colleagues39 reported that magnification did not improve performance accuracy for scene perception. Considering that face perception is similar to scene perception in its complexity (i.e., many features that occlude and border each other), it is no surprise that the same difficulty arises in face perception, in part due to a perceptual phenomenon known as crowding.40 When looking at a scene, the ability to detect specific objects is reduced by the mere presence of surrounding objects. Hence, magnifying a face does not remove the neighboring content that could crowd the facial features used in emotion detection. 
In conclusion, magnification can help in the perception of emotion in faces, but only when the stimuli are small or when an individual is farther away. However, at least in the range of sizes that were used in the current study, performance in both emotion detection and categorization by individuals with AMD never exceeded 80% of the performance of age-matched controls. Part of the difference in performance between our controls and participants, and within our participants, seems to be associated with the patterns of eye movements used while viewing the face. Therefore, magnification alone is not the answer when trying to improve performance in emotion detection. Instead, it might need to be combined with other techniques (e.g., contrast enhancement, eye movement training and/or viewing strategy) to improve task performance to the levels observed in age-matched controls. Another technique would be shape-based image enhancement designed to target high-level perceptual processing of faces.41,42 
Acknowledgments
The authors thank Sandra Fordham and Anna Sotnikova for assistance with data collection. 
Supported by funds from the Centre de Recherche Interdisciplinaire en Réadaptation de Montréal Métropolitaine, and the MAB-Mackay Foundation. WW is supported by an FRQS Junior 1 career award (28881 and 30620). 
Disclosure: A.P. Johnson, None; H. Woods-Fry, None; W. Wittich, None 
References
Friedman DS, O'Colmain BJ, Muñoz B, et al.; Eye Diseases Prevalence Research Group. Prevalence of age-related macular degeneration in the United States. Arch Ophthalmol. 2004; 122: 564–572.
Barnes CS, De L'Aune W, Schuchard RA. A test of face discrimination ability in aging and vision loss. Optom Vis Sci. 2011; 88: 188–199.
Bullimore MA, Bailey IL, Wacker RT. Face recognition in age-related maculopathy. Invest Ophthalmol Vis Sci. 1991; 32: 2020–2029.
Rubin GS. Measuring reading performance. Vision Res. 2013; 90: 43–51.
Hart PM, Chakravarthy U, Stevenson MR, Jamison JQ. A vision specific functional index for use in patients with age related macular degeneration. Br J Ophthalmol. 1999; 83: 1115–1120.
Overbury O, Wittich W. Barriers to low vision rehabilitation: the Montreal Barriers Study. Invest Ophthalmol Vis Sci. 2011; 52: 8933–8938.
Gagné JP, Wittich W. Visual impairment and audiovisual speech-perception in older adults with acquired hearing loss. In Hickson L, ed. Phonak Conference: Hearing Care for Adults 2009 - The Challenge of Aging. Phonak AG; 2010: 165–177.
Tejeria L, Harper RA, Artes PH, Dickinson CM. Face recognition in age related macular degeneration: perceived disability, measured disability, and performance with a bioptic device. Br J Ophthalmol. 2002; 86: 1019–1026.
Ebert EM, Fine AM, Markowitz J, Maguire MG, Starr JS, Fine SL. Functional vision in patients with neovascular maculopathy and poor visual acuity. Arch Ophthalmol. 1986; 104: 1009–1012.
Peli E, Lee E, Trempe CL, Buzney S. Image enhancement for the visually impaired: the effects of enhancement on face recognition. J Opt Soc Am A Opt Image Sci Vis. 1994; 11: 1929–1939.
Fiorentini A, Maffei L, Sandini G. The role of high spatial frequencies in face perception. Perception. 1983; 12: 195–201.
Gold J, Bennett PJ, Sekuler AB. Identification of band-pass filtered letters and faces by human and ideal observers. Vision Res. 1999; 39: 3537–3560.
Schyns PG, Bonnar L, Gosselin F. Show me the features! Understanding recognition from the use of visual information. Psychol Sci. 2002; 13: 402–409.
McCulloch DL, Loffler G, Colquhoun K, Bruce N, Dutton GN, Bach M. The effects of visual degradation on face discrimination. Ophthalmic Physiol Opt. 2011; 31: 240–248.
Calder AJ. Facial emotion recognition after bilateral amygdala damage: differentially severe impairment of fear. Cogn Neuropsychol. 1996; 13: 699–745.
Gosselin F, Schyns PG. Bubbles: a technique to reveal the use of information in recognition tasks. Vision Res. 2001; 41: 2261–2271.
Schyns PG, Oliva A. Dr. Angry and Mr. Smile: when categorization flexibly modifies the perception of faces in rapid visual presentations. Cognition. 1999; 69: 243–265.
Smith ML, Cottrell GW, Gosselin F, Schyns PG. Transmitting and decoding facial expressions. Psychol Sci. 2005; 16: 184–189.
Henderson JM, Williams CC, Falk RJ. Eye movements are functional during face learning. Mem Cognit. 2005; 33: 98–106.
Sekuler AB, Gaspar CM, Gold JM, Bennett PJ. Inversion leads to quantitative, not qualitative, changes in face processing. Curr Biol. 2004; 14: 391–396.
Blais C, Jack RE, Scheepers C, Fiset D, Caldara R. Culture shapes how we look at faces. PLoS One. 2008; 3: e3022.
Gold JM, Mundy PJ, Tjan BS. The perception of a face is no more than the sum of its parts. Psychol Sci. 2012; 23: 427–434.
Boucart M, Dinon JF, Despretz P, Desmettre T, Hladiuk K, Oliva A. Recognition of facial emotion in low vision: a flexible usage of facial features. Vis Neurosci. 2008; 25: 603–609.
Melmoth DR, Kukkonen HT, Mäkelä PK, Rovamo JM. The effect of contrast and size scaling on face perception in foveal and extrafoveal vision. Invest Ophthalmol Vis Sci. 2000; 41: 2811–2819.
Binns AM, Bunce C, Dickinson C, et al. How effective is low vision service provision? A systematic review. Surv Ophthalmol. 2012; 57: 34–65.
Williams JR. The Declaration of Helsinki and public health. Bull World Health Organ. 2008; 86: 650–651.
Pelli DG. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis. 1997; 10: 437–442.
Brainard DH. The Psychophysics Toolbox. Spat Vis. 1997; 10: 433–436.
Wittich W, Phillips N, Nasreddine ZS, Chertkow H. Sensitivity and specificity of the Montreal Cognitive Assessment modified for individuals who are visually impaired. J Vis Impair Blind. 2010; 104: 360–368.
Costa AS, Fimm B, Friesen P, et al. Alternate-form reliability of the Montreal cognitive assessment screening test in a clinical setting. Dement Geriatr Cogn Disord. 2012; 33: 379–384.
Stewart S, O'Riley A, Edelstein B, Gould C. A preliminary comparison of three cognitive screening instruments in long term care: the MMSE, SLUMS, and MoCA. Clin Gerontol. 2012; 35: 57–75.
Bach M. The Freiburg Visual Acuity test--automatic measurement of visual acuity. Optom Vis Sci. 1996; 73: 49–53.
Boucart M, Dinon JF, Despretz P, Desmettre T, Hladiuk K, Oliva A. Recognition of facial emotion in low vision: a flexible usage of facial features. Vis Neurosci. 2008; 25: 603–609.
Seiple W, Rosen RB, Garcia PMT. Abnormal fixation in individuals with age-related macular degeneration when viewing an image of a face. Optom Vis Sci. 2013; 90: 45–56.
Monés J, Rubin GS. Contrast sensitivity as an outcome measure in patients with subfoveal choroidal neovascularisation due to age-related macular degeneration. Eye. 2004; 19: 1142–1150.
Melmoth DR, Kukkonen HT, Mäkelä PK, Rovamo JM. The effect of contrast and size scaling on face perception in foveal and extrafoveal vision. Invest Ophthalmol Vis Sci. 2000; 41: 2811–2819.
Johnson A, Gurnsey R. Size scaling compensates for sensitivity loss produced by a simulated central scotoma in a shape-from-texture task. J Vis. 2010; 10 (12): 18.
Wallis TS, Taylor CP, Wallis J, Jackson ML, Bex PJ. Characterization of field loss based on microperimetry is predictive of face recognition difficulties. Invest Ophthalmol Vis Sci. 2014; 55: 142–153.
Martelli M, Majaj NJ, Pelli DG. Are faces processed like words? A diagnostic test for recognition by parts. J Vis. 2005; 5 (1): 6.
Levi DM. Crowding—an essential bottleneck for object recognition: a mini-review. Vision Res. 2008; 48: 635–654.
Irons J, McKone E, Dumbleton R, et al. A new theoretical approach to improving face recognition in disorders of central vision: face caricaturing. J Vis. 2014; 14 (2): 12.
Al-Atabany WI, Memon MA, Downes SM, Degenaar PA. Designing and testing scene enhancement algorithms for patients with retina degenerative disorders. Biomed Eng Online. 2010; 9: 27.
Figure 1
 
Accuracy in the emotion detection task across the three image sizes for participants with AMD and controls. Results for the emotion classification task were similar (see Table 2).
Figure 2
 
Example heat maps of fixation position while viewing the face, generated by kernel density estimation; hot colors represent a higher probability of an image region being fixated. (A) An age-matched control participant shows the typical T-shaped fixation pattern around the eye and mouth regions; (B) two individuals with AMD show no discernible fixation pattern; (C) two individuals with AMD show a fixation pattern similar to the control's, but shifted away from the eye/mouth locations.
Table 1
 
Participant Characteristics and Descriptive Statistics of the Two Samples (AMD, Controls)
Table 2
 
Proportion of Emotion Detection and Categorization Scores as a Function of Image Magnification Across AMD and Control Participants. Glass's Δ Indicates the Range of Effect Sizes for Each Pairwise Comparison.
Table 3
 
Proportion of Emotion Detection and Categorization Scores Comparing Magnification Sizes for AMD Participants Only