Visual Psychophysics and Physiological Optics  |   March 2013
Implicit Processing of Scene Context in Macular Degeneration
Author Affiliations & Notes
  • Muriel Boucart
    From the Laboratoire Neuroscience Fonctionnelle et Pathologies, Université Lille Nord de France/CNRS, Lille, France; and
  • Christine Moroni
    From the Laboratoire Neuroscience Fonctionnelle et Pathologies, Université Lille Nord de France/CNRS, Lille, France; and
  • Sebastien Szaffarczyk
    From the Laboratoire Neuroscience Fonctionnelle et Pathologies, Université Lille Nord de France/CNRS, Lille, France; and
  • Thi Ha Chau Tran
    From the Laboratoire Neuroscience Fonctionnelle et Pathologies, Université Lille Nord de France/CNRS, Lille, France; and
    Department of Ophthalmology, Saint Vincent de Paul Hospital, Lille, France.
  • Corresponding author: Muriel Boucart, CHRU Lille, Hôpital Roger Salengro, Laboratoire Neuroscience Fonctionnelle et Pathologies, F-59037 Lille, France; m-boucart@chru-lille.fr
Investigative Ophthalmology & Visual Science March 2013, Vol.54, 1950-1957. doi:10.1167/iovs.12-9680
Abstract

Purpose.: For normally sighted people, there is a general consensus that objects that appear in a congruent context (e.g., a hair dryer in a bathroom) are processed more accurately and/or more quickly than objects in an incongruent context (e.g., a hair dryer in a corn field). We investigated whether people with age-related macular degeneration (AMD), who have impairments in recognizing objects embedded in complex scenes, can nevertheless take advantage of contextual information for object detection.

Methods.: Twenty-two people with AMD and 18 age-matched, normally sighted controls took part in the study. They were tested in two tasks: (1) an object detection task in which a foreground target object was set within a congruent background or an incongruent background, with no information being given to the participants as to the relationship between the target and its background, and (2) a task in which the participant had to explicitly state whether or not the foreground object was congruent with its background. A go/no-go paradigm was used in both tasks (i.e., a key press when the target is present and no key press when it is absent). The same participants, stimuli, and presentation conditions were used in both tasks.

Results.: In the context task, the people with AMD exhibited higher sensitivity when the target object was consistent with its background; however, they performed no better than chance in the explicit task. Normally sighted controls benefited from the congruent context in both tasks.

Conclusions.: Our results suggest that when central vision is impaired (as in AMD), the contextual information captured by peripheral vision provides cues for object categorization.

Introduction
Age-related macular degeneration (AMD) is a disease that affects central vision and all the functions that depend on it, including detailed perception, reading, and face recognition. 1–6 Although it has been demonstrated 7,8 that people with AMD can categorize isolated objects with a high degree of accuracy (>75%), objects encountered in real life rarely appear in the absence of some sort of background. In real life, objects are embedded in complex scenes, where clutter, occlusion, and other factors, such as luminance, crowding, and lateral masking, all make recognition harder to achieve. In this type of situation, recognition can be improved by other factors, such as familiarity, expectations, and contextual information. The “context effect” refers to the influence of an object's environment on the perception of that object. 
A large body of data from behavioral visual cognition, 9–14 electrophysiology, 15 and brain-imaging studies 16–19 in normally sighted people shows that contextual information affects the efficiency of object search and recognition. There is a general consensus that objects appearing in a consistent background (e.g., a toaster in a kitchen) are processed more accurately and/or more quickly than objects appearing in an inconsistent background (e.g., a toaster in a bathroom), although Hollingworth and Henderson 20 failed to observe contextual facilitation in a detection task. Behavioral research has shown that seeing a familiar context automatically activates the representation of objects typically found within that scene, as well as their typical locations. 9–14,21,22 Further evidence for the facilitation of object recognition and location by context comes from studies of visual search tasks 23 and action-related tasks, 24–26 such as preparing a sandwich or a cup of tea. The scene context in these tasks directs attention to object-relevant locations within the scene. Interestingly, contextual binding of an object and its background can operate in the absence of explicit instruction in that respect. For example, Goh and colleagues 16 used functional magnetic resonance imaging during a passive viewing task to show that brain areas other than those involved in scene processing or isolated object processing (the bilateral parahippocampal areas) showed adaptation only when the unique pairing of an object with its background scene was repeated. 
So, do people with impaired central vision take advantage of contextual information for object recognition? In light of previous studies on normally sighted people, one might expect object recognition in realistic scenes to be facilitated by a congruent association between the target object and its background. However, the extent to which visually impaired observers are able to use the richness of peripheral vision and spatial layout information to guide object recognition in complex scenes has not yet been fully explored. Even though one can legitimately expect a consistent context to aid object recognition (e.g., a piece of furniture is more likely to be found indoors than outdoors), we cannot rule out the possibility that people who have to rely on peripheral vision may not necessarily be able to take advantage of contextual information. Indeed, crowding is known to have stronger effects in peripheral vision. 27 For instance, Tran et al. 8 investigated the influence of background information on object categorization. Patients with AMD and normally sighted, age-matched controls were asked to press a key whenever they saw a target object (an animal). The target was sometimes presented on a white background, sometimes in its normal setting (e.g., a lion in the savanna), and sometimes in a meaningless, nonstructured background. The people with AMD detected the target more reliably when it was isolated than when it was in its natural setting, although accuracy was higher when the target appeared against a meaningful background than against a meaningless background. This suggested that implicit, automatic processing of the scene layout might be occurring. 
However, we do not know whether the association between the object and its background was processed consciously or not and whether the benefit provided by the background stemmed from its meaning or from its degree of structural organization (since meaningful, structured backgrounds were compared with meaningless, nonstructured backgrounds). 
The present study was designed to investigate whether contextual information provides additional cues in cases of image degradation due to impaired central vision, and whether people with AMD are able to explicitly associate an object and its background. The participants' performance was assessed in two tasks: (1) a categorization task in which a target object was located in a congruent or an incongruent background (with no information being given to the participants as to the relationship between the target and its background), and (2) a task in which the participants were asked to explicitly state whether or not a target object was related to the background. The same participants, stimuli, and exposure conditions were used in both tasks. Our hypothesis was that people with central vision loss would compensate through more effective use of their peripheral vision. 
Patients and Methods
Patients
The study included 22 patients (14 women) with neovascular AMD, as confirmed by fluorescein angiography. The patients' mean ± SD age was 78 ± 7 years (range, 61–87). None of the patients exhibited cognitive impairments, as assessed by the Mini Mental State Examination (MMSE) score (mean: 27.5 ± 1.68; range, 25–30). Only one eye was studied. In cases of bilateral AMD, we considered the eye with the best corrected visual acuity. If both eyes had equal acuity, one eye was selected at random. The patients had a visual acuity of 0.5 ± 0.2 logarithm of the minimum angle of resolution (logMAR) (approximate Snellen visual acuity: 20/63). The study's inclusion and exclusion criteria are displayed in Table 1.
Table 1. 
 
Inclusion and Exclusion Criteria for the Participants with AMD
Inclusion criteria
 Willing to give informed consent
 Well-defined neovascular AMD with subfoveal involvement, confirmed by fluorescein angiography
 Best corrected visual acuity between 20/40 and 20/400 in the eye to be studied
 Refraction between +3 D and −3 D
Exclusion criteria
 History of any neurological or psychiatric disease
 History of ophthalmologic disease other than AMD that might compromise visual acuity or peripheral vision during the study (amblyopia, uncontrolled glaucoma, cataract, optic neuropathy, diabetic retinopathy, uveitis)
 Unable to communicate (deafness)
 Treated with medication that might compromise concentration (benzodiazepines, neuroleptics)
 Mental deterioration with MMSE <24
Controls
An age-matched control group with normal visual acuity comprised 18 participants (15 women), with a mean age of 76 ± 5 years (range, 64–89). The mean MMSE score was 28.4 ± 1.6 (range, 25–30). None of the control participants had ocular or neurological diseases. The mean visual acuity was 0.07 ± 0.05 logMAR. Control participants were either patients who had undergone cataract surgery or relatives of participants with AMD. Controls were tested on a single, preferred eye. The group's clinical and demographic data are summarized in Table 2.
Table 2. 
 
Demographic and Clinical Data of the Study Population
Participants with AMD, n = 22
 Age, y, mean ± SD (range): 78 ± 7 (61–87)
 Sex, male/female: 8/14
 MMSE, mean ± SD: 27.5 ± 1.68
 logMAR VA, mean ± SD: 0.5 ± 0.2
 Lesion size, mm2, mean ± SD (range): 6.6 ± 3.9 (1.5–13.2)
 Greatest diameter, mm, mean ± SD (range): 3.2 ± 1.1 (1.35–4.60)
Normally sighted controls, n = 18
 Age, y, mean ± SD (range): 76 ± 5 (64–89)
 Sex, male/female: 3/15
 logMAR VA, mean ± SD: 0.07 ± 0.05
 MMSE, mean ± SD: 28.4 ± 1.9
Participants with AMD and controls were recruited between January and July 2011 by the Department of Ophthalmology at Saint Vincent de Paul Hospital, Lille, France. The study was approved by the ethics committee of Lille, France (CPP Nord-Ouest IV), and performed in accordance with the tenets of the Declaration of Helsinki. Written, informed consent was obtained from all participants. 
Clinical Examination
Ophthalmologic Examination.
The best corrected visual acuity was determined using Early Treatment Diabetic Retinopathy Study charts at a distance of 4 m and was converted to logMAR visual acuity for statistical purposes. A slit-lamp examination, IOP measurement, and funduscopy were performed on all patients and controls. 
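For reference, logMAR and the approximate Snellen fractions quoted in this paper are related by the definition logMAR = log10(MAR): the Snellen denominator is 20 × 10^logMAR. A minimal sketch of this conversion (the helper name is ours, not part of the study protocol):

```python
# Convert logMAR visual acuity to an approximate Snellen fraction.
# logMAR = log10(MAR), where MAR is the minimum angle of resolution
# in arcminutes; Snellen 20/x corresponds to MAR = x / 20.

def logmar_to_snellen(logmar: float) -> str:
    denominator = 20 * 10 ** logmar
    return f"20/{denominator:.0f}"

print(logmar_to_snellen(0.5))   # patients' mean acuity -> 20/63
print(logmar_to_snellen(0.07))  # controls' mean acuity -> 20/23
```

This reproduces the approximate Snellen value of 20/63 reported for the patients' mean acuity of 0.5 logMAR.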
Imaging Procedures and Lesion Size Measurement.
The diagnosis of neovascular AMD was confirmed by fluorescein angiography, as described elsewhere. 8 The lesion's area (in mm2) and greatest diameter were measured from digital angiograms by using image analysis software (Heidelberg Engineering, Heidelberg, Germany) to outline the lesion. 28,29 Clinical assessments and experimental tasks were performed during the same visit. 
Stimuli and Apparatus
The stimuli were displayed on a 30-inch color monitor (Dell, Round Rock, Texas) connected to a computer (Dell T 3400). The stimuli consisted of 14 different backgrounds (color photographs of natural scenes) taken from a large commercial CD database (Corel Corp., Ottawa, ON, Canada). Scenes were manipulated so that the background and object could be either congruent or incongruent. Joubert et al. 30 have shown that simply pasting an object onto another photograph affects performance. To control for the effect of manipulating scenes, the target objects (animals and pieces of furniture) were cut out of their original photograph and pasted onto another background. Each object was pasted onto a congruent background (i.e., a natural scene for an animal or an indoor scene for a piece of furniture) and an incongruent background (i.e., an indoor scene for an animal or a natural scene for a piece of furniture) at the same spatial location. Quartets of images were built so that each animal and each piece of furniture appeared on the same background (once in a natural scene and once in an indoor scene). Examples are shown in Figure 1. The quartets had been built in the CerCo laboratory in Toulouse, France, in order to study how aging modulates the influence of context on object processing. 31 We selected 14 CerCo quartets in which the target object was relatively large and not camouflaged. The photographs were displayed on a light gray background (56.2 cd/m2). The software was developed in-house in C++. Half of the scenes contained an animal and the other half contained a piece of furniture. The image resolution was 768 (horizontal) × 512 (vertical) pixels, with a screen resolution of 2560 × 1600 pixels. At a viewing distance of 1 m, the angular size of the pictures was 20° horizontally and 15° vertically. Responses were given via a key box connected to the computer. 
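The quoted angular sizes follow from simple trigonometry: an image of physical extent w viewed at distance d subtends 2·atan(w / 2d). A small sketch (the function name and the physical extents are ours; the 20° × 15° figures imply an on-screen image of roughly 35 × 26 cm at 1 m):

```python
import math

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Visual angle (degrees) subtended by a stimulus of physical
    extent size_m viewed from distance_m."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# An image ~35.3 cm wide viewed at 1 m subtends ~20 degrees,
# matching the horizontal extent reported for the stimuli.
print(round(visual_angle_deg(0.353, 1.0), 1))  # -> 20.0
```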
Figure 1
 
Quartets of scenes: each animal and each piece of furniture appeared on the same congruent and incongruent backgrounds. In the present experiment, congruency was defined as a natural background for an animal and an indoor scene for a piece of furniture.
Procedure
After the display of a black (5°) central fixation cross for 500 ms, an image was centrally presented on the screen. Both tasks used the same pictures, the same participants, and the same presentation conditions. 
In the “context” task, participants were given the name of a target verbally at the beginning of the experiment and asked to press a key when they saw the target. Half of the participants in each group were given an animal as a target and half were given a piece of furniture as a target. There were 56 trials, defined by the context (congruent/incongruent), the category of the target (animal/furniture), and the 14 different backgrounds. Hence, there were 28 trials with animals and 28 trials with pieces of furniture. For each of the two categories of target object, there were 14 trials with a congruent background (e.g., an animal set against a natural background) and 14 with an incongruent background (e.g., an animal in an indoor scene). The animals, pieces of furniture, and congruent and incongruent contexts were presented randomly and in equal proportions. Participants were told that 50% of the photographs would contain the target but not that it could appear on a congruent or an incongruent background. 
In the “congruency” task, participants were asked to press a key only when the foreground object and the background scene were congruent and to refrain from pressing a key when they were incongruent. They were told that “congruency” meant an animal on a natural background or a piece of furniture in an indoor scene. Before the task, participants were shown examples of congruent and incongruent combinations on paper. 
The same 56 images were used in the context and in the congruency tasks but presented in a different order. In both tasks, the exposure duration was set to 300 ms for patients and 100 ms for control participants, to allow a single fixation only. The shorter exposure time for controls than for patients was based on a pilot study showing that normally sighted people reached a ceiling in performance with an exposure time of 300 ms. The intertrial interval was set to 2 seconds. The experiment lasted about 5 minutes for each task. 
Correct responses thus included “hits” (e.g., pressing the key when an animal was displayed and the target was an animal) and correct rejections (e.g., not pressing the key when a piece of furniture was displayed and the target was an animal). Errors were false alarms (e.g., pressing the key when a piece of furniture was displayed and the target was an animal) and omissions (e.g., not pressing the key when an animal was displayed and the target was an animal). A d' index of sensitivity was computed on the data. 
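The d' sensitivity index combines the hit and false-alarm rates through the inverse normal CDF: d' = z(hit rate) − z(false-alarm rate). A minimal sketch of the computation (the rates below are illustrative; the correction for rates of exactly 0 or 1 is one common convention, not necessarily the one used in the study):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float, n: int = 28) -> float:
    """Sensitivity index d' = z(H) - z(F).

    Rates of exactly 0 or 1 are pulled in by 1/(2n), a common
    correction for finite trial counts (here n = 28 trials per
    condition, as in the context task)."""
    def clip(p: float) -> float:
        return min(max(p, 1 / (2 * n)), 1 - 1 / (2 * n))

    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(clip(hit_rate)) - z(clip(fa_rate))

# e.g., 90% hits with 10% false alarms:
print(round(d_prime(0.90, 0.10), 2))  # -> 2.56
```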
Statistical Analysis
All analyses were carried out with SPSS software (version 18.0 for Macintosh; SPSS Inc., Chicago, IL). Analyses of variance were performed on response times (RTs) and on the index of sensitivity (d') for each task. The d' data were also analyzed using a generalized linear mixed model. In the context task, the between-subjects factors were group (people with AMD versus normally sighted controls) and target category (animal versus piece of furniture), and the within-subjects factor was the type of background (congruent versus incongruent). In the congruency task, the between-subjects factor was group (people with AMD versus controls) and the within-subjects factor was object category (animal versus piece of furniture). Performance in terms of d' is presented in Figure 2. The distribution of response times in the context task is shown in Figure 3.
Figure 2
 
Top: d' (confidence intervals 0.95) averaged over each target category and as a function of group (people with AMD and normally sighted, age-matched controls) and relations between the object and the background (congruent/incongruent) in the context task. Bottom: d' (confidence intervals 0.95) as a function of group (people with AMD and normally sighted, age-matched controls) and the category of object in the congruency task.
Figure 3
 
Distribution of response times in the context task as a function of group (patients with AMD versus normally sighted controls) and the relationship between the background and the target object (congruent versus incongruent).
The Context Task
RTs were significantly shorter (by 159 ms; F 1,36 = 11.5, P < 0.002) for controls than for people with AMD, but sensitivity did not differ significantly between groups (controls: d' = 3.32 versus patients: d' = 2.91; F 1,38 = 1.06, NS). Sensitivity was higher (F 1,24 = 4.10, P = 0.05) for congruent object and background pairs (d' = 3.36) than for incongruent object and background pairs (d' = 2.87). However, RTs did not differ significantly (513 ms and 525 ms for congruent and incongruent backgrounds, respectively; F 1,36 = 0.59, NS). There was no main effect of target category for either sensitivity (F 1,38 = 0.57, NS) or RTs (F 1,36 = 0.9, NS). The better sensitivity for the congruent background was observed for both categories of targets, and there was no significant interaction involving category. Figure 3 shows that the advantage for congruent object and background pairs over incongruent pairs occurred earlier for controls (peaking at approximately 350–400 ms) than for patients (peaking at approximately 500–550 ms). Controls scored the same proportion of hits and correct rejections (0.90 and 0.91, respectively), whereas patients produced more correct rejections than hits (0.92 and 0.79, respectively). In terms of errors, patients did not record significantly more false alarms than controls (0.08 vs. 0.10, respectively), but the proportion of omissions was higher for patients than for controls (0.21 vs. 0.09, respectively; F 1,36 = 6.99, P < 0.012). 
The Congruency Task
The only significant main effect was that of group, with a higher sensitivity for controls than for patients (d' = 2.06 vs. 1.08, respectively; F 1,38 = 15.25, P < 0.001). The proportion of correct detections (i.e., the hit rate) for congruency between the object and its background was very low for patients and did not differ from chance (furniture: 0.48, t21 = 0.4, NS; animals: 0.48, t21 = 0.3, NS), whereas it did differ from chance for controls (furniture: 0.65, t17 = 2.87, P < 0.01; animals: 0.75, t17 = 3.89, P < 0.001). As in the context task, the results showed a bias toward correct rejections in the patient group (0.8), whereas the numbers of hits and correct rejections were similar in the control group (hits: 0.7; correct rejections: 0.79). There were no significant differences between the two categories of objects in terms of either sensitivity or RTs (animals: d' = 1.66 and 769 ms; furniture: d' = 1.47 and 776 ms; F values < 1). 
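The chance-level comparisons above are one-sample t-tests of per-participant hit rates against 0.5. A sketch of the statistic (the rates below are made up for illustration, not the study's data):

```python
from math import sqrt
from statistics import mean, stdev

def t_vs_chance(rates: list[float], chance: float = 0.5) -> float:
    """One-sample t statistic for a set of per-participant hit
    rates against a chance level (df = len(rates) - 1)."""
    n = len(rates)
    return (mean(rates) - chance) / (stdev(rates) / sqrt(n))

# Hypothetical rates clustered around 0.5 give a t near zero
# (chance-level performance, as for the patient group), whereas
# rates clearly above 0.5 give a large t (as for the controls).
near_chance = [0.45, 0.50, 0.52, 0.48, 0.55, 0.50]
above_chance = [0.70, 0.65, 0.75, 0.72, 0.68, 0.74]
print(round(t_vs_chance(near_chance), 2))
print(round(t_vs_chance(above_chance), 2))
```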
Correlations between Task Performance and Clinical Data
Visual acuity was correlated with lesion size, in terms of both greatest diameter (r = 0.65, P < 0.0001) and surface area (r = 0.7, P < 0.0001). No significant correlation was found between visual acuity and accuracy in either the context task or the congruency task. A correlation between accuracy and lesion size was found when the object was incongruent with the background (diameter: r = −0.41, P = 0.05; surface area: r = −0.46, P = 0.2). A significant correlation between lesion size and RTs was observed when the object was congruent with the background (diameter: r = 0.52, P = 0.007; surface area: r = 0.43, P = 0.031). 
Discussion
Studies on normally sighted people have reported that performance is facilitated when an object is consistent with its background (relative to an inconsistent context). This effect of context has been found in a variety of tasks, including visual search 23 and identification 14,21,22 ; however, the results have been less clear-cut for detection tasks. For example, Biederman et al. 9 and Boyce and colleagues 10,11 reported that detection of an object (e.g., a sofa) was better when it was semantically consistent with its background (i.e., an indoor scene) than when it was semantically inconsistent (i.e., a street scene). In these studies, an object's name was first displayed, followed by a mask (random lines) comprising a cue (a black dot) and then the scene. Participants were asked to decide whether or not the object located at the cue matched the word (yes/no). Hollingworth and Henderson 20 demonstrated that the context effect in this task resulted from a bias. Indeed, they observed facilitation for consistent object and background pairs when the label was displayed before the scene (in experiment 1) but not for the detection of consistent versus inconsistent object and background pairs when the target's label was displayed after the scene (in experiment 3). They suggested that when the label is displayed before the scene, participants exhibit a response bias, with a tendency to respond “yes” more frequently when the target label (rather than the object) is consistent with the scene. 
In people with AMD, several studies have investigated the effect of background information on object recognition. Boucart et al. 7 reported that object categorization was better when objects were located on a homogeneous white background than when the same objects appeared in a scene. Along the same lines, Bordier et al. 32 found a reduction in the spatial frequency bandwidth necessary for image identification when the background was attenuated by lowering its luminance (i.e., the background of the base image was selectively darkened by 80% of its original luminance), both for patients with AMD and for young observers. These results suggest that a reduction in crowding improves categorization and identification performance in people who must rely on their peripheral vision. More recently, Wiecek et al. 33 compared visual search performance across different versions of scenes (e.g., the original image, a gray-scale version of the original image, an edge-segmented image, objects on a uniform gray background, and a 50% contrast-masked background). Participants were asked to detect a predefined target object in a scene. They found no significant difference in search duration or accuracy across the different image manipulations; in their study, however, vision was binocular and there was no time constraint. They suggested that the lack of improvement may be attributed to a decrease in contextual information about the target object when it is removed from its background. Consistent with this account, Tran et al. 8 found that a meaningful, consistent background facilitated object categorization in patients with AMD as compared with the same object on a meaningless background. 
In the present study, both people with central vision loss and control participants benefited from contextual information in the object detection task. Indeed, in the context task, performance was better for congruent object and background pairs than for incongruent object and background pairs, even though participants were specifically asked to attend selectively to the object. Hence, our results do not indicate a response bias induced by verbal communication of a target label before the session. 20 Indeed, this type of bias would have been reflected by a higher proportion of false alarms. A bias to more omissions and more correct rejections (i.e., the absence of a key press) was observed in the patients' group, suggesting that patients pressed the key when they were confident that their designated target was indeed present. 
The facilitation for congruent object and background pairs in the context task (in which participants were asked to attend only to the object) suggests implicit processing of contextual information. However, when asked to explicitly associate the object with its setting (in the congruency task), the same patients had trouble processing the images that they had just seen in the context task. Normally sighted people also showed worse performance in the congruency task than in the context task, although their proportion of hits was much better than chance (whereas it was no better than chance for people with AMD). Why, then, did contextual information influence object detection implicitly but not explicitly in people with central vision loss? A recent study 34 by our team investigating scene exploration with an eye tracker showed that people with AMD systematically fixated outside the region of interest (the central object in a scene) but were nevertheless able to name the object correctly in more than 70% of cases (although they used the category name, e.g., “an animal,” more than the exact name, e.g., “a bear”). A coarse peripheral perception of the object may have been enough to dissociate an animal from a piece of furniture in the context task but insufficient to decide whether the same object was congruent with its background in the congruency task. Normally sighted elderly participants were also impaired in the explicit association of an object and background as compared with the implicit processing of background information in the context task. This is consistent with studies showing a deficit in “binding” an object to its background in the elderly, which has been attributed to a reduction in attentional resources in older people. 35  
In the context task, the advantage for a congruent object and background pair over an incongruent object and background pair occurred at approximately 400 ms in normally sighted controls (see Fig. 3), which is in the normative range for young adults. 36 This suggests that context might influence object categorization early (i.e., at a predecisional stage). The benefit from congruency occurred 100 to 150 ms later for people with AMD, even though the latter were provided with 200 ms more exposure time than normally sighted controls were. Given that perception of the foreground object was impaired by central vision loss, it may be that people with AMD processed the peripheral background in the context task and tried to infer the category of object from that background. When the background depicted an indoor scene, the object was likely to be a piece of furniture and, conversely, when the background was a natural scene, the object was likely to be an animal. This type of strategy should have increased the false alarm rate unless patients pressed the key only when their degree of confidence was high and refrained from pressing the key if their confidence was low, thus increasing the RTs and the number of omissions and correct rejections. 
Our knowledge of the indoor and outdoor visual environments dictates our predictions about what objects to expect in a scene and where to expect them. These context-driven predictions facilitate object recognition and object search. 23 If contextual information is to assist the recognition process, it has to be extracted rapidly. Bar 37 suggested that rapid extraction is mediated by global cues that are conveyed by low spatial frequencies in the image. This coarse information is projected early and rapidly from the visual cortex to the prefrontal cortex and the parahippocampal cortex (possibly through the magnocellular pathway), where it can activate a scene schema. This representation is then refined and further substantiated by specific details that are gradually conveyed by higher spatial frequencies. This model is supported by brain imaging studies in normally sighted young adults. 38,39 There is evidence to suggest that the retinal cells involved in the magnocellular pathway are present in greater numbers in the periphery of the retina. 40,41 Research has shown that people with AMD are able to categorize scenes as natural/urban or indoor/outdoor with high accuracy at an exposure duration that prevents more than a single fixation. 42 We therefore hypothesize that the peripheral part of the scene was processed by people with AMD at much the same speed as controls (although not necessarily consciously) and was used to make inferences about the category of object likely to be found in that context. Animal studies suggest that the periphery is involved to some extent in shape perception in the dorsal stream, which is related to the magnocellular pathway. 43,44 For instance, Palmer and Rosa 45 found that the dorsomedial visual area can contribute to shape perception (i.e., the extraction of contours) when there is significant input from peripheral vision. 
Furthermore, the representation of the far retinal periphery in middle temporal area receives specific connections from parahippocampal and retrosplenial areas, 46 both of which are involved in scene perception. 47,48  
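The coarse, low-spatial-frequency information that Bar's model assigns to the fast magnocellular route can be approximated by low-pass filtering an image in the frequency domain. The sketch below is purely illustrative and is not the stimulus manipulation used in this study; the random "scene" and cutoff value are arbitrary.

```python
import numpy as np

def low_pass(image, cutoff_cycles):
    """Keep only spatial frequencies below `cutoff_cycles` (cycles per
    image), approximating the coarse 'gist' information carried by low
    spatial frequencies.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    mask = np.hypot(yy, xx) <= cutoff_cycles  # circular low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# A random 'scene': after filtering, fine detail is lost but the
# coarse luminance layout (and the mean) is preserved.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
coarse = low_pass(scene, cutoff_cycles=8)
```

Filtering in the Fourier domain makes the model's claim concrete: a few cycles per image suffice to convey the global layout of a scene, while object identity requires the higher frequencies that are removed here.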
Our study showed that people with central vision loss are able to implicitly associate an object with its background, with a success rate (>75%) that greatly exceeds chance. However, we used only two categories of object (animals and pieces of furniture), two categories of environment (indoor scenes and natural scenes), a small number of pictures, and a detection task. Whether the facilitation with congruent object and background pairs would persist under more realistic conditions (i.e., in an identification task with a large variety of objects and backgrounds and more crowded scenes) remains to be established. 
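Whether a success rate of 75% exceeds chance can be verified with a one-sided exact binomial test, assuming a 50% chance level for a present/absent detection task. The trial count below is hypothetical, chosen only to illustrate the computation.

```python
from math import comb

def binom_p_value(successes, trials, chance=0.5):
    """One-sided exact binomial test: P(X >= successes) when every
    response is a guess at the given chance level. A small p-value
    indicates above-chance performance.
    """
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical figures: 90 correct out of 120 trials (75%) vs. 50% chance.
p = binom_p_value(successes=90, trials=120)
```

With these illustrative numbers the p-value is vanishingly small, whereas performance at exactly the chance mean (60/120) would yield p > 0.5, confirming the test behaves as expected.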
Acknowledgments
The authors are grateful to Steven Ola and Laure-Marine Piquet for testing the participants and to Nathalie Vayssière and Nadège Bacon-Macé (from the CerCo laboratory in Toulouse) for providing the pictures used in the study. 
References
Legge GE Rubin GS Pelli DG Schleske MM. Psychophysics of reading—II. Low vision. Vision Res . 1985; 25: 253–265. [CrossRef] [PubMed]
Legge GE Rubin GS Luebker A. Psychophysics of reading—V. The role of contrast in normal vision. Vision Res . 1987; 27: 1165–1177. [CrossRef] [PubMed]
Legge GE Ross JA Luebker A LaMay JM. Psychophysics of reading. VIII. The Minnesota Low-Vision Reading Test. Optom Vis Sci . 1989; 66: 843–853. [CrossRef] [PubMed]
Peli E Goldstein RB Young GM. Image enhancement for the visually impaired. Simulations and experimental results. Invest Ophthalmol Vis Sci . 1991; 32: 2337–2350. [PubMed]
Tejeria L Harper RA Artes PH Dickinson CM. Face recognition in age related macular degeneration: perceived disability, measured disability, and performance with a bioptic device. Br J Ophthalmol . 2002; 86: 1019–1026. [CrossRef] [PubMed]
Boucart M Dinon JF Despretz P. Recognition of facial emotion in low vision: a flexible usage of facial features. Vis Neurosci . 2008; 25: 603–609. [PubMed]
Boucart M Despretz P Hladiuk K Desmettre T. Does context or colour improve object recognition in patients with macular degeneration? Vis Neurosci . 2008; 25: 685–691. [CrossRef] [PubMed]
Tran THC Guyader N Guérin A Despretz P Boucart M. Figure ground discrimination in age related macular degeneration. Invest Ophthalmol Vis Sci . 2011; 52: 1655–1660. [CrossRef] [PubMed]
Biederman I Mezzanotte RJ Rabinowitz JC. Scene perception: detecting and judging objects undergoing relational violations. Cogn Psychol . 1982; 14: 143–177. [CrossRef] [PubMed]
Boyce SJ Pollatsek A. Identification of objects in scenes: the role of scene background in object naming. J Exp Psychol Learn Mem Cogn . 1992; 18: 531–543. [CrossRef] [PubMed]
Boyce SJ Pollatsek A Rayner K. Effect of background information on object identification. J Exp Psychol Hum Percept Perform . 1989; 15: 556–566. [CrossRef] [PubMed]
De Graef P Christiaens D d'Ydewalle G. Perceptual effects of scene context on object identification. Psychol Res . 1990; 52: 317–329. [CrossRef]
Henderson JM Weeks PA Hollingworth A. The effects of semantic consistency on eye movements during complex scene viewing. J Exp Psychol Hum Percept Perform . 1999; 25: 210–218. [CrossRef]
Davenport JL. Consistency effects between objects in scenes. Mem Cognit . 2007; 35: 393–401. [CrossRef] [PubMed]
Ganis G Kutas M. An electrophysiological study of scene effects on object identification. Brain Res Cogn Brain Res . 2003; 16: 123–144. [CrossRef] [PubMed]
Goh JO Siong SC Park D Gutchess A Hebrank A Chee MW. Cortical areas involved in object, background, and object-background processing revealed with functional magnetic resonance adaptation. J Neurosci . 2004; 24: 10223–10228. [CrossRef] [PubMed]
Bar M Aminoff E. Cortical analysis of visual context. Neuron . 2003; 38: 347–358. [CrossRef] [PubMed]
Mudrik L Lamy D Deouell LY. ERP evidence for context congruity effects during simultaneous object-scene processing. Neuropsychologia . 2010; 48: 507–517. [CrossRef] [PubMed]
Kirk U. The neural basis of object-context relationships on aesthetic judgment. PLoS One . 2008; 3: e3754. [CrossRef] [PubMed]
Hollingworth A Henderson JM. Does consistent scene context facilitate object perception? J Exp Psychol Gen . 1998; 127: 398–415. [CrossRef] [PubMed]
Palmer SE. Effects of contextual scenes on identification of objects. Memory & Cognition . 1975; 3: 519–526. [CrossRef] [PubMed]
Davenport JL Potter MC. Scene consistency in object and background perception. Psychol Sci . 2004; 15: 559–564. [CrossRef] [PubMed]
Torralba A Oliva A Castelhano M Henderson JM. Contextual guidance of eye movements in real-world scenes: the role of global features on object search. Psychol Rev . 2006; 113: 766–786. [CrossRef] [PubMed]
Land MF Hayhoe MM. In what ways do eye movements contribute to everyday activities? Vision Res . 2001; 41: 3559–3565. [CrossRef] [PubMed]
Hayhoe M Ballard D. Eye movements in natural behavior. Trends Cogn Sci . 2005; 9: 188–193. [CrossRef] [PubMed]
Tatler BW Hayhoe MM Land MF Ballard DH. Eye guidance in natural vision: reinterpreting salience. J Vis . 2011; 11: 5. [CrossRef] [PubMed]
Pelli DG Palomares M Majaj NJ. Crowding is unlike ordinary masking: distinguishing feature integration from detection. J Vis . 2004; 4: 1136–1169. [PubMed]
Barbazetto I Burdan A Bressler NM. Photodynamic therapy of subfoveal choroidal neovascularization with verteporfin: fluorescein angiographic guidelines for evaluation and treatment—TAP and VIP report No. 2. Arch Ophthalmol . 2003; 121: 1253–1268. [CrossRef] [PubMed]
Hogg R Curry E Muldrew A. Identification of lesion components that influence visual function in age related macular degeneration. Br J Ophthalmol . 2003; 87: 609–614. [CrossRef] [PubMed]
Joubert OR Rousselet GA Fize D Fabre-Thorpe M. Processing scene context: fast categorization and object interference. Vision Res . 2007; 47: 3286–3297. [CrossRef] [PubMed]
Saint-Aubert L Rémy F Bacon-Macé N Barbeau E Vayssière NM Fabre-Thorpe M. Object categorization in natural scenes: the use of context increases with aging. Vision Res . In press.
Bordier C Petra J Dauxerre C Vital-Durand F Knoblauch K. Influence of background on image recognition in normal vision and age-related macular degeneration. Ophthalmic Physiol Opt . 2011; 31: 203–215. [CrossRef] [PubMed]
Wiecek E Jackson ML Dakin SC Bex P. Visual search with image modification in age-related macular degeneration. Invest Ophthalmol Vis Sci . 2012; 53: 6600–6609. [CrossRef] [PubMed]
Thibaut M Tran THC Boucart M. Object and scene exploration in people with age related macular degeneration. Paper presented at: European Conference on Visual Perception; September 2–6, 2012; Alghero, Italy.
Chee MW Goh JO Venkatraman V. Age-related changes in object processing and contextual binding revealed using fMR adaptation. J Cogn Neurosci . 2006; 18: 495–507. [CrossRef] [PubMed]
Joubert OR Fize D Rousselet GA Fabre-Thorpe M. Early interference of context congruence on object processing in rapid visual categorization of natural scenes. J Vis . 2009; 8: 11–18. [CrossRef]
Bar M. Visual objects in context. Nat Rev Neurosci . 2004; 5: 617–629. [CrossRef] [PubMed]
Fenske MJ Aminoff E Gronau N Bar M. Top-down facilitation of visual object recognition: object-based and context-based contributions. Prog Brain Res . 2006; 155: 3–21. [PubMed]
Kveraga K Boshyan J Bar M. Magnocellular projections as the trigger of top-down facilitation in recognition. J Neurosci . 2007; 27: 13232–13240. [CrossRef] [PubMed]
Malpeli JG Lee D Baker FH. Laminar and retinotopic organization of the macaque lateral geniculate nucleus: magnocellular and parvocellular magnification functions. J Comp Neurol . 1996; 375: 363–377. [CrossRef] [PubMed]
Meissirel C Wikler KC Chalupa LM Rakic P. Early divergence of magnocellular and parvocellular functional subsystems in the embryonic primate visual system. Proc Natl Acad Sci U S A . 1997; 94: 5900–5905. [CrossRef] [PubMed]
Tran THC Rambaud C Despretz P Boucart M. Scene perception in age-related macular degeneration (AMD). Invest Ophthalmol Vis Sci . 2010; 51: 6868–6874. [CrossRef] [PubMed]
Nassi JJ Callaway EM. Parallel processing strategies of the primate visual system. Nat Rev Neurosci . 2009; 10: 360–372. [CrossRef] [PubMed]
Tapia E Breitmeyer BG. Visual consciousness revisited: magnocellular and parvocellular contributions to conscious and nonconscious vision. Psychol Sci . 2011; 22: 934–942. [CrossRef] [PubMed]
Palmer SM Rosa MG. A distinct anatomical network of cortical areas for analysis of motion in far peripheral vision. Eur J Neurosci . 2006; 24: 2389–2405. [CrossRef] [PubMed]
Rosa MG Palmer SM Gamberini M. Connections of the dorsomedial visual area: pathways for early integration of dorsal and ventral streams in extrastriate cortex. J Neurosci . 2009; 29: 4548–4563. [CrossRef] [PubMed]
Epstein RA. Parahippocampal and retrosplenial contributions to human spatial navigation. Trends Cogn Sci . 2008; 12: 388–396. [CrossRef] [PubMed]
Epstein RA. Cognitive neuroscience: scene layout from vision and touch. Curr Biol . 2011; 21: R437–R438. [CrossRef] [PubMed]
Footnotes
 Supported by a grant “LowVision” from the French National Research Agency (MB).
 Disclosure: M. Boucart, None; C. Moroni, None; S. Szaffarczyk, None; T.H.C. Tran, None
Figure 1
 
Quartets of scenes: each animal and each piece of furniture appeared on the same congruent and incongruent backgrounds. In the present experiment, congruency was defined as a natural background for an animal and an indoor scene for a piece of furniture.
Figure 2
 
Top: d' (with 95% confidence intervals) averaged over each target category, as a function of group (people with AMD versus normally sighted, age-matched controls) and the relation between the object and the background (congruent/incongruent) in the context task. Bottom: d' (with 95% confidence intervals) as a function of group (people with AMD versus normally sighted, age-matched controls) and the category of object in the congruency task.
Figure 3
 
Distribution of response times in the context task as a function of group (patients with AMD versus normally sighted controls) and the relationship between the background and the target object (congruent versus incongruent).
Table 1. 
 
Inclusion and Exclusion Criteria for the Participants with AMD
Inclusion criteria
 Willing to give informed consent
 Well-defined neovascular AMD with subfoveal involvement, confirmed by fluorescein angiography
 Best corrected visual acuity between 20/40 and 20/400 in the eye to be studied
 Refraction between +3 D and −3 D
Exclusion criteria
 History of any neurological or psychiatric disease
 History of ophthalmologic disease other than AMD that might compromise visual acuity or peripheral vision during the study (amblyopia, uncontrolled glaucoma, cataract, optic neuropathy, diabetic retinopathy, uveitis)
 Unable to communicate (deafness)
 Treated with medication that might compromise concentration (benzodiazepines, neuroleptics)
 Mental deterioration with MMSE <24
Table 2. 
 
Demographic and Clinical Data of the Study Population
Participants with AMD, n = 22
 Age, y, mean ± SD (range): 78 ± 7 (61–87)
 Sex, male/female: 8/14
 MMSE, mean ± SD: 27.5 ± 1.68
 LogMAR VA, mean ± SD: 0.5 ± 0.2
 Lesion size, mm², mean ± SD (range): 6.6 ± 3.9 (1.5–13.2)
 Greatest lesion diameter, mm, mean ± SD (range): 3.2 ± 1.1 (1.35–4.60)
Normally sighted controls, n = 18
 Age, y, mean ± SD (range): 76 ± 5 (64–89)
 Sex, male/female: 3/15
 LogMAR VA, mean ± SD: 0.07 ± 0.05
 MMSE, mean ± SD: 28.4 ± 1.9