Article | February 2014
Face perception is tuned to horizontal orientation in the N170 time window
Author Affiliations
  • Corentin Jacques
    Research Institute for Psychological Science (ISPSY), UC Louvain, Belgium
    Institute of Neuroscience (IONS), UC Louvain, Belgium
    Department of Psychology, Stanford University, Stanford, CA, USA
    corentin.g.jacques@uclouvain.be
  • Christine Schiltz
    EMACS Unit, Faculty of Language and Literature, Humanities, Arts and Education (FLSHASE), University of Luxembourg, Luxembourg
    christine.schiltz@uni.lu
  • Valerie Goffaux
    Research Institute for Psychological Science (ISPSY), UC Louvain, Belgium
    Institute of Neuroscience (IONS), UC Louvain, Belgium
    Department of Cognitive Neuroscience, Maastricht University, The Netherlands
    valerie.goffaux@uclouvain.be
Journal of Vision February 2014, Vol.14, 5. doi:https://doi.org/10.1167/14.2.5
Abstract
The specificity of face perception is thought to reside both in its dramatic vulnerability to picture-plane inversion and in its strong reliance on horizontally oriented image content. Here we asked when in the visual processing stream face-specific perception is tuned to horizontal information. We measured behavioral performance and scalp event-related potentials (ERPs) while participants viewed upright and inverted images of faces and cars (and natural scenes) that were phase-randomized in a narrow orientation band centered either on vertical or horizontal orientation. For faces, the magnitude of the inversion effect (IE) on behavioral discrimination performance was significantly reduced for horizontally randomized compared to vertically or nonrandomized images, confirming the importance of horizontal information for the recruitment of face-specific processing. Inversion affected the processing of nonrandomized and vertically randomized faces early, in the N170 time window. In contrast, the magnitude of the N170 IE was much smaller for horizontally randomized faces. The present research indicates that early face-specific neural representations are preferentially tuned to horizontal information and offers new perspectives for a description of the visual information feeding face-specific perception.

Introduction
How the human visual system decodes face identity in everyday—i.e., cluttered and noisy—visual environments has been a matter of extensive research for more than 40 years. Since the seminal study by Yin (1969), the processing of faces has been known to differ from the processing of other complex visual categories in a fundamental aspect: its vulnerability to picture-plane inversion. A variety of paradigms have shown that inversion in the picture plane disrupts the ability to capture the idiosyncratic properties of faces, hampering the discrimination and recognition of (un)familiar faces (Bartlett & Searcy, 1993; Farah, Tanaka, & Drain, 1995; McKone & Yovel, 2009; Rhodes, Brake, & Atkinson, 1993; Rossion, 2008) as well as the detection of faces in a visual scene (Lewis & Edmonds, 2003; Rousselet, Mace, & Fabre-Thorpe, 2003). Since inversion affects the processing of faces more than the processing of other complex visual categories (Yin, 1969), the face inversion effect (IE) is considered one of the most robust sources of evidence for the specificity of the brain mechanisms used to process faces (Ellis, 1975). 
Planar inversion generates a global 180° shift of the image phase orientation but preserves intrinsic stimulus properties. As a consequence, any difference between upright and inverted face processing is thought to reveal the observer-dependent biases specifically at work while processing upright faces rather than differences in visual input (i.e., image) properties. While several authors proposed that inversion simply makes face perception less efficient overall (Gold, Mundy, & Tjan, 2012; Sekuler, Gaspar, Gold, & Bennett, 2004; Valentine, 1988; Yovel & Kanwisher, 2004), a large body of evidence indicates that inversion impairs the interdependent processing of inner facial features selectively (Bartlett & Searcy, 1993; Barton, Keenan, & Bass, 2001; Farah et al., 1995; Leder & Bruce, 2000; Rhodes et al., 1993; Sergent, 1984; Young, Hellawell, & Hay, 1987; for a review see Rossion, 2008). This body of evidence was taken to suggest that the mechanisms selectively called upon to process (upright) faces rely on the interdependent (i.e., “holistic”) encoding of face information. 
In contrast to the numerous studies investigating face-specific visual processing, fewer studies address the nature of the visual information being specifically extracted in upright faces. Considering the information being extracted during face perception is critical since knowing what the brain processes is necessary to understand how it operates (Garner, 1970). The empirical literature on face-specific processing most often relies on the manipulation (e.g., displacement, replacement, etc.) of complex facial features (eyes, nose, etc.), which are implicitly assumed to constitute the visual building blocks of face representations (e.g., Haxby, Hoffman, & Gobbini, 2000). However, the exact influence of these complex manipulations on the visual processing of faces is poorly understood (e.g., McKone & Yovel, 2009). A few authors took a different approach to tackle the visual information feeding face-specific processing. This approach is grounded in the knowledge of how retinal input is decomposed at the earliest cortical stages of visual processing, i.e., in primary visual cortex (V1). Hence, since Hubel and Wiesel (1962), it has been well known that V1 neurons are tuned to orientation and cluster according to their orientation preference, making orientation a fundamental encoding dimension or building block of primary visual processing. A study by Dakin and Watt (2009) indicated that this primary dimension is important to consider for a better understanding of the nature of high-level and category-specific face representations. They asked subjects to name images of faces of celebrities that were filtered in the two-dimensional (2-D) Fourier domain to preserve information in a narrow orientation band. In their experiment, recognition performance peaked when horizontal information was preserved in the stimulus and decayed when moving the filter away from this orientation; the worst performance was measured for faces that only contained vertical information. 
In other words these findings indicate that the recognition of celebrities is tuned to a particular range of image orientation content. Goffaux and Dakin (2010) confirmed and emphasized the relevance of horizontal information in face-specific processing by showing that the advantage for processing horizontal information is lost when the faces are turned upside-down (see also Goffaux, van Zon, & Schiltz, 2011; Pachai, Sekuler, & Bennett, 2013b), indicating that the tuning of face perception to the horizontal orientation range stems from observer-dependent mechanisms and not merely from the physical properties of the face image. In contrast, the latter effect was not observed when subjects discriminated horizontally and vertically filtered pictures of cars or natural scenes. Moreover, in the same paper, it was further shown that the interdependent processing of face features is also driven by horizontal face information. Altogether these findings suggested that face-specific processing is tuned to the horizontally oriented content of the face image, therefore offering a parsimonious description of the visual information that is specifically processed when viewing faces. 
So far, support for the horizontal tuning of face-specific processing has come from behavioral research. Since behavioral measures represent the output of a sequence of processing steps spanning from computations performed in the retina to motor execution, the processing stage at which face perception is tuned to horizontal orientation is still elusive. In healthy humans, a precise and continuous estimation of the timing of perceptual processes can best be performed by recording the scalp electroencephalogram (EEG) and analyzing the event-related potentials (ERPs) triggered by sensory stimuli. By means of EEG and ERP, the present work addresses exactly when in the visual processing chain face-specific processing tunes to the horizontal orientation range. Previous ERP studies in humans have consistently shown that face-specific processing emerges in early stages of visual processing, i.e., in the N170 time window. The N170 is an ERP component recorded over bilateral occipito-temporal (OT) scalp regions (with right hemisphere dominance) between 120 and 200 ms after stimulus onset, and is larger in amplitude for faces compared to nonface stimuli (Bentin, McCarthy, Perez, Puce, & Allison, 1996; Rossion & Jacques, 2008, 2011). The larger N170 for faces starts around 110–130 ms after stimulus onset when controlling stimuli for low-level image properties (i.e., Fourier amplitude spectrum; Rossion & Caharel, 2011; Rousselet, Husk, Bennett, & Sekuler, 2008) and is thought to represent the activation of face-specific neuronal populations in the OT cortex supporting the categorization of a face as a face (i.e., face detection; Ganis, Smith, & Schendan, 2012; Rossion & Jacques, 2008). Besides the coding of faces at the basic-category level, this time window is also thought to encompass the encoding of individual face representations (Caharel, d'Arripe, Ramon, Jacques, & Rossion, 2009; Jacques & Rossion, 2006; Jacques, d'Arripe, & Rossion, 2007). 
Numerous ERP studies have shown that the N170 is delayed and increased in amplitude for inverted compared to upright faces (Itier, Latinus, & Taylor, 2006; Itier & Taylor, 2004a; Jacques & Rossion, 2007; Rossion et al., 2000). Even though N170 latency and amplitude modulations have also been reported for the inversion of nonface stimuli, these effects are generally of lower magnitude than for faces, in line with the face-specificity of the behavioral IE (Itier et al., 2006; Itier & Taylor, 2004a; Rossion et al., 2000). The N170 IE is a particularly interesting marker for face-specific processing. Indeed, relative to the comparison of the N170 elicited by different stimulus categories, the N170 IE is less confounded by differences in low-level image properties and therefore is a purer manifestation of early face processing specificity. 
Here, we addressed whether the early face-specific representations activated in the OT cortex during the N170 time window are tuned to horizontal information. We recorded the EEG of human participants while they viewed images of faces and nonface objects (cars and natural scenes in Experiment 1 and cars in Experiment 2), in which the orientation content had been manipulated a priori. This was done by randomizing image Fourier phase in a narrow (20°-wide) orientation band centered on horizontal or vertical orientation (e.g., Näsänen, 1999). Phase randomization disrupted the shape information within the manipulated orientation band (Oppenheim & Lim, 1981; Piotrowski & Campbell, 1982). We reasoned that if the orientation band driving early face-specific representations is phase-randomized, observers would use information at other, suboptimal orientations, resulting in impaired face-specific processing. The use of phase randomization to manipulate image orientation content differs from previous reports on face horizontal tuning, which relied on orientation filtering. Our choice was directed by the fact that, as opposed to filtering, phase randomization allows preserving the amplitude spectrum of the stimuli, i.e., a property to which early electrophysiological markers of visual processing are sensitive (Hansen, Johnson, & Ellemberg, 2012; Rossion & Caharel, 2011). We presented participants with upright and inverted stimuli and expected that the attenuation of face-specific processing due to horizontal phase-randomization would reduce the magnitude of behavioral and N170 IEs for faces. In contrast, we did not expect any interaction between orientation content and planar inversion for nonface stimuli, as their representation is assumed to rely on mechanisms that are not, or far less, sensitive to planar orientation (Yin, 1969). 
Such findings would show that the extraction of horizontal information is a core aspect of the early visual mechanisms specifically called upon for processing faces. 
We first conducted a psychophysics experiment to confirm that face-specific horizontal tuning replicates when narrow-band phase randomization rather than filtering is used to manipulate image orientation content. The largest IE was found when the horizontal content of the image was preserved, confirming that the observer-dependent, face-specific processing relies on the integrity of horizontal information. In a second experiment, we recorded ERPs to investigate whether randomizing the horizontal image content affects face-specific processing at the level of the N170. 
Experiment 1
The manipulation of orientation content by narrow-band phase randomization departs from previously published studies, which almost exclusively used orientation filtering (except Goffaux & Dakin, 2010, experiment 3, and Pachai et al., 2013b). Therefore, prior to our ERP experiment, we first ran a psychophysics experiment to explore whether the horizontal tuning of face processing replicates with this stimulus manipulation method. 
Methods
Participants
Fifteen Maastricht University students (age range: 18 to 27 years old; six males; two left-handed) gave their informed consent to participate in the behavioral experiment. Their vision was normal, or corrected to normal when necessary. They earned course credits in exchange for their participation. 
Stimuli
Forty grayscale pictures of unfamiliar faces (26 females, 14 males) posing in a neutral expression were cropped to remove external face cues (hair, ears, and neck). Forty pictures of cars (front view) and 40 pictures of natural scenes without any man-made component (mostly forests or tree-containing pictures) were also used. Faces and cars were embedded in a textured background made of phase-randomized pictures of faces and cars respectively. This was done so that stimuli from all categories subtended equal size and contrast against background. Next, images from the three categories were equalized for mean root-mean square (RMS) contrast and luminance. 
Two additional sets of stimuli were obtained from the original set of 120 pictures (i.e., 40 faces, cars, and scenes) by randomizing the phase of each stimulus across all spatial frequencies either at 0° (vertical) or 90° (horizontal) orientations (see Figure 1). Specifically, for a given image, an array of random phase of the same size as the image was generated (i.e., by taking the phase spectrum of a white noise array). This array was weighted by multiplying it (element-wise) with an orientation filter that had a wrapped Gaussian profile in the orientation domain (allowing all spatial frequencies to pass equally), centered on one orientation (here either 0° or 90°), normalized between 0 and 1, and whose bandwidth was specified by the Gaussian standard deviation (we used an SD of 20° to approximately match orientation tuning properties in primary visual cortex, as done in Goffaux & Dakin, 2010). The weighted random phase was then added to the phase of the original picture. The Gaussian weighting was performed to achieve maximal phase randomization at the target orientation (i.e., adding random values in the [–pi pi] range) and decreasing phase randomization with increasing distance from this orientation (i.e., progressively reducing the range of random phase values added to the original phase, as defined by the filter's bandwidth). This procedure allowed us to generate stimuli in which the information around the 0° or the 90° orientation was rendered meaningless while preserving information at other orientations. Because phase randomization can sometimes slightly change the mean luminance and RMS contrast of an image (due to clipping in the image's luminance histogram), these parameters were adjusted again across categories after the randomization procedure. Last, inverted versions of all images were created by vertically flipping each image. 
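As a concrete illustration, the randomization step can be sketched in a few lines of NumPy. This is only a sketch of the procedure described above, not the authors' code: the function name, the angle convention (90° = horizontal), and the step preserving the DC component are our assumptions. As in the text, the random phase is taken from the Fourier phase of a white-noise array, which keeps Hermitian symmetry so the output stays real-valued and the amplitude spectrum is left untouched.

```python
import numpy as np

def randomize_orientation_phase(img, center_deg=90.0, sd_deg=20.0, rng=None):
    """Randomize Fourier phase in a narrow orientation band with a
    wrapped Gaussian profile (SD = sd_deg), leaving the amplitude
    spectrum untouched. Sketch only; angle convention is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    amp, phase = np.abs(F), np.angle(F)

    # Orientation (in degrees) of every 2-D frequency coordinate.
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    theta = np.degrees(np.arctan2(fy, fx))

    # Wrapped Gaussian weight: orientation is periodic over 180 deg,
    # so wrap the angular distance before applying the Gaussian.
    d = np.angle(np.exp(1j * np.deg2rad(2.0 * (theta - center_deg)))) / 2.0
    weight = np.exp(-np.degrees(d) ** 2 / (2.0 * sd_deg ** 2))
    weight[h // 2, w // 2] = 0.0  # keep the DC term (mean luminance) intact

    # Random phase from a white-noise array: full randomization at the
    # target orientation, fading out with angular distance.
    rand_phase = np.angle(np.fft.fftshift(np.fft.fft2(rng.standard_normal((h, w)))))
    new_phase = phase + weight * rand_phase

    # Recombine the untouched amplitude spectrum with the altered phase.
    out = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * new_phase)))
    return np.real(out)
```

Because only the phase is altered, the Fourier amplitude spectrum of the output is identical to that of the input, which is precisely the property motivating the choice of phase randomization over orientation filtering.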
Stimuli were displayed on a light gray background using E-prime 2 (Psychology Software Tools, Inc., Pittsburgh, PA) at 57-cm viewing distance. With an on-screen size of 11.7 × 11.7 cm, they subtended a visual angle of 11.7° × 11.7°. 
Figure 1
 
Making of the stimuli. (A) An image (i.e., face, car, or scene) is first decomposed into its phase and amplitude spectra. The phase coefficients around either the horizontal or the vertical orientation are randomized using a narrow orientation filter with a Gaussian profile. The unaltered amplitude spectrum is then combined with the modified phase spectrum to generate images in which the image information contained in the vertical or the horizontal orientation is rendered meaningless (i.e., vertical and horizontal randomization, respectively). (B) Example original, vertically randomized, and horizontally randomized face, car, and scene images used in the experiments.
Procedure
In each trial, two pictures of the same category were presented sequentially, both either upright or upside down. In half of the trials, the second stimulus was of the same identity as the first stimulus. The first stimulus was presented for 700 ms, immediately followed by a 200-ms mask. Masks were 10 × 10 Gaussian noise templates (square size of 30 pixels) of the same global luminance and contrast as the stimuli. After a blank-screen interval of 400 ms, the second stimulus was presented at screen center until the subject's response. Subjects performed a matching task between the first and second stimuli of each trial and gave their response by pressing one of two keys with their right hand. To further prevent subjects from relying on elementary image-based matching, the first stimulus and mask were presented at the same, randomly jittered (across trials) position (±20 pixels around screen center), while the second stimulus was presented at the center of the screen. Moreover, in any given trial, the background texture (for faces and cars) and the random phase coefficients used to manipulate information orientation content (i.e., for vertically and horizontally randomized stimuli) were different across first and second stimuli, even when the two stimuli were of the same identity. Following the subject's response, there was a blank screen lasting 1500 ms on average (range: 1250–1750 ms) before the next trial began. Breaks took place every 20 trials. Accuracy feedback was provided every 40 trials during the experiment. There were 36 within-subject experimental conditions: planar orientation (upright, inverted), randomization (none, horizontal, vertical), category (face, car, scene), and similarity (same, different). Planar orientation and category conditions were blocked (block order was randomized across participants) while similarity and randomization were randomly alternated across trials. 
There were 40 trials per condition, making a total of 1,440 trials performed on two different days. Each stimulus was repeated four times in each condition (twice as the first and twice as the second stimulus). The same trial pairs were presented across planar orientation and randomization conditions for the sake of comparability. Prior to the experiment, subjects performed 120 practice trials in total. 
Statistical analyses
Based on hits and correct rejections, we computed the bias-free sensitivity d′ in each experimental condition using the log-linear approach (Stanislaw & Todorov, 1999). Correct response times (RT) were collapsed across same and different trials. Because our hypotheses were about modulations of the magnitude of the IE by the orientation of stimulus information content (i.e., randomization), statistics were performed on an IE index computed for each subject and condition (d′: upright minus inverted; RT: inverted minus upright). Inversion effect indices for d′ and correct RT were submitted to a repeated-measures analysis of variance (ANOVA) with randomization (none, horizontal, vertical) and category (face, car, scene) as within-subject factors. The magnitudes of the IE in d′ and correct RT in the various randomization by category conditions were further compared two-by-two using Bonferroni post-hoc tests. We used partial eta squared (ηp2) to estimate the effect size of the reported effects and differences. The full statistics performed on the raw d′ and RT data are reported as supplemental material.
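For concreteness, the log-linear d′ computation and the IE index can be sketched as follows, using only Python's standard library; the function names are ours, not the authors':

```python
from statistics import NormalDist

def dprime_loglinear(hits, misses, false_alarms, correct_rejections):
    """Bias-free sensitivity d' with the log-linear correction
    (Stanislaw & Todorov, 1999): 0.5 is added to each response count
    (and 1 to each trial total), which keeps the z-transform finite
    even for perfect hit or false-alarm rates."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)

def ie_index_dprime(dprime_upright, dprime_inverted):
    """IE index on sensitivity: upright minus inverted. (For correct RT
    the index is inverted minus upright, so that a larger index always
    means a larger inversion effect.)"""
    return dprime_upright - dprime_inverted
```

For example, a perfect score over 20 same and 20 different trials yields d′ ≈ 3.96 rather than an infinite value.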
Results
Sensitivity IE
Figure 2 plots the averaged IE magnitude in sensitivity (d′) and correct RT for each randomization by category condition separately. The repeated-measures ANOVA on sensitivity IE indices revealed a significant main effect of stimulus category, F(2, 28) = 61.06, p < 0.00001, ηp2 = 0.81, as the IE on sensitivity was largest for faces compared to cars and scenes (ps < 0.0001). This effect was moderated by a significant interaction with randomization, F(4, 56) = 4.2, p < 0.005, ηp2 = 0.23. We explored this interaction further by running separate ANOVAs for each stimulus category. For faces, the sensitivity IE magnitude was significantly modulated by randomization, F(2, 28) = 8.64, p < 0.0012, ηp2 = 0.38. Horizontally randomized faces produced the weakest IE compared to vertically randomized and nonrandomized faces (p < 0.001 and p < 0.02, respectively). IE magnitude did not differ significantly between the latter two conditions (p = 0.75). In contrast, for cars and scenes the IE was not significantly influenced by image orientation content (ps > 0.5, ηp2 < 0.054). For cars and scenes, we actually found no significant IE (t tests comparing the sensitivity IE to zero in each randomization condition for cars: ps > 0.4; for scenes: ps > 0.11). 
Figure 2
 
The magnitude of the inversion effect in sensitivity (d′) and correct RT (ms) is plotted for the different randomization by category conditions. Error bars are across-subjects SEM.
Correct RT IE
The ANOVA computed on the RT IE indices only revealed a significant main effect of stimulus category, F(2, 28) = 12.1, p < 0.0002, ηp2 = 0.46. As found at the level of the sensitivity IE, the RT IE was most robust for faces compared to cars and scenes (p < 0.0013 and p < 0.0003, respectively). Although the RT IE was largest for faces, it was still significant for cars and scenes in the nonrandomized condition only (t test comparing the RT IE to zero for cars: p < 0.0032; for scenes: p < 0.05). There was no significant IE for cars and scenes in randomized conditions (ps > 0.32). The interaction between stimulus category and randomization was not significant, F(4, 56) = 0.23, p = 0.92, ηp2 = 0.016. 
Summary and conclusions
In summary, the face inversion effect, i.e., the hallmark of face-specific processing, was strongly reduced when horizontal—as compared to vertical—information was randomized. This suggests that the integrity of the horizontal orientation band is crucial in triggering face-specific processes. In contrast to faces, the magnitude of the IE for cars and scenes was not affected by the randomization of vertical or horizontal image content, supporting the face-specific nature of horizontal tuning. 
The fact that the present finding replicates Goffaux and Dakin (2010, experiment 1) using a different method for manipulating orientation content (i.e., narrow-band phase randomization vs. band-pass orientation filtering) indicates that horizontal tuning is a robust aspect of face-specific processing. 
Experiment 2
Experiment 1 indicated that the randomization of the horizontal content of face images disrupts a major behavioral marker of face-specific processing (i.e., the behavioral IE) more than the randomization of vertical image content. In Experiment 2, we systematically investigated the processing stage at which face-specific processing is tuned to horizontal orientation. We recorded the scalp electrophysiological activity of participants while they viewed upright and inverted images of faces and cars whose orientation content had been disrupted via narrow-band phase randomization. 
Methods
Participants
For organizational reasons, the testing period extended over several months and the lab initially used for collecting EEG data was not available when we planned to test our last subjects. Two groups of participants were therefore tested at two different locations. The first group consisted of 17 paid volunteers tested at the University of Luxembourg. Three participants were rejected from analyses because of poor signal-to-noise ratio (i.e., no visible ERP component emerging from the background noise). The remaining 14 participants were 10 females and four males with a mean age of 23 years (age range: 20 to 25); all were right-handed. The second group was composed of seven paid volunteers (two males, mean age: 22 years, age range: 19 to 25, all right-handed) whom we tested at UCLouvain. Thus, our final sample included 21 out of the 24 participants tested. All subjects had normal or corrected-to-normal vision. 
Stimuli
The same stimuli as in the behavioral experiment were used in the EEG experiment. However, because EEG measurements necessitate more trials than behavioral measurements, and for the sake of brevity, only face and car stimuli were used in Experiment 2.
Procedure
After electrode cap placement, subjects were seated in a comfortable chair, in a light- and sound-attenuated room, at a viewing distance of 57 cm from a computer CRT monitor. Stimuli were displayed against a light gray background using E-prime 2 (Psychology Software Tools, Inc., Pittsburgh, PA) and subtended 11.7 × 11.7 degrees of visual angle. In each trial a stimulus appeared at the center of the screen for 250 ms, followed by a blank-screen interval of random duration (1400–1900 ms). The experiment involved a two (category: faces and cars) by three (randomization: none, horizontal, vertical) by two (planar orientation: upright and inverted) design. The order of these 12 conditions within each block was randomized. The 40 face and car stimuli were repeated twice in each condition. There were 80 trials in each experimental condition, resulting in a total of 960 trials with a short break after every block of 80 trials. 
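The design arithmetic above (12 conditions of 80 trials each) can be checked in a few lines; the condition labels are ours, for illustration only:

```python
from itertools import product

# Enumerate the 2 x 3 x 2 within-subject design described in the text.
categories = ("face", "car")
randomizations = ("none", "horizontal", "vertical")
orientations = ("upright", "inverted")
conditions = list(product(categories, randomizations, orientations))

# 40 stimuli per category, each repeated twice per condition.
n_stimuli, n_repeats = 40, 2
trials_per_condition = n_stimuli * n_repeats
total_trials = trials_per_condition * len(conditions)
```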
Subjects were instructed to categorize each stimulus as being a car or a face by pressing one of two response buttons with the index and middle fingers of their dominant hand. We used a different task in the ERP experiment compared to the psychophysics experiment for several reasons: (a) we wanted to avoid differences in behavioral performance across stimulus conditions, (b) we needed to save time to accumulate enough trials over a single session (one image is presented per categorization trial while two images had to be shown per trial in the identity matching task of Experiment 1, which was spread over two sessions), and (c) N170 effects related to face image transformation are found independently of the task performed (Caharel, Fiori, Bernard, Lalonde, & Rebai, 2006; George, Evans, Fiori, Davidoff, & Renault, 1996; Itier, Alain, Sedore, & McIntosh, 2007; Itier & Taylor, 2004b; Jacques & Rossion, 2007, 2010; Letourneau & Mitchell, 2008; Milivojevic, Clapp, Johnson, & Corballis, 2003). The difference in task between Experiments 1 and 2 potentially limits the scope of our interpretation of the ERP findings relative to the processing of face identity per se. However, these limitations are outweighed by the benefits of the increased signal-to-noise ratio afforded by a larger number of trials. To investigate more directly the relationship between the early neural coding of face identity and the processing of horizontal face information, future ERP experiments will need to use tasks directly tapping into identity perception. 
EEG recording
EEG was recorded at two different locations, involving slightly different setups. Importantly, the EEG recording equipment (amplifiers and electrode caps) at the two locations was manufactured by the same company (ANT, Inc., The Netherlands), allowing for a straightforward integration of the two datasets. In one setup (14 participants), scalp EEG was recorded from 64 Ag-AgCl electrodes mounted in an electrode cap (Waveguard, ANT Neuro, Enschede, The Netherlands) using the extended 10–20 system, with a left mastoid reference. The EEG analog signal was digitized at a 1024-Hz sampling rate and a digital antialiasing filter of 0.27 × sampling rate was applied at recording (therefore, at a 1024-Hz sampling rate, the usable bandwidth is 0 to ∼276 Hz). Vertical eye movements were monitored using two additional electrodes placed above and below the left eye orbit, and horizontal eye movements were monitored by taking the difference between signals measured at fronto-temporal channels FT7 versus FT8. In a second setup (seven participants), scalp EEG was recorded from 127 Ag-AgCl electrodes mounted in an electrode cap (Waveguard) using the extended 10–20 system, with a midline anterior-frontal reference (AFz). EEG acquisition sampling rate was 1000 Hz. Vertical and horizontal eye movements were monitored using four additional electrodes placed above and below the left eye orbit and on the outer canthus of each eye. The remaining recording parameters were common to both recording setups. Electrode impedances were kept below 10 kΩ. 
EEG preprocessing
EEG data were analyzed using Eeprobe 3.2 (ANT Neuro, Inc., The Netherlands) and Matlab (v. 7.5). After filtering the EEG with a digital 30-Hz low-pass filter, time windows in which the SD of the EEG on any electrode within a sliding 200-ms time window exceeded 35 μV were marked as either EEG artifacts or blink artifacts. Blink artifacts were corrected by subtraction of vertical electrooculogram (EOG) propagation factors based on EOG components derived from principal component analyses. For each subject, EEG epochs containing no EEG artifacts were averaged for each condition separately and baseline-corrected using the 200-ms prestimulus time window. Participants' average responses (i.e., ERPs) were then re-referenced to a common average reference, as recommended for N170 studies (Joyce & Rossion, 2005). Single-subject ERPs were combined across the two recording systems by downsampling the averaged ERP signal from 1024 to 1000 Hz and selecting the 63 electrode sites that were shared between the two systems, resulting in a group of 21 participants. 
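The core of this pipeline (epoching, sliding-window artifact screening, averaging, baseline correction, and average-reference) can be sketched as below. This is a simplified stand-in, not the Eeprobe implementation: the function and parameter names are ours, and the low-pass filtering, EOG-based blink correction, and 1024-to-1000-Hz resampling steps are omitted.

```python
import numpy as np

def epochs_to_erp(eeg, events, sfreq, tmin=-0.2, tmax=0.6, reject_sd=35e-6):
    """Average artifact-free epochs into an ERP (channels x time).

    eeg       : continuous recording, shape (n_channels, n_samples), in volts
    events    : stimulus-onset sample indices
    reject_sd : reject an epoch if the SD of any channel within any
                sliding 200-ms window exceeds this threshold (35 uV)
    """
    n0, n1 = int(round(tmin * sfreq)), int(round(tmax * sfreq))
    win = int(round(0.2 * sfreq))  # 200-ms artifact-screening window
    kept = []
    for ev in events:
        ep = eeg[:, ev + n0: ev + n1]
        bad = any(
            np.std(ep[:, s:s + win], axis=1).max() > reject_sd
            for s in range(0, ep.shape[1] - win + 1, win // 2)
        )
        if not bad:
            kept.append(ep)
    erp = np.mean(kept, axis=0)
    erp -= erp[:, :-n0].mean(axis=1, keepdims=True)  # 200-ms prestimulus baseline
    erp -= erp.mean(axis=0, keepdims=True)           # common average reference
    return erp
```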
ERP statistical analyses
We tracked how stimulus inversion modulated electrophysiological brain responses to faces and cars in which phase had been randomized in one of the cardinal orientation bands. To this aim, two separate sets of analyses were performed on the scalp ERPs recorded in response to the presented stimuli. 
In a first, data-driven approach, we characterized the influence of phase randomization and stimulus category on the spatiotemporal course of the neural IE at all electrodes and time samples of the ERP responses. To this aim, we first investigated the spatiotemporal distribution of the IE separately for each category and randomization condition by comparing the ERP waveforms measured in the upright versus inverted conditions at all electrodes (i.e., six comparisons). Second, we directly examined the influence of the phase randomization of horizontal and vertical information on the spatiotemporal distribution of the neural IE (i.e., a two-way interaction between randomization and planar orientation) separately for each category (i.e., two comparisons). Specifically, we first computed the inverted minus upright subtraction ERP waveforms at all electrodes for each subject, separately for the vertically and horizontally randomized conditions and stimulus categories. We then compared these IE subtraction waveforms across the vertically and horizontally randomized conditions, separately for each category. Third, we investigated the three-way interaction between stimulus category, planar orientation, and randomization by subtracting the IE subtraction waveforms computed in the vertically randomized condition from those computed in the horizontally randomized condition, separately for faces and cars. We compared the resulting subtraction waveforms (i.e., one per category) in order to reveal the differential effect of phase randomization on the neural IE for faces versus cars. 
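The nested subtractions described above reduce to element-wise difference waves. A minimal sketch (function and variable names are hypothetical), assuming each ERP is stored as an electrodes × time array:

```python
import numpy as np

def inversion_effect(erp_inverted, erp_upright):
    """IE difference wave: inverted minus upright (electrodes x time)."""
    return erp_inverted - erp_upright

def randomization_interaction(ie_horizontal, ie_vertical):
    """Two-way interaction: IE under horizontal minus vertical randomization."""
    return ie_horizontal - ie_vertical

def three_way_interaction(interaction_faces, interaction_cars):
    """Three-way interaction: differential randomization effect, faces vs. cars."""
    return interaction_faces - interaction_cars
```

Each subtraction is computed per subject, and the resulting waveforms enter the permutation tests described next.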
Each of these nine comparisons was statistically assessed by performing a permutation test (10,000 permutations, two-tailed, p < 0.01) on each scalp electrode and time sample (see Jacques & Rossion, 2009, 2010; Jacques et al., 2007, for details on the permutation procedure). Whenever reported, effect sizes were computed using Cohen's d (Cohen, 1988) for paired-sample tests (i.e., difference between the means of two conditions divided by the SD of the difference between conditions). 
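A paired permutation test of this kind can be sketched as follows: the per-subject condition labels are randomly swapped (equivalently, the sign of each subject's difference is flipped), and the observed mean difference is compared against the resulting null distribution. This illustrates the general logic applied at one electrode and time sample, not the exact procedure of the cited papers; the Cohen's d formula follows the definition given in the text:

```python
import numpy as np

def paired_permutation_test(a, b, n_perm=10000, seed=0):
    """Two-tailed paired permutation test on subject-wise differences.

    a, b: arrays of shape (subjects,), e.g., the ERP amplitude in two
    conditions at one electrode and time sample.
    """
    rng = np.random.default_rng(seed)
    diff = a - b
    observed = diff.mean()
    # Randomly flip the sign of each subject's difference (label swap).
    signs = rng.choice([-1, 1], size=(n_perm, diff.size))
    null = (signs * diff).mean(axis=1)
    p = (np.abs(null) >= abs(observed)).mean()
    return observed, p

def cohens_d_paired(a, b):
    """Cohen's d for paired samples: mean difference / SD of differences."""
    diff = a - b
    return diff.mean() / diff.std(ddof=1)
```

In the full spatiotemporal analysis, this test would be repeated independently at every electrode and time sample.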
Spatiotemporal analyses provide a rich, data-driven statistical description of the comparison between the investigated ERP waveforms over the whole scalp at each time point. However, this analysis compares ERP waveforms at the same time point across conditions, without explicit reference to ERP components (e.g., the N170). Therefore, when this analysis highlights a significant difference between waveforms, it does not indicate whether the difference reflects a modulation of the amplitude or of the latency of a specific ERP component (e.g., the N170). Since we were particularly interested in the N170 component, and for the sake of comparability with the ERP literature, we conducted a complementary set of analyses based on the peak latency and amplitude of the N170 ERP component. Our choice to focus on the N170 was further motivated by the absence of spatiotemporal ERP differences across experimental conditions before the N170 time window. We measured the peak latency and mean amplitude of the N170 (maximal at about 155 ms) at the pair of OT electrodes where the component was most prominent (PO7 and PO8 in the left and right hemisphere, respectively), as classically done in studies of face perception (e.g., Bentin et al., 1996; Itier & Taylor, 2004b; Jacques & Rossion, 2007; Rossion et al., 2000). The N170 mean amplitude was quantified as the mean voltage within a 40-ms window centered on the peak latency of the across-subject averaged ERP, determined for each condition and electrode separately. The measurement window therefore varied across conditions, providing an assessment of N170 amplitude independent of condition-related differences in peak latency (Jacques & Rossion, 2007, 2010). 
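The peak-based measurements can be sketched as follows (helper names are hypothetical; the 120–200-ms search window and the 40-ms averaging window follow the text, assuming a 1000-Hz sampling rate, i.e., one sample per millisecond):

```python
import numpy as np

def n170_peak_latency(erp, times, window=(120, 200)):
    """Latency (ms) of the most negative sample within the N170 search window.

    erp: 1-D voltage trace at one electrode; times: matching latencies in ms.
    """
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(erp[mask])  # N170 is a negative-going component
    return times[mask][idx]

def n170_mean_amplitude(erp, times, peak_latency_ms, half_width_ms=20):
    """Mean voltage in a 40-ms window centered on the grand-average peak."""
    mask = np.abs(times - peak_latency_ms) <= half_width_ms
    return erp[mask].mean()
```

In the actual analysis, the peak latency is taken from the across-subject grand average of each condition, and that condition-specific window is then applied to the single-subject ERPs.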
Similar to Experiment 1, statistical analyses were performed on an IE index computed for each subject for N170 amplitude (upright minus inverted) and N170 latency (inverted minus upright). Statistical analyses were performed using repeated-measure ANOVA with category (faces, cars), randomization (nonrandomized, horizontally randomized, vertically randomized), and hemisphere (left, right) as within-subject factors. Greenhouse-Geisser correction to df was applied when necessary. 
Since EEG data was collected at two different locations involving slightly different systems, we tested for potential differences in the magnitude of the IE on the N170 mean amplitude and peak latency across the two groups of participants (N = 14 and N = 7). Since none of the t tests were significant (all uncorrected ps > 0.1), data from the two groups was collapsed. 
Results
Behavioral results
Participants were both accurate and fast, performing the classification task at ceiling in all conditions (96% ± 1.4% of correct responses on average, range: 94%–98%) with an average of 500-ms response time across conditions (SD: ±11.4 ms; range: 480–513 ms). Due to the ceiling performance, we did not analyze the behavioral performance further. 
ERP results
Spatiotemporal characterization of the neural IE across randomization and category conditions:
Figure 3 shows the across-subject average ERP waveforms measured at the two OT electrodes used for the N170 peak analyses (PO7/PO8) in the upright and inverted plane orientations, separately for each category by randomization condition. The bottom row of Figure 3 shows the effect of randomization on the time course of the neural IE at the same electrodes. To assess the full spatiotemporal course of the effect of randomization on the magnitude of the IE for faces and for cars, we conducted analyses at each time sample (i.e., every 1 ms) of the ERP response and at each electrode. This was done separately for each randomization condition and stimulus category. Results of these analyses are shown underneath each ERP waveform plot in Figure 3 and in the form of scalp topographies across multiple time windows in Figure 4. 
Figure 3
 
Grand average ERP waveforms (from −50 to 350 ms relative to stimulus onset) elicited in the 12 experimental conditions at two occipito-temporal electrodes (PO7 in the left hemisphere; PO8 in the right hemisphere) in Experiment 2. (A) ERPs elicited by upright and inverted faces are grouped by randomization, with nonrandomized in the first row, vertically randomized in the second row, and horizontally randomized in the third row. Each plot shows the ERP response to upright face stimuli (solid line), inverted face stimuli (black dashed line), and the difference between upright and inverted (gray dashed line). Results of the time-by-electrode permutation tests comparing upright to inverted stimuli are shown in the bottom part of each plot for the displayed ERP waveforms. Significant differences (p < 0.01; two-tailed; 10,000 permutations) are color-coded as a function of the amplitude of the ERP difference between the inverted and upright conditions. The bottom row shows superimposed inversion effect difference waves (inverted minus upright) for each randomization condition (±SEM shaded). (B) Identical to (A) for car stimuli.
Figure 4
 
Time course of the significant ERP effects of stimulus inversion and interaction with randomization for face and car stimuli. Left column: Topographical maps (view from above the head) of the significant effect (p < 0.01; two-tailed; 10,000 permutations) of face inversion in the nonrandomized (first row), vertically randomized (second row), and horizontally randomized conditions (third row), as well as the interaction between face orientation and vertical versus horizontal randomization (fourth row). Maps are shown for nine time windows from 80 to 240 ms after stimulus onset in 20-ms steps. Significant differences are color-coded as a function of the amplitude of the ERP difference between inverted and upright faces. White means there was no significant effect. The time window of the N170 component (120–200 ms) is highlighted on the top row. Right column: Identical to the left column except the maps show significant effects for car stimuli. The bottom row of the Figure depicts the differential effect of randomization on stimulus inversion as measured for faces and cars (i.e., three-way interaction category × randomization (vertical vs. horizontal) × planar orientation). This clearly shows the stronger differential effect of randomization (vertical vs. horizontal) on the magnitude of the IE for faces compared to cars. Topographical maps were created using EEGLAB (Delorme & Makeig, 2004).
These analyses revealed a strong and robust IE for nonrandomized as well as vertically randomized faces, starting over occipital electrodes during the descending slope of the N170 (i.e., approximately 100 ms after stimulus onset; Figure 4, left column, first and second rows). The IE extended over right-lateralized OT electrodes at around 120 ms. It then spread to bilateral OT electrodes in the later part of the N170 (at around 160 ms) and lasted until about 200–220 ms poststimulus onset. In all significant time samples, the IE measured at OT electrodes reversed polarity over central and fronto-central electrode sites, as usually found with nonrandomized stimuli (e.g., Jacques & Rossion, 2007). 
As can be seen in Figure 3A (first and second rows), the effect of inversion for nonrandomized and vertically randomized faces manifested most strongly as a shift in the onset and peak latency of the N170 recorded over OT electrodes for inverted relative to upright faces. Besides a small delay of about 10 ms in the overall latency of the IE for vertically randomized compared to nonrandomized faces (see Figure 3A, last row), the IE in these two conditions was notably similar in magnitude and spatiotemporal profile (Figure 4, first and second rows). 
Contrasting with the early and robust IE found for the nonrandomized and vertically randomized conditions, the presentation of horizontally randomized faces strongly attenuated the magnitude of the IE and delayed its onset latency. In this condition, although the IE started as a small, short-duration difference at around 140 ms over occipital electrodes (i.e., 40 ms later than in the vertically randomized and nonrandomized conditions; Figure 4, left column, third row), it was mostly restricted to the later part of the N170 (ascending slope after the peak), maximal around 170–200 ms over OT electrodes. The magnitude of the IE for horizontally randomized faces was also significantly weaker than for vertically randomized faces (see Figure 3A, fourth row). This was reflected in the statistical comparison of the magnitude of the face IEs across phase randomization (vertical vs. horizontal) conditions (i.e., the two-way interaction between phase randomization and planar orientation; Figure 4, left column, fourth row). Specifically, horizontal randomization led to a significantly smaller and less sustained face IE compared to vertical randomization during the whole N170 time window (i.e., the interaction was significant in a 120–200-ms window after stimulus onset; Figure 4, left column, fourth row). Measuring the Cohen's d effect size for the IE averaged in the 120–200-ms window revealed that the IE was more than twice as large in the vertically randomized condition (Cohen's d averaged across electrodes PO7/PO8 = 1.17) as in the horizontally randomized condition (Cohen's d = 0.55). 
For car stimuli, the IE was of smaller magnitude and shorter duration than for faces (see Figure 3B, last row). This was particularly the case for vertically and horizontally randomized cars, for which the IE started at around 140 ms and was restricted to the descending/early slope of the N170 (Figure 4, right column, first to third rows). As was done for the ERPs to face stimuli, we evaluated the effect of phase randomization on the magnitude of the IE for cars. Importantly, this revealed that the spatiotemporal profile and the magnitude of the neural IE for cars were similar across the vertical and horizontal randomization conditions (Figure 4, right column, second and third rows). This was directly reflected in the absence of a significant interaction between randomization and planar orientation during the N170 time window for these stimuli (Figure 4, right column, fourth row). Indeed, the IEs for cars in the vertical (Cohen's d averaged across electrodes PO7/PO8 = 0.38) and horizontal (Cohen's d = 0.33) randomization conditions were of comparable size in the N170 time window (120–200 ms). The only significant interaction between randomization and planar orientation started in a later time window for cars, after 210 ms, and was of short duration (i.e., <20 ms). This interaction reflected a small delay in the onset of the difference between upright and inverted cars for horizontally randomized compared to vertically randomized cars in the P2/N250 time window (see Figure 3B, second to fourth rows). Despite this latency difference, the magnitude of the IE in this time window was similar across randomization conditions. 
In order to highlight the larger effect of phase randomization (vertical vs. horizontal) on the magnitude of the IE for faces compared to the same effect for cars in the N170 time window, we compared the ERP subtraction waveforms reflecting the differential IE across randomization conditions for faces versus cars at each time point and electrode. Statistical testing of this three-way interaction (permutation tests: 10,000 permutations, p < 0.01, two-tailed) supported the observation that the effect of stimulus inversion was smaller in the horizontally randomized compared to the vertically randomized condition for faces only, not for cars (Figure 4, bottom row; comparison not shown in Figure 3). This three-way interaction reached significance at about 120–130 ms poststimulus onset, during the early N170 time window (i.e., descending slope). This early interaction was most robust over right inferior OT electrodes; it spread to left inferior OT electrodes after 160 ms. 
N170 peak latency and amplitude:
Figure 5 depicts the magnitude of the IE on the peak latency and mean amplitude of the N170 as measured at the OT electrodes (averaged over PO7 and PO8) where the N170 had maximal amplitude. Statistics were performed on the size of the IE (latency: inverted minus upright; amplitude: upright minus inverted) across the category, randomization, and hemisphere factors. Statistics on the raw N170 amplitude and latency values are reported in the supplemental material. 
Figure 5
 
The magnitude of the inversion effect on the peak latency (left) and mean amplitude (right) of the N170 component averaged over bilateral occipito-temporal electrodes PO7/PO8 is shown separately for each category by randomization conditions. Latency was measured at the peak of the N170 and amplitude was measured as the mean amplitude in a ±20-ms window centered on the latency of the across-subject average N170 computed for each condition. Latency and amplitude values were then subtracted to reflect the inversion effect. Error bars are SEM.
Analyses performed on the magnitude of the N170 peak latency IE revealed significant main effects of category, F(1, 20) = 9.6, p < 0.006, ηp2 = 0.32; randomization, F(1.7, 33.2) = 4.8, p < 0.02, ηp2 = 0.19; and, importantly, a significant interaction between these two factors, F(1.5, 30.1) = 4.0, p < 0.04, ηp2 = 0.17. This indicates that, for faces, the delay in N170 latency due to inversion was smaller for horizontally randomized compared to vertically randomized (p < 0.01, ηp2 = 0.41) or nonrandomized (p < 0.03, ηp2 = 0.36) stimuli, while the IE on N170 latency did not differ significantly between the latter two conditions (p = 0.3, ηp2 = 0.06). In contrast, for cars, randomization did not significantly modulate the N170 latency IE (ps > 0.15), in particular when comparing horizontally to vertically randomized car images (p > 0.99, ηp2 < 0.0001). Note that despite the significant category × randomization interaction, the IE on the N170 peak latency did not differ between faces and cars in the nonrandomized condition (p = 0.9). 
The magnitude of the N170 mean amplitude IE was larger for faces than for cars as indicated by a significant main effect of category, F(1, 20) = 12.8, p < 0.002, ηp2 = 0.39. The main effect of randomization, F(1.9, 38.9) = 20.9, p < 0.0001, ηp2 = 0.51, was also significant. These main effects were qualified by a significant three-way interaction between category, randomization, and hemisphere, F(1.9, 37.6) = 3.4, p < 0.05, ηp2 = 0.15. In the right hemisphere, the IE was indeed larger for faces than for cars (p < 0.01, ηp2 = 0.33), and for nonrandomized compared to randomized faces and cars (p < 0.0005, ηp2 = 0.55 and p < 0.0002, ηp2 = 0.56 for vertically and horizontally randomized conditions, respectively). The IE in N170 mean amplitude did not differ between randomized conditions (p = 0.96, ηp2 < 0.001). In the left hemisphere, the IE for faces was largest for nonrandomized compared to vertically (p < 0.005, ηp2 = 0.46) and horizontally (p < 0.002, ηp2 = 0.49) randomized conditions; again the IE on N170 amplitude did not differ between the latter conditions (p = 0.56, ηp2 = 0.02). For cars in the left hemisphere there was no significant difference in the size of the IE across randomization conditions (ps > 0.2). 
Summary and conclusions:
The earliest and strongest effect of stimulus inversion on the ERP signal occurred during the N170 time window on OT electrodes (Figure 4). Confirming previous evidence, the neural IE was of larger magnitude and of longer duration for faces than for cars. 
Here we were particularly interested in exploring the influence of image orientation content on the neural IE. Spatiotemporal analyses indicate that the IE during the N170 time window was of comparable magnitude for non- and vertically randomized faces but significantly weaker for horizontally randomized faces (Figures 3 and 4). N170 peak analyses further revealed that the modulation of the IE by randomization was mostly reflected in the latency of the N170 (Figure 5). Indeed, the inversion-related N170 latency increase was significantly smaller for horizontally randomized compared to vertically or nonrandomized faces. Because of the very moderate IE on N170 latency in the horizontally randomized condition, the waveforms for upright and inverted faces overlapped for a longer duration in this condition (i.e., until around 140–170 ms as revealed by the spatiotemporal analyses) compared to the vertically and nonrandomized faces (i.e., IE starting at around 100–120 ms). 
For cars, we found a significant IE in the N170 time window. Unexpectedly, we did not find a significantly larger IE on the N170 peak latency for faces compared to cars in the nonrandomized condition. This partly fits with previous reports of a large N170 peak latency IE for cars (Itier et al., 2006). Nevertheless, our spatiotemporal analyses revealed a much stronger IE for faces than for cars in the N170 time window. Most importantly, in contrast to faces, the neural IE for cars was not modulated by orientation content (vertical vs. horizontal randomization). 
Except for the expected increase for inverted compared to upright faces in the nonrandomized condition, inversion affected the N170 mean amplitude equally across the horizontally and vertically randomized conditions for both categories. This observation, together with the fact that we did not find a larger IE for faces than cars on the N170 peak latency, suggests that in our experiment the spatiotemporal analyses captured the effects of category and randomization better than the N170 peak analyses did. 
Importantly, the N170 onset was the first time window to exhibit a significant interaction between stimulus category, planar orientation, and randomization, further confirming that the disruption of horizontal image content impairs early face-specific processing. 
Discussion
A core aspect of face-specific processing was shown to reside in its strong reliance on horizontal face information content (Dakin & Watt, 2009; Goffaux & Dakin, 2010; Goffaux et al., 2011; Pachai et al., 2013b). Here we addressed when exactly in the visual processing stream face-specific perception is tuned to horizontal information. To this aim, we measured the behavioral performance (Experiment 1) and neural activity (Experiment 2) when participants viewed upright and inverted images of faces and cars (and natural scenes in Experiment 1) that were manipulated to disrupt either the horizontal or the vertical content of the image. We predicted that the behavioral and neural IEs for faces, which mark the recruitment of face-specific mechanisms, would disappear or be attenuated when the orientation range which is crucial for their emergence is disrupted. 
The narrow-band phase-randomization method employed here to manipulate orientation content differed from the methods used in previous studies (e.g., filtering, masking; see below). Experiment 1 was specifically devoted to confirming that face-specific horizontal tuning replicates when narrow-band phase randomization rather than filtering is used to manipulate image orientation content. As predicted by previous findings (Goffaux & Dakin, 2010), the decrease in behavioral performance when discriminating inverted compared to upright faces (i.e., the inversion effect) was of smaller magnitude when horizontal information was randomized in the image compared to when no phase randomization or vertical randomization was applied. The differential modulation of the magnitude of the behavioral IE across randomization conditions confirms previous evidence that the horizontal tuning of face identity processing is not caused by differing image properties across conditions, but rather by observer-dependent biases, i.e., the way information is extracted and processed by the visual system (see also Pachai et al., 2013b). In contrast to faces, the magnitude of the IE in discriminating images of cars and natural scenes was not modulated by image orientation content, supporting the face specificity of horizontal tuning. 
In Experiments 1 and 2 we used images of cars as nonface control stimuli. Although the claims that can be made about face specificity are limited by the nature and number of nonface categories tested, our rationale for using car images as the control condition was the following. First, front-view images of cars are matched to front-view images of faces on a number of core aspects (highly familiar stimuli, high within-category homogeneity, symmetry around the vertical axis, more energy in horizontal orientation bands). Second, using band-pass orientation filtering, Goffaux and Dakin (2010) found that, similarly to upright faces, upright front-view images of cars are best discriminated based on horizontal information. For these reasons, using car images as a control nonface category constitutes a very conservative test of the face specificity of observer-dependent horizontal tuning. The fact that the phase randomization of vertical and horizontal image content failed to modulate the IE for cars therefore makes a strong case for the face specificity of horizontal tuning. We are not suggesting that the pattern of performance observed for cars would be the same for any other visual category. For instance, present and previous findings indicate that the discrimination of natural scenes is best when using vertical information (Experiment 1; Goffaux & Dakin, 2010). In fact, the preference for a particular orientation band is likely to depend on the orientation of the most diagnostic information for the task at hand (see Hansen & Essock, 2005). 
Horizontal phase randomization disrupts the N170 inversion effect and delays face-specific encoding
The main goal of the present work was to determine when exactly in the course of face-specific processing horizontal tuning emerges. In Experiment 2, we measured whether and when the manipulation of image orientation content influenced the timing and magnitude of the neural IE, taken as a marker of face-specificity. The spatiotemporal analysis revealed a robust IE for the nonrandomized faces starting during the descending slope of the N170 component and peaking at around 170-ms poststimulus onset over bilateral OT electrodes. Randomizing the vertical structure of face images did not influence the magnitude and overall spatiotemporal profile of the neural IE, indicating that the phase randomization of facial vertical structure preserved to a large extent the visual information that is crucial for the emergence of face-specific processing. In contrast, randomizing the horizontal content of face images strongly delayed the onset latency (by 40 ms) and decreased the magnitude of the IE (by 50%) during the whole N170 time window. Spatiotemporal analyses performed on the ERP response to car stimuli also revealed a significant IE in the N170 time window. However, in line with previous evidence, it was of smaller magnitude than for faces overall (Itier et al., 2006; Rossion et al., 2000). In contrast to faces, the spatiotemporal profile and magnitude of the IE observed for cars during the N170 time window was not influenced by whether vertical or horizontal information was randomized, supporting the proposal that high-level processing of upright faces is selectively tuned to information contained in the horizontal orientation range of the image. 
The fact that the neural IE for faces was influenced by image orientation content during the early time window of the N170 (120–130 ms; Figure 4, left column) indicates that face horizontal tuning emerges at the same time as the activation of face-specific representations in the OT cortex (Ganis et al., 2012; see Rossion & Jacques, 2008, 2011, for reviews). 
To further characterize the neural IE, we analyzed the peak latency and mean amplitude parameters of the N170 component. These analyses indicated that while inversion affected both the latency and amplitude of the N170 for nonrandomized faces, it affected the N170 latency, but not the amplitude, for vertically randomized faces. In other words, when focusing on the peak of the N170 (and not on the whole temporal window as in spatiotemporal analyses), the larger IE for vertically compared to horizontally randomized faces was only observed on the N170 peak latency. As noted earlier, the reduced IE on N170 peak latency for horizontally randomized faces likely explains the delayed onset and smaller magnitude of the neural IE in the N170 time window observed in the spatiotemporal analyses for this condition. 
The lack of IE on the N170 mean amplitude for vertically randomized faces is compatible with previous evidence that face inversion usually affects the N170 latency more consistently than its amplitude (Boutsen et al., 2006; Goffaux, Gauthier, & Rossion, 2003; Rossion et al., 2003). As a matter of fact, previous evidence supports the view that amplitude and latency increases in the N170 with face inversion reflect distinct functional processes involved in face processing, and should therefore be considered separately (see extended discussion in Jacques & Rossion, 2010). On the one hand inversion-related delays of N170 peak latency are assumed to reflect the delayed activation of face-specific representations due to the noncanonical orientation of the input face (Itier et al., 2006; Jacques & Rossion, 2007, 2010; Rossion et al., 2000). On the other hand, the neural mechanisms inducing the increase of N170 amplitude with inversion are still debated and have been proposed to result from an enhanced difficulty of face encoding (Rossion et al., 1999), from a recruitment of additional nonface neural representations (Rosburg et al., 2010; Rossion et al., 1999), and/or from a differential sensitivity of the eye region to upright and inverted faces (Itier et al., 2007). The lack of IE on the N170 mean amplitude for vertically randomized faces could result from the presence of visual noise in vertically randomized images, as previous reports documented a reduction in the N170 amplitude for inverted faces when noise was added to the face image (Linkenkaer-Hansen et al., 1998; Schneider et al., 2007). In those same reports, supporting the dissociation between N170 peak amplitude and latency, the N170 latency was consistently delayed for inverted compared to upright faces, independently of the visual noise. 
Future research using different methods to manipulate image orientation content is needed to determine whether orientation content of face images contributes to the N170 amplitude IE. 
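To make the peak-based measures concrete, the sketch below illustrates how N170 peak latency and mean amplitude can be extracted from a single averaged waveform; for simplicity the amplitude window is centered on the waveform's own peak, whereas the analyses reported here centered it on the group-average peak latency per condition (see the Figure 5 caption). Function and variable names are illustrative and not the authors' analysis code.

```python
import numpy as np

def n170_measures(erp, times, search=(120.0, 200.0), win_ms=20.0):
    """Peak latency and mean amplitude of the N170 for one ERP waveform.

    erp    : 1-D array of amplitudes (microvolts), one value per sample.
    times  : 1-D array of time points (ms), same length as erp.
    search : window (ms) in which to look for the N170 negative peak.
    win_ms : half-width (ms) of the mean-amplitude window, i.e., amplitude
             is averaged over peak latency +/- win_ms.
    """
    erp = np.asarray(erp, float)
    times = np.asarray(times, float)
    # Most negative deflection within the search window = N170 peak.
    mask = (times >= search[0]) & (times <= search[1])
    peak_idx = np.flatnonzero(mask)[np.argmin(erp[mask])]
    latency = times[peak_idx]
    # Mean amplitude in a window centered on the peak latency.
    amp_mask = (times >= latency - win_ms) & (times <= latency + win_ms)
    amplitude = erp[amp_mask].mean()
    return latency, amplitude
```

The IE would then be the difference between these values for the inverted and upright conditions, computed per participant.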
The strong attenuation of the neural IE for horizontally randomized faces suggests that upright versions of these stimuli failed to recruit face-specific mechanisms and supports the view that horizontal cues in faces are essential for triggering the processes specifically called upon for the normal processing of upright faces (Goffaux & Dakin, 2010). In line with this proposal, the N170 elicited by upright faces was delayed most when the horizontal structure of the image was randomized (supplemental data and Supplementary Figure S2). Similarly, previous ERP studies have revealed that image transformations hampering face perception (i.e., inversion, horizontal misalignment, contrast reversal, facial feature displacement or removal, etc.) significantly delay the N170 (Caharel et al., 2006; George et al., 1996; Itier et al., 2007; Itier & Taylor, 2004b; Jacques & Rossion, 2010; Letourneau & Mitchell, 2008; Milivojevic et al., 2003). Since the N170 indexes the early activation of facial representations in the human brain (e.g., Ganis et al., 2012; Rossion & Jacques, 2008; Rousselet et al., 2008), our observations suggest that the neural evidence which triggers the face-specific processing chain accumulates more slowly when horizontal information is unavailable, thus yielding a delay in the N170 response. 
Filtering versus phase randomization
The narrow-band phase randomization approach used here to manipulate the image orientation content departs from the filtering procedure employed in previously published studies investigating the horizontal tuning of face processing (except Goffaux & Dakin, 2010, experiment 3, and Pachai et al., 2013b, which used orientation-filtered masking). In contrast to filtering, phase randomization leaves the amplitude spectrum unchanged. Therefore, processing differences between phase-randomized and original images cannot be attributed to differences in low-level image properties. This was already suggested by the fact that inversion, which preserves image spectral properties (except for a global, constant shift in phase orientation), nonetheless disrupts the preference for horizontally oriented visual cues when processing face identity. 
Another difference between filtering and narrow-band phase randomization is the degree of overlap between conditions. Horizontally and vertically randomized stimuli share most of their image content, departing only within the 20° randomized orientation range, whereas the overlap in image content between the horizontally and vertically filtered conditions of previous filtering studies was negligible. The potential drawback is that the processing differences between the horizontally and vertically randomized stimuli used here are more subtle than between the horizontally and vertically filtered faces employed in previous studies (compare, e.g., the size of the planar orientation by orientation content interaction in Goffaux & Dakin, 2010, experiment 1, to the same interaction in Experiment 1 of the present report). Narrow-band phase randomization is therefore a very conservative test of orientation tuning, and replicating face-specific horizontal tuning with this technique attests to the robustness of the phenomenon. 
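As an illustration of the narrow-band phase-randomization principle (decompose the image into amplitude and phase spectra, perturb the phase only within a Gaussian-profiled orientation band, and recombine; see Figure 1), a minimal sketch might look as follows. The weighting scheme, parameter names, and the mapping between spectral and image orientation are assumptions for illustration, not the authors' code.

```python
import numpy as np

def randomize_orientation_band(img, center_deg, sigma_deg=10.0, seed=None):
    """Randomize Fourier phase within a narrow orientation band (a sketch).

    The amplitude spectrum is left untouched; only phase coefficients whose
    spectral orientation lies near center_deg are perturbed, weighted by a
    Gaussian orientation profile of width sigma_deg. Note that horizontal
    image structure carries its energy along the vertical spectral axis, so
    the caller chooses center_deg according to that convention.
    """
    rng = np.random.default_rng(seed)
    F = np.fft.fft2(img)
    amp, phase = np.abs(F), np.angle(F)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    # Spectral orientation (degrees) of each frequency component.
    theta = np.degrees(np.arctan2(fy, fx))
    # Angular distance to the band center, wrapped to [-90, 90] so both
    # spectral half-planes (theta and theta + 180) are treated alike.
    d = (theta - center_deg + 90.0) % 180.0 - 90.0
    weight = np.exp(-0.5 * (d / sigma_deg) ** 2)
    # Random phase noise, antisymmetrized (n(-f) = -n(f)) so the modified
    # spectrum stays conjugate-symmetric and the output image stays real.
    n = rng.uniform(-np.pi, np.pi, F.shape)
    n = n - np.roll(n[::-1, ::-1], (1, 1), axis=(0, 1))
    out = np.fft.ifft2(amp * np.exp(1j * (phase + weight * n)))
    return np.real(out)
```

Because only phase is perturbed, the output's amplitude spectrum is identical to the input's, which is the key property distinguishing this manipulation from orientation filtering.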
Why is face perception tuned to horizontal information?
The present findings show that the N170 IE for faces, i.e., the most robust and earliest marker of face processing specificity, is tied to the encoding of horizontal information. The fact that the horizontal tuning of face perception arises early in the visual processing stream indicates that it does not reflect differential decisional biases across vertically and horizontally randomized faces. Rather, it suggests that the face-specific representations activated during the N170 time window in the OT cortex are preferentially tuned to the horizontal structure of face information. However, present and past evidence does not preclude that face information contained in other orientation ranges plays a significant role when processing other types of facial cues, such as eye gaze or facial expression. 
Face identity perception may be tuned to horizontal orientation because this band carries the features and their spatial configuration along the vertical axis of elongation, i.e., the cues thought to contribute to face identity coding. In line with this claim, human observers have been shown to rely especially on this orientation range when processing faces holistically and when generating viewpoint-invariant representations of face identity (Goffaux & Dakin, 2010; Goffaux & Rossion, 2007). Moreover, local cues in the eye region (shape of the eyes and eyebrows, local distances between eye and eyebrow), which are particularly relevant for face identity processing (Caldara et al., 2005; De Heering & Schiltz, 2013; Sadr, Jarudi, & Sinha, 2003), are also oriented horizontally. Filtering out or randomizing horizontal orientation in this region of the face is therefore likely to impair face recognition or discrimination, as suggested by recent findings (Pachai, Sekuler, & Bennett, 2013a). In sum, these findings suggest that the horizontal tuning of face perception partly stems from the complex structure of the face stimulus, which contains more information in the horizontal orientation range. Further, the fact that inversion selectively disrupts the processing of horizontal face information suggests that face-specific processing derives from the extensive experience humans acquire at extracting the diagnostic identity cues located in the horizontal orientation band of upright faces. 
It is plausible that acquiring perceptual expertise with a nonface visual category also tunes the processing of that category to the orientation ranges carrying its most diagnostic cues. In line with this idea, visual expertise with cars has been shown to render car-matching performance more dependent on the precise spatial frequency content of the image, as is observed for the matching of faces (McGugin & Gauthier, 2010). 
Interestingly, while horizontal information has been shown to support interactive/holistic processing of faces (Goffaux & Dakin, 2010), other studies using spatial frequency filtering have indicated that holistic processing is mainly supported by low spatial frequency (LSF) face information (Goffaux, 2009; Goffaux & Rossion, 2006; see also Goffaux et al., 2003). Given these observations, one might expect the information supporting holistic face processing to be both coarse (low spatial frequency) and horizontal. However, the joint influence of spatial frequency and orientation content on holistic face processing has never been directly investigated. In a recent behavioral study (Goffaux et al., 2011), we filtered face images in both the spatial frequency and orientation domains and asked participants to match the identity of the filtered faces. We found that face identity discrimination is driven by the horizontal information contained in the intermediate and high spatial frequencies (>8 cycles per image). Future studies using tasks that specifically tap into holistic face processing are needed to determine how and when spatial frequency and orientation combine when faces are processed holistically. 
Further research is also needed to determine precisely what features and properties of the horizontal organization of face images are crucial for triggering face-specific neural processes. For instance, early face processing might be tuned to a coarse prototypic template containing local horizontal elements arranged in a facial configuration. Consistent with this idea, visual patterns that contain elementary properties common to all face images, such as more contrasted elements in the upper part (Caldara et al., 2006) or a concentric shape (Wilkinson et al., 2000), have been shown to activate face-selective regions in the fusiform gyrus and to elicit a larger N170 (Ohla, Busch, Dahlem, & Herrmann, 2005) compared to other patterns. 
Considering the ongoing conceptual and methodological controversies regarding the processes specifically engaged when viewing faces, the present and past findings on the horizontal tuning of face perception are remarkable, as they indicate that most of the information driving face-specific processing is contained in a narrow band centered on horizontal orientation. Studies on the horizontal tuning of face perception thus offer perspectives toward a parsimonious description of the visual information feeding face-specific processing, grounded in the encoding principles known to operate at earlier processing stages. More generally, these studies show that considering image orientation content, a visual dimension thought to be fully resolved at early stages of visual processing, helps in understanding the complexity and specificity of high-level face representations. 
Conclusion
In conclusion, it was recently shown that upright face perception is special in that it relies more heavily on the processing of horizontal information than does the perception of inverted faces or of other visual categories (Dakin & Watt, 2009; Goffaux & Dakin, 2010; Goffaux et al., 2011; Pachai et al., 2013b). The present experiments addressed the visual processing stage at which face perception is tuned to horizontal orientation. We provide ERP evidence that the face-specific neural representations activated early, in the N170 time window, are preferentially tuned to the information contained in the horizontal orientation range. This indicates that a core aspect of face-specific processing is its early reliance upon horizontally structured image content. The present research line offers new perspectives for a description of the visual information feeding face-specific perception. 
Supplementary Materials
Acknowledgments
The authors are grateful to Kevin Collet, Danielle Hoffmann, and Anne-Marie Schuller for their assistance during EEG recording on the 64-electrode cap and to Joan Liu-Shuang for her help during EEG recording on the 127-electrode cap. We would also like to thank Steven C. Dakin as the algorithms used for narrow-band phase-randomization were developed based on his original filtering codes. Corentin Jacques and Valérie Goffaux are supported by the Belgian National Fund for Scientific Research (FRS-FNRS) and the Belgian Federal Science Policy Office (BELSPO). 
Commercial relationships: none. 
Corresponding author: Valerie Goffaux. 
Email: valerie.goffaux@uclouvain.be. 
Address: Research Institute for Psychological Science (ISPSY), Université Catholique de Louvain, Belgium. 
References
Bartlett J. C. Searcy J. (1993). Inversion and configuration of faces. Cognitive Psychology, 25, 281–316. [CrossRef] [PubMed]
Barton J. J. Keenan J. P. Bass T. (2001). Discrimination of spatial relations and features in faces: effects of inversion and viewing duration. British Journal of Psychology, 92, 527–549. [CrossRef]
Bentin S. McCarthy G. Perez E. Puce A. Allison T. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565. [CrossRef] [PubMed]
Boutsen L. Humphreys G. W. Praamstra P. Warbrick T. (2006). Comparing neural correlates of configural processing in faces and objects: an ERP study of the Thatcher illusion. Neuroimage, 32, 352–367. [CrossRef] [PubMed]
Caharel S. Arripe O. Ramon M. Jacques C. Rossion B. (2009). Early adaptation to repeated unfamiliar faces across viewpoint changes in the right hemisphere: Evidence from the N170 ERP component. Neuropsychologia, 47, 639–643. [CrossRef] [PubMed]
Caharel S. Fiori N. Bernard C. Lalonde R. Rebai M. (2006). The effects of inversion and eye displacements of familiar and unknown faces on early and late-stage ERPs. International Journal of Psychophysiology, 62, 141–151. [CrossRef] [PubMed]
Caldara R. Schyns P. Mayer E. Smith M. L. Gosselin F. Rossion B. (2005). Does prosopagnosia take the eyes out of face representations? Evidence for a defect in representing diagnostic facial information following brain damage. Journal of Cognitive Neuroscience, 17, 1652–1666. [CrossRef] [PubMed]
Caldara R. Seghier M. L. Rossion B. Lazeyras F. Michel C. Hauert C.-A. (2006). The fusiform face area is tuned for curvilinear patterns with more high-contrasted elements in the upper part. NeuroImage, 31, 313–319. [CrossRef] [PubMed]
Cohen J. (1988). Statistical power analysis for the behavioral sciences (2nd ed). Hillsdale, NJ: Lawrence Erlbaum Associates.
Dakin S. C. Watt R. J. (2009). Biological “bar codes” in human faces. Journal of Vision, 9 (4): 2, 1–10, http://www.journalofvision.org/content/9/4/2, doi:10.1167/9.4.2. [PubMed] [Article]
De Heering A. Schiltz C. (2013). Sensitivity to spacing information increases more for the eye region than for the mouth region during childhood. International Journal of Behavioral Development, 37, 166–171. [CrossRef]
Delorme A. Makeig S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134, 9–21. [CrossRef] [PubMed]
Ellis H. D. (1975). Recognizing faces. British Journal of Psychology, 66, 409–426. [CrossRef] [PubMed]
Farah M. J. Tanaka J. W. Drain H. M. (1995). What causes the face inversion effect? Journal of Experimental Psychology: Human Perception & Performance, 21, 628–634. [CrossRef]
Ganis G. Smith D. Schendan H. E. (2012). The N170, not the P1, indexes the earliest time for categorical perception of faces, regardless of interstimulus variance. NeuroImage, 62, 1563–1574. [CrossRef] [PubMed]
Garner W. R. (1970). The stimulus in information processing. American Psychology, 25, 350–358. [CrossRef]
George N. Evans J. Fiori N. Davidoff J. Renault B. (1996). Brain events related to normal and moderately scrambled faces. Brain Research: Cognitive Brain Research, 4, 65–76. [CrossRef] [PubMed]
Goffaux V. (2009). Spatial interactions in upright and inverted faces: Re-exploration of spatial scale influence. Vision Research, 49, 774–781. [CrossRef] [PubMed]
Goffaux V. Dakin S. C. (2010). Horizontal information drives the behavioral signatures of face processing. Frontiers in Psychology, 1, 143. [PubMed]
Goffaux V. Gauthier I. Rossion B. (2003). Spatial scale contribution to early visual differences between face and object processing. Cognitive Brain Research, 16, 416–424. [CrossRef] [PubMed]
Goffaux V. Rossion B. (2006). Faces are “spatial”—Holistic face perception is supported by low spatial frequencies. Journal of Experimental Psychology: Human Perception & Performance, 32, 1023–1039. [CrossRef]
Goffaux V. Rossion B. (2007). Face inversion disproportionately impairs the perception of vertical but not horizontal relations between features. Journal of Experimental Psychology: Human Perception & Performance, 33, 995–1002. [CrossRef]
Goffaux V. van Zon J. Schiltz C. (2011). The horizontal tuning of face perception relies on the processing of intermediate and high spatial frequencies. Journal of Vision, 11 (10): 1, 1–9, http://www.journalofvision.org/content/11/10/1, doi:10.1167/11.10.1. [PubMed] [Article] [CrossRef] [PubMed]
Gold J. M. Mundy P. J. Tjan B. S. (2012). The perception of a face is no more than the sum of its parts. Psychological Science, 23, 427–434, doi:10.1177/0956797611427407. [CrossRef] [PubMed]
Hansen B. Essock E. (2005). Influence of scale and orientation on the visual perception of natural scenes. Visual Cognition, 12, 1199–1234. [CrossRef]
Hansen B. C. Johnson A. P. Ellemberg D. (2012). Different spatial frequency bands selectively signal for natural image statistics in the early visual system. Journal of Neurophysiology, 108, 2160–2172. [CrossRef] [PubMed]
Haxby J. V. Hoffman E. A. Gobbini M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223–233. [CrossRef]
Hubel D. H. Wiesel T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. Journal of Physiology, 160, 106–154. [CrossRef] [PubMed]
Itier R. J. Alain C. Sedore K. McIntosh A. R. (2007). Early face processing specificity: It's in the eyes! Journal of Cognitive Neuroscience, 19, 1815–1826. [CrossRef] [PubMed]
Itier R. J. Latinus M. Taylor M. J. (2006). Face, eye and object early processing: What is the face specificity? NeuroImage, 29, 667–676. [CrossRef] [PubMed]
Itier R. J. Taylor M. J. (2004a). N170 or N1? Spatiotemporal differences between object and face processing using ERPs. Cerebral Cortex, 14, 132–142. [CrossRef]
Itier R. J. Taylor M. J. (2004b). Effects of repetition learning on upright, inverted and contrast-reversed face processing using ERPs. NeuroImage, 21, 1518–1532. [CrossRef]
Jacques C. d'Arripe O. Rossion B. (2007). The time course of the inversion effect during individual face discrimination. Journal of Vision, 7 (8): 3, 1–9, http://www.journalofvision.org/content/7/8/3, doi:10.1167/7.8.3. [PubMed] [Article]
Jacques C. Rossion B. (2006). The speed of individual face categorization. Psychological Science, 17, 485–492. [CrossRef] [PubMed]
Jacques C. Rossion B. (2007). Early electrophysiological responses to multiple face orientations correlate with individual discrimination performance in humans. NeuroImage, 36, 863–876. [CrossRef] [PubMed]
Jacques C. Rossion B. (2009). The initial representation of individual faces in the right occipito-temporal cortex is holistic: Electrophysiological evidence from the composite face illusion. Journal of Vision, 9 (6): 8, 1–16, http://www.journalofvision.org/content/9/6/8, doi:10.1167/9.6.8. [PubMed] [Article] [PubMed]
Jacques C. Rossion B. (2010). Misaligning face halves increases and delays the N170 specifically for upright faces: Implications for the nature of early face representations. Brain Research, 1318, 96–109. [CrossRef] [PubMed]
Joyce C. Rossion B. (2005). The face-sensitive N170 and VPP components manifest the same brain processes: The effect of reference electrode site. Clinical Neurophysiology, 116, 2613–2631. [CrossRef] [PubMed]
Leder H. Bruce V. (2000). When inverted faces are recognized: the role of configural information in face recognition. Quarterly Journal of Experimental Psychology: A, 53, 513–536. [CrossRef]
Letourneau S. M. Mitchell T. V. (2008). Behavioral and ERP measures of holistic face processing in a composite task. Brain and Cognition, 67, 234–245. [CrossRef] [PubMed]
Lewis M. B. Edmonds A. J. (2003). Face detection: Mapping human performance. Perception, 32, 903–920. [CrossRef] [PubMed]
Linkenkaer-Hansen K. Palva J. M. Sams M. Hietanen J. K. (1998). Face-selective processing in human extrastriate cortex around 120 ms after stimulus onset revealed by magneto- and electroencephalography. Neuroscience Letters, 253, 147–150. [CrossRef] [PubMed]
McGugin R. W. Gauthier I. (2010). Perceptual expertise with objects predicts another hallmark of face perception. Journal of Vision, 10 (10): 15, 1–12, http://www.journalofvision.org/content/10/10/15, doi:10.1167/10.10.15. [PubMed] [Article] [CrossRef]
McKone E. Yovel G. (2009). Why does picture-plane inversion sometimes dissociate perception of features and spacing in faces, and sometimes not? Toward a new theory of holistic processing. Psychonomic Bulletin and Review, 16, 778–797. [CrossRef] [PubMed]
Milivojevic B. Clapp W. C. Johnson B. W. Corballis M. C. (2003). Turn that frown upside down: ERP effects of thatcherization of misorientated faces. Psychophysiology, 40, 967–978. [CrossRef] [PubMed]
Näsänen R. (1999). Spatial frequency bandwidth used in the recognition of facial images. Vision Research, 39, 3824–3833. [CrossRef] [PubMed]
Ohla K. Busch N. A. Dahlem M. A. Herrmann C. S. (2005). Circles are different: The perception of Glass patterns modulates early event-related potentials. Vision Research, 45, 2668–2676. [CrossRef] [PubMed]
Oppenheim A. V. Lim J. S. (1981). The importance of phase in signals. Proceedings of IEEE, 69, 529–541. [CrossRef]
Pachai M. Sekuler A. Bennett P. (2013a). Masking of individual facial features reveals the use of horizontal structure in the eyes. Journal of Vision, 13 (9): 411, http://www.journalofvision.org/content/13/9/411, doi:10.1167/13.9.411. [Abstract]
Pachai M. V. Sekuler A. B. Bennett P. J. (2013b). Sensitivity to information conveyed by horizontal contours is correlated with face identification accuracy. Frontiers in Psychology, 4, 74. [CrossRef]
Piotrowski L. N. Campbell F. W. (1982). A demonstration of the visual importance and flexibility of spatial-frequency amplitude and phase. Perception, 11, 337–346. [CrossRef] [PubMed]
Rhodes G. Brake S. Atkinson A. P. (1993). What's lost in inverted faces? Cognition, 47, 25–57. [CrossRef] [PubMed]
Rosburg T. Ludowig E. Dümpelmann M. Alba-Ferrara L. Urbach H. Elger C. E. (2010). The effect of face inversion on intracranial and scalp recordings of event-related potentials. Psychophysiology, 47 (1), 147–157, doi:10.1111/j.1469-8986.2009.00881.x. [CrossRef] [PubMed]
Rossion B. (2008). Picture-plane inversion leads to qualitative changes of face perception. Acta Psychologica, 128, 274–289. [CrossRef]
Rossion B. Caharel S. (2011). ERP evidence for the speed of face categorization in the human brain: Disentangling the contribution of low-level visual cues from face perception. Vision Research, 51, 1297–1311. [CrossRef] [PubMed]
Rossion B. Delvenne J. F. Debatisse D. Goffaux V. Bruyer R. Crommelinck M. Guérit J. M. (1999). Spatio-temporal localization of the face inversion effect: an event-related potentials study. Biological Psychology, 50 (3), 173–189. [CrossRef]
Rossion B. Gauthier I. Tarr M. J. Despland P. Bruyer R. Linotte S. Crommelinck M. (2000). The N170 occipito-temporal component is delayed and enhanced to inverted faces but not to inverted objects: An electrophysiological account of face-specific processes in the human brain. Neuroreport, 11 (1), 69–74. [CrossRef] [PubMed]
Rossion B. Jacques C. (2008). Does physical interstimulus variance account for early electrophysiological face sensitive responses in the human brain? Ten lessons on the N170. NeuroImage, 39, 1959–1979. [CrossRef] [PubMed]
Rossion B. Jacques C. (2011). The N170: Understanding the time-course of face perception in the human brain. In Kappenman E. S. Luck S. J. (Eds.), The Oxford handbook of ERP components (pp. 115–142). New York: Oxford University Press.
Rossion B. Joyce C. A. Cottrell G. W. Tarr M. J. (2003). Early lateralization and orientation tuning for face, word, and object processing in the visual cortex. Neuroimage, 20, 1609–1624. [CrossRef] [PubMed]
Rousselet G. A. Husk J. S. Bennett P. J. Sekuler A. B. (2008). Time course and robustness of ERP object and face differences. Journal of Vision, 8 (12): 3, 1–18, http://www.journalofvision.org/content/8/12/3, doi:10.1167/8.12.3. [PubMed] [Article]
Rousselet G. A. Mace M. J. M. Fabre-Thorpe M. (2003). Is it an animal? Is it a human face? Fast processing in upright and inverted natural scenes. Journal of Vision, 3 (6): 5, 440–455, http://www.journalofvision.org/content/3/6/5, doi:10.1167/3.6.5. [PubMed] [Article] [PubMed]
Sadr J. Jarudi I. Sinha P. (2003). The role of eyebrows in face recognition. Perception, 32, 285–293. [CrossRef] [PubMed]
Sekuler A. B. Gaspar C. M. Gold J. M. Bennett P. J. (2004). Inversion leads to quantitative, not qualitative, changes in face processing. Current Biology, 14, 391–396. [CrossRef] [PubMed]
Sergent J. (1984). An investigation into component and configural processes underlying face perception. British Journal of Psychology, 75, 221–242. [CrossRef] [PubMed]
Schneider B. L. DeLong J. E. Busey T. A. (2007). Added noise affects the neural correlates of upright and inverted faces differently. Journal of Vision, 7 (4): 4, 1–24, http://www.journalofvision.org/content/7/4/4, doi:10.1167/7.4.4. [PubMed] [Article]
Stanislaw H. Todorov N. (1999). Calculation of signal detection theory measures. Behavior Research Methods, Instruments, & Computers, 31, 137–149. [CrossRef]
Valentine T. (1988). Upside-down faces: A review of the effect of inversion upon face recognition. British Journal of Psychology, 79, 471–491. [CrossRef] [PubMed]
Wilkinson F. James T. W. Wilson H. R. Gati J. S. Menon R. S. Goodale M. A. (2000). An fMRI study of the selective activation of human extrastriate form vision areas by radial and concentric gratings. Current Biology, 10, 1455–1458. [CrossRef] [PubMed]
Yin R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145.
Young A. W. Hellawell D. Hay D. C. (1987). Configurational information in face perception. Perception, 16, 747–759. [CrossRef] [PubMed]
Yovel G. Kanwisher N. (2004). Face perception: Domain specific, not process specific. Neuron, 44, 889–898. [PubMed]
Figure 1
 
Making of the stimuli. (A) An image (i.e., face, car, or scene) is first decomposed into its phase and amplitude spectra. The phase coefficients around either the horizontal or the vertical orientation are randomized using a narrow orientation filter with a Gaussian profile. The unaltered amplitude spectrum is then combined with the modified phase spectrum to generate images in which the image information contained in the vertical or the horizontal orientation is rendered meaningless (i.e., vertical and horizontal randomization, respectively). (B) Example original, vertically randomized, and horizontally randomized face, car, and scene images used in the experiments.
Figure 2
 
The magnitude of the inversion effect in sensitivity (d′) and correct RT (ms) is plotted for the different randomization by category conditions. Error bars are across-subjects SEM.
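For reference, the sensitivity measure (d′) plotted in Figure 2 is conventionally computed as the difference between the z-transformed hit and false-alarm rates (Stanislaw & Todorov, 1999). A minimal sketch follows, using one common log-linear correction for extreme rates; the correction choice is an assumption, not necessarily the one used in the present analyses.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).

    Adding 0.5 to each cell (log-linear correction) keeps z() finite when a
    rate would otherwise be 0 or 1; see Stanislaw & Todorov (1999) for this
    and alternative conventions.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

Chance performance yields d′ = 0, and higher values indicate better discrimination of same from different stimuli.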
Figure 3
 
Grand average ERP waveforms (from −50 to 350 ms relative to stimulus onset) elicited in the 12 experimental conditions at two occipito-temporal electrodes (PO7 in left hemisphere; PO8 in right hemisphere) in Experiment 2. (A) ERPs elicited by upright and inverted faces are grouped by randomization with nonrandomized in the first row, vertically randomized in the second row, and horizontally randomized in the third row. Each plot shows the ERP response to upright face stimuli (solid line), inverted face stimuli (black dashed line), and the difference between upright and inverted (gray dashed line). Results of the time-by-electrode permutation tests comparing upright to inverted stimuli are shown in the bottom part of each plot for the displayed ERP waveforms. Significant differences (p < 0.01; two-tailed; 10,000 permutations) are color-coded as a function of the amplitude of the ERP difference between inverted and upright conditions. The bottom row shows superimposed inversion-effect difference waves (inverted minus upright) for each randomization condition (±SEM, shaded). (B) Identical to (A) for car stimuli.
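The time-by-electrode permutation tests reported in Figures 3 and 4 operate on the full spatiotemporal data. As a simplified illustration of the underlying logic at a single electrode and time point, a two-tailed paired permutation test can be sketched as follows; condition labels are permuted within subject by sign-flipping each subject's difference, a standard paired scheme but not necessarily the authors' exact procedure, and the variable names are hypothetical.

```python
import numpy as np

def paired_permutation_p(cond_a, cond_b, n_perm=10000, seed=0):
    """Two-tailed permutation p-value for a paired condition difference.

    cond_a, cond_b : per-subject ERP amplitudes (same length), e.g.,
    inverted vs. upright at one electrode and time point.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(cond_a, float) - np.asarray(cond_b, float)
    observed = abs(diffs.mean())
    # Randomly sign-flip each subject's difference to build the null
    # distribution of the absolute mean difference.
    flips = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
    null = np.abs((flips * diffs).mean(axis=1))
    # Count the observed statistic among the permutations (avoids p = 0).
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
```

In the full spatiotemporal analysis this test is repeated across electrodes and time samples, with the significance threshold (here p < 0.01, two-tailed) applied at each point.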
Figure 4
 
Time course of the significant ERP effects of stimulus inversion and interaction with randomization for face and car stimuli. Left column: Topographical maps (view from above the head) of the significant effect (p < 0.01; two-tailed; 10,000 permutations) of face inversion in the nonrandomized (first row), vertically randomized (second row), and horizontally randomized conditions (third row), as well as the interaction between face orientation and vertical versus horizontal randomization (fourth row). Maps are shown for nine time windows from 80 to 240 ms after stimulus onset in 20-ms steps. Significant differences are color-coded as a function of the amplitude of the ERP difference between inverted and upright faces. White means there was no significant effect. The time window of the N170 component (120–200 ms) is highlighted on the top row. Right column: Identical to the left column except the maps show significant effects for car stimuli. The bottom row of the Figure depicts the differential effect of randomization on stimulus inversion as measured for faces and cars (i.e., three-way interaction category × randomization (vertical vs. horizontal) × planar orientation). This clearly shows the stronger differential effect of randomization (vertical vs. horizontal) on the magnitude of the IE for faces compared to cars. Topographical maps were created using EEGLAB (Delorme & Makeig, 2004).
Figure 5
 
The magnitude of the inversion effect on the peak latency (left) and mean amplitude (right) of the N170 component averaged over bilateral occipito-temporal electrodes PO7/PO8 is shown separately for each category by randomization conditions. Latency was measured at the peak of the N170 and amplitude was measured as the mean amplitude in a ±20-ms window centered on the latency of the across-subject average N170 computed for each condition. Latency and amplitude values were then subtracted to reflect the inversion effect. Error bars are SEM.