September 2021, Volume 21, Issue 10
Open Access Article
Self-operated stimuli improve subsequent visual motion integration
Author Affiliations
  • Giulia Sedda
    Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
    giulia.sedda@edu.unige.it
  • David J. Ostry
    Department of Psychology, McGill University, Montreal, Canada
    Haskins Laboratories, New Haven, CT, USA
    david.ostry@mcgill.ca
  • Vittorio Sanguineti
    Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
    vittorio.sanguineti@unige.it
  • Silvio P. Sabatini
    Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
    silvio.sabatini@unige.it
Journal of Vision, September 2021, Vol. 21(10), 13. doi: https://doi.org/10.1167/jov.21.10.13
Abstract

Evidence of perceptual changes that accompany motor activity has been limited primarily to audition and somatosensation. Here we asked whether motor learning results in changes to visual motion perception. We designed a reaching task in which participants were trained to make movements along several directions, while visual feedback was provided by an intrinsically ambiguous moving stimulus directly tied to hand motion. We find that training improves coherent motion perception and that changes in movement are correlated with perceptual changes. No perceptual changes are observed after passive training, even when observers are provided with an explicit strategy to facilitate single motion perception. A Bayesian model suggests that movement training promotes the fine-tuning of the internal representation of stimulus geometry. These results emphasize the role of sensorimotor interaction in determining the persistent properties in space and time that define a percept.

Introduction
Active interaction with the environment is a defining feature of our daily activities and critically relies on the interplay between motor and perceptual processes. This continuous exchange, besides promoting proper calibration of the sensory and motor systems and stabilizing the functional architecture of the respective circuits (Held & Hein, 1963), allows mutual adaptation after sensory perturbations or motor training. The influence of movement on perception has been documented in situations in which movement changes – induced by adaptation or learning – elicit perceptual changes. For instance, adaptation to force fields (Ostry et al., 2010; Vahdat et al., 2011; Mattar et al., 2012), visuomotor rotations (Cressman & Henriques, 2009; Volcic et al., 2013), optic prisms (Harris, 1963; Beckett, 1980), and locomotion (Jensen et al., 1998; Leech et al., 2018) induces a shift in position sense. Similarly, adaptation to altered auditory feedback in speech results in changes in speech perception (Nasir & Ostry, 2009; Lametti et al., 2012). 
There has been substantially less work on the effects of motor learning on visual function (Brown et al., 2007). It has been suggested that movement and perception share common representations (Prinz, 1997). Most studies report transient changes to visual perception that accompany movement (see Schütz-Bosbach & Prinz, 2007, for a review). Movement execution (Zwickel et al., 2007; Beets et al., 2010a), movement planning (Wohlschläger, 2001), and cognitive expectations (Veto et al., 2018) each shape the visual perception of a moving stimulus. Movement can bias perceptual sensitivity toward visual events that either share features with what we are currently doing (Wohlschläger, 2000) or that deviate from the expected sensory consequences of our movements (Zwickel et al., 2007). This suggests that action may guide inferential processes from visual cues to categories, and that cognitive or contextual expectations can concurrently influence our perceptual judgments (Beets et al., 2010a; De Lange et al., 2018; Dogge et al., 2019). Yet, these studies provide no evidence of visual perceptual learning, that is, an enhancement of perceptual discrimination or detection capabilities after motor practice with a visual stimulus. Vision is indeed a highly reliable source of information, and it is difficult to induce changes in visual perception, at least when simple forms of visual feedback such as displayed positions or trajectories are involved. However, vision, like other exteroceptive senses, does not provide a unique interpretation of reality, for example, when we look at objects with shadows or in different lighting. During development, through active interaction with the environment, we learn to combine different cues and contextual information to find a unique solution that usually corresponds to the veridical interpretation. A grating that moves through an aperture is an example of an inherently ambiguous visual pattern, because its movement direction cannot be uniquely determined from visual information alone (Wallach, 1935; Fennema & Thompson, 1979; Adelson & Movshon, 1982). Moreover, it has the desirable feature of selectively activating specific early spatiotemporal frequency channels in the cortex. When we observe two superimposed gratings moving in different directions – a “plaid” stimulus – we tend to integrate their drifting motions into one coherent motion. Alternatively, the plaid can be perceived as two separate gratings that slide over each other in different directions – a situation referred to as transparent motion (Stoner et al., 1990). By varying the features of the individual gratings, the perceptual ambiguity can be manipulated (Stoner & Albright, 1992; Kim & Wilson, 1993; Hupé & Rubin, 2004). As a general rule, when the two component gratings are more balanced, that is, more similar in terms of spatial frequency, contrast, and luminance, the plaid is more likely to be perceived as a coherent pattern moving in one direction. 
To understand how movement affects the way we make sense of this complex visual information, we ask specifically whether experiencing the visual consequences of self-generated movements can promote perceptual changes that affect subsequent judgment tasks. In this respect, the point is not learning a motor skill or adapting to external perturbations, but simply exercising sensorimotor contingencies by experiencing the sensory consequences of self-generated movements. Accordingly, we designed a motor task in which the direction and speed of the hand are continuously displayed as a plaid moving through an aperture. We then looked at whether motor training affects the ability to perceive subsequent plaid motions. A Bayesian generative model of the perceptual process helped to identify the underlying mechanisms. 
Methods
Subjects
A total of 30 subjects (11 male and 19 female, 18–30 years old) participated in this study. All participants had normal or corrected-to-normal vision and reported no history of a neurological disorder. They were naïve to the purpose of the study and received written and verbal instructions before the start of the experiment. Each participant was randomly assigned to one of three groups. 
The sample size of 10 subjects per group was determined by a power calculation based on the t test for normally distributed data with unknown standard deviation. The research was approved by the Ethical Committee of the Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa. Each subject signed a consent form conforming to these guidelines. 
Apparatus
Visual stimuli were presented on a 19-inch LCD monitor (Samsung B2430L) at 1920\(\times\)1080 pixels, refreshed at 60 Hz. In a dimly lit room, participants were seated in front of the screen at a distance of about 57 cm, so that the visual angle of the whole display was 60°; see Figure 1a. In one part of the experiment (see below), participants grasped the puck of a digitizing tablet (CalComp, Inc, 3200-series DrawingSlate II, Model 32120) to actively drive the motion of the visual stimulus using planar movements. The digitizer had a 305 mm \(\times\) 457 mm workspace and a 125 Hz sampling rate. The center point of the screen was mapped onto the center of the digitizing tablet with a 1:1 scale factor; see Figure 1c. 
Stimuli
We presented a plaid stimulus composed of two square-wave gratings through a circular aperture, about 13° in diameter, on a black background, as shown in Figure 1a. The luminance of the black background outside the aperture was 0 cd/\(\rm {m^2}\). The two gratings had normal directions \(\theta _1\) and \(\theta _2\). The plaid moved at speed \(v=\) 5°/s in the direction \(\theta =\) 45° (from the lower left corner of the screen to the upper right corner). \(\Delta {\theta _1}=\theta _1-\theta =\) −60° and \(\Delta {\theta _2}=\theta _2-\theta =\) −75.5° define the relative directions of the individual gratings with respect to the direction of the plaid. With this geometric arrangement, the ratio between the two gratings' speeds is \(\cos \Delta {\theta _1}/\cos \Delta {\theta _2}\). In particular, the stimuli were designed as plaids whose direction falls outside the range spanned by the directions of the two component gratings (type II plaids; see Ferrera & Wilson, 1990). We chose grating directions that were relatively close to one another and far from the direction of the whole plaid pattern. Because of this geometric arrangement, the plaid motion direction was distinct from that of the gratings (Cropper et al., 1996), and the directions of the gratings were sufficiently close to one another to be interchangeable with their average. Each grating was composed of dark (55–65 cd/\(\rm {m^2}\)) and light (115–125 cd/\(\rm {m^2}\)) stripes, with a spatial frequency of 0.6 cycle/°. The stimulus was presented in transparency (Stone et al., 1990; Stoner et al., 1990; Stoner & Albright, 1992), and the perceptual uncertainty was modulated by varying the contrast level of each grating. The overall plaid image was defined as:  
\begin{eqnarray*} L({\boldsymbol{x}},t)=L_0 [1+C_1 g_1({\boldsymbol{x}},t)+C_2 g_2({\boldsymbol{x}},t)], \end{eqnarray*}
where \(L_0\) is the mean intensity, \(g_1\) and \(g_2\) are the functions that define the two component gratings, and \(C_1\) and \(C_2\) are the gratings' respective contrast levels (Stoner et al., 1990). The total contrast \(C = C_1 + C_2\) was kept constant, and the relative contrast difference between the gratings of each plaid was defined as \(\Delta {c}=|C_1-C_2|/C\). In all experiments, we set \(L_0\simeq \rm 90\, cd/m^2\) and \(C = 0.5\). Participants were instructed to maintain fixation at the center of the stimulus for the entire duration of stimulus presentation. Stimuli were generated using the Psychophysics Toolbox for Matlab (Brainard, 1997; Kleiner et al., 2007). 
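As an illustration, the following sketch renders one frame of such a stimulus according to the luminance equation above. It is a minimal reconstruction, not the authors' Psychtoolbox code: the function name `plaid_frame`, the use of a signed sinusoid for the square wave, and the sampling resolution are assumptions made for this sketch.

```python
import numpy as np

def plaid_frame(t, theta=np.deg2rad(45.0), v=5.0, dc=0.4, sf=0.6,
                L0=90.0, C=0.5, size=256, ap_deg=13.0):
    """One frame of the plaid, following L(x,t) = L0 * [1 + C1*g1 + C2*g2].

    dc is the relative contrast difference |C1 - C2| / C (with C1 + C2 = C);
    sf is the spatial frequency in cycles/deg; v is the plaid speed in deg/s;
    the grating normals sit at theta - 60 deg and theta - 75.5 deg.
    """
    C1, C2 = C * (1 + dc) / 2.0, C * (1 - dc) / 2.0
    xs = np.linspace(-ap_deg / 2, ap_deg / 2, size)
    X, Y = np.meshgrid(xs, xs)
    L = np.full_like(X, L0)
    for Ci, dth in ((C1, np.deg2rad(-60.0)), (C2, np.deg2rad(-75.5))):
        th_i = theta + dth                    # this grating's normal direction
        v_i = v * np.cos(dth)                 # plaid speed projected on the normal
        phase = 2 * np.pi * sf * (X * np.cos(th_i) + Y * np.sin(th_i) - v_i * t)
        L += L0 * Ci * np.sign(np.sin(phase))   # square-wave grating in {-1, +1}
    L[np.hypot(X, Y) > ap_deg / 2] = 0.0      # black outside the circular aperture
    return L
```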
Experimental protocol
The experimental procedure had three phases; see Figure 1c (top). Participants were initially administered a perceptual judgment task (pre-training test). Next, they underwent a training phase under a variety of conditions (see below). After training, they repeated the perceptual judgment task (post-training test). 
Perceptual judgment task
The purpose of this test was to quantify the ability to correctly assess the direction of plaid motion as the relative contrast difference of the two gratings (\(\Delta {c}\)) was varied. The test used a two-alternative forced-choice (2AFC) paradigm (Figure 1d). Each trial started with a fixation point (black screen with a white cross at the center) displayed for 2 s. Then, a red arrow with direction \(\theta _{a}=\) 45° was displayed for 1 s. Finally, two different plaids were presented for 1 s each, separated by a 1 s fixation point. The two plaids were identical and both moved in the direction \(\theta =45\)°, but had a different \(\Delta {c}\). At the end of the trial, participants were asked to choose which of the two plaids had a movement direction most similar to that denoted by the red arrow. They had to answer by pressing the left or right arrow on the keyboard within a 3 s time limit, to indicate the first or the second plaid, respectively. Throughout the entire test, one plaid (the reference stimulus, R) had a constant contrast difference, \(\Delta {c_R}=\) 0.8, which corresponds to a large imbalance in the contrast of the component gratings and in turn favours the perception of the individual grating motions. In the other plaid (the test stimulus, T), the contrast difference \(\Delta {c_T}\) changed on each trial, within a 0 to 0.8 range. The test and reference plaids were presented in random order. 
We used a Bayesian adaptive procedure – the \(\Psi\) (Psi) method (Kontsevich & Tyler, 1999; Prins, 2013) – to select the value of \(\Delta {c_T}\) on the current trial, based on the participant's answers in previous trials. Selection of the test stimulus was taken as the correct answer. Every time the subject answered correctly, the \(\Delta {c_T}\) value was increased, so that it gradually approached \(\Delta {c_R}\). 
The entire perceptual judgment test took 100 trials to complete, corresponding to a duration of about 30 minutes. 
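For readers unfamiliar with the \(\Psi\) method, the sketch below captures its core logic under simplifying assumptions: a grid posterior over a two-parameter psychometric function and a one-step-ahead expected-entropy criterion for choosing the next \(\Delta {c_T}\). The parameter grids, the cumulative-Gaussian form, and the absence of a lapse rate are illustrative choices, not the implementation used in the study; see Kontsevich and Tyler (1999) and Prins (2013) for the full procedure.

```python
import numpy as np
from scipy.stats import norm

# Grids over psychometric parameters (threshold a, spread b) and stimulus levels.
a_grid = np.linspace(0.0, 0.8, 41)
b_grid = np.linspace(0.02, 0.4, 20)
levels = np.linspace(0.0, 0.8, 41)           # candidate Delta c_T values
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
posterior = np.ones_like(A) / A.size         # uniform prior over (a, b)

def p_test(x, a, b):
    # P(select test | Delta c_T = x): near 1 for x ~ 0, falling toward 0.5
    # as the test approaches the reference; crosses 0.75 at x = a.
    return 0.5 + 0.5 * norm.cdf((a - x) / b)

# Precompute P(select test | level, a, b): shape (levels, |a_grid|, |b_grid|).
P = np.stack([p_test(x, A, B) for x in levels])

def next_level(posterior):
    """Pick the Delta c_T that minimizes the expected posterior entropy."""
    pT = np.tensordot(P, posterior, axes=([1, 2], [0, 1]))   # marginal P(T | x)
    H = np.empty(len(levels))
    for i in range(len(levels)):
        post_T = P[i] * posterior / pT[i]
        post_R = (1.0 - P[i]) * posterior / (1.0 - pT[i])
        h = lambda p: -np.sum(p * np.log(p + 1e-12))
        H[i] = pT[i] * h(post_T) + (1.0 - pT[i]) * h(post_R)
    return int(np.argmin(H))

def update(posterior, i, chose_test):
    """Bayes update of the (a, b) posterior after the response to level i."""
    like = P[i] if chose_test else 1.0 - P[i]
    post = like * posterior
    return post / post.sum()
```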
Active motor training
Participants were instructed to perform out-and-back planar arm movements between two briefly presented visual cues, in a target direction \(\theta _T\) (Figure 1c, bottom, left). The motion of a plaid on the screen was continuously yoked to the instantaneous direction of hand movement, \(\theta (t)\), so that the two gratings moved in directions \(\theta _1(t)=\theta (t)+\Delta {\theta _1}\) and \(\theta _2(t)=\theta (t)+\Delta {\theta _2}\), while their relative orientations with respect to the plaid motion, that is, \(\Delta {\theta _1}\) and \(\Delta {\theta _2}\), remained constant. 
The training phase was organized into a series of trials, each characterized by a different target hand direction. 
At the beginning of each trial, participants had to place the hand (depicted as a blue cursor on the screen) inside a start region (circle on a black background) and hold it there for 2 s. Then both the start region and the cursor disappeared, and a circular aperture was displayed. Two white circles placed just outside the aperture were displayed for 1 s, at opposite sides with respect to the center of the aperture, 28° of visual angle from one another with respect to the participant. The circles indicated the target hand direction for that trial. As the circles disappeared, a plaid appeared inside the aperture. Participants were instructed to move the hand back and forth in the target direction, between the two remembered circle positions. Participants were encouraged to maintain a speed no greater than 5°/s – the speed of the plaid used in the perceptual judgment task. To aid in maintaining the correct speed, participants continuously received visual feedback on movement speed (circular spot in the bottom left corner of the screen; green if the speed was \(\le\)5°/s, red otherwise). Each trial had a fixed duration of 30 s. 
During training, participants were prevented from seeing their arm, so that the only visual information about their movement direction was provided by the plaid motion. During the movement training phase, the relative contrast difference, \(\Delta c\), in the plaid was set to that subject’s threshold level, as estimated at the end of the pre-training perceptual judgment task. The entire training protocol involved four target directions (0°, 45°, 90°, 135°) each repeated 10 times in pseudo-random order, for a total of 40 trials and an approximate duration of 40 minutes. 
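In code terms, the yoking between hand and stimulus reduces to a per-frame mapping from the measured hand velocity to the two grating motions plus the speed-feedback color. The sketch below is a plausible reconstruction of that mapping, not the authors' implementation; the function name and the assumption that hand speed is already expressed in degrees of visual angle per second (via the 1:1 tablet-to-screen mapping) are ours.

```python
import numpy as np

def yoked_stimulus(v_hand, dth1=np.deg2rad(-60.0), dth2=np.deg2rad(-75.5)):
    """Per-frame mapping from hand velocity to the two grating motions.

    v_hand: 2D hand velocity, assumed expressed in deg/s of visual angle.
    The plaid direction tracks the instantaneous hand direction theta(t);
    each grating moves along its own normal, theta(t) + dth_i, at the
    projected speed.
    """
    speed = np.hypot(v_hand[0], v_hand[1])
    theta = np.arctan2(v_hand[1], v_hand[0])
    gratings = [(theta + dth, speed * np.cos(dth)) for dth in (dth1, dth2)]
    # Speed-feedback spot: green at or below the 5 deg/s plaid speed, else red.
    feedback = "green" if speed <= 5.0 else "red"
    return gratings, feedback
```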
Visual-only training
Participants were instructed to observe a plaid moving through an aperture on the screen, while performing no movements. The plaid stimulus was the playback of a stimulus generated by another participant in the active training group (Figure 1c, bottom, middle). Again, the total duration of this phase was about 40 minutes. 
Cognitive training
As in the visual-only training condition, participants had to observe a plaid on the screen while performing no movements. In addition, they were provided with a hint for estimating the plaid movement direction – attend to the movements of the grating intersection points, which have the same direction and speed as the plaid; see Figure 1c (bottom, right). Again, the total duration was about 40 minutes. 
Data analysis
For each subject, we quantified performance in the perceptual judgment tasks before and after training by estimating a psychometric curve using a Bayesian adaptive \(\Psi\) (Psi) method (Kontsevich & Tyler, 1999; Prins, 2013) and assuming a normal cumulative distribution function. We used the threshold and slope of the estimated psychometric curve as measures of perceptual performance. The threshold is defined as the \(\Delta {c_T}\) value corresponding to a 75\(\%\) probability of selecting the test stimulus, whereas the slope is defined as the inclination of the psychometric curve at the threshold point. It is important to note that the number of trials (i.e., 100) chosen for the perceptual judgment task allows full convergence of the perceptual threshold values, but not of the slope estimates (Kontsevich & Tyler, 1999). We then assessed whether perceptual performance was affected by training in the active, visual-only, or cognitive training conditions. To do this, we took the perceptual threshold and slope before training (\(\mbox{Th}_{\rm{pre}}\), \(\mbox{Slope}_{\rm{pre}}\)) as the baseline perceptual performance. We then looked at the threshold and slope after training (\(\mbox{Th}_{\rm{post}}\), \(\mbox{Slope}_{\rm{post}}\)). For each quantity and for all experimental conditions, we first assessed normality (Anderson-Darling test). If normality was not ruled out for perceptual thresholds and/or slopes, we ran a repeated-measures two-way ANOVA with time (PRE, POST) and experimental condition (active, visual, cognitive) as within- and between-subject factors. 
When the normality assumption had to be rejected, we used a nonparametric test (Kruskal-Wallis) to assess differences among conditions in the perceptual baseline (\(\mbox{Th}_{\rm{pre}}\) and \(\mbox{Slope}_{\rm{pre}}\)). We then focused on the training-related change (\(\Delta \mbox{Th}=\mbox{Th}_{\rm{post}}-\mbox{Th}_{\rm{pre}}\); likewise for the slope). We tested for differences among experimental conditions using a one-way ANOVA if normality was not ruled out, and a nonparametric test (Wilcoxon rank sum test) otherwise. Post hoc analyses were conducted using pairwise t tests with a Bonferroni-Holm correction. 
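As a concrete illustration of the threshold and slope measures, the sketch below fits a lapse-free 2AFC cumulative-Gaussian curve by maximum likelihood and reads off both quantities. The parameterization (threshold \(a\), spread \(b\)) and the decreasing form of the curve in \(\Delta {c_T}\) are assumptions consistent with the task design, not the authors' exact fitting code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_psychometric(x, y):
    """ML fit of a 2AFC curve P(T | Delta c_T = x) = 0.5 + 0.5 * Phi((a - x)/b).

    x: tested Delta c_T values; y: 1 if the test stimulus was chosen, else 0.
    The curve starts near 1 for easy trials (small Delta c_T), falls to 0.5 as
    the test approaches the reference, and crosses 0.75 exactly at x = a.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)

    def nll(w):
        a, log_b = w
        p = 0.5 + 0.5 * norm.cdf((a - x) / np.exp(log_b))
        p = np.clip(p, 1e-6, 1 - 1e-6)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    a, log_b = minimize(nll, x0=[0.4, np.log(0.1)], method="Nelder-Mead").x
    threshold = a                                  # P = 0.75 at x = a
    slope = -0.5 * norm.pdf(0.0) / np.exp(log_b)   # dP/dx at the threshold
    return threshold, slope
```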
Finally, we examined movements of the hand in the active motor training condition. For each trial, we calculated the statistical distribution of hand velocities (direction and magnitude), by separately accounting for forward and backward movements. We subtracted the target direction from the distribution of movement directions and then took the mean (bias) and standard deviation of the directional error for each block and each subject. We assessed how these quantities changed over the course of training (correlation with block number) and whether these changes correlated with changes in perceptual performance. 
Computational model
Plaid geometry
Plaid geometry is completely specified by the overall plaid velocity, \({\boldsymbol{v}}\), and by the directions of the two gratings, \(\theta _1\) and \(\theta _2\). The velocity of a single grating, \(\boldsymbol{v}_i\), \(i=1,2\), is calculated as the projection of the plaid velocity onto the grating's normal direction: \(\boldsymbol{v}_i = {\boldsymbol{u}}_i \cdot ({\boldsymbol{u}}_i^T \cdot {\boldsymbol{v}})\), where \(\boldsymbol{u}_i = {\left[ \cos \theta _i\, \sin \theta _i \right]}^T\), \(i=1,2\); see Figure 1b. This expression can be rewritten as  
\begin{equation} \boldsymbol{v}_i = (\boldsymbol{u}_i \cdot \boldsymbol{u}_i^T) \cdot \boldsymbol{v} = U_i \cdot \boldsymbol{v}. \end{equation}
(1)
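The projection in Equation 1 can be checked numerically. The short sketch below, using the study's plaid parameters, computes the two grating velocities and verifies that the plaid velocity is recovered as the intersection of the two normal-speed constraints, with the speed ratio \(\cos \Delta {\theta _1}/\cos \Delta {\theta _2} \approx 2\).

```python
import numpy as np

def grating_velocity(v, theta_i):
    """Equation 1: project the plaid velocity onto the grating's normal u_i."""
    u = np.array([np.cos(theta_i), np.sin(theta_i)])
    return np.outer(u, u) @ v                 # U_i @ v, with U_i = u_i u_i^T

theta = np.deg2rad(45.0)
v = 5.0 * np.array([np.cos(theta), np.sin(theta)])   # plaid: 5 deg/s at 45 deg
th1, th2 = theta + np.deg2rad(-60.0), theta + np.deg2rad(-75.5)
v1, v2 = grating_velocity(v, th1), grating_velocity(v, th2)

# Speed ratio cos(-60 deg) / cos(-75.5 deg), approximately 2.
print(np.linalg.norm(v1) / np.linalg.norm(v2))

# Intersection of constraints: the two signed normal speeds determine v.
U = np.array([[np.cos(th1), np.sin(th1)],
              [np.cos(th2), np.sin(th2)]])
assert np.allclose(np.linalg.solve(U, U @ v), v)
```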
 
Sensory system
We assumed that the perceived velocity of each grating, \(\boldsymbol{m}_i, i=1,2\), is affected by additive zero-mean Gaussian noise, so that:  
\begin{equation} \left\lbrace \begin{array}{@{}l@{\quad }l@{}}\boldsymbol{m}_1 = \boldsymbol{v}_1 + \boldsymbol{\eta }_1 = U_1 \cdot \boldsymbol{v} + \boldsymbol{\eta }_1 \\ \boldsymbol{m}_2 = \boldsymbol{v}_2 + \boldsymbol{\eta }_2 = U_2 \cdot \boldsymbol{v} + \boldsymbol{\eta }_2, \end{array}\right. \end{equation}
(2)
where \(\boldsymbol{\eta }_i \sim \mbox{Normal}(0,Q_i)\), \(i=1,2\) and the noise covariance matrix, \(Q_i\), is defined as  
\begin{equation} Q_i=R(\theta _i)^T \cdot \left[{\begin{array}{@{}l@{\quad}l@{}}\sigma _{i \perp }^2 & 0 \\ 0 & \sigma _{i \parallel }^2 \end{array}}\right] \cdot R(\theta _i), \end{equation}
(3)
where \(\sigma _{i \perp }^2\) and \(\sigma _{i \parallel }^2\) are the noise variances in directions that are perpendicular and parallel to grating \(i\), and \(R(\theta _i)\) is a rotation matrix. As in Hedges et al. (2011), we set \(\sigma _{i \parallel }^2 = h\, \sigma _{i \perp }^2\) with \(h=0.3\), so that the covariance matrix is aligned with the grating’s normal direction. As a consequence, we have that \(p({\boldsymbol{m}_i}|{\boldsymbol{v}}) = \mbox{Normal}({\boldsymbol{m}}_i; U_i \cdot {\boldsymbol{v}}, Q_i)\)
Perception of a single grating is known to be affected by contrast. We assume that the noise variance is proportional to the inverse power of the relative contrast \(c_i=C_i/C\), that is, \(\sigma _i^2=s^2/{c_i^q}\), where \(q > 0\) is the power exponent and \(s^2\) is the variance corresponding to a relative contrast \(c_i=\) 1. This model is consistent with the findings of Hürlimann et al. (2002), who derived a similar expression. The expression also predicts that zero contrast (i.e., no grating) corresponds to an infinite noise variance. As a consequence, the covariance matrix of each grating is a function of the contrast: \(Q_i = Q_i(c_i)\). 
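Putting Equations 2 and 3 together with the contrast model, the grating covariance can be constructed as in the sketch below. The values of \(s^2\) and \(q\) are placeholders (in the study they are free parameters fitted per subject); \(h=0.3\) follows Hedges et al. (2011).

```python
import numpy as np

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def grating_covariance(theta_i, c_i, s2=1.0, q=2.0, h=0.3):
    """Noise covariance Q_i(c_i) of Equation 3 with contrast-dependent variance.

    The variance perpendicular to grating i grows with the inverse power of
    the relative contrast, sigma_perp^2 = s2 / c_i**q; the parallel variance
    is h times smaller, so the covariance ellipse is aligned with the
    grating's normal direction.
    """
    s_perp2 = s2 / c_i ** q                  # diverges as c_i -> 0 (no grating)
    D = np.diag([s_perp2, h * s_perp2])
    R = rotation(theta_i)
    return R.T @ D @ R
```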
Bayesian generative model of plaid perception
We used a Bayesian framework to model the way humans perceive plaid motion (Hürlimann et al., 2002; Stocker & Simoncelli, 2006; Hedges et al., 2011). The optimal estimate of plaid velocity, \({\boldsymbol{v}}\), from the observed gratings velocities, \(\boldsymbol{m}_1\) and \(\boldsymbol{m}_2\), is the one which maximizes the posterior probability of \({\boldsymbol{v}}\), given \(\boldsymbol{m}_1\) and \(\boldsymbol{m}_2\):  
\begin{equation} \hat{\boldsymbol{v}} = \arg \max _{\boldsymbol{v}} p({\boldsymbol{v}}|{\boldsymbol{m}}_1, {\boldsymbol{m}}_2) \end{equation}
(4)
From Bayes' theorem, the posterior probability is given by \(p({\boldsymbol{v}}| {\boldsymbol{m}}_1, {\boldsymbol{m}}_2) \propto L(\boldsymbol{v}) \cdot p({\boldsymbol{v}})\), where \(L(\boldsymbol{v})=p({\boldsymbol{m}}_1|{\boldsymbol{v}})\cdot p({\boldsymbol{m}}_2|{\boldsymbol{v}})\) is the likelihood of \({\boldsymbol{v}}\) given the observations (\(\boldsymbol{m}_1\) and \(\boldsymbol{m}_2\)), whereas \(p({\boldsymbol{v}})\) is the velocity prior, which reflects the subject's prior experience with moving stimuli. Several studies have reported a perceptual bias toward low-velocity stimuli, which has been modeled as a zero-mean exponential (Stocker & Simoncelli, 2006) or power-law (Hedges et al., 2011) probability density function. Here, we assume a Gaussian density: \(p({\boldsymbol{v}}) = \mbox{Normal} ({\boldsymbol{v}}; \boldsymbol{0}, I \sigma _p^2)\). 
This perceptual model implies that the contrasts of the gratings affect plaid velocity estimation through the gratings' covariances, \(Q_i(c_i)\). In fact, when the two gratings have the same contrast, they equally activate the corresponding Fourier (bandpass) motion channels and contribute equally to the perception of the moving plaid. However, there is some evidence that perception of the velocity of a single grating is affected by vision of another moving grating with a different contrast (Stone et al., 1990; Champion et al., 2007). Hence, in the case of contrast imbalance, that is, \(\Delta c \ne 0\), one grating systematically affects the perception of the other. To incorporate this effect, we tentatively assumed that the perceptual system uses an inaccurate representation of plaid geometry, \(\hat{U}_i\), thus generating inaccurate predictions of the grating velocities. We specifically set \(\hat{U}_i = U_i + \Delta U_i\), where \(\Delta U_1 = k\, U_2\, \Delta {c}\) and similarly \(\Delta U_2 = k\, U_1\, \Delta {c}\), in which \(k\) denotes the amount of cross-talk. A consequence of this inaccurate representation of plaid geometry is that each grating is perceived as slightly rotated toward the other, in a way that is proportional to the relative contrast difference. In conclusion, our Bayesian perceptual model assumes that contrast imbalance has both a systematic and a random effect (on \(U_i\) and \(Q_i\), \(i=1,2\), respectively). 
The optimal estimate of plaid velocity, \(\hat{{\boldsymbol{v}}}\), is a random variable (different \({\boldsymbol{m}}_i\)'s give different estimates) with a normal distribution, in which both mean and covariance depend on the relative contrast difference, \(\Delta c\): \(p(\hat{{\boldsymbol{v}}} | {\boldsymbol{v}}, \Delta {c} ) = \mbox{Normal}(\hat{{\boldsymbol{v}}}; {\boldsymbol{\mu }}_v (\Delta {c}), \Sigma _v (\Delta {c}))\). Notice that, because of the prior, the estimate is biased – namely, the estimator's expected value is not the true plaid velocity \({\boldsymbol{v}}\). Unlike earlier Bayesian formulations (Weiss et al., 2002; Hedges et al., 2011), the proposed model predicts two key empirical findings about the error in perceived plaid direction: (i) the error decreases with the logarithm of the contrast ratio (Stone et al., 1990), and (ii) the error is directed toward the higher contrast grating at high plaid speeds, but when the speed decreases, the perceived plaid direction is biased toward the lower contrast grating (Champion et al., 2007); see the Supplementary Material for details. 
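Because both likelihoods and the prior are Gaussian and the measurements are linear in \(\boldsymbol{v}\), the MAP estimate of Equation 4 has a closed form: the usual precision-weighted combination of evidence and prior, evaluated with the observer's (possibly cross-talk-distorted) geometry matrices \(\hat{U}_i\). The sketch below spells this out; it is our reading of the model, not code from the paper.

```python
import numpy as np

def observer_geometry(U1, U2, k, dc):
    """Cross-talk: each grating's internal representation is pulled toward
    the other, in proportion to the relative contrast difference dc."""
    return U1 + k * U2 * dc, U2 + k * U1 * dc

def map_plaid_velocity(m1, m2, U1h, U2h, Q1, Q2, sigma_p2):
    """Closed-form MAP estimate of the plaid velocity (Equation 4).

    With Gaussian likelihoods m_i ~ N(U_i v, Q_i), internal geometry U1h, U2h,
    and prior v ~ N(0, sigma_p2 * I), the posterior is Gaussian and its mode
    is the solution of a 2x2 linear system.
    """
    precision = (U1h.T @ np.linalg.solve(Q1, U1h)
                 + U2h.T @ np.linalg.solve(Q2, U2h)
                 + np.eye(2) / sigma_p2)
    info = U1h.T @ np.linalg.solve(Q1, m1) + U2h.T @ np.linalg.solve(Q2, m2)
    return np.linalg.solve(precision, info)
```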
Perceptual judgment task
The probability of estimating a plaid direction \(\hat{\theta }\) given a specific \(\Delta {c}\) is given by  
\begin{equation} p(\hat{\theta } | \Delta {c}) = \int _0^\infty p(\hat{\boldsymbol{v}} | \Delta {c}) \cdot | \hat{\boldsymbol{v}}| \cdot d|\hat{\boldsymbol{v}}| \end{equation}
(5)
 
The perceptual judgment task can be modelled as a binary decision between two possible answers, test (T) or reference (R). The probability of answering T as a function of the contrast difference \(\Delta {c_T}\) in the test stimulus and \(\Delta {c_R}\) in the reference stimulus, that is, \(\mbox{Pr} ( { T} | \hat{\theta }, \Delta {c_T}, \Delta {c_R})\), where \(\hat{\theta }=\theta _a\) (45° in our experiment), can be calculated from Bayes’ theorem:  
\begin{equation} \mbox{Pr} ({ T} | \hat{\theta }, \Delta {c_T}, \Delta {c_R}) = \frac{p(\hat{\theta } | \Delta {c_T})}{p(\hat{\theta } | \Delta {c_T})+p(\hat{\theta } | \Delta {c_R})} \end{equation}
(6)
Note that the model predicts that for \(\Delta {c_T} = \Delta {c_R}\), the posterior probability is \(\mbox{Pr}({T} | \hat{\theta }, \Delta {c_T}, \Delta {c_R})=\) 0.5. As \(\Delta {c_T}\) decreases, \(\mbox{Pr} ({T} | \hat{\theta }, \Delta {c_T}, \Delta {c_R})\) is expected to increase. Hence, for a given value of \(\Delta {c_R}\) and \(\hat{\theta }\), the function \(f_{T}(\Delta {c_T})=\mbox{Pr} ({T}| \hat{\theta }, \Delta {c_T}, \Delta {c_R})\) can be interpreted as a psychometric curve whose magnitude ranges between 0.5 and 1. 
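Equations 5 and 6 can be approximated by simple Monte Carlo: simulate many noisy grating measurements, apply the MAP estimator, and compare how often the estimated direction lands near the cue direction \(\theta _a\) under the test and reference contrasts. The sketch below (reusing `grating_covariance`, `observer_geometry`, and `map_plaid_velocity` from the sketches above) is an illustrative approximation; the direction-bin width and the sample count are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_choose_test(theta_a, dc_T, dc_R, s2, q, k, sigma_p2,
                     n=5000, bw=np.deg2rad(2.0)):
    """Monte Carlo approximation of Equations 5 and 6."""
    theta = np.deg2rad(45.0)
    v = 5.0 * np.array([np.cos(theta), np.sin(theta)])
    th = (theta + np.deg2rad(-60.0), theta + np.deg2rad(-75.5))
    U = [np.outer((np.cos(t), np.sin(t)), (np.cos(t), np.sin(t))) for t in th]

    def density_at_cue(dc):
        c = ((1 + dc) / 2.0, (1 - dc) / 2.0)   # relative contrasts, sum to 1
        Q = [grating_covariance(t, ci, s2, q) for t, ci in zip(th, c)]
        U1h, U2h = observer_geometry(U[0], U[1], k, dc)
        hits = 0
        for _ in range(n):
            # Measurements use the true geometry; the estimator uses U_i_hat.
            m1 = U[0] @ v + rng.multivariate_normal(np.zeros(2), Q[0])
            m2 = U[1] @ v + rng.multivariate_normal(np.zeros(2), Q[1])
            vh = map_plaid_velocity(m1, m2, U1h, U2h, Q[0], Q[1], sigma_p2)
            hits += abs(np.arctan2(vh[1], vh[0]) - theta_a) < bw
        return hits / n

    p_T, p_R = density_at_cue(dc_T), density_at_cue(dc_R)
    eps = 1e-9                                 # guard against zero densities
    return (p_T + eps) / (p_T + p_R + 2 * eps)
```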
Figure 2 summarizes the proposed Bayesian model of plaid perception. Relative contrast modulates the distribution of the estimated plaid velocity. Two relative contrast conditions, one fixed (\(R\)) and one variable (\(T\)), are used to build a psychometric curve which denotes the probability of selecting plaid T when asked which of plaid T or R has a movement direction which is closest to the displayed cue. 
Estimation of model parameters
The psychometric curve of Equation 6 is a function of the model parameters \(w= [s^2, q, \sigma _p^2, k ]^T\), that is, \(\mbox{Pr}({T}|\Delta {c_T}, \hat{\theta }=\theta _{a}) = f_{T}(\Delta {c_T}; w)\). We identified the model parameters \(w\) from the perceptual judgment data before and after each of the training conditions. The available dataset, \(D=\lbrace (\Delta {c_R}^{(l)}, \Delta {c_T}^{(l)}, y^{(l)}), l=1,\dots , L\rbrace\), was obtained from repeated forced-choice tests with different values of \(\Delta {c_R}^{(l)}\) and \(\Delta {c_T}^{(l)}\), where \(y^{(l)}\) is the T/R answer to the \(l\)-th test trial (we assume that \(y^{(l)} = 1\) if T is chosen, and \(y^{(l)}=0\) otherwise). The answer \(y\) can be modeled as a random variable with a Bernoulli distribution: \(\mbox{Pr}(y) = p^y \cdot (1-p)^{1-y}\), where \(p = \mbox{Pr}(y=1|\Delta {c_T},\hat{\theta }=\theta _{a})=\mbox{Pr}({T}|\Delta {c_T},\hat{\theta }=\theta _{a})=f_{T}(\Delta {c_T};w)\). 
The optimal estimate of the model parameters given the data was obtained by maximizing the model log-likelihood, assuming that the \(L\) trials of the perceptual task are independent. The likelihood is given by:  
\begin{eqnarray} {\cal L}(w) = \prod _{l=1}^L \left\lbrace f_{T}(\Delta {c_T}^{(l)}; w)^{y^{(l)}} \cdot \left[1-f_{T}(\Delta {c_T}^{(l)}; w) \right]^{1-y^{(l)}} \right\rbrace \end{eqnarray}
(7)
For each subject and for each condition (before and after training), we estimated the model parameters \(w\) through numerical maximization of \(\log {\cal L}(w)\). 
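A direct, if computationally naive, transcription of this fit is sketched below: the Bernoulli log-likelihood of Equation 7 is minimized over log-transformed parameters (which assumes all of them are positive) with a derivative-free optimizer, using the Monte Carlo psychometric function from the previous sketch. In practice a smoothed or cached curve would be needed to keep the objective stable; this is an illustration, not the study's fitting pipeline.

```python
import numpy as np
from scipy.optimize import minimize

def fit_model(dc_T, y, dc_R=0.8, theta_a=np.deg2rad(45.0)):
    """Fit w = (s2, q, sigma_p2, k) by maximizing log L(w) (Equation 7).

    dc_T: per-trial test contrast differences; y: 1 if T was chosen, else 0.
    prob_choose_test is the Monte Carlo sketch above; its sampling noise makes
    the objective jittery, hence the derivative-free Nelder-Mead method.
    """
    dc_T, y = np.asarray(dc_T, float), np.asarray(y, float)

    def nll(log_w):
        s2, q, sigma_p2, k = np.exp(log_w)
        p = np.array([prob_choose_test(theta_a, dc, dc_R, s2, q, k, sigma_p2)
                      for dc in dc_T])
        p = np.clip(p, 1e-6, 1 - 1e-6)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    res = minimize(nll, x0=np.log([1.0, 2.0, 10.0, 0.5]), method="Nelder-Mead")
    return np.exp(res.x)
```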
Results
The experimental apparatus and procedure used in this study are illustrated in Figure 1. 
Figure 1.
 
Experimental setup and protocol. (a) Experimental setup: The participant is seated in front of a screen and is exposed to moving visual stimuli (plaid). During active training, they perform planar movements that result in motion of the plaid on the screen. Visual feedback of the arm is blocked. (b) A plaid stimulus with velocity \(\boldsymbol{v}\), composed of two gratings moving at velocities \(\boldsymbol{v}_1, \boldsymbol{v}_2\). (c) The experimental protocol has three phases: participants start with a perceptual judgment task, then they perform a training task, and finally they repeat the perceptual task. Participants were divided into three groups, each with a different training condition: active, visual-only, and cognitive. (d) The perceptual task is a 2AFC paradigm. Participants see two consecutive moving plaid stimuli, and are asked to choose which stimulus is moving in a direction more similar to that of the red arrow. (e) During training, participants are exposed to moving plaids. In the active group, they perform planar hand movements to control the plaid motion on the screen, while participants of both the visual-only and cognitive groups observe played-back motions. In the cognitive condition, participants are instructed to focus their attention on the intersections of the gratings.
Figure 1b illustrates the plaid stimulus, which is formed by two gratings with different orientations. When a single moving grating is observed through an aperture, only the velocity component perpendicular to its orientation can be perceived. By adjusting the relative difference in the contrast of the two gratings, they appear either to slide over each other in directions \({\boldsymbol{v}_1}\) and \({\boldsymbol{v}_2}\), or to move as a single plaid pattern in direction \({\boldsymbol{v}}\). In this way, the extent to which one perceives the coherent motion of a single plaid or two separate gratings can be manipulated. 
In the experiment, participants undergo an initial perceptual judgment task to assess perception of plaid motion at different contrast values (see Figure 1d). The perceptual task involves a two-alternative forced choice (2AFC). Participants are presented with two consecutive moving plaids that differ in the amount of contrast difference. They are required to indicate which plaid is moving in a direction most similar to that shown by a red arrow. One of the two plaids, the reference stimulus, has a fixed contrast difference \(\Delta {c_R}\) between the two gratings. In the other, the test stimulus, the contrast difference \(\Delta {c_T}\) is systematically varied; it is always smaller, which makes it easier to detect the plaid motion direction. This is followed by a training phase (Figure 1e), after which the perceptual task is repeated. In all conditions the plaid motion is seen through an aperture. Three different groups of participants were tested. In an active training condition, participants use self-operated plaids: they control the plaid motion by moving their hand, such that the direction and velocity of the moving plaid correspond to those of the hand; vision of the hand is blocked. Participants are instructed to make continuous movements back and forth between two circles that are presented briefly at the start of each trial. The contrast difference \(\Delta {c}\) between the single gratings that form the plaid is based on the individual threshold estimated from the pre-training perceptual task. In a visual-only condition, the participant sees the played-back moving plaid stimulus of another participant. In a cognitive condition, the stimulus is identical to that in the visual-only condition, and in addition the experimenter instructs the participant to attend to the intersections of the two gratings. The motion of the intersections corresponds to that of the plaid. This focuses the participant's attention on the relevant information (Adelson & Movshon, 1982; Lu & Sperling, 1995) and provides them with an explicit strategy for correctly estimating the plaid motion. 
Perceptual learning
The results of the perceptual task (the probability of selecting the test stimulus as a function of the contrast difference \(\Delta {c_T}\)) and, in particular, the training-related changes in perception are presented in Figure 3a. 
Figure 2.
 
Bayesian model for the plaid estimation process and forced-choice paradigm. A test (T) and reference (R) plaid are shown, with \(\Delta {c_T}=0\) (top) and \(\Delta {c_R}=0.8\) (bottom). For each plaid, the optimal estimate of plaid velocity, \(\hat{{\boldsymbol{v}}}\), is represented. \(p(\hat{{\boldsymbol{v}}}|{\boldsymbol{v}}, \Delta {c})\) has a normal distribution, in which both mean and covariance depend on the relative contrast difference, \(\Delta c\). The probability of estimating a plaid direction \(\hat{\theta }\) given a specific \(\Delta {c}\) is given by \(p(\hat{\theta }|\Delta {c})\). The psychometric curve represents the probability of answering T as a function of the contrast differences \(\Delta {c_T}\) and \(\Delta {c_R}\), that is, \(\mbox{Pr} ( { T} | \hat{\theta }, \Delta {c_T}, \Delta {c_R})\), where \(\hat{\theta }=\theta _a\) (45° in our experiment).
Both the threshold differences (\(\Delta {\mbox{Th}} = \mbox{Th}_{\rm{post}} - \mbox{Th}_{\rm{pre}}\)) and the slope differences (\(\Delta {\mbox{Slope}} = \mbox{Slope}_{\rm{post}} - \mbox{Slope}_{\rm{pre}}\)) of the psychometric function are shown. Better perceptual performance is reflected in an ability to select the test stimulus under conditions of greater contrast difference, that is, for larger values of \(\Delta {c_T}\). Both threshold and slope values were estimated using the adaptive \(\Psi\) procedure (see Methods). It is worth noting that the number of trials (100) chosen for the perceptual judgment task allows full convergence of the perceptual threshold values, but not of the slope estimates (see Methods) (Kontsevich & Tyler, 1999). The values of the perceptual slope are shown for completeness and to allow qualitative analysis of the results. Figure 3b shows psychometric threshold differences and slope differences for all participants in each experimental condition. It can be seen that there are changes in the psychometric threshold for the active group only, and no changes in slope in any of the experimental conditions. Statistical analyses were conducted using difference scores, which were found to be normally distributed (p > 0.05; Anderson-Darling test), whereas the pre-training and post-training perceptual values were not normally distributed (p < 0.05). We ran nonparametric tests (Kruskal-Wallis) to verify that baseline values for threshold and slope did not differ. We tested for differences in both the threshold (\(\Delta {\mbox{Th}}\)) and the slope (\(\Delta {\mbox{Slope}}\)) of the psychometric curves. We observed a significant difference in threshold between experimental conditions (one-way analysis of variance (ANOVA); F(2,27) = 7.45, p = 0.0002) and no reliable difference in slope. Post hoc analyses (Bonferroni-Holm) revealed a significant difference in threshold between the active and visual-only conditions (p = 0.003) and between the active and cognitive conditions (p = 0.019). 
Figures 3c and 3d summarize intersubject variability. After training, all subjects in the active group exhibit a threshold value close to the maximum value of 0.8; in other words, they correctly select the test stimulus throughout the entire range of \(\Delta {c_T}\). If we look at the subjects with low thresholds, the majority of threshold increases are seen in the active condition. In contrast, subjects in the visual and cognitive groups exhibit no consistent trend in either threshold or slope. Figure 3d displays intersubject variability in terms of the minimum polygons enclosing all data points in each group. Before training, the subjects within each group exhibit a similar amount of variability. After training, the subjects in the active group display a shift toward greater threshold values. For the three training conditions, the distributions of the differences between 'pre' and 'post' values in both thresholds and slopes cluster in three distinct regions of the \(\Delta\)Threshold–\(\Delta\)Slope plane. 
As a control, to test the idea that the observed perceptual changes are tied to movement, we conducted an additional experimental condition in which we introduced random variation between the direction of the hand movement and the direction of the displayed plaid. If perceptual learning depends on movement, this new condition should result in decreased perceptual change. Ten additional subjects participated in this control experiment. They received the same instructions as in the active training condition. However, for each new target direction (\(\theta _T\)) – denoted by the two initial cues at the edge of the aperture – we set the plaid velocity (\(v\)) so that its speed was identical to the speed of the hand (\(|v_H|\)), that is, \(|v|=|v_H|\), but its direction (\(\theta\)) was randomized as \(\theta = \theta _T+\theta _{rnd}\), where \(\theta _{rnd}\) is a random angle, uniformly distributed in the range [−45°, 45°]. Accordingly, in this condition the plaid velocity is defined as:  
\begin{eqnarray*} v = |v_H| \cdot \operatorname{sgn}(d_T \cdot v_H) \cdot \left[\cos \theta ,\, \sin \theta \right] \end{eqnarray*}
where \(d_T = \left[ \cos \theta _T, \sin \theta _T \right]\) is the direction of the target. In this way, the direction of the plaid movement is completely unrelated to the direction of the hand. 
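In code, the control condition amounts to the following per-frame computation, with \(\theta _{rnd}\) drawn once per trial; the function name and vector conventions are ours.

```python
import numpy as np

def less_matched_velocity(v_hand, theta_T, theta_rnd):
    """Plaid velocity in the active less-matched condition (equation above).

    Speed tracks the hand; direction is the target direction plus a random
    offset theta_rnd, drawn once per trial from U(-45 deg, +45 deg). The sign
    term preserves the forward/backward alternation of the hand strokes.
    """
    theta = theta_T + theta_rnd
    d_T = np.array([np.cos(theta_T), np.sin(theta_T)])
    sgn = np.sign(d_T @ v_hand)              # forward vs. backward stroke
    return np.linalg.norm(v_hand) * sgn * np.array([np.cos(theta), np.sin(theta)])

# Example: one trial's random offset, fixed for the whole trial.
rng = np.random.default_rng(1)
theta_rnd = rng.uniform(-np.pi / 4, np.pi / 4)
```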
The results of this control experiment – the active less-matched condition – are summarized in Figure 3b (data points and bars are displayed in orange) to facilitate comparison with the results of the main experiment. These results indicate no systematic change in either the threshold or the slope. As in the main experiment, for both threshold and slope the normal distribution hypothesis was rejected for pre- and post-training values (p < 0.05; Anderson-Darling test), but not for their change (p > 0.05). The change from before to after training, in both threshold (\(\Delta {\mbox{Th}}\)) and slope (\(\Delta {\mbox{Slope}}\)), was found to be not significant (p > 0.5; one-sample t test). This result can be compared with the changes observed in the other conditions: active (p = 0.0016), visual (p > 0.05), and cognitive (p > 0.5). In conclusion, only the active condition resulted in a significant change in perceptual performance. 
Motor training
During movements, the subjects in the active group initially exhibit a positive (counterclockwise) directional bias (Figures 4a,b). This motor bias tends to decrease with training. Figure 4c summarizes the intersubject variability in both perceptual and motor performance. Individuals exhibiting a lower initial perceptual threshold (hence a “poor” sensory performance) benefit more from motor training. Subjects with initially higher perceptual thresholds (i.e., an already good sensory performance) exhibit a lower motor bias; in these cases the perceptual thresholds remain constant or increase. We observed a reliable correlation (\(\rm R^2\) = 0.79, p = 0.002) between the change in movement direction (\(\Delta\)Motor bias) observed during training and the perceptual change from before to after training (\(\Delta\)Perceptual threshold) (Figure 4d). Subjects who decrease their motor bias over training (\(\Delta\)Motor bias \(< 0\)) show greater perceptual changes. 
Figure 3.
 
Results of the perceptual judgment task. (a) Representative psychometric curves. Each curve shows the probability that the participant chooses the test stimulus over a range of relative contrast differences \(\Delta {c_T}\). The grey curve represents the perceptual baseline of a representative subject (before training), whereas the colored curve indicates the perceptual change (after training). Solid lines represent the average values, the filled circles indicate the 75\(\%\) threshold value (\(\mbox{Th}_{\rm{pre}}, \mbox{Th}_{\rm{post}}\)), and the dashed black lines show the slope of the curves at the threshold point (\(\mbox{Slope}_{\rm{pre}}, \mbox{Slope}_{\rm{post}}\)). The horizontal black segment displays the threshold difference, \(\Delta {\mbox{Th}}= \mbox{Th}_{\rm{post}} - \mbox{Th}_{\rm{pre}}\). (b) Bar plots represent the average values of threshold differences \(\Delta {\mbox{Th}}\) and slope differences \(\Delta {\mbox{Slope}}\) in all experimental conditions: active (ACT) in red; visual-only (VIS) in blue; cognitive (COG) in green; active less-matched (ALM) in orange. Dots represent the individual values for each subject. Error bars denote standard errors. The average value of \(\Delta {\mbox{Th}}\) in the active group is significantly greater than in the visual-only (p = 0.003) and the cognitive (p = 0.019) groups. (c) Qualitative analysis of intersubject variability in terms of threshold and slope changes for the individual subjects in each condition. In all three conditions, the grey dots represent the pre-training values. (d) Qualitative analysis of intersubject variability in terms of the minimum polygons enclosing all data points in each group.
Computational model
A Bayesian framework was used to model the way humans perceive plaid motion. The model incorporates a number of empirical findings on how the perception of single gratings is affected by contrast. 
We specifically assumed that perception is affected by both random and systematic effects. In particular, the variance of the perceived velocity of a grating (hereafter referred to as the perceptual variance) increases with the negative power of the contrast (Hürlimann et al., 2002), where \(s^2\) is the perceptual variance at maximum contrast and \(q\) is the power exponent. We also assumed a reciprocal influence of one grating on the perception of the other grating's velocity (cross-talk), proportional to their contrast imbalance through a parameter \(k\). Finally, we assumed a Gaussian prior for grating velocities with zero mean and variance \(\sigma _p^2\). As a whole, the model is fully characterized by the parameter vector \(w=[s^2, q, \sigma _p^2, k]^T\); see Methods for further details. 
Unlike previous Bayesian formulations (Weiss et al., 2002; Hedges et al., 2011), this model is able to reproduce key findings concerning the directional bias in the perception of plaid movements (Stone et al., 1990; Champion et al., 2007); see the Supplementary Material (Figures S1 and S2) for details. 
The perceptual judgment task was modelled as a binary decision between two possible alternatives, test (T) or reference (R). The probability of choosing T as a function of the estimate of plaid direction, \(\hat{\theta }\), and the contrast difference \(\Delta {c_T}\) in the test stimulus, that is, Pr(\(T|\hat{\theta }, \Delta {c_T}\)), is modelled as a Bayesian decision process (see Methods, Equation 6). The predicted psychometric curve is a function of the model parameters \(w\), that is, Pr\((T|\hat{\theta }, \Delta {c_T})=f_T(\Delta {c_T}; w)\) (Figure 2). 
Figure 4.
 
Results of active motor training. (a) For a typical subject, the trajectories (blue dots) of hand movements across all trials of the first block, for all movement directions. Solid black lines represent the tested directions. Scale bar: 2 cm. (b) For the same subject, the frequency distribution (grey polar histogram) of hand movement directions across all trials of the first block, for all movement directions. Dashed black lines represent the tested directions. Red solid lines represent the median value of the motor bias for each direction. The black arrow shows the sign of the directional bias. Bin size: 1°. (c) Qualitative analysis of intersubject variability in terms of perceptual threshold and motor bias changes for the individual subjects in the active group. Perceptual thresholds relate to the judgment task (not the training task). (d) Relationship between the perceptual change from before to after movement training (\(\Delta\)Perceptual threshold) and the change in movement direction (\(\Delta\)Motor bias) that occurred in conjunction with training (p = 0.0006).
We fitted the model to the data and identified the model parameters \(w\) before and after each of the training conditions. Figure 5 summarizes the fitting results. 
Figure 5.
 
Results of model fitting. (a) Comparative parameter fitting for the different training conditions: \(\sigma ^{2}_p\) (Anderson-Darling test for normality: p > 0.05; t test: p = 0.01 for the active group), \(k\) (Anderson-Darling test for normality: p > 0.05; t test: p = 0.026 for the active group), \(s^2\) (Anderson-Darling test for normality: p = 0.003; Wilcoxon rank sum test: p = 0.025 for the active group), and \(q\). Grey and colored boxes refer to pre- and post-training conditions, respectively. (b) Correlation between thresholds estimated from the perceptual test data and from the Bayesian generative model (p = 0.001). (c) Average thresholds of the model-predicted psychometric curves over all subjects of each group. (d) Correlation between the perceptual threshold change, \(\Delta\)Threshold, and the corresponding variation of the cross-talk parameter, \(\Delta {k}\) (p = 0.004).
Figure 5a shows the average value of each model parameter across subjects, before (grey boxes) and after (colored boxes) each training condition. We found significant changes in the \(\sigma ^{2}_p\) (p = 0.01), \(k\) (p = 0.026), and \(s^2\) (p = 0.025) parameters in the active group alone. Perceptual thresholds for the model are estimated at the 75\(\%\) point of the predicted psychometric curve (see Methods) and show a high correlation with the threshold values from the experimental tests (see Figure 5b). Figure 5c shows psychometric threshold differences (\(\Delta {\mbox{Th}}= \mbox{Th}_{\rm{post}} - \mbox{Th}_{\rm{pre}}\)) calculated from the model fits for all participants in each experimental condition (see Equation 6). It can be seen that, as in the empirical results, there are significant changes in the psychometric thresholds in the active group alone. These observations are confirmed by statistical analysis. We tested for differences in the threshold (\(\Delta {\mbox{Th}}\)) of the psychometric curves and observed a significant difference between experimental conditions (F(2,27) = 7.76; p = 0.002). Post hoc analyses (Bonferroni-Holm) revealed a significant difference in threshold between the active and visual-only conditions (p = 0.003) and between the active and cognitive conditions (p = 0.024). These results, obtained from the thresholds of the curves fitted with the Bayesian generative model, agree with those obtained in the perceptual judgment task by fitting the data with the cumulative Gaussian function, as shown in Figure 5c (see Methods). Moreover, we found a reliable relationship between the change in the model parameter \(k\) (\(\Delta {k}\)) from before to after training and the perceptual change from before to after training (\(\Delta {\mbox{Th}}\)) (\(\rm R^2\) = 0.81, p = 0.0004); see Figure 5d. This means that, for subjects with greater perceptual changes, the model predicts a greater decrease in the cross-talk parameter \(k\). 
Discussion
The present study shows that active interaction with an ambiguous visual stimulus alters the subsequent perception of stimulus motion. Three groups of participants performed the same perceptual task before and after training. Self-operated motion of the plaid stimulus was generated by an active group that performed planar movements. This was designed to assess whether perceptual decisions regarding plaid movements were affected by actively interacting with the stimulus. A visual-only group observed played-back plaid motion that had been generated by another subject. This condition quantified the effect of prolonged exposure to the moving plaid stimulus. A cognitive group experienced the same stimuli as the visual-only group. These subjects were additionally instructed to focus their attention on the grating intersections and thus had an explicit strategy that would enable them to follow the coherent plaid motion. We found that the perceptual threshold for the direction of plaid motion changed significantly following training only in the active movement condition, where it reflected more robust perceptual integration in the face of contrast imbalance in the plaid. A control condition in which plaid movements were tied to the speed but not the direction of the hand (an active less-matched condition) yielded no significant perceptual learning, very much like the visual and cognitive conditions. 
In the active condition, we also observed practice-related changes to movement. Movement direction changed over the course of training, presumably because the plaid, which effectively serves as a cursor showing movement direction, is more easily seen by subjects as moving in the remembered target direction. We found that the change in the perceptual threshold was strongly correlated with the change in movement direction measured during the active training, consistent with the idea that the perceptual change is tied to motor training. 
A computational model suggests that movement training affected perceptual judgment by improving the accuracy of the internal representation of the plaid geometry. The findings indicate that motor training resolves visual perceptual ambiguity and contributes to changes in visual perceptual ability. 
Implications of motor training for perceptual discrimination
A small number of studies have examined the effects of motor learning on vision. Brown et al. (2007) found that movement initiation toward a moving object that was to be intercepted differed depending on the direction of a previously learned force field, indicating that expectations regarding visual motion are altered as a result of learning. Beets et al. (2010b) showed that, when participants trained to make movements that violated the two-thirds power law to different degrees, visual discrimination improved specifically for the movements they had experienced during training. These studies indicate that motor learning can induce a bias in visual perception. The present study suggests that movement training plays an even more pivotal role. Indeed, visual perception is inherently ambiguous. Movement training leads to a reduction in perceptual uncertainty and to a change in perceptual sensitivity, which in the present case is related to a stimulus parameter (the gratings' contrast difference) that is not directly controlled during training. Both movement training and perceptual change occur here without any motor error feedback being provided to participants during training. Changes in movement direction within single training blocks and over the entire training session are significantly correlated with the observed perceptual change (between pre- and post-training perceptual tasks), suggesting that the two kinds of learning are interrelated. If we assume that visual motion perception is based upon an empirical strategy which serves to resolve perceptual uncertainty (Sung et al., 2009; Purves et al., 2014), the perceived plaid motion direction is determined by accumulated sensorimotor experience. Perceptual decisions regarding motion direction provide, in turn, sensory evidence that instructs behaviour. Our results suggest that visual function can be adapted over time by training that involves interaction with the stimulus. The absence of perceptual change when motion of the plaid was decoupled from the motion of the hand is further evidence that a moving visual stimulus yoked to motor action provides a necessary condition of agency that facilitates perceptual grouping (i.e., plaid motion) of consistently moving components (i.e., the gratings' motions). 
It should be noted that focusing attention on the intersections of the gratings without movement, as in our cognitive condition, had no effect on subsequent perceptual judgments. Attending to the intersections of the gratings facilitates stimulus disambiguation, and coherent stimulus motion is easily seen (Adelson & Movshon, 1982; Lu & Sperling, 1995). However, although the immediate perceptual effect is compelling, attention on its own did not result in perceptual learning. 
Modeling the fine-tuning of the internal representation of plaid geometry
Several studies (Weiss et al., 2002; Hedges et al., 2011) used a Bayesian framework to model the perceptual task of estimating the velocity of the plaid from the perceived velocities of the two gratings. These models posit that prior information and an internal (neural) representation of plaid geometry are combined to obtain the expected value of plaid velocity (Hedges et al., 2011). Prior information captures the participant's prior experience with observing moving patterns and is summarized by the statistical distribution of plaid velocities. The representation of plaid geometry approximates the mapping from plaid to grating velocities. Accordingly, inaccurate perception of plaid motion may be due to (i) an inaccurate representation of plaid geometry (the sensory model), (ii) inaccurate perception of the velocity of each grating (noise variance), (iii) a bias introduced by previously experienced plaid motions (the prior), or (iv) a combination of these factors. 
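As a concrete illustration of this class of models, the sketch below computes the Bayesian (MAP) estimate of plaid velocity from the perceived normal speeds of the component gratings, in the spirit of Weiss et al. (2002) and Hedges et al. (2011). The zero-mean "slow motion" prior and the exact likelihood form are our assumptions, not necessarily the formulation fitted in this article.

```python
# Minimal sketch (our assumptions, not the article's exact formulation):
# MAP estimate of 2D plaid velocity from the perceived normal speeds of
# the component gratings, combining a zero-mean Gaussian "slow motion"
# prior with a Gaussian likelihood over each grating's constraint line.
import numpy as np

def map_plaid_velocity(normals, speeds, noise_vars, prior_var):
    """normals: (N, 2) unit normals of the gratings (sensory-model geometry);
    speeds: (N,) perceived speeds along each normal;
    noise_vars: (N,) sensory noise variance per grating;
    prior_var: variance of the zero-mean isotropic prior on velocity."""
    A = np.eye(2) / prior_var                  # prior precision
    b = np.zeros(2)
    for n, u, s2 in zip(normals, speeds, noise_vars):
        A += np.outer(n, n) / s2               # precision added by grating
        b += n * u / s2
    return np.linalg.solve(A, b)               # posterior mean (MAP estimate)

# Two gratings oriented +/-45 deg from a plaid moving rightward at 1 deg/s
normals = np.array([[np.cos(np.pi / 4), np.sin(np.pi / 4)],
                    [np.cos(-np.pi / 4), np.sin(-np.pi / 4)]])
speeds = normals @ np.array([1.0, 0.0])        # noiseless normal speeds
v_hat = map_plaid_velocity(normals, speeds, [0.1, 0.1], prior_var=4.0)
print(v_hat)  # slightly slowed by the prior, direction preserved
```

With equal noise variances the estimate stays on the true direction and is merely slowed by the prior; unequal variances (e.g., from a contrast imbalance) pull the estimated direction toward the more reliable, higher-contrast grating.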
Recent studies suggest that this Bayesian formulation cannot account for key observations of how speed affects the perception of plaid direction (Champion et al., 2007). In the present article, we made a number of specific assumptions about how the gratings' contrasts affect the perceived plaid velocity. First, consistent with previous findings (Hürlimann et al., 2002), we assumed that the variance of the sensory noise in perceiving the velocity of a grating is proportional to an inverse power of the grating contrast. This assumption is captured by two model parameters: the variance at maximum contrast and the power exponent. With these simple additions, our model predicts the very observations (Champion et al., 2007) that have been claimed to falsify Bayesian models of plaid perception. We also posited an additional effect in the representation of plaid geometry: if the two gratings have different contrasts, the represented direction of one grating affects that of the other; this effect is denoted by the "cross-talk" parameter (cf. Stone et al., 1990) and results in a systematic error in the representation of plaid geometry. 
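In our notation, the noise assumption can be written compactly as follows, with contrasts normalized to the maximum:
\[
\sigma_i^2(c_i) = s^2\, c_i^{-q}, \qquad c_i \in (0, 1],\; i = 1, 2,
\]
so that \(\sigma_i^2 = s^2\) at maximum contrast and \(q\) governs how steeply the noise grows as contrast decreases. For the cross-talk, one illustrative form (a guess on our part; the exact form is given in the Methods) is a contrast-weighted mixing of the represented grating normals, e.g.,
\[
\tilde{\boldsymbol{n}}_1 \propto \boldsymbol{n}_1 + k\, \Delta c\, \boldsymbol{n}_2,
\]
so that the geometric error vanishes for balanced contrasts (\(\Delta c = 0\)) and scales with both \(k\) and the contrast imbalance.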
For each participant, we estimated the parameters that maximize the model likelihood given the data from the perceptual judgment task. For each experimental condition, we then assessed the model parameter changes from before to after training. Significant changes in the model parameters were obtained for the active training condition alone. Specifically, we found that participants in this condition exhibited a significant decrease in the cross-talk parameter and an increase of the power law exponent. The decrease in cross-talk leads to a more accurate representation of the direction of the gratings and therefore a more accurate representation of plaid geometry. The increase of the power law exponent leads to a decreased sensitivity of sensory noise to contrast. 
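A minimal sketch of this fitting step follows (again, not the authors' code). It assumes a hypothetical function model_prob_T that implements the Bayesian observer above and returns Pr(choose test) per trial, and it maximizes the likelihood of the observed binary choices by minimizing the negative log-likelihood.

```python
# Sketch of per-participant maximum-likelihood fitting (our assumptions):
# `model_prob_T(dc_test, sigma2_p, k, s2, q)` is a hypothetical stand-in
# for the generative model's predicted probability of choosing the test.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, dc_test, responses, model_prob_T):
    sigma2_p, k, s2, q = params
    p = np.clip(model_prob_T(dc_test, sigma2_p, k, s2, q), 1e-9, 1 - 1e-9)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

def fit_participant(dc_test, responses, model_prob_T,
                    x0=(1.0, 0.2, 0.5, 1.0)):    # illustrative initial guess
    res = minimize(neg_log_likelihood, x0,
                   args=(dc_test, responses, model_prob_T),
                   method="Nelder-Mead")          # derivative-free search
    return res.x  # fitted (sigma2_p, k, s2, q); run once pre, once post
```

Comparing the parameter vectors fitted on the pre- and post-training data then yields the per-parameter changes tested in Figure 5a.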
Note that the cross-talk decrease exhibits a strong correlation with the observed change in perceptual threshold. That is, participants who exhibit greater perceptual changes also show a greater decrease in the cross-talk. This finding suggests that motor training in this task leads to a fine-tuning of the internal representation of plaid geometry. 
Why does movement improve the sensory model, whereas observation on its own does not? One possible explanation is that during movement, the sensory model predicts the sensory consequences of movements – the expected movements of the gratings (Miall & Wolpert, 1996; Wolpert & Flanagan, 2001). The mismatch between these predictions and the observed movements of the gratings – sometimes called sensory prediction error – is the source of information which can be used to adapt the sensory model. This information is not available during passive observation of plaid movements. Consistent with this view, sensorimotor adaptation to dynamic or visual perturbations has been reported to critically depend on the availability of a sensory prediction error signal (Haith et al., 2009; Krakauer & Mazzoni, 2011). In addition, this effect may be facilitated by the availability of multiple sensory modalities (vision, proprioception), which may mutually calibrate (Ernst & Banks, 2002). In particular, movement can mediate the integration between visual and proprioceptive information, and eventually the linking of the predicted sensory consequences of movement to plaid motion disambiguation. 
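Schematically (this is our illustration of the argument, not a model from the article), prediction-error driven adaptation of the sensory model could take the form of a delta rule, which has nothing to operate on when no motor command is issued:

```python
# Schematic delta-rule adaptation of a linear sensory model (illustration
# only): G maps hand velocity to predicted grating velocities; the sensory
# prediction error nudges G toward the observed mapping. With no motor
# command (passive viewing), no prediction and hence no error is available.
import numpy as np

def adapt_sensory_model(G, hand_vel, observed_grating_vel, lr=0.05):
    predicted = G @ hand_vel                   # forward-model prediction
    error = observed_grating_vel - predicted   # sensory prediction error
    return G + lr * np.outer(error, hand_vel)  # gradient step on squared error
```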
Implications for neural representations of complex visual motion
Because plaid stimuli are composed of a minimal number of one-dimensional Fourier components (two), each selectively recruiting narrow, oriented, band-pass frequency channels in early vision, they can contribute to understanding how these channels are involved in the perceptual learning of coherent sensorimotor dependencies. 
Finding a solution to the plaid motion problem can be related to the evidence from component-motion and pattern-motion cells, observed respectively in striate and extrastriate areas along the primary visual motion pathway, such as area V3A and the middle temporal area (MT or V5) (Albright & Stoner, 1995). Ultimately, the steps in the formation of perceptual decisions and/or the guidance of visual behaviours can be linked to higher-level brain areas (e.g., the lateral intraparietal cortex and prefrontal cortex), which are often described as "evidence accumulators" (Law & Gold, 2008; Latimer et al., 2015; Zhang & Tadin, 2019). The present model simulation suggests a decrease, after training, of the cross-talk between the two gratings in the corresponding sensory channels when the gratings' contrasts are unbalanced, as well as a decrease in the noise variance. The decrease in cross-talk magnitude would be consistent with an early neural instantiation of the perceptual learning process, which might occur at the coding stage of the component motion directions (cf. contrast normalization processes in V1). Alternatively, because the cross-talk in the model acts on the gratings' velocities, it requires a pooling of the responses of different oriented channels, and its change might instead occur at the pattern motion coding stage in an extrastriate area. Notably, the null effect of visual-only training leads us to exclude a role for an oculomotor-specific sensorimotor cortical area and points instead to a reach-specific one. Specific experiments and recordings of neural correlates would be necessary to disambiguate these hypotheses. From a broader perspective, this study suggests broadening the investigation of pattern and motion vision to include continuous interaction with visual stimulation. 
Acknowledgments
Commercial relationships: none. 
Corresponding author: Giulia Sedda. 
Email: seddagiulia@gmail.com. 
Address: Via Opera Pia 13, 16145 Genova, Italy. 
References
Adelson, E. H., & Movshon, J. A. (1982). Phenomenal coherence of moving visual patterns. Nature, 300(5892), 523.
Albright, T. D., & Stoner, G. R. (1995). Visual motion perception. Proceedings of the National Academy of Sciences of the United States of America, 92(7), 2433–2440.
Beckett, P. A. (1980). Development of the third component in prism adaptation: Effects of active and passive movement. Journal of Experimental Psychology: Human Perception and Performance, 6(3), 433.
Beets, I. A. M., Rösler, F., & Fiehler, K. (2010b). Nonvisual motor learning improves visual motion perception: Evidence from violating the two-thirds power law. Journal of Neurophysiology, 104(3), 1612–1624.
Beets, I. A. M., ’t Hart, B. M., Rösler, F., Henriques, D. Y. P., Einhäuser, W., & Fiehler, K. (2010a). Online action-to-perception transfer: Only percept-dependent action affects perception. Vision Research, 50(24), 2633–2641.
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433–436.
Brown, L. E., Wilson, E. T., Goodale, M. A., & Gribble, P. L. (2007). Motor force field learning influences visual processing of target motion. Journal of Neuroscience, 27(37), 9975–9983.
Champion, R. A., Hammett, S. T., & Thompson, P. G. (2007). Perceived direction of plaid motion is not predicted by component speeds. Vision Research, 47(3), 375–383.
Cressman, E. K., & Henriques, D. Y. P. (2009). Sensory recalibration of hand position following visuomotor adaptation. Journal of Neurophysiology, 102(6), 3505–3518.
Cropper, S. J., Mullen, K. T., & Badcock, D. R. (1996). Motion coherence across different chromatic axes. Vision Research, 36(16), 2475–2488.
De Lange, F. P., Heilbron, M., & Kok, P. (2018). How do expectations shape perception? Trends in Cognitive Sciences, 22(9), 764–779.
Dogge, M., Custers, R., Gayet, S., Hoijtink, H., & Aarts, H. (2019). Perception of action-outcomes is shaped by life-long and contextual expectations. Scientific Reports, 9(1), 1–9.
Ernst, M. O., & Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415(6870), 429–433.
Fennema, C. L., & Thompson, W. B. (1979). Velocity determination in scenes containing several moving objects. Computer Graphics and Image Processing, 9(4), 301–315.
Ferrera, V. P., & Wilson, H. R. (1990). Perceived direction of moving two-dimensional patterns. Vision Research, 30(2), 273–287.
Haith, A., Jackson, C., Miall, C., & Vijayakumar, S. (2009). Unifying the sensory and motor components of sensorimotor adaptation. Advances in Neural Information Processing Systems, 21, 593–600.
Harris, C. S. (1963). Adaptation to displaced vision: Visual, motor, or proprioceptive change? Science, 140(3568), 812–813.
Hedges, J. H., Stocker, A. A., & Simoncelli, E. P. (2011). Optimal inference explains the perceptual coherence of visual motion stimuli. Journal of Vision, 11(6), 14.
Held, R., & Hein, A. (1963). Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 56(5), 872.
Hupé, J.-M., & Rubin, N. (2004). The oblique plaid effect. Vision Research, 44(5), 489–500.
Hürlimann, F., Kiper, D. C., & Carandini, M. (2002). Testing the Bayesian model of perceived speed. Vision Research, 42(19), 2253–2257.
Jensen, L., Prokop, T., & Dietz, V. (1998). Adaptational effects during human split-belt walking: Influence of afferent input. Experimental Brain Research, 118(1), 126–130.
Kim, J., & Wilson, H. R. (1993). Dependence of plaid motion coherence on component grating directions. Vision Research, 33(17), 2479–2489.
Kleiner, M., Brainard, D. H., & Pelli, D. G. (2007). What's new in Psychtoolbox-3? Perception, 36, 416.
Kontsevich, L. L., & Tyler, C. W. (1999). Bayesian adaptive estimation of psychometric slope and threshold. Vision Research, 39(16), 2729–2737.
Krakauer, J. W., & Mazzoni, P. (2011). Human sensorimotor learning: Adaptation, skill, and beyond. Current Opinion in Neurobiology, 21(4), 636–644.
Lametti, D. R., Nasir, S. M., & Ostry, D. J. (2012). Sensory preference in speech production revealed by simultaneous alteration of auditory and somatosensory feedback. Journal of Neuroscience, 32(27), 9351–9358.
Latimer, K. W., Yates, J. L., Meister, M. L. R., Huk, A. C., & Pillow, J. W. (2015). Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science, 349(6244), 184–187.
Law, C., & Gold, J. (2008). Neural correlates of perceptual learning in a sensory-motor, but not a sensory, cortical area. Nature Neuroscience, 11, 505–513.
Leech, K. A., Day, K. A., Roemmich, R. T., & Bastian, A. J. (2018). Movement and perception recalibrate differently across multiple days of locomotor learning. Journal of Neurophysiology, 120(4), 2130–2137.
Lu, Z. L., & Sperling, G. (1995). Attention-generated apparent motion. Nature, 377(6546), 237–239.
Mattar, A. A. G., Darainy, M., & Ostry, D. J. (2012). Motor learning and its sensory effects: Time course of perceptual change and its presence with gradual introduction of load. Journal of Neurophysiology, 109(3), 782–791.
Miall, R. C., & Wolpert, D. M. (1996). Forward models for physiological motor control. Neural Networks, 9(8), 1265–1279.
Nasir, S. M., & Ostry, D. J. (2009). Auditory plasticity and speech motor learning. Proceedings of the National Academy of Sciences of the United States of America, 106(48), 20470–20475.
Ostry, D. J., Darainy, M., Mattar, A. A. G., Wong, J., & Gribble, P. L. (2010). Somatosensory plasticity and motor learning. Journal of Neuroscience, 30(15), 5384–5393.
Prins, N. (2013). The psi-marginal adaptive method: How to give nuisance parameters the attention they deserve (no more, no less). Journal of Vision, 13(7), 3.
Prinz, W. (1997). Perception and action planning. European Journal of Cognitive Psychology, 9(2), 129–154.
Purves, D., Monson, B. B., Sundararajan, J., & Wojtach, W. T. (2014). How biological vision succeeds in the physical world. Proceedings of the National Academy of Sciences of the United States of America, 111(13), 4750–4755.
Schütz-Bosbach, S., & Prinz, W. (2007). Perceptual resonance: Action-induced modulation of perception. Trends in Cognitive Sciences, 11(8), 349–355.
Stocker, A. A., & Simoncelli, E. P. (2006). Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, 9(4), 578–585.
Stone, L. S., Watson, A. B., & Mulligan, J. B. (1990). Effect of contrast on the perceived direction of a moving plaid. Vision Research, 30(7), 1049–1067.
Stoner, G. R., & Albright, T. D. (1992). Motion coherency rules are form-cue invariant. Vision Research, 32(3), 465–475.
Stoner, G. R., Albright, T. D., & Ramachandran, V. S. (1990). Transparency and coherence in human motion perception. Nature, 344(6262), 153–155.
Sung, K., Wojtach, W. T., & Purves, D. (2009). An empirical explanation of aperture effects. Proceedings of the National Academy of Sciences of the United States of America, 106(1), 298–303.
Vahdat, S., Darainy, M., Milner, T. E., & Ostry, D. J. (2011). Functionally specific changes in resting-state sensorimotor networks after motor learning. Journal of Neuroscience, 31(47), 16907–16915.
Veto, P., Uhlig, M., Troje, N. F., & Einhäuser, W. (2018). Cognition modulates action-to-perception transfer in ambiguous perception. Journal of Vision, 18(8), 5.
Volcic, R., Fantoni, C., Caudek, C., Assad, J. A., & Domini, F. (2013). Visuomotor adaptation changes stereoscopic depth perception and tactile discrimination. Journal of Neuroscience, 33(43), 17081–17088.
Wallach, H. (1935). Über visuell wahrgenommene Bewegungsrichtung [On visually perceived direction of motion]. Psychologische Forschung, 20(1), 325–380.
Weiss, Y., Simoncelli, E. P., & Adelson, E. H. (2002). Motion illusions as optimal percepts. Nature Neuroscience, 5(6), 598–604.
Wohlschläger, A. (2000). Visual motion priming by invisible actions. Vision Research, 40(8), 925–930.
Wohlschläger, A. (2001). Mental object rotation and the planning of hand movements. Perception & Psychophysics, 63(4), 709–718.
Wolpert, D. M., & Flanagan, J. R. (2001). Motor prediction. Current Biology, 11(18), R729–R732.
Zhang, R., & Tadin, D. (2019). Disentangling locus of perceptual learning in the visual hierarchy of motion processing. Scientific Reports, 9, 1557.
Zwickel, J., Grosjean, M., & Prinz, W. (2007). Seeing while moving: Measuring the online influence of action on perception. Quarterly Journal of Experimental Psychology, 60(8), 1063–1071.
Figure 1.
 
Experimental setup and protocol. (a) Experimental setup: The participant is seated in front of a screen and is exposed to moving visual stimuli (plaid). During active training, they perform planar movements that result in motion of the plaid on the screen. Visual feedback of the arm is blocked. (b) A plaid stimulus with velocity \(\boldsymbol{v}\), composed of two gratings moving at velocities \(\boldsymbol{v}_1, \boldsymbol{v}_2\). (c) The experimental protocol has three phases: participants start with a perceptual judgment task, then they perform a training task, and finally they repeat the perceptual task. Participants were divided into three groups, each with a different training condition: active, visual-only, and cognitive. (d) The perceptual task is a 2AFC paradigm. Participants see two consecutive moving plaid stimuli, and are asked to choose which stimulus is moving in a direction more similar to that of the red arrow. (e) During training, participants are exposed to moving plaids. In the active group, they perform planar hand movements to control the plaid motion on the screen, while participants of both the visual-only and cognitive groups observe played-back motions. In the cognitive condition, participants are instructed to focus their attention on the intersections of the gratings.
Figure 2.
 
Bayesian model for the plaid estimation process and forced-choice paradigm. A test (T) and reference (R) plaid are shown, with \(\Delta {c_T}=0\) (top) and \(\Delta {c_R}=0.08\) (bottom). For each plaid, the optimal estimate of plaid velocity, \(\hat{{\boldsymbol{v}}}\), is represented. \(p(\hat{{\boldsymbol{v}}}|{\boldsymbol{v}}, \Delta {c})\) has a normal distribution, in which both mean and covariance depend on the relative contrast difference, \(\Delta c\). The probability of estimating a plaid direction \(\hat{\theta }\) given a specific \(\Delta {c}\) is given by \(p(\hat{\theta }|\Delta {c})\). The psychometric curve represents the probability of answering T as a function of the contrast difference \(\Delta {c_T}\) and \(\Delta {c_R}\), i.e. \(\mbox{Pr} ( { T} | \hat{\theta }, \Delta {c_T}, \Delta {c_R})\), where \(\hat{\theta }=\theta _a\) (45° in our experiment).
Figure 3.
 
Results of the perceptual judgment task. (a) Representative psychometric curves. Each curve shows the probability that the participant chooses the test stimulus over a range of relative contrast differences \(\Delta {c_T}\). The grey curve represents the perceptual baseline of a representative subject (before training), whereas the colored curve indicates the perceptual change (after training). Solid lines represent the average values, the filled circles indicate the 75\(\%\) threshold value (\(\mbox{Th}_{\rm{pre}}, \mbox{Th}_{\rm{post}}\)), and the dashed black lines show the slope of the curves at the threshold point (\(\mbox{Slope}_{\rm{pre}}, \mbox{Slope}_{\rm{post}}\)). The horizontal black segment displays the threshold difference, \(\Delta {\mbox{Th}}= \mbox{Th}_{\rm{post}} - \rm \mbox{Th}_{\rm{pre}}\). (b) Bar plots represent the average values of threshold differences \(\Delta {\mbox{Th}}\) and slope differences \(\Delta {\mbox{Slope}}\) in all experimental conditions: active (ACT) in red; visual only (VIS) in blue; cognitive (COG) in green; active less-matched (ALM) in orange. Dots represent the individual values for each subject. Error bars denote standard errors. The average value of \(\Delta {\mbox{Th}}\) in the active group is significantly greater than in the visual-only (p = 0.003) and the cognitive (p = 0.019) groups. (c) Qualitative analysis of intersubject variability is shown in terms of threshold and slope changes for the individual subjects in each condition. In all three conditions, the grey dots represent the pre-training values. (d) Qualitative analysis of intersubject variability is shown in terms of the minimum polygons enclosing all data points in each group.
Figure 4.
 
Results of active motor training. (a) For a typical subject, the trajectories (blue dots) of hand movements across all trials of the first block for all movement directions are represented. Solid black lines represent the tested directions. Scale bar: 2 cm. (b) For the same typical subject, the frequency distribution (grey polar histogram) of hand movement across all trials of the first block for all movement directions is represented. Dashed black lines represent the tested directions. Red solid lines represent the median value of the motor bias for each direction. The black arrow shows the sign of the directional bias. Bin size: 1°. (c) Qualitative analysis of intersubject variability is shown in terms of perceptual threshold and motor bias changes for the individual subjects in the active group. Perceptual thresholds relate to the judgment task (not the training task). (d) Relationship between perceptual change from before to after movement training (\(\Delta\)Perceptual threshold), and the change in movement direction (\(\Delta\)Motor bias) that occurred in conjunction with training (p = 0.0006).
Figure 5.
 
Results of model fitting. (a) Comparative parameter fitting for the different training conditions: \(\sigma ^{2}_p\) (Anderson-Darling test for normality: p > 0.05, t-test p = 0.01 for the active group), \(k\) (Anderson-Darling test for normality: p > 0.05, t-test p = 0.026 for the active group), \(s^2\) (Anderson-Darling test for normality: p = 0.003, Wilcoxon rank sum test p = 0.025 for the active group), and \(q\). Grey and colored boxes refer to pre- and post-training conditions, respectively. (b) Correlation between thresholds estimated using perceptual test data and from the Bayesian generative model (p = 0.001). (c) Average thresholds of Bayes’ model psychometric curves over all subjects of each group. (d) Correlation between the perceptual threshold change, \(\Delta\)Threshold, and the corresponding variation of the cross-talk parameter, \(\Delta {k}\), (p = 0.004).