Abstract
Introduction:
In a spatial localization task, an animal must infer the location of a stimulus from probabilistic sensory input.
Purpose:
Bayes’ theorem provides a model of how these inferences can be made in a statistically optimal manner for different stimulus configurations. The Bayesian model predicts that localization of a visual stimulus will be greatly enhanced by a co-localized auditory stimulus, because the visual and auditory systems provide statistically independent estimates of the environment. It predicts a substantially smaller enhancement, however, when a visual stimulus is co-localized with another visual stimulus: because the two stimuli are conveyed by the same sensory system, they do not constitute statistically independent samples of the environment, and the benefit derived from their integration is reduced. Here we test this model.
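A minimal sketch of the prediction, using the standard maximum-likelihood cue-combination result for unbiased Gaussian estimates (the symbols and the correlation parameter $\rho$ below are illustrative assumptions, not notation taken from this study): given two estimates with variances $\sigma_1^2$ and $\sigma_2^2$ and correlation $\rho$, the minimum-variance linear combination has variance

\[ \sigma^2_{\text{comb}} \;=\; \frac{\sigma_1^2\,\sigma_2^2\,(1-\rho^2)}{\sigma_1^2 + \sigma_2^2 - 2\rho\,\sigma_1\sigma_2}. \]

For equal variances ($\sigma_1^2 = \sigma_2^2 = \sigma^2$) this reduces to $\sigma^2(1+\rho)/2$: independent cross-modal cues ($\rho = 0$) halve the variance, whereas correlated within-modal cues ($\rho > 0$) yield a smaller reduction, one that vanishes as $\rho \to 1$.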
Methods:
Adult cats were trained to localize brief (40 ms) visual stimuli presented at 7 different locations using a perimetry apparatus. Localization was then tested with a single visual stimulus, two visual stimuli, or a visual and an auditory stimulus.
Results:
There was good agreement between the predictions of the model and the performance of the animals in this task, which suggests that animals integrate within-modal and cross-modal stimuli differently according to the statistical relationships between them.
Conclusions:
These data indicate that there are substantial computational differences between unisensory and multisensory integration at the behavioral level.
Keywords: superior colliculus/optic tectum • perception • vision and action