J. Shi, J. Wielaard, M. Busuioc, R. T. Smith, P. Sajda; Using a Spiking Neuron Model of V1 as a Substrate for Mapping Visual Stimuli to Perception. Invest. Ophthalmol. Vis. Sci. 2008;49(13):4497. doi: https://doi.org/.
How visual stimuli map to neural activity, and ultimately to perception, is important not only for understanding normal visual function but also for assessing how abnormalities and pathologies, for instance those arising in the retina, may ultimately affect perception. In this study we use a model of primary visual cortex (V1) as a substrate for mapping visual stimuli to the activity of a large neural population, and we compare the accuracy of decoding this activity with the accuracy of human subjects on the same visual discrimination task.
We use a previously developed spiking neuron model of V1 as a recurrent network whose activity is subsequently linearly decoded, providing a link to perception in the context of a visual discrimination task. We introduce a sparsity constraint in the decoder, motivated by the hypothesis that information is sparsely distributed in the highly recurrent network of V1. A spatio-temporal word is constructed from the population spike trains as input to the sparse decoder, so as to exploit the full dynamics of the model. We evaluate decoding accuracy using a two-alternative forced-choice paradigm (face versus car discrimination) in which we control the difficulty of the task by modulating the phase coherence of the images. We compare neurometric functions, constructed via sparse decoding of the neural activity in the model, to psychometric functions obtained from 10 human subjects.
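The sparse decoding step above can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' implementation: it stands in synthetic Poisson spike counts for the V1 model's output, flattens neurons and time bins into a "spatio-temporal word" per trial, and uses an L1-regularized logistic regression as the sparsity-constrained linear decoder. All sizes, rates, and the informative-subpopulation construction are assumptions for illustration.

```python
# Hypothetical sketch of sparse linear decoding of spatio-temporal spike words.
# Synthetic data stands in for the spiking V1 model's population activity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_neurons, n_bins = 400, 200, 10  # assumed sizes

labels = rng.integers(0, 2, n_trials)  # 0 = car, 1 = face
spikes = rng.poisson(2.0, (n_trials, n_neurons, n_bins)).astype(float)
# Assume a small subpopulation carries the stimulus information (sparse coding)
informative = rng.choice(n_neurons, 15, replace=False)
spikes[:, informative, :] += labels[:, None, None] * 1.5

# "Spatio-temporal word": flatten neurons x time bins into one feature vector
X = spikes.reshape(n_trials, n_neurons * n_bins)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          random_state=0)

# The L1 penalty enforces the sparsity constraint on the decoder weights
decoder = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
decoder.fit(X_tr, y_tr)

accuracy = decoder.score(X_te, y_te)
frac_used = np.mean(decoder.coef_ != 0)  # fraction of neuron-bin features kept
print(f"accuracy={accuracy:.2f}, fraction of nonzero weights={frac_used:.2f}")
```

In this toy setting the L1 penalty drives most decoder weights to zero, consistent with the abstract's finding that small fractions of the neurons suffice for accurate decoding; a "static" decoder would instead sum counts over the time bins before fitting.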
In general, we find that relatively small fractions of the neurons are required for highly accurate decoding of the visual stimuli. Linear decoding of neural activity in a recurrent V1 model can yield discrimination accuracy that is at least as good as, if not better than, human psychophysical performance for relatively complex visual stimuli. Thus substantial information supporting highly accurate decoding remains at the level of V1, and the loss of information needed to match behavioral performance is predicted to occur downstream, in the decision-making process. We also find marginally better decoding accuracy when fully utilizing the spatio-temporal dynamics, compared with a static decoding strategy.
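A neurometric function of the kind compared here plots decoding accuracy against phase coherence and is conventionally summarized by fitting a sigmoidal curve. The sketch below fits a 2AFC Weibull function to illustrative accuracy values; the coherence levels and accuracies are invented for the example and are not the paper's results.

```python
# Hypothetical sketch: fitting a neurometric function (accuracy vs. phase
# coherence) with a Weibull curve, as is common for 2AFC data.
import numpy as np
from scipy.optimize import curve_fit

def weibull(c, alpha, beta):
    # 2AFC Weibull: rises from chance (0.5) toward perfect (1.0);
    # alpha is the coherence threshold, beta controls the slope
    return 0.5 + 0.5 * (1.0 - np.exp(-(c / alpha) ** beta))

coherence = np.array([0.20, 0.25, 0.30, 0.35, 0.40, 0.45])  # assumed levels
accuracy = np.array([0.55, 0.62, 0.74, 0.85, 0.93, 0.97])   # illustrative

params, _ = curve_fit(weibull, coherence, accuracy, p0=[0.3, 3.0])
alpha, beta = params
print(f"threshold alpha={alpha:.3f}, slope beta={beta:.2f}")
```

Fitting the same functional form to the model's decoding accuracy and to the human subjects' accuracy allows the neurometric and psychometric thresholds and slopes to be compared directly.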
We have demonstrated how the visual stimulus can be linked to perception via a mapping through a spiking neuron model of the early visual system. Future work will consider this framework for analyzing the perceptual effect of retinal vision loss in patients with mild yet progressive macular disease, comparing its predictions to those obtained strictly from the analysis of the spatial distribution of retinal abnormalities such as drusen.