Abstract
Purpose:
Linking visually-guided behavior to its underlying retinal spike code has been extremely difficult in freely-moving mice for two main reasons: 1) visual information is encoded in the population activity of ~40 distinct retinal ganglion cell (RGC) types, and 2) only a limited set of tools has existed for automatically quantifying stimulus exposure and behavioral performance in freely-moving animals. Our recent work showed that the behavioral detection of the dimmest lights is driven by ON sustained alpha RGCs in the mouse retina1. Building on this work, we now present a complete set of tools for a new paradigm - Quantum Behavior - that automatically links mouse behavior to its underlying retinal code originating from single photons at the sensitivity limit of vision.
Methods:
We quantified behavioral sensitivity by training mice to perform a dim-light detection task in a six-armed water maze in complete darkness. The body position, head direction, and eye movements of the mice were tracked automatically using deep-learning-based trackers designed to operate under extremely dim light conditions. The precisely tracked stimulus trajectories on the mouse retinas were combined with spatiotemporal RGC models, constrained by matching electrophysiological data1, to predict RGC population responses to sparse photons during behavioral trials.
Results:
Our automatic head-tracking tool, which reaches human-level accuracy, allows us to analyze how a range of behavioral features - such as swimming speed, angular velocity of the head, and meander - affect the decision making of mice in a photon detection task. The predicted spike outputs of the most sensitive RGC populations allow us to correlate the performance of various decoding principles (ideal observers) with the observed behavior.
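The ideal-observer decoding mentioned above can be illustrated with a minimal sketch. Assuming, for illustration only, that per-trial RGC spike counts are independent and Poisson-distributed with hypothetical mean rates under the stimulus-present and stimulus-absent hypotheses, an ideal observer reports "stimulus present" when the log-likelihood ratio of the population counts exceeds a decision criterion. The function names and rate values below are illustrative assumptions, not the study's actual model.

```python
import math

def log_likelihood_ratio(counts, rates_signal, rates_noise):
    """Log-likelihood ratio (present vs. absent) for a population of RGCs,
    assuming independent Poisson spike counts per cell (illustrative model)."""
    llr = 0.0
    for k, ls, ln in zip(counts, rates_signal, rates_noise):
        # Poisson log-likelihood difference: k*log(ls/ln) - (ls - ln);
        # the k! terms cancel between the two hypotheses.
        llr += k * math.log(ls / ln) - (ls - ln)
    return llr

def ideal_observer_decision(counts, rates_signal, rates_noise, criterion=0.0):
    """Report stimulus present when the pooled evidence exceeds the criterion."""
    return log_likelihood_ratio(counts, rates_signal, rates_noise) > criterion

# Hypothetical example: two cells, higher counts favor "stimulus present".
present = ideal_observer_decision([3, 2], [2.0, 2.0], [0.5, 0.5])  # True
absent = ideal_observer_decision([0, 0], [2.0, 2.0], [0.5, 0.5])   # False
```

Varying the criterion traces out an ROC curve for the model observer, which can then be compared against the mice's behavioral performance.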
Conclusions:
We have developed the tools for a new paradigm - Quantum Behavior - that links visually-guided behavior to retinal spike codes at the sensitivity limit of vision. This paradigm helps us address how behaviorally relevant information is extracted from the neural activity of the retina. We discuss applications of this paradigm for the end-to-end characterization of the visual performance of freely-moving mice in a single-photon detection task.
1Smeds, L., Takeshita, D., Turunen, T., Tiihonen, J., Westö, J., Martyniuk, N., Seppänen, A., and Ala-Laurila, P. (2019). Neuron 104, 576–587.
This is a 2020 ARVO Annual Meeting abstract.