September 2016
Volume 57, Issue 12
Open Access
ARVO Annual Meeting Abstract  |   September 2016
Robust extraction of the diversity of ganglion cell computation via spatially correlated stimuli and nonlinear modeling
Author Affiliations & Notes
  • Hope Shi
    Biology, University of Maryland, College Park, Maryland, United States
  • Sarvenaz Memarzadeh
    Biology, University of Maryland, College Park, Maryland, United States
  • Joshua H Singer
    Biology, University of Maryland, College Park, Maryland, United States
  • Daniel Butts
    Biology, University of Maryland, College Park, Maryland, United States
  • Footnotes
    Commercial Relationships   Hope Shi, None; Sarvenaz Memarzadeh, None; Joshua Singer, None; Daniel Butts, None
    Support  University of Maryland, Biology Department internal grant
Investigative Ophthalmology & Visual Science September 2016, Vol.57, 6408.

      Hope Shi, Sarvenaz Memarzadeh, Joshua H Singer, Daniel Butts; Robust extraction of the diversity of ganglion cell computation via spatially correlated stimuli and nonlinear modeling. Invest. Ophthalmol. Vis. Sci. 2016;57(12):6408.

Abstract

Purpose : A prerequisite for understanding visual processing by the retina is the ability to characterize the diverse responses of retinal ganglion cells (GC) to light stimuli. Here, we designed a spatiotemporal noise stimulus and applied a nonlinear modeling approach to GC spike responses with the aim of gaining insight into different forms of computation performed in parallel retinal pathways terminating on individual GC types.

Methods : Spike responses of GCs to UV light were recorded from a whole-mount in vitro preparation of ventral mouse retina using a 60-channel, perforated multi-electrode array mounted on an inverted microscope. UV light was delivered through the objective by a coupled, modified DLP projector. A spatially correlated ("cloud") noise stimulus was generated by low-pass filtering random checkerboards, producing spatial structure appropriate for mapping a variety of GC receptive field features. GC receptive fields were derived from a nonlinear input model (NIM), in which the transformation from stimulus to response is described by the integration of one or more excitatory and inhibitory subunits, each sensitive to a different stimulus feature. A separable form of the NIM, with parameters representing the spatial and temporal tuning of multiple subunits, was developed, and its parameters were determined using maximum a posteriori optimization.
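
For concreteness, the sketch below (not the authors' code) illustrates the two ingredients described above in Python with numpy/scipy: generating spatially correlated "cloud" noise by low-pass filtering random checkerboards, and evaluating a space-time-separable NIM firing rate from rectified excitatory and suppressive subunits. All function names, kernel shapes, and parameter values are illustrative assumptions, and the maximum a posteriori fitting of the subunit kernels is not shown.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)

    def cloud_stimulus(n_frames=500, n_pix=40, sigma_pix=2.0):
        """Spatially correlated noise: low-pass filter binary random checkerboards (illustrative sizes)."""
        frames = rng.choice([-1.0, 1.0], size=(n_frames, n_pix, n_pix))
        frames = gaussian_filter(frames, sigma=(0, sigma_pix, sigma_pix))  # spatial low-pass only
        return frames / frames.std()                                       # renormalize contrast

    def nim_rate(stim, k_space, k_time, weights):
        """Space-time-separable NIM sketch: rate(t) = softplus( sum_i w_i * relu( (k_space_i x k_time_i) . stim ) )."""
        T = stim.shape[0]
        n_sub, n_lags = k_time.shape
        spatial = np.einsum('txy,nxy->tn', stim, k_space)      # project each frame onto spatial kernels
        gen = np.zeros((T, n_sub))
        for lag in range(n_lags):                              # temporal filtering of spatial projections
            gen[lag:] += k_time[:, lag] * spatial[:T - lag]
        drive = (weights * np.maximum(gen, 0.0)).sum(axis=1)   # rectified subunits, +1 excitatory / -1 suppressive
        return np.logaddexp(0.0, drive)                        # softplus spiking nonlinearity

    # Toy example: one excitatory center subunit and one broader suppressive surround subunit (hypothetical).
    xx, yy = np.meshgrid(np.arange(40), np.arange(40), indexing='ij')
    center = np.exp(-((xx - 20.0) ** 2 + (yy - 20.0) ** 2) / (2 * 3.0 ** 2))
    surround = np.exp(-((xx - 20.0) ** 2 + (yy - 20.0) ** 2) / (2 * 8.0 ** 2))
    k_space = np.stack([center, surround])
    k_time = np.stack([np.exp(-np.arange(10) / 3.0), np.exp(-np.arange(10) / 5.0)])
    rate = nim_rate(cloud_stimulus(), k_space, k_time, weights=np.array([1.0, -1.0]))

In an actual fit, the spatial and temporal kernels and subunit weights would be free parameters optimized against the recorded spike trains, rather than fixed Gaussians and exponentials as in this toy example.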

Results : GCs were classified according to the structure of the nonlinear model that described them. The cloud stimulus greatly enhanced characterization of receptive field surrounds and also revealed many more ON-OFF cells than were evident from responses to full-field uniform-contrast stimuli or from the spike-triggered average. Most GC categories had nonlinear suppressive subunits, which often dominated the spatial surround beyond what linear receptive fields predicted.

Conclusions : Unlike commonly used white noise stimuli, spatially correlated stimuli robustly drove GC surround responses, and NIM modeling provided significant insight into surround processing. The NIM revealed large functional differences among the receptive fields of GCs that had similar responses to full-field stimuli. Because model subunits correspond to physical retinal circuits, our analyses will reveal mechanisms underlying GC function that standard linear-nonlinear modeling approaches have missed.

This is an abstract that was submitted for the 2016 ARVO Annual Meeting, held in Seattle, Wash., May 1-5, 2016.
