May 2006
Volume 47, Issue 13
ARVO Annual Meeting Abstract  |   May 2006
Implementation of Figure–Background Separation in a Learning Retina Implant
Author Affiliations & Notes
  • O. Baruth
    Dept of Computer Science, University of Bonn, Bonn, Germany
  • R.E. Eckmiller
    Dept of Computer Science, University of Bonn, Bonn, Germany
  • D. Neumann
    Dept of Computer Science, University of Bonn, Bonn, Germany
  • Footnotes
    Commercial Relationships: O. Baruth, None; R.E. Eckmiller, None; D. Neumann, None.
    Support: University of Bonn
Investigative Ophthalmology & Visual Science, May 2006, Vol. 47, 3199.
Abstract

Purpose: To study the functional properties of a figure–background separation system (FB) for a learning epiretinal retina implant, in conjunction with a corresponding simulator of the human central visual system.

Methods:

A: A figure–background separation system (FB) for a learning retina implant (Eckmiller et al., ARVO 2006) was integrated into a novel retina encoder (RE*) and simulated on a modern PC system. The FB module was designed to mimic two typical functions of the central visual system: (1) figure–background separation and (2) oculomotor mode selection, in order to elicit visual ‘Gestalt’ perception in retinally blind subjects by means of epiretinal stimulation with an array of about 10 x 10 electrodes under realistic visual scene conditions. Visual patterns typically consisted of a stationary component (PS) and a moving component (PM) serving as input to RE*, whose spatio–temporal (ST) filters represent receptive field (RF) properties of individually tunable P– or M–type ganglion cells.

B: Figure–background separation was implemented by combining two mechanisms: (a) sequential monitoring of ST filter responses to temporal vs. spatial input changes, and (b) transfer of the responses of identified PS vs. PM pixels into their respective memory locations.

C: Oculomotor mode selection determined whether PS or PM was chosen as the figure. Selection of, e.g., the smooth pursuit mode was implemented by subsequently providing the ST filter array with only the PM memory content, thus suppressing the stationary background pixels of PS.

D: Retinal information processing was implemented by 10 x 10 ST filters with tunable spatial and temporal parameters to simulate typical P– or M–ganglion cell properties, each feeding into one electrode channel. The photosensor layer serving as input for the optical patterns PS and PM was implemented as a hexagonal array of about 100 x 100 photosensor pixels.

E: To mimic the human central visual system of future blind subjects with learning retina implants, an inverter module was implemented at the output of RE*.

F: Alternative learning algorithms were developed for dialog–based tuning of RE*.
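
The abstract does not spell out how steps B and C are realized. The following Python sketch shows one minimal reading, in which pixels whose response change between successive frames exceeds a threshold are tagged as the moving component PM and the rest as the stationary component PS, and the selected oculomotor mode then gates which memory buffer is forwarded to the ST filter array. All names (separate_figure_background, select_oculomotor_mode) and the threshold value are illustrative assumptions, not the implemented algorithm.

import numpy as np

def separate_figure_background(prev_frame, curr_frame, threshold=0.05):
    # Step B (hypothetical reading): pixels with a large temporal response
    # change are tagged as moving (PM); the rest are stationary (PS).
    temporal_change = np.abs(curr_frame - prev_frame)
    moving = temporal_change > threshold
    pm_memory = np.where(moving, curr_frame, 0.0)   # moving component memory
    ps_memory = np.where(moving, 0.0, curr_frame)   # stationary background memory
    return ps_memory, pm_memory

def select_oculomotor_mode(ps_memory, pm_memory, mode="smooth_pursuit"):
    # Step C: in smooth-pursuit mode only the PM memory content is forwarded
    # to the ST filter array, suppressing the stationary background of PS.
    if mode == "smooth_pursuit":
        return pm_memory
    if mode == "fixation":
        return ps_memory
    raise ValueError("unknown oculomotor mode: " + mode)

# Usage on a 100 x 100 photosensor frame (square grid here, instead of the
# hexagonal array described in the Methods):
rng = np.random.default_rng(0)
background = 0.3 * rng.random((100, 100))           # stationary pattern PS
frame_t0 = background.copy()
frame_t1 = background.copy()
frame_t1[40:50, 40:50] += 0.5                       # patch that moved in: PM
ps, pm = separate_figure_background(frame_t0, frame_t1)
figure = select_oculomotor_mode(ps, pm, mode="smooth_pursuit")

In this reading, fixation mode simply forwards the PS buffer instead, matching the fixation vs. smooth pursuit choice described in the Conclusions.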
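
Step D leaves the form of the tunable ST filters open. A common receptive field model, assumed here purely for illustration, is a spatial difference-of-Gaussians followed by a first-order temporal low-pass, with the center width and time constant as the tunable P– vs. M–type parameters; the class name STFilter and all parameter values below are hypothetical.

import numpy as np

class STFilter:
    # One tunable spatio-temporal filter feeding one electrode channel.
    # Spatial RF: difference of Gaussians (assumed model, not from the
    # abstract); temporal stage: first-order low-pass with time constant tau.
    def __init__(self, center, sigma_c=1.0, sigma_s=3.0, gain_s=0.8, tau=0.1):
        self.center = center      # (row, col) RF center on the photosensor layer
        self.sigma_c = sigma_c    # center width (small for P-, large for M-type)
        self.sigma_s = sigma_s    # surround width
        self.gain_s = gain_s      # surround gain
        self.tau = tau            # time constant (slow for P-, fast for M-type)
        self.state = 0.0          # temporal filter state

    def spatial_response(self, frame):
        rows, cols = np.indices(frame.shape)
        d2 = (rows - self.center[0]) ** 2 + (cols - self.center[1]) ** 2
        center = np.exp(-d2 / (2.0 * self.sigma_c ** 2))
        surround = self.gain_s * np.exp(-d2 / (2.0 * self.sigma_s ** 2))
        return float(np.sum((center - surround) * frame))

    def step(self, frame, dt=0.01):
        # One simulation step: spatial response fed through the temporal
        # low-pass; the output drives one epiretinal stimulation electrode.
        drive = self.spatial_response(frame)
        self.state += (dt / self.tau) * (drive - self.state)
        return self.state

# 10 x 10 filter array with RF centers spread over a 100 x 100 photosensor layer:
filters = [STFilter(center=(5 + 10 * i, 5 + 10 * j))
           for i in range(10) for j in range(10)]
frame = np.random.default_rng(1).random((100, 100))
electrode_outputs = [f.step(frame) for f in filters]

Roughly, a P-type channel would be tuned with a small sigma_c and slower tau (sustained, high-acuity response), an M-type channel with a larger sigma_c and faster tau (transient, motion-sensitive response).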

Results: (1) Both FB functions, figure–background separation and oculomotor mode selection, could be fully integrated into RE*. (2) Figure–background separation could be achieved within a few movement steps. (3) Smooth pursuit was implemented electronically, without physically moving parts.

Conclusions: FB as part of RE* in a learning retina implant provides future implant-wearing subjects with the choice of fixation or smooth pursuit, which is essential for transforming realistic, superimposed patterns into visual percepts.

Keywords: pattern vision • retinal connections, networks, circuitry • computational modeling 