May 2006
Volume 47, Issue 13
ARVO Annual Meeting Abstract  |   May 2006
Figure–Background Separation During Fixation and Smooth Pursuit With a Learning Retina Implant
Author Affiliations & Notes
  • R.E. Eckmiller
    Computer Science, University of Bonn, Bonn, Germany
  • O. Baruth
    Computer Science, University of Bonn, Bonn, Germany
  • D. Neumann
    Computer Science, University of Bonn, Bonn, Germany
  • Footnotes
    Commercial Relationships  R.E. Eckmiller, None; O. Baruth, None; D. Neumann, None.
  • Footnotes
    Support  University of Bonn
Investigative Ophthalmology & Visual Science May 2006, Vol.47, 3196.

R.E. Eckmiller, O. Baruth, D. Neumann; Figure–Background Separation During Fixation and Smooth Pursuit With a Learning Retina Implant. Invest. Ophthalmol. Vis. Sci. 2006;47(13):3196.


Purpose: To explore a figure–background separation system (FB) with selectable fixation versus smooth pursuit eye movement modes as part of a novel learning retina encoder (RE*) for an epiretinal retina implant.

Methods: A: A figure–background separation system (FB) was implemented as a novel integral part of RE* (Baruth et al., ARVO 2006) for pre-processing of partly moving, superimposed visual patterns on a hexagonal array of 16 x 16 pixels, based on 10 x 10 spatio-temporal (ST) filters. Different ST filter classes represented partly overlapping concentric retinal receptive fields (RF), with inputs from up to 7 RF-center pixels and up to 36 RF-periphery pixels, to simulate individually tunable ganglion cell properties. FB was designed to separate a pattern into a stationary component (PS) and a moving component (PM) based on specific spatial versus temporal ST filter responses. Selective information processing of PS represented the 'fixation mode'; selective processing of PM represented the 'smooth pursuit mode'. B: To test the function of a learning retina implant with RE*, including figure–background separation and simulated smooth pursuit capabilities in future blind subjects, the central visual system was simulated by an inverter module (IM) that generates exact inverse mappings for the mapping operations of RE*. C: Simulated miniature eye movements (SM) were generated as one-pixel shifts of the selected input pattern (PS or PM) in a given direction to support RE* tuning.
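The separation step in part A can be illustrated with a minimal sketch. The grid size and the PS/PM/SM names follow the abstract, but the simple frame-differencing rule, the `shift` helper, and all function names below are illustrative assumptions, not the authors' ST-filter implementation:

```python
# Toy sketch of figure-background separation on a binary 16 x 16 pixel grid.
# PS (stationary component), PM (moving component), and one-pixel shifts
# (simulated miniature eye movements, SM) follow the abstract's terminology;
# the frame-differencing rule below is an assumed stand-in for the
# spatio-temporal (ST) filter array.

N = 16  # the abstract's hexagonal 16 x 16 array, modeled here as a square grid

def shift(frame, dx, dy):
    """Shift a binary frame by (dx, dy) pixels; one SM step uses |dx|, |dy| <= 1."""
    out = [[0] * N for _ in range(N)]
    for y in range(N):
        for x in range(N):
            if frame[y][x] and 0 <= x + dx < N and 0 <= y + dy < N:
                out[y + dy][x + dx] = 1
    return out

def separate(frames):
    """Split a frame sequence into a stationary (PS) and a moving (PM) component.

    A pixel active in every frame is classed as stationary; a pixel active in
    the last frame but not in all frames is classed as moving.
    """
    ps = [[1 if all(f[y][x] for f in frames) else 0 for x in range(N)]
          for y in range(N)]
    pm = [[1 if frames[-1][y][x] and not ps[y][x] else 0 for x in range(N)]
          for y in range(N)]
    return ps, pm
```

Feeding this the superposition of a stationary shape and a dot that moves one pixel per frame recovers the shape in PS and the dot's current position in PM after a few movement steps.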

Results: (1) RE* with FB was able to separate superpositions of simple patterns (e.g., circle, cross, square) into the stationary PS and the moving PM within a few movement steps and to transfer these patterns into separate memory locations for subsequent processing. (2) Subjects with normal vision, interacting with the RE* + IM system, could simulate the fixation mode by selecting the PS memory, or the smooth pursuit mode by selecting the PM memory, for further retinal information processing by means of the tunable ST filter array. (3) Subjects were able to tune RE* during an iterative dialog such that part of the pixels of P2 resembled those of PS or PM, whereas the remaining pixels were still blank due to partial ambiguity of the ST filter outputs. (4) Upon completion of the tuning phase, any PS or PM (after being shifted at least once by SM) could be decoded by IM back into PS or PM with good quality.
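The iterative tuning dialog of results (2)-(4) can be sketched as a coordinate-descent loop. Here RE* is reduced to a per-pixel gate with tunable binary gains, the inverter module IM to the exact inverse of the intended mapping (identity), and the subject's percept-quality report to a pixel-mismatch count; all of these reductions are illustrative assumptions, far simpler than the tunable ST filter array in the abstract:

```python
# Hedged sketch of the RE* + IM tuning dialog: flip one encoder parameter at
# a time and keep only changes the simulated "subject" does not reject.
# Every function here is an assumed simplification, not the authors' method.

N_PIX = 16  # flattened toy pattern length (not the full 16 x 16 array)

def re_star(pattern, gains):
    """Toy encoder: a pixel is transmitted only if its gain is switched on."""
    return [p * g for p, g in zip(pattern, gains)]

def im(code):
    """Toy inverter module: identity, the exact inverse of the intended mapping."""
    return list(code)

def mismatch(p, q):
    """Stand-in for the subject's report of percept quality."""
    return sum(a != b for a, b in zip(p, q))

def tune(pattern, rounds=4):
    """Dialog loop: try one gain flip per step, revert flips that worsen the percept."""
    gains = [0] * N_PIX  # untuned start: nothing transmitted, percept blank
    for _ in range(rounds):
        for i in range(N_PIX):
            before = mismatch(pattern, im(re_star(pattern, gains)))
            gains[i] ^= 1
            if mismatch(pattern, im(re_star(pattern, gains))) > before:
                gains[i] ^= 1  # worse percept reported: revert this flip
    return gains
```

After tuning, IM decodes RE*'s output back into the input pattern with zero mismatch, the toy analogue of result (4).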

Conclusions: A. RE* with FB provides the capability of figure–background separation in a learning retina implant. B. RE* with FB implements the selective oculomotor modes of fixation of stationary patterns versus visual tracking of moving patterns in a learning retina implant.

Keywords: pattern vision • retinal connections, networks, circuitry • computational modeling 
