Abstract
Purpose:
To study the functional properties of a figure–background separation system (FB) for a learning epiretinal retina implant in conjunction with a corresponding simulator of the human central visual system.
Methods:
A: A figure–background separation system (FB) for a learning retina implant (Eckmiller et al., ARVO 2006) was integrated into a novel retina encoder (RE*) and simulated on a modern PC system. The FB module was designed to mimic two typical functions of the central visual system: 1) figure–background separation and 2) oculomotor mode selection, in order to elicit visual ‘Gestalt’ perception in retinally blind subjects by means of epiretinal stimulation with an array of about 10 x 10 electrodes under realistic visual scene conditions. Visual patterns typically consisted of a stationary component (PS) and a moving component (PM) serving as input to RE*, whose spatio-temporal (ST) filters represented receptive field (RF) properties of individually tunable P- or M-type ganglion cells (see the filter sketch below).
B: Figure–background separation was implemented by combining two mechanisms: a. sequential monitoring of ST filter responses to temporal vs. spatial input changes, and b. transfer of the responses of identified PS vs. PM pixels into respective memory locations (see the separation sketch below).
C: Oculomotor mode selection determined whether PS or PM was treated as the figure. Selection of, for example, the smooth pursuit mode was implemented by subsequently feeding the ST filter array only the PM memory content, thereby suppressing the stationary background pixels of PS.
D: Retinal information processing was implemented by 10 x 10 ST filters with tunable spatial and temporal parameters to simulate typical P- or M-ganglion cell properties, each feeding into one electrode channel. The photosensor layer serving as input for the optical patterns PS and PM was implemented as a hexagonal array of about 100 x 100 photosensor pixels.
E: To mimic the human central visual system of future blind subjects with learning retina implants, an inverter module was implemented at the output of RE*.
F: Alternative learning algorithms were developed for dialog-based tuning of RE* (see the tuning sketch below).
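Filter sketch. The following is a minimal Python sketch, not the authors' implementation, of the ST filter array described in A and D: a difference-of-Gaussians spatial receptive field combined with a sustained (P-type) or transient (M-type) temporal stage. All names, parameter values, the P/M channel mix, and the square (rather than hexagonal) sensor grid are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code) of Methods A/D: a 10 x 10 array
# of tunable spatio-temporal (ST) filters over a 100 x 100 photosensor layer.
import numpy as np

def dog_kernel(size, sigma_c, sigma_s, w_s=0.9):
    """Difference-of-Gaussians spatial receptive field (center minus surround)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return center - w_s * surround

class STFilter:
    """One tunable ST filter, feeding one electrode channel."""
    def __init__(self, kind="P", patch=10):
        # Assumed tuning: P-type = fine RF, sustained response;
        # M-type = coarse RF, transient response.
        self.kernel = dog_kernel(patch, 1.2, 3.0) if kind == "P" else dog_kernel(patch, 2.0, 5.0)
        self.alpha = 0.2 if kind == "P" else 0.7         # temporal update rate
        self.transient = (kind == "M")
        self.state = 0.0

    def step(self, pixels):
        drive = float(np.sum(self.kernel * pixels))      # spatial RF response
        prev = self.state
        self.state += self.alpha * (drive - self.state)  # temporal low-pass
        return self.state - prev if self.transient else self.state

# 10 x 10 filter array over a 100 x 100 frame; the alternating P/M assignment
# is arbitrary here, since the abstract leaves the channel mix open.
filters = [[STFilter("P" if (i + j) % 2 else "M") for j in range(10)]
           for i in range(10)]

def encode(frame):
    """Map one 100 x 100 photosensor frame to 10 x 10 electrode channel values."""
    return np.array([[filters[i][j].step(frame[10*i:10*i+10, 10*j:10*j+10])
                      for j in range(10)] for i in range(10)])
```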
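Separation sketch. Mechanisms B and C might look as follows, again under stated assumptions: a fixed change threshold stands in for the abstract's sequential monitoring of ST filter responses, and the helper names (separate, select_mode) are hypothetical.

```python
# Hypothetical sketch of Methods B/C: pixels with temporal change are routed
# to the moving-component memory (PM), unchanged pixels to the stationary
# memory (PS); mode selection re-presents only one memory to the ST filters.
import numpy as np

def separate(prev_frame, frame, thresh=0.05):
    """One movement step of figure-background separation (threshold assumed)."""
    moving = np.abs(frame - prev_frame) > thresh   # temporal change -> PM pixel
    pm_mem = np.where(moving, frame, 0.0)          # moving component memory
    ps_mem = np.where(~moving, frame, 0.0)         # stationary component memory
    return ps_mem, pm_mem

def select_mode(ps_mem, pm_mem, mode="smooth_pursuit"):
    """Oculomotor mode selection: choose which component acts as figure."""
    # Smooth pursuit feeds the ST filter array only the PM memory content,
    # suppressing the stationary background; fixation keeps PS instead.
    return pm_mem if mode == "smooth_pursuit" else ps_mem
```

Repeating separate() over a few movement steps accumulates the moving figure in the PM memory; passing the output of select_mode() to the encode() sketch above then realizes smooth pursuit purely electronically.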
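Tuning sketch. For E and F, the abstract states only that an inverter module mimics the central visual system and that alternative learning algorithms tune RE* in a dialog; the simple hill-climbing loop below is one assumed instance, not the authors' algorithm, and the render callable is a hypothetical stand-in for the parameterized encoder.

```python
# Hypothetical sketch of Methods E/F: an inverter module maps the 10 x 10
# electrode signals back to a 100 x 100 percept estimate, and a dialog loop
# tunes encoder parameters until the percept matches the presented pattern.
# The (1+1) hill-climbing scheme is an illustrative assumption.
import numpy as np

def invert(channels):
    """Stand-in inverter: upsample 10 x 10 channels to a 100 x 100 percept."""
    return np.kron(channels, np.ones((10, 10)))

def dialog_tune(params, render, target, steps=200, sigma=0.05, seed=0):
    """Tune parameters so invert(render(params)) approximates the target.

    render: hypothetical callable mapping a parameter vector to the 10 x 10
    RE* channel output for a fixed input pattern.
    """
    rng = np.random.default_rng(seed)
    best = np.mean((invert(render(params)) - target) ** 2)
    for _ in range(steps):
        trial = params + sigma * rng.standard_normal(params.shape)  # perturb tuning
        err = np.mean((invert(render(trial)) - target) ** 2)        # percept mismatch
        if err < best:                                              # keep improvement
            params, best = trial, err
    return params
```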
Results:
(1) Both FB functions, figure–background separation and oculomotor mode selection, could be fully integrated into RE*. (2) Figure–background separation was achieved within a few movement steps. (3) Smooth pursuit was implemented electronically, without physically moving parts.
Conclusions:
FB as part of RE* in a learning retina implant provides future implant-wearing subjects with a choice between fixation and smooth pursuit, which is essential for transforming realistic, superimposed patterns into visual percepts.
Keywords: pattern vision • retinal connections, networks, circuitry • computational modeling