ARVO Annual Meeting Abstract  |  May 2007
Volume 48, Issue 13
Portable Learning Retina Encoder RE-2 With Image Segment Tuning and Head Movement Control
Author Affiliations & Notes
  • R. E. Eckmiller
    Computer Science, University of Bonn, Bonn, Germany
  • R. Schatten
    Computer Science, University of Bonn, Bonn, Germany
  • O. Baruth
    Computer Science, University of Bonn, Bonn, Germany
  • Footnotes
    Commercial Relationships R.E. Eckmiller, None; R. Schatten, None; O. Baruth, None.
  • Footnotes
    Support University of Bonn
Investigative Ophthalmology & Visual Science May 2007, Vol.48, 654.

Purpose: To simulate the 'Gestalt' perception induced by a retina implant with only 100 electrodes by means of a novel learning retina encoder (RE-2) and head-controlled tuning in humans with normal vision.

Methods:
A: An RE-2 for retinal information processing of segmented input patterns (P1) with 16 x 16 pixels (Schatten et al., ARVO 2007) was implemented by a filter module (FM) with 10 x 10 spatio-temporal (ST) filters to mimic primate retinal receptive fields (RF).
B: P1 were segmented into a corresponding set of line elements (Ei) at specific locations within P1.
C: The central visual system, with a perceived pattern (P2) of 16 x 16 pixels as output, was simulated by an inverter module (IM), which performed inverse mappings of FM.
D: Two small displays were integrated into a frame of glasses (to mimic the future head-mounted RE-2) for separate presentation of P1 (L-display, left eye) and the simulated P2 (R-display, right eye). A 3-D head acceleration sensor was attached to the head-mounted display set to monitor specific head movements as commands for the FM tuning process.
E: During dialog-based FM tuning, subjects with normal vision interacted with the RE-2 + IM system by comparing a given pattern element Ei (or an entire pattern) at the FM input on the L-display with the corresponding IM output on the R-display.
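The FM/IM simulation described above can be sketched as a small linear model. Everything here is an assumption for illustration: the abstract does not specify the filter form, so the sketch uses purely spatial linear receptive fields and a least-squares pseudoinverse for IM's inverse mapping; only the sizes (16 x 16 input, 10 x 10 filters) come from the text.

```python
import numpy as np

# Hypothetical sketch of the FM -> IM pipeline; the linear-filter form and
# pseudoinverse IM are assumptions, not the authors' implementation.

N_PIX = 16 * 16   # input pattern P1 and simulated percept P2: 16 x 16 pixels
N_RF = 10 * 10    # filter module FM: 10 x 10 receptive-field filters

rng = np.random.default_rng(0)

# FM: each row is one (here purely spatial) receptive-field weight vector.
FM = rng.normal(size=(N_RF, N_PIX))

def filter_module(p1, fm=FM):
    """Map an input pattern P1 (256-vector) to 100 filter activations."""
    return fm @ p1

def inverter_module(activations, fm=FM):
    """IM: approximate inverse mapping of FM (least-squares pseudoinverse)."""
    return np.linalg.pinv(fm) @ activations

p1 = rng.random(N_PIX)                   # a test pattern P1
p2 = inverter_module(filter_module(p1))  # simulated percept P2
```

Because FM compresses 256 pixels into 100 activations, IM can only approximate P1, which is exactly why the percept quality depends on tuning.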

Results:
(1) Subjects were able to tune RE-2 in a dialog-based learning process exclusively by means of specific small head movements.
(2) During the dialog, an evolutionary algorithm was used to tune the parameter vector of FM. Initially, IM generated only random patterns on the R-display; gradually, the IM output pattern on the R-display became more and more similar to Ei.
(3) The tuning procedure was repeated with all pattern elements of the set. Subsequently, test patterns P1 were generated by simultaneous or sequential presentation of the corresponding set of Ei on the L-display.
(4) The number of required tuning iterations was in the range of 200-1000, and tuning could be performed within 2 hours.
(5) Upon completion of the tuning phase, new P1 that had not been used for tuning could typically be reconstructed by IM with good quality.
(6) The dependence of visual percept quality on the number and type of Ei and on their temporal presentation could be analysed.
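The dialog-based tuning loop in (2) can be sketched as a minimal (1+1) evolution strategy. This is an assumption-laden stand-in: the abstract does not give the algorithm's details, the subject's "better/worse" judgement is replaced by a pixel distance, and the mapping from parameters to percept is trivialised so the selection loop itself is easy to follow.

```python
import numpy as np

# Hypothetical (1+1)-evolution-strategy sketch of dialog-based FM tuning;
# parameter shapes, mutation scale, and the fitness proxy are assumptions.

rng = np.random.default_rng(1)
N_PIX = 16 * 16

target_ei = rng.random(N_PIX)   # pattern element Ei shown on the L-display

def im_output(params):
    """Stand-in for FM(params) followed by IM: here the simulated percept
    is simply the parameter vector, to keep the loop transparent."""
    return params

def fitness(params):
    # Smaller distance = R-display percept more similar to Ei.
    return np.linalg.norm(im_output(params) - target_ei)

params = rng.random(N_PIX)      # initial FM parameters: random percept
initial_fitness = fitness(params)

for _ in range(1000):           # abstract reports 200-1000 tuning iterations
    child = params + rng.normal(scale=0.05, size=N_PIX)  # mutate parameters
    if fitness(child) < fitness(params):  # stands in for the subject's choice
        params = child
```

In the real system the comparison in the `if` would be the subject's head-movement response, not a computed norm, which is what makes the procedure perceptually grounded.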

Conclusions:
A. RE-2 may significantly improve the quality of regained 'Gestalt' perception in future retina implant applications for blind subjects with only about 100 implanted electrodes at the retinal output.
B. Use of small head movements as command signals for various functions, including dialog-based tuning, can improve the acceptance of this intelligent visual prosthesis.
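Turning the 3-D head-acceleration signal into discrete commands, as in conclusion B, might look like the following threshold classifier. The axes, threshold value, and command names are all hypothetical; the abstract only states that specific head movements served as commands.

```python
import numpy as np

# Hypothetical mapping from one 3-D head-acceleration sample to a discrete
# tuning command; threshold, axis conventions, and labels are assumptions.

THRESH = 2.0  # m/s^2 above which a movement counts as a deliberate command

def classify_head_movement(accel):
    """Map one (ax, ay, az) sample to a command string, or None for noise."""
    ax, ay, az = accel
    if abs(ax) < THRESH and abs(ay) < THRESH and abs(az) < THRESH:
        return None  # ordinary posture jitter: ignore
    axis = int(np.argmax(np.abs([ax, ay, az])))
    sign = int(np.sign([ax, ay, az][axis]))
    # Illustrative assignment: a sharp downward nod accepts the current
    # IM output, a sideways shake rejects it, etc.
    commands = {(2, -1): "accept", (1, 1): "reject", (1, -1): "reject",
                (2, 1): "next_element", (0, 1): "undo", (0, -1): "undo"}
    return commands.get((axis, sign))
```

A real detector would integrate over a time window and debounce repeats, but the same idea applies: only movements exceeding a deliberate-motion threshold become commands.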

Keywords: pattern vision • retinal connections, networks, circuitry • computational modeling 
