ARVO Annual Meeting Abstract | May 2007
Implementation of a Portable Learning Retina Implant With Image Segmentation and Head Control
Author Affiliations & Notes
  • R. Schatten
    Division of Neuroinformatics, University of Bonn, Bonn, Germany
  • R. Eckmiller
    Division of Neuroinformatics, University of Bonn, Bonn, Germany
  • S. Borbe
    Division of Neuroinformatics, University of Bonn, Bonn, Germany
  • Footnotes
    Commercial Relationships: R. Schatten, None; R. Eckmiller, None; S. Borbe, None.
    Support: University of Bonn
Investigative Ophthalmology & Visual Science May 2007, Vol.48, 655.
      R. Schatten, R. Eckmiller, S. Borbe; Implementation of a Portable Learning Retina Implant With Image Segmentation and Head Control. Invest. Ophthalmol. Vis. Sci. 2007;48(13):655.

Abstract

Purpose: To study the function of a novel learning retina encoder (RE-2) with pattern segmentation properties.

Methods:
A: The optical input pattern (P1) was simulated as an array of 16 x 16 pixels. RE-2 for retinal information processing of the segmented P1 (Eckmiller et al., ARVO 2007) was implemented as a filter module (FM) with 10 x 10 spatio-temporal (ST) filters with tunable parameter vectors (PV) to mimic P- and M-ganglion cell properties.
B: For segmentation of P1, a set of line elements (Ei) was defined by different lengths (2, 4, 8 pixels) and orientations (0, 45, 90, 135 deg), in addition to a point element (1 pixel), at specific locations (see the illustrative sketch after the Methods).
C: A lightweight head-mounted RE-2 unit was equipped with two small displays for separate presentation of P1 (L-display for the left eye) and the simulated visual percept P2 (R-display for the right eye), and with a 3-D head acceleration sensor (AS; VTI Tech.).
D: The central visual system, representing a foveal region of about 16 deg in diameter, was simulated by an inverter module (IM) with a 16 x 16 pixel pattern P2 as output. The inverse mapping of IM was achieved by a combination of classification, a decision tree, and simulated miniature eye movements (SM). P2 was presented at the same resolution as P1. IM assigned a specific black or white pattern value to a P2 pixel only if the ambiguity of the participating ST filter output signals had been resolved.
E: A dialog module (DM) for dialog-based FM tuning in conjunction with an evolutionary algorithm (EA) was equipped with interfaces to AS and FM.
F: For specific human commands regarding dialog-based tuning and other RE-2 functions, a set of small head movement signals (Mi) was defined (e.g., head to the right, left, up, or down).
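The following Python sketch illustrates Methods A and B: the line-element set Ei (lengths 2, 4, 8 pixels at 0, 45, 90, 135 deg, plus a 1-pixel point element) rasterized into 16 x 16 input patterns, and a toy container for the tunable parameter vectors of the 10 x 10 ST filter module. This is a minimal illustration under stated assumptions, not the authors' implementation; the names render_element and STFilter, and the contents and dimensionality of the parameter vectors, are hypothetical.

```python
# Illustrative sketch (not the authors' code) of the element set Ei and the
# FM parameter vectors described in Methods A and B.
import numpy as np

PATTERN_SIZE = 16                   # P1 and P2 are 16 x 16 pixel arrays
ELEMENT_LENGTHS = (2, 4, 8)         # line-element lengths in pixels
ELEMENT_ANGLES = (0, 45, 90, 135)   # orientations in degrees

def render_element(length, angle_deg, origin):
    """Rasterize one line element Ei (or the 1-pixel point element when
    length == 1) into a binary 16 x 16 pattern anchored at origin (row, col)."""
    pattern = np.zeros((PATTERN_SIZE, PATTERN_SIZE), dtype=np.uint8)
    r0, c0 = origin
    angle = np.deg2rad(angle_deg)
    for step in range(length):
        r = int(round(r0 - step * np.sin(angle)))
        c = int(round(c0 + step * np.cos(angle)))
        if 0 <= r < PATTERN_SIZE and 0 <= c < PATTERN_SIZE:
            pattern[r, c] = 1
    return pattern

# The full element set: the point element plus all length/orientation combinations.
ELEMENT_SET = [(1, 0)] + [(l, a) for l in ELEMENT_LENGTHS for a in ELEMENT_ANGLES]

class STFilter:
    """Toy stand-in for one spatio-temporal filter of the FM with a tunable
    parameter vector (PV); the real filters mimic P- and M-ganglion cells."""
    def __init__(self, pv):
        self.pv = np.asarray(pv, dtype=float)

if __name__ == "__main__":
    # Compose a simple P1 from two line elements, as in the abstract.
    p1 = render_element(4, 45, (10, 3)) | render_element(8, 90, (14, 8))
    print(int(p1.sum()), "pixels set in P1")
    # A 10 x 10 grid of ST filters; the PV length of 6 is an arbitrary assumption.
    rng = np.random.default_rng(0)
    fm = [[STFilter(rng.random(6)) for _ in range(10)] for _ in range(10)]
```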

Results:
(1) FM tuning with the EA successfully used different Mi to iteratively select the best three out of six possible PV until the output pattern became identical to a given input pattern (see the illustrative sketch after the Results).
(2) All Ei types, as single input pattern segments, were successfully recovered at the IM output as a result of dialog-based FM tuning with DM.
(3) Subsequently, a simple P1 composed of a corresponding set of Ei was successfully recovered on the R-display (for P2) following a tuning phase.
(4) Alternative learning algorithms to accelerate the dialog-based tuning process could be analysed.
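The following sketch illustrates the evolutionary tuning loop summarized in Result (1): six candidate parameter vectors per generation, the best three retained, new candidates bred from the survivors, stopping once the output pattern matches the input. In the experiment the selection is signalled by the user's head-movement commands Mi; here a toy pixel-distance fitness stands in for that judgement, and the function names, population parameters, and mutation scheme are assumptions rather than the authors' method.

```python
# Minimal sketch of a dialog-based evolutionary tuning loop (select the best
# 3 of 6 candidate PV per generation); not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def percept_error(pv, target):
    """Hypothetical fitness proxy: distance between the simulated percept P2
    obtained under parameter vector pv and the input pattern P1 (target).
    In the real system this judgement is made by the user via head movements."""
    p2 = (pv.reshape(target.shape) > 0.5).astype(np.uint8)   # toy encoder + inverter
    return int(np.abs(p2.astype(int) - target.astype(int)).sum())

def tune_fm(target, pv_dim, generations=300, pop_size=6, survivors=3, sigma=0.1):
    """(3+3)-style evolutionary tuning; pv_dim must equal target.size here."""
    population = [rng.random(pv_dim) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank the six candidates and keep the best three ("best 3 of 6").
        population.sort(key=lambda pv: percept_error(pv, target))
        if percept_error(population[0], target) == 0:
            break                                  # output pattern matches input
        parents = population[:survivors]
        # Refill the population by mutating the survivors.
        population = parents + [p + rng.normal(0.0, sigma, pv_dim) for p in parents]
    return population[0]

if __name__ == "__main__":
    p1 = rng.integers(0, 2, size=(16, 16)).astype(np.uint8)
    best_pv = tune_fm(p1, pv_dim=16 * 16)
    print("remaining pixel error:", percept_error(best_pv, p1))
```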

Conclusions:
A. The use of segments rather than entire patterns in RE-2 offers a novel approach for retina encoder tuning.
B. The composition of patterns by simultaneously or sequentially presented line elements Ei with RE-2 may significantly enhance the quality of electrically elicited visual percepts in retinally blind subjects.

Keywords: pattern vision • retinal connections, networks, circuitry • computational modeling 