May 2005
Volume 46, Issue 13
ARVO Annual Meeting Abstract | May 2005
Learning Retina Encoder RE*: Results From Dialog-Based Tuning in Humans With Normal Vision
Author Affiliations & Notes
  • R.E. Eckmiller
    Computer Science, University of Bonn, Bonn, Germany
  • O. Baruth
    Computer Science, University of Bonn, Bonn, Germany
  • D. Neumann
    Computer Science, University of Bonn, Bonn, Germany
  • Footnotes
    Commercial Relationships: R.E. Eckmiller, None; O. Baruth, None; D. Neumann, None.
    Support: University of Bonn
Investigative Ophthalmology & Visual Science May 2005, Vol. 46, 5266.

R.E. Eckmiller, O. Baruth, D. Neumann; Learning Retina Encoder RE*: Results From Dialog-Based Tuning in Humans With Normal Vision. Invest. Ophthalmol. Vis. Sci. 2005;46(13):5266.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose: To test the tunability of a learning retina encoder (RE*) for an epiretinal retina implant in humans with normal vision.

Methods: A: A novel retina encoder (RE*) for retinal information processing in real time (Baruth et al., ARVO 2005), with a hexagonal input pattern P1 of 16 x 16 pixels, was implemented with 10 x 10 spatio-temporal (ST) filters whose centers were evenly distributed over the input pixel array. Different ST filter classes, with partly overlapping concentric receptive fields (RF) receiving inputs from up to 7 RF-center pixels and up to 36 RF-periphery pixels, simulated P-On, P-Off, or M ganglion cell properties and could be tuned by modifying spatial and temporal parameters (an illustrative filter sketch follows the abstract). B: The central visual system, with a 16 x 16 pixel pattern P2 as output, was simulated by an inverter module (IM) capable of performing exact inverse mappings for a number of specific reference mapping operations RE*ref, defined by parameter vectors PVref. C: During dialog-based tuning, subjects with normal vision interacted with the RE* + IM system, comparing a given RE* input P1 on one monitor with the current IM output P2 on another. Initially, PV differed from PVref, so IM generated no pattern pixels in P2. During the dialog phase, subjects repeatedly selected the 3 out of 6 patterns P2 that appeared most similar to P1; PV gradually converged onto the corresponding PVref, while P2 grew more and more similar to P1 (a tuning-loop sketch follows the abstract). D: Simulated miniature eye movements (SM) could be generated as shifts of P1 by one pixel in a given direction at the RE* input (sketched below).

Results: (1) Subjects were able to tune RE* (whose output could initially not be inverted by the given pre-defined IM) during an iterative dialog to the corresponding RE*ref, such that some pixels of P2 resembled those of P1, whereas the remaining pixels stayed blank due to partial ambiguity of the ST filter outputs. (2) Subsequent elicitation of an SM (while P1, RE*ref, and IM remained unchanged) yielded a significant improvement or even completion of P2. (3) The number of dialog iterations required to reach nearly perfect tuning (RE*ref) depended on the type and spatial distribution of the corresponding ST filter classes and was in the range of 200–1000 iterations. (4) Repeated use of several input patterns P1 yielded faster tuning than use of a single P1. (5) Upon completion of the tuning phase, any P1, after being encoded by RE*ref and shifted at least once by SM, could be decoded by IM back into P1 with good quality.

Conclusions: A. RE* offers improved dialog-based tuning with respect to decoding quality. B. SM may be important for improving the quality of re-gained ‘Gestalt’ perception.
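The abstract gives no implementation detail beyond the paragraph above; the following minimal Python sketch only illustrates the kind of tunable center-surround ST filter that Methods A describes. The class name, weights, and the leaky-integrator temporal stage are assumptions, not the authors' design; the hexagonal pixel geometry is ignored, and the pooled pixel sets are simply passed in as index lists.

```python
# Illustrative sketch (not the authors' code) of one tunable spatio-temporal
# (ST) filter from Methods A: a concentric receptive field pooling up to
# 7 RF-center pixels and up to 36 RF-periphery pixels of the flattened
# 16 x 16 input P1. All names, weights, and time constants are assumptions.
import numpy as np

class STFilter:
    def __init__(self, center_idx, surround_idx, w_center=1.0,
                 w_surround=-0.5, tau=0.9, sign=+1):
        self.center_idx = center_idx      # flat indices of <= 7 center pixels
        self.surround_idx = surround_idx  # flat indices of <= 36 periphery pixels
        self.w_center = w_center          # spatial gain, RF center
        self.w_surround = w_surround      # spatial gain, antagonistic periphery
        self.tau = tau                    # temporal low-pass factor in [0, 1)
        self.sign = sign                  # +1 ~ P-On, -1 ~ P-Off
        self.state = 0.0                  # internal temporal state

    def step(self, p1_flat):
        """Advance the filter by one frame of the flattened 16x16 input."""
        drive = (self.w_center * p1_flat[self.center_idx].mean()
                 + self.w_surround * p1_flat[self.surround_idx].mean())
        # Leaky temporal integration; an M-cell class would instead use a
        # more transient (band-pass) temporal stage.
        self.state = self.tau * self.state + (1.0 - self.tau) * drive
        return self.sign * self.state
```

In this reading, the parameter vector PV of the abstract would concatenate the spatial and temporal parameters (here w_center, w_surround, tau) over all 10 x 10 filters.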
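Methods C describes a selection-only search in parameter space: the subject never adjusts PV directly but repeatedly picks the 3 of 6 reconstructions P2 closest to P1. Structurally this resembles an evolutionary algorithm with the human as the fitness function. The sketch below is a hypothetical rendering under that assumption; render_p2 and ask_subject stand in for the RE* + IM display and the subject's choice, and the recombination/mutation scheme is invented for illustration.

```python
# Hypothetical sketch of the dialog-based tuning loop of Methods C, framed
# as an evolutionary search in which the subject acts as the fitness
# function. The recombination and mutation scheme is an assumption.
import numpy as np

rng = np.random.default_rng(0)

def dialog_tuning(render_p2, ask_subject, pv_dim, n_candidates=6,
                  n_selected=3, n_iterations=1000, mutation_scale=0.1):
    # Start from random parameter vectors PV; initially PV != PVref, so the
    # inverter module IM reconstructs no pattern pixels in P2.
    population = [rng.normal(size=pv_dim) for _ in range(n_candidates)]
    for _ in range(n_iterations):
        # Show the subject the 6 current reconstructions P2 ...
        patterns = [render_p2(pv) for pv in population]
        # ... and let the subject pick the 3 most similar to P1.
        picked = ask_subject(patterns, n_selected)
        parents = [population[i] for i in picked]
        # Next generation: keep the parents, refill by mutated recombination,
        # so PV drifts toward the PVref that the fixed IM can invert.
        population = list(parents)
        while len(population) < n_candidates:
            a, b = rng.choice(len(parents), size=2, replace=False)
            child = 0.5 * (parents[a] + parents[b])
            child += rng.normal(scale=mutation_scale, size=pv_dim)
            population.append(child)
    return population[0]  # best surviving PV
```

The 200–1000 iterations reported in Results (3) would correspond to n_iterations here, i.e. 200–1000 rounds of displaying 6 candidates and selecting 3.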
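Methods D reduces a simulated miniature eye movement (SM) to a one-pixel shift of P1, so it can be stated in a few lines. np.roll is only an illustrative stand-in; the abstract does not say how the pattern border is handled.

```python
# Methods D: a simulated miniature eye movement (SM) as a one-pixel shift
# of the 16 x 16 input pattern P1. Wrap-around via np.roll is an
# assumption; the abstract leaves border handling unspecified.
import numpy as np

def simulate_sm(p1, direction):
    """Shift the 16x16 pattern P1 by one pixel in the given direction."""
    dr, dc = {"up": (-1, 0), "down": (1, 0),
              "left": (0, -1), "right": (0, 1)}[direction]
    return np.roll(p1, shift=(dr, dc), axis=(0, 1))
```

Presumably this helps (Results 2) because the shifted input gives the unchanged IM a second, displaced set of ST filter outputs for the same underlying pattern, resolving the partial ambiguity that had left some P2 pixels blank.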

Keywords: pattern vision • retinal connections, networks, circuitry • computational modeling 