Volume 46, Issue 13
ARVO Annual Meeting Abstract  |   May 2005
Learning Retina Encoder RE*: Computer Implementation and Visual System Simulation
Author Affiliations & Notes
  • O. Baruth
    Dept of Computer Science, University of Bonn, Bonn, Germany
  • R.E. Eckmiller
    Dept of Computer Science, University of Bonn, Bonn, Germany
  • D. Neumann
    Dept of Computer Science, University of Bonn, Bonn, Germany
  • Footnotes
    Commercial Relationships  O. Baruth, None; R.E. Eckmiller, None; D. Neumann, None.
    Support  University of Bonn
Investigative Ophthalmology & Visual Science May 2005, Vol.46, 1512.
O. Baruth, R.E. Eckmiller, D. Neumann; Learning Retina Encoder RE*: Computer Implementation and Visual System Simulation. Invest. Ophthalmol. Vis. Sci. 2005;46(13):1512.

Abstract

Purpose: To study the functional properties of a novel retina encoder (RE*), algorithms for dialog-based learning, and the properties of a simulator of the human central visual system.

Methods:
A: A retina encoder (RE*) with a variable parameter vector (PV) was developed (Eckmiller et al., ARVO 2005). A photosensor layer for input patterns P1 was implemented as a hexagonal array ranging from 2 x 2 to 100 x 100 photosensor pixels. Various spatio-temporal (ST) filter arrays could be defined, ranging from 2 x 2 to 100 x 100, with adjustable parameters. Temporal ST filter properties were implemented separately for the RF-center and RF-periphery paths as adjustable finite impulse response (FIR) filters. For each ST filter class, step responses yielded characteristic (often ambiguous) time sequences of (typically four) values, depending on the number of stimulated center and periphery pixels. A battery of P1 patterns was provided.
B: To mimic the human central visual system, an inverter module (IM) was implemented at the output of RE*. IM simulated the mapping of the RE* output onto a perceived pattern P2.
C: A graphical user interface (GUI) provided visualization of P1, the ST filter distribution, the RE* output, P2, the PV selector, and tuning progress, as well as tools for RE* configuration, mouse-controlled human feedback, and dialog-based tuning.
D: P1 could be shifted by simulated miniature eye movements (SM). SMs were requested by IM if unidentified pixels of P2 remained.
E: Alternative learning algorithms, including genetic algorithms (GA), were developed for dialog-based tuning.

Results:
(1) ST filter step responses were unique for each filter class but ambiguous for stimulation of a given number of center and periphery pixels.
(2) Various RE* functions could be specified with the GUI.
(3) For a number of RE* configurations, corresponding IM configurations could be successfully specified for perfect inversion of the RE* output into P2 = P1 (with, however, the additional generation of one or two SMs). The respective PVs of those RE* were labeled PVref of RE*ref.
(4) A range of learning algorithms, including GA, was successfully tested with several subjects to compare the robustness and speed of dialog-based tuning.

Conclusions:
A. RE* configurations that generate guaranteed invertible signal streams may have a better chance of helping future blind subjects regain vision than retina implants that ignore both the encoding and ambiguity-removal requirements of the central visual system.
B. RE* and its dialog-based tunability, including the option of SM generation, can be properly tested in subjects with normal vision by means of IM for future applications in the blind.
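The ambiguity noted in Results (1) can be illustrated with a minimal sketch of the center/periphery FIR step responses from Methods A. All tap coefficients, the linear pixel-count scaling, and the four-sample window below are illustrative assumptions for this sketch, not the authors' actual filter parameters; the periphery taps are deliberately chosen proportional to the center taps so that two different stimulation patterns produce identical output sequences.

```python
import numpy as np

def fir_step_response(coeffs, n_samples=4):
    """First n_samples of an FIR filter's response to a unit step.
    Sample k of the step response is the sum of taps 0..k."""
    return np.cumsum(coeffs)[:n_samples]

def st_filter_output(n_center, n_periphery,
                     center_coeffs, periphery_coeffs, n_samples=4):
    """Four-value time sequence of an ST filter with separate center
    and periphery FIR paths; pixel counts scale each path linearly
    (a simplifying assumption for this sketch)."""
    return (n_center * fir_step_response(center_coeffs, n_samples)
            + n_periphery * fir_step_response(periphery_coeffs, n_samples))

# Assumed taps: excitatory center, antagonistic periphery.
# The periphery taps are exactly -0.5 x the center taps, so the two
# paths are linearly dependent and distinct pixel counts can collide.
center = np.array([1.0, 0.5, 0.25, 0.125])
periphery = -0.5 * center

# Two different stimulation patterns -> identical four-value sequence:
# 2*C + 2*(-0.5*C) = C and 3*C + 4*(-0.5*C) = C.
a = st_filter_output(2, 2, center, periphery)  # 2 center, 2 periphery pixels
b = st_filter_output(3, 4, center, periphery)  # 3 center, 4 periphery pixels
print(a, b)  # both are [1.0, 1.5, 1.75, 1.875]
```

Under these assumptions, the filter output alone cannot distinguish the two stimuli, which is why an inverter module would need extra information (such as the simulated miniature eye movements of Methods D) to resolve remaining unidentified pixels.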

Keywords: pattern vision • retinal connections, networks, circuitry • computational modeling 
