E. Fernandez, C. Morillas, S. Romero, A. Martínez, F. Pelayo; Neuroengineering Tools for the Design and Test of Visual Neuroprostheses. Invest. Ophthalmol. Vis. Sci. 2005;46(13):1483.
Purpose: We present a suite of software/hardware tools conceived to (a) easily design retina-like encoders of visual information, (b) simulate them and check their responses against biological recordings, (c) automatically build reconfigurable electronic circuits that implement the encoders in real time, and (d) provide a platform for tuning and adjusting prosthesis parameters for implanted individuals.

Methods: The system is composed of a set of bio-inspired visual preprocessing modules and a spike-coding block based on a leaky integrate-and-fire model of spiking neurons. The system encodes the visual information captured by a standard camera, identifies the target electrodes, and produces optimal stimulation codes. The model is highly parameterized. An address-event representation communication scheme sends the output pulses to the implanted microelectrodes. The system runs on a PC for design and simulation, but it can be automatically synthesized into a single chip for portable real-time stimulation.

Results: We successfully developed and tested a real-time retina-like encoder containing a highly flexible retina model that is largely independent of lighting conditions and able to deliver to each microelectrode the greatest possible amount of "usable information." The camera input can be configured with irregular distributions of photoreceptors and processed by a set of programmable spatio-temporal filters, including multichannel color-contrast enhancement. This retina model, complemented with the spiking neural module, forms an "artificial retina" that translates image sequences into a continuous stream of short pulses representing spike events. When the system output is compared with multielectrode recordings from biological retinas (human, rabbit, turtle, rat), the artificial coding module generates similar patterns in response to the same types of stimuli.
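The spike-coding block above is based on a leaky integrate-and-fire model of spiking neurons. As a minimal illustrative sketch (not the authors' implementation; the function name, parameter values, and the forward-Euler discretization are assumptions), such a model turns a continuous input drive into a stream of discrete spike events:

```python
import numpy as np

def lif_spike_train(current, dt=1e-3, tau=20e-3, v_rest=0.0,
                    v_thresh=1.0, v_reset=0.0, r_m=1.0):
    """Leaky integrate-and-fire neuron (forward Euler).

    `current` is an input drive sampled every `dt` seconds; returns the
    list of spike times in seconds. All parameter values are illustrative.
    """
    v = v_rest
    spikes = []
    for i, i_in in enumerate(current):
        # Membrane dynamics: dv/dt = (-(v - v_rest) + r_m * i_in) / tau
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:          # threshold crossing -> emit a spike event
            spikes.append(i * dt)
            v = v_reset            # reset membrane potential
    return spikes

# Constant suprathreshold drive produces a regular spike train.
train = lif_spike_train(np.full(1000, 2.0))
```

In a full encoder, each filtered pixel/channel would drive one such neuron, and each spike would be tagged with its neuron's address (the address-event representation mentioned above) before being routed to the corresponding microelectrode.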
Conclusions: Through a set of parameterized filters and functions, we obtain a model that can be automatically translated into portable, autonomous hardware. Although the model is essentially analog, we chose a digital implementation for a more flexible and standard approach, which can be easily customized for each implanted patient through a graphical user interface. The whole system facilitates the diagnosis, design, and testing of visual neuroprostheses.