ARVO Annual Meeting Abstract  |   May 2004
Multi-Spectral Image Fusion for Application to Visual Prosthetics
Author Affiliations & Notes
  • S. Kalpin
    Product Development, Advanced Medical Electronics, Maple Grove, MN
  • G. Dagnelie
    Lions Vision Research and Rehabilitation Center, Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD
  • L. Yang
    Lions Vision Research and Rehabilitation Center, Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD
  • Footnotes
    Commercial Relationships  S. Kalpin, Advanced Medical Electronics E; Johns Hopkins University F; G. Dagnelie, Advanced Medical Electronics F; L. Yang, Johns Hopkins University F.
  • Footnotes
Support  R43 EY014727-01
Investigative Ophthalmology & Visual Science May 2004, Vol.45, 4192.

S. Kalpin, G. Dagnelie, L. Yang; Multi-Spectral Image Fusion for Application to Visual Prosthetics. Invest. Ophthalmol. Vis. Sci. 2004;45(13):4192.

Abstract

Purpose: Visual prostheses using retinal microelectrodes are a promising treatment for severe vision loss due to retinal disease. However, the benefits of retinal implant technology are limited by the available spatial resolution, which is constrained by energy density and thermal concerns at the electrode-retina interface. Multi-spectral imaging electronics that process images in both the visible and infrared bands offer a potential means of greatly enhancing scene perception for the patient/user. Infrared imaging relies on heat radiated from objects rather than reflected visible light, which is more conducive to scene perception under the severe bandwidth limitations imposed by retinal prosthetic devices. The purpose of this project is to examine and quantify the improvement in scene perception by human subjects using an infrared versus a visible-light imaging front-end to a visual prosthetic device.

Methods: An imaging system front-end incorporating both an infrared camera and a visible-light camera has been developed. Its output is processed by software that "pixelates" the video image to simulate the visual effect of a bandwidth-limited retinal implant; the resolution of the simulated image is selectable. Human subjects will be presented the simulated implant imagery through an immersive display (a modified Low Vision Enhancement System). Their perception of objects and scenes, as well as their performance of various tasks, will be scored. Analysis of the data will quantify the perceptual improvement that multi-spectral imaging provides for retinal implants.

Results: Testing is scheduled to begin in January 2004; preliminary results will not be available until May 2004. Results from previous work are qualitatively encouraging.

Conclusions: In particular, any task that involves navigating, communicating, or locating people or animals appears to be enhanced by infrared imaging.
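
The Methods describe software that "pixelates" camera output to simulate a bandwidth-limited retinal implant at a selectable resolution, fed by both visible-light and infrared cameras. The sketch below is not the authors' code; it illustrates one plausible form of such a simulation, in which each grayscale frame is block-averaged down to a coarse electrode grid and the two bands are combined with a simple per-pixel maximum. The grid size, frame dimensions, and fusion rule are illustrative assumptions only.

```python
# Minimal sketch of a "pixelated implant" simulation (assumed design, not the
# authors' implementation): block-average a frame down to an electrode grid
# and fuse visible and infrared bands with a per-pixel maximum.
import numpy as np

def pixelate(frame: np.ndarray, grid: tuple[int, int]) -> np.ndarray:
    """Reduce a 2-D grayscale frame to grid[0] x grid[1] block averages."""
    rows, cols = grid
    h, w = frame.shape
    # Trim so the frame divides evenly into blocks, then average each block.
    frame = frame[: h - h % rows, : w - w % cols]
    blocks = frame.reshape(rows, frame.shape[0] // rows,
                           cols, frame.shape[1] // cols)
    return blocks.mean(axis=(1, 3))

def fuse_max(visible: np.ndarray, infrared: np.ndarray) -> np.ndarray:
    """One simple fusion rule: keep the brighter of the two bands per pixel."""
    return np.maximum(visible, infrared)

# Example: two 480x640 grayscale frames reduced to a 16x16 "implant" image.
vis = np.random.rand(480, 640)
ir = np.random.rand(480, 640)
implant_image = pixelate(fuse_max(vis, ir), grid=(16, 16))
```

In this sketch the selectable resolution of the simulated implant corresponds to the `grid` parameter; fusing before pixelating is one of several reasonable orderings, chosen here only for simplicity.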

Keywords: image processing • low vision • wound healing 