April 2011
Volume 52, Issue 14
ARVO Annual Meeting Abstract  |   April 2011
The Development of a Picture Discrimination Test for People with Very Poor Vision
Author Affiliations & Notes
  • Radhika Gulati
    Visual Neuroscience, UCL Institute of Ophthalmology, London, United Kingdom
  • Hannah Roche
    Visual Neuroscience, UCL Institute of Ophthalmology, London, United Kingdom
  • Kavitha Thayaparan
    Visual Neuroscience, UCL Institute of Ophthalmology, London, United Kingdom
  • Ralf Hornig
    Clinical Validation, IMI Intelligent Medical Implants, Bonn, Germany
  • Gary S. Rubin
    Visual Neuroscience, UCL Institute of Ophthalmology, London, United Kingdom
    NIHR Biomedical Research Centre for Ophthalmology, London, United Kingdom
  • Footnotes
    Commercial Relationships  Radhika Gulati, None; Hannah Roche, None; Kavitha Thayaparan, None; Ralf Hornig, Intelligent Medical Implants (E); Gary S. Rubin, Intelligent Medical Implants (F)
  • Footnotes
    Support  Unrestricted Grant from Intelligent Medical Implants, Bonn, Germany
Investigative Ophthalmology & Visual Science April 2011, Vol.52, 1197. doi:
Abstract

Purpose: To develop a picture discrimination test that quantifies the ability to locate objects in natural scenes, for patients implanted with IMI's Intelligent Retinal Implant System.

Methods: One hundred images of everyday urban scenes were photographed with a digital camera. Each scene had an object of interest (e.g. a doorway, staircase, or obstacle on the walking path) on the left or right side. Subjects 18-45 years old with normal vision viewed the pictures through a camera-driven virtual reality headset. The images were projected on a screen at a fixed size and luminance and viewed with the camera mounted on the headset. The camera image was rendered on a 7 × 7 grid of Gaussian-shaped pixels to simulate phosphene vision as it might be experienced with a retinal implant. Subjects had to indicate whether the object of interest was on the left or right side. All 100 pictures were shown twice in random order. In the first experiment, no feedback was given to the participants. In the second experiment, feedback was given during the first run only. Sixty subjects participated in the study, 30 in each experiment.
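The rendering step described above can be sketched in code. The following is an illustrative simulation (not the authors' implementation): a greyscale image is averaged into a 7 × 7 grid, and each cell is re-rendered as a Gaussian-shaped blob whose amplitude follows the cell's mean brightness. The grid size matches the Methods; the output resolution and Gaussian width are assumptions.

```python
# Illustrative sketch of phosphene-vision simulation (assumptions: output
# resolution and blob sigma are not specified in the abstract).
import math

GRID = 7  # 7 x 7 grid of simulated phosphenes, as in Methods

def to_phosphene_grid(image, grid=GRID):
    """Average a 2-D greyscale image (list of rows, values 0-1) into grid x grid cells."""
    h, w = len(image), len(image[0])
    sums = [[0.0] * grid for _ in range(grid)]
    counts = [[0] * grid for _ in range(grid)]
    for y in range(h):
        for x in range(w):
            gy, gx = y * grid // h, x * grid // w
            sums[gy][gx] += image[y][x]
            counts[gy][gx] += 1
    return [[sums[gy][gx] / counts[gy][gx] for gx in range(grid)]
            for gy in range(grid)]

def render_phosphenes(cells, out_size=70, sigma=3.0):
    """Render each grid cell as a Gaussian blob scaled by the cell's brightness."""
    grid = len(cells)
    step = out_size / grid
    out = [[0.0] * out_size for _ in range(out_size)]
    for gy in range(grid):
        for gx in range(grid):
            cy, cx = (gy + 0.5) * step, (gx + 0.5) * step
            amp = cells[gy][gx]
            for y in range(out_size):
                for x in range(out_size):
                    d2 = (y - cy) ** 2 + (x - cx) ** 2
                    out[y][x] += amp * math.exp(-d2 / (2 * sigma ** 2))
    return out

# Toy scene: a bright object occupying the right half of a dark image.
img = [[1.0 if x >= 35 else 0.0 for x in range(70)] for _ in range(70)]
grid = to_phosphene_grid(img)
sim = render_phosphenes(grid)
```

Even at this extreme loss of resolution, the left/right imbalance in brightness survives in the simulated phosphene image, which is the cue the discrimination task probes.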

Results: In the first experiment, without feedback, the median scores were 64% and 66% correct on the two runs, with 20 subjects scoring 80-90% correct and 10 subjects scoring 10-30% correct. Clearly, the low-scoring group was responding to something in the pictures; they were just interpreting the information incorrectly. In the second experiment, with feedback during the first run, the median scores were 85% and 86% correct on runs 1 and 2. All subjects performed at or above chance. The pictures are currently undergoing Rasch analysis, which suggests that there are at least 2 dimensions underlying performance, one related to contrast polarity (bright vs dark objects) and the other possibly related to symmetry of the scene.
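For a two-alternative task like this one, "at or above chance" can be checked with an exact one-sided binomial test against guessing (p = 0.5). The sketch below is an assumed analysis, not one reported in the abstract; it shows why a 64% score on a 100-picture run is unlikely to arise by guessing while a score just above 50% is not.

```python
# Illustrative chance-level check for a left/right (p = 0.5) discrimination run.
# Assumed analysis: the abstract does not state how chance performance was tested.
from math import comb

def binomial_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more correct by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 64/100 correct (a median score from experiment 1) vs. 52/100 correct.
p_64 = binomial_tail(64, 100)  # small: better than chance
p_52 = binomial_tail(52, 100)  # large: consistent with guessing
```

By this criterion, both median scores reported in the Results sit reliably above the 50% guessing level, while the 10-30% scores of the low-scoring group sit reliably below it, consistent with those subjects systematically misinterpreting a real cue.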

Conclusions: The results of these experiments indicate that subjects can locate an object of interest in everyday scenes even with extremely impoverished visual resolution. However, to obtain stable and reliable measurements, subjects must be given feedback so that they can establish a useful strategy.

Keywords: low vision • pattern vision 