September 2016
Volume 57, Issue 12
Open Access
ARVO Annual Meeting Abstract  |   September 2016
Measurement of facial perception abilities in low vision
Author Affiliations & Notes
  • Chris Bradley
    Ophthalmology, Johns Hopkins Medicine, Baltimore, Maryland, United States
  • Danielle Natale
    Ophthalmology, Johns Hopkins Medicine, Baltimore, Maryland, United States
  • Frank Werblin
    UC Berkeley, Berkeley, California, United States
  • Robert W Massof
    Ophthalmology, Johns Hopkins Medicine, Baltimore, Maryland, United States
  • Footnotes
    Commercial Relationships   Chris Bradley, None; Danielle Natale, None; Frank Werblin, Visionize (I); Robert Massof, Sensics (I)
  • Footnotes
    Support  Reader's Digest Partners for Sight Foundation
Investigative Ophthalmology & Visual Science September 2016, Vol.57, 1967.
      Chris Bradley, Danielle Natale, Frank Werblin, Robert W Massof; Measurement of facial perception abilities in low vision. Invest. Ophthalmol. Vis. Sci. 2016;57(12):1967.


      © ARVO (1962-2015); The Authors (2016-present)


Purpose: To develop a calibrated item bank for measuring the effectiveness of treatments for improving facial perception ability in low vision patients.

Methods: On each trial, patients viewed photographs of the same person's face presented in virtual reality from three different angles (left, center, and right). They were tasked with identifying: 1) the gender, 2) the face whose emotional expression differed from the other two, and 3) the emotional expression of that face (choices were angry, sad, or neutral). Images were selected from the Karolinska Directed Emotional Faces database and presented in an Oculus Rift DK2 head-mounted display. There were 41 patients and 64 trials per patient. Half of the trials were done with a magnification bubble - a localized region of magnification - while the other half were done without magnification.

Results: Each trial - a set of three images - was treated as an item for the purposes of Rasch analysis. Three separate Rasch analyses were performed on the 64 items, one for each facial perception task. The resulting item measures were only weakly correlated with each other, suggesting that different facial perception tasks require separate measures of item difficulty. Signal detection theory was used to measure d-prime (d') for performance with and without the magnification bubble. Without magnification, d' was lowest (the task was hardest) for identifying the face with the "odd" expression, and highest for gender discrimination. Magnification improved performance most for the hardest task and least for the easiest task.
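The abstract does not specify the exact signal detection model used to obtain d'; a minimal sketch, assuming the standard equal-variance Gaussian model (d' = z(hit rate) - z(false-alarm rate)) with hypothetical response rates, might look like:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Equal-variance Gaussian sensitivity index: d' = z(H) - z(FA)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for illustration: 84% hits and 16% false alarms
# give d' of roughly 2.0; chance performance (H == FA) gives d' = 0.
print(round(d_prime(0.84, 0.16), 2))
```

A higher d' indicates better discrimination, so comparing d' with and without the magnification bubble quantifies the benefit of magnification on each task.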

Conclusions: We applied Rasch analysis to construct three distinct measures of item difficulty for three different facial perception tasks: 1) gender discrimination, 2) identifying the face with the "odd" emotion, and 3) identifying the emotional expression of a face. These measures are useful for determining the effectiveness of treatments targeting improvements in facial perception. Using a magnification bubble improved facial perception ability in all three tasks; however, the easier the task was without magnification, the less improvement was seen with magnification.
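The abstract does not spell out the Rasch model used for item calibration; in the standard dichotomous case, the probability of a correct response is a logistic function of person ability minus item difficulty. A minimal sketch of that response model (variable names hypothetical):

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Dichotomous Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, the probability of success is 0.5;
# lowering the item difficulty raises the probability for the same person.
print(rasch_prob(0.0, 0.0))
print(rasch_prob(1.0, -1.0) > rasch_prob(1.0, 1.0))
```

Fitting this model to the 41 patients x 64 items response matrix (e.g. by maximum likelihood) yields the item difficulty measures described above, one calibration per task.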

This is an abstract that was submitted for the 2016 ARVO Annual Meeting, held in Seattle, Wash., May 1-5, 2016.

