May 2006
Volume 47, Issue 13
ARVO Annual Meeting Abstract  |   May 2006
Performance Measure of Facial Expression Discrimination in Low Vision
Author Affiliations & Notes
  • R.W. Massof
    Ophthalmology, Johns Hopkins University, Baltimore, MD
  • G.D. Barnett
    Ophthalmology, Johns Hopkins University, Baltimore, MD
  • C. Rainey
    Ophthalmology, Johns Hopkins University, Baltimore, MD
  • C. Epstein
    Ophthalmology, Johns Hopkins University, Baltimore, MD
  • R. Palmer
    Ophthalmology, Johns Hopkins University, Baltimore, MD
  • K. Chen
    Ophthalmology, Johns Hopkins University, Baltimore, MD
  • Footnotes
    Commercial Relationships  R.W. Massof, None; G.D. Barnett, None; C. Rainey, None; C. Epstein, None; R. Palmer, None; K. Chen, None.
    Support  EY12045
Investigative Ophthalmology & Visual Science May 2006, Vol.47(13), 2303.
Abstract

Purpose: To develop and validate a performance measure of low vision patients’ ability to discriminate facial expressions using realistic stimuli.

Methods: Five-second color video clips were made of 10 adult models representing both genders and a wide range of appearances. Each clip began with a profile view of the model exhibiting a neutral expression; the model then turned his/her head to face the camera and produced one of four expressions: excited, pleased, indifferent, or angry. One set of clips was recorded at an equivalent viewing distance of 10 feet and another at 3 feet, yielding a total of 80 different video clips that served as stimuli. The 80 clips were distributed into four overlapping sets of 40, with each set sharing 50% of its stimuli with each of the other sets. Data were obtained from 200 low vision patients consecutively recruited from the Wilmer Low Vision Clinic. Each patient was presented with one of the four sets of video clips and instructed to imagine that he/she had just entered the room and the person in the video was responding to his/her presence. For each clip, the patient was forced to judge which of the four possible expressions the person in the video displayed, and the response was scored as correct or incorrect. In addition, as part of an intake survey conducted before the clinic visit and data collection session, patients rated the difficulty they have recognizing facial expressions at 10 feet, 6 feet, and 3 feet.
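The abstract does not specify exactly how the 80 clips were allocated to the four overlapping forms. The Python sketch below shows one plausible cyclic construction (an assumption, not the study’s actual assignment) in which consecutive forms share exactly half their clips, which is sufficient to chain all four forms onto a common Rasch scale.

```python
# One plausible allocation of the 80 clips to four overlapping 40-clip
# forms (an assumption; the abstract does not give the exact assignment).
# Clips are indexed 0-79: 10 models x 4 expressions x 2 viewing distances.
clips = list(range(80))
blocks = [clips[i * 20:(i + 1) * 20] for i in range(4)]   # four blocks of 20

# Each form is the union of two cyclically adjacent blocks, so consecutive
# forms share exactly 20 clips (50% of 40), linking all four forms
# together for a common Rasch calibration.
forms = [blocks[i] + blocks[(i + 1) % 4] for i in range(4)]

for i, form in enumerate(forms):
    shared = len(set(form) & set(forms[(i + 1) % 4]))
    print(f"form {i}: {len(form)} clips, {shared} shared with form {(i + 1) % 4}")
```

Under this reading, non-adjacent forms share no clips; the abstract’s wording could also be read as requiring pairwise overlap between every pair of forms, which would call for a different block design.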

Results: Rasch analysis of the 200 persons × 80 items matrix of dichotomous scores provided estimates of difficulty for each video clip and of face discrimination ability for each patient on an interval scale. Construct validity was evaluated with mean square fit statistics: both item and person infit mean squares were consistent with a unidimensional construct that varied between people and between video clips. Separation reliabilities (the fraction of observed variance that can be attributed to variance between persons or between items) were 0.83 for person measures and 0.88 for item measures. Person measures were significantly correlated with self-reported ratings of facial expression recognition difficulty (Spearman ρ = 0.35 at 10 ft, 0.42 at 6 ft, and 0.40 at 3 ft).
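For readers unfamiliar with the machinery, the following sketch, assuming a complete 0/1 response matrix (the actual study used overlapping forms, so its matrix contained structurally missing cells), illustrates the dichotomous Rasch model, the infit mean square statistics, and the separation reliability as defined above; all function names are illustrative.

```python
# Minimal sketch of a dichotomous Rasch analysis in numpy, assuming a
# complete persons-by-items 0/1 matrix X.  Handling missing cells and
# extreme all-correct/all-incorrect scores is omitted.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rasch_jml(X, n_iter=100):
    """Joint maximum likelihood estimates for the Rasch model
    P(x = 1 | theta_n, b_i) = exp(theta_n - b_i) / (1 + exp(theta_n - b_i))."""
    n_persons, n_items = X.shape
    theta = np.zeros(n_persons)            # person ability (logits)
    b = np.zeros(n_items)                  # item (clip) difficulty (logits)
    for _ in range(n_iter):
        p = sigmoid(theta[:, None] - b[None, :])
        w = p * (1 - p)                    # binomial information weights
        theta += (X - p).sum(axis=1) / w.sum(axis=1)   # Newton step, persons
        p = sigmoid(theta[:, None] - b[None, :])
        w = p * (1 - p)
        b -= (X - p).sum(axis=0) / w.sum(axis=0)       # Newton step, items
        b -= b.mean()                      # anchor scale: mean difficulty = 0
    p = sigmoid(theta[:, None] - b[None, :])
    w = p * (1 - p)
    se_theta = 1 / np.sqrt(w.sum(axis=1))  # standard errors of measures
    se_b = 1 / np.sqrt(w.sum(axis=0))
    return theta, b, se_theta, se_b

def infit_mean_squares(X, theta, b):
    """Information-weighted (infit) mean square residuals; values near 1.0
    indicate data consistent with a unidimensional Rasch construct."""
    p = sigmoid(theta[:, None] - b[None, :])
    w = p * (1 - p)
    sq = (X - p) ** 2
    return sq.sum(axis=1) / w.sum(axis=1), sq.sum(axis=0) / w.sum(axis=0)

def separation_reliability(measures, se):
    """Fraction of observed variance attributable to true between-person
    (or between-item) variance: R = (var_obs - mean(se^2)) / var_obs."""
    var_obs = measures.var(ddof=1)
    return (var_obs - np.mean(se ** 2)) / var_obs
```

Applied to the 200 × 80 matrix described above, theta plays the role of each patient’s face discrimination ability and b each clip’s difficulty on a common logit (interval) scale; infit mean squares near 1.0 are what support the unidimensionality claim.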

Conclusions: This study demonstrates that, with the application of Rasch analysis, valid and reliable measures of low vision patients’ facial expression discrimination ability can be estimated from performance on realistic stimuli.

Keywords: face perception • low vision • perception 