ARVO Annual Meeting Abstract  |   June 2022
Volume 63, Issue 7
Open Access
Functional Vision Assessment in People with Ultra-Low Vision using Virtual Reality: A Reduced Version
Author Affiliations & Notes
  • Krishna Sargur
    Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, United States
  • Arathy Kartha
    Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland, United States
  • Roksana Sadeghi
    Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland, United States
  • Chris Bradley
    Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland, United States
  • Gislin Dagnelie
    Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland, United States
  • Footnotes
    Commercial Relationships   Krishna Sargur None; Arathy Kartha None; Roksana Sadeghi None; Chris Bradley None; Gislin Dagnelie None
  • Footnotes
    Support  NIH NEI R01EY028452
Investigative Ophthalmology & Visual Science June 2022, Vol. 63, 4055 – F0019.

Krishna Sargur, Arathy Kartha, Roksana Sadeghi, Chris Bradley, Gislin Dagnelie; Functional Vision Assessment in People with Ultra-Low Vision using Virtual Reality: A Reduced Version. Invest. Ophthalmol. Vis. Sci. 2022;63(7):4055 – F0019.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose: Ultra-Low Vision (ULV) is a profound visual impairment that limits individuals to detecting only lights, silhouettes, and high-contrast objects. Previous studies (Adeyemo et al. 2017; Dagnelie et al. 2017) have shown that visual information gathering under variable lighting conditions is one of the most common functional domains affected by ULV. In this study, we tested a reduced version of the functional assessment (Kartha et al. 2018) that can be useful for clinical evaluations.

Methods: Participants completed a series of spatial localization and detection activities in a virtual reality environment. They were scored on the number of correct responses in an m-alternative forced-choice (m-AFC) task for each of the 22 scenes presented. Data were converted into d-prime (d′) values that assessed participant performance relative to the average (Person Measure) and scene difficulty relative to chance performance (Item Measure; 0 for chance performance, more negative for easier tasks). Data collected from two centres (n=59 for C1; n=45 for C2) were compared using item and person measures.
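The abstract does not spell out how proportion correct in the m-AFC task is converted to d′. A common model (equal-variance Gaussian signal detection) treats a trial as correct when the target's internal response exceeds all m−1 distractors; d′ is then recovered by numerically inverting the resulting psychometric function. A minimal sketch under that assumption (function names are illustrative, not from the study):

```python
import math

def norm_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pc_mafc(dprime, m, n=2000):
    """P(correct) in m-AFC: the target response (mean d') must exceed
    all m-1 distractor responses. Trapezoidal integration of
    integral of phi(x - d') * Phi(x)^(m-1) dx."""
    lo = min(-8.0, dprime - 8.0)
    hi = max(8.0, dprime + 8.0)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = h if 0 < i < n else h / 2.0  # trapezoid endpoint weights
        total += w * norm_pdf(x - dprime) * norm_cdf(x) ** (m - 1)
    return total

def dprime_from_pc(pc, m, tol=1e-6):
    """Invert pc_mafc by bisection (it is monotone increasing in d')."""
    lo, hi = -5.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pc_mafc(mid, m) < pc:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At chance (e.g., proportion correct 0.25 in a 4-AFC task) this yields d′ ≈ 0, consistent with the abstract's convention of anchoring the item measure at chance performance.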

Results: The results from C1 and C2 lie within the expected range, and all scenes except one showed better-than-chance performance. As expected, scenes with lower contrast had less negative d′ values, reflecting more difficult tasks.
There was a significant correlation between the item measures from the two centres (r(21) = 0.70, p<0.05). Item measures from C2 were consistently higher than those from C1 for all items. Person measures from C2 were also significantly higher than those from C1 (p<0.001), indicating higher functional ability and, therefore, lower item difficulty for C2 than for C1.
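The cross-centre agreement reported above is a Pearson correlation between the two centres' vectors of item measures. A minimal sketch of that computation (the numbers in the usage comment are invented for illustration, not the study's data):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Usage with hypothetical item measures from two centres:
# r = pearson_r(item_measures_c1, item_measures_c2)
```

Note that a high r is consistent with a constant offset between centres: adding the same shift to every C2 item measure leaves r unchanged, which is why the item measures can agree strongly in rank order while C2's values are uniformly higher.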

Conclusions: Our data suggest that item measures and person measures can be compared across centres but may be shifted, depending on the functional ability of the participants at each centre. They also suggest that this reduced set is an effective functional measure of ULV. The use of a VR headset allows the test to be administered easily both at home and in the clinic, making it a novel tool with potential for widespread impact owing to its portability and ease of administration.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.

 

Figure 1 – Comparison of Item Measures Between C1 and C2 (orange=C1; green=C2; blue=combined; yellow=difference)

Figure 2 – Comparison of Person Measures Between C1 and C2
