Investigative Ophthalmology & Visual Science
June 2021
Volume 62, Issue 8
Open Access
ARVO Annual Meeting Abstract | June 2021
What objects do people use a smartphone magnification app to help with viewing?
Author Affiliations & Notes
  • Gang Luo
    Schepens Eye Research Institute of Massachusetts Eye and Ear, Boston, Massachusetts, United States
  • Anurag Shubham
    Schepens Eye Research Institute of Massachusetts Eye and Ear, Boston, Massachusetts, United States
  • Footnotes
    Commercial Relationships: Gang Luo, None; Anurag Shubham, None
    Support: None
Investigative Ophthalmology & Visual Science June 2021, Vol. 62, 3527. doi:

Gang Luo, Anurag Shubham; What objects do people use a smartphone magnification app to help with viewing? Invest. Ophthalmol. Vis. Sci. 2021;62(8):3527.

Abstract

Purpose: Many visually impaired people use smartphone magnification apps to help them see fine details. However, the visual tasks performed with these vision assistance apps in daily life are largely unknown. Analytics studies of the visual targets viewed by users can provide valuable insights into their visual demands.

Methods: The SuperVision Magnifier iOS app, which is freely available to the public, was used to collect data from people using the app in their daily lives. Images captured by the phone camera were processed by the Azure computer vision cloud service for object recognition. Only one image was processed per app launch. The images were neither saved nor visually reviewed. The app received the object tags (e.g. text, person, child art) and uploaded them to the Umeng analytics server for tallying in an aggregated manner, without saving any individually identifiable information. Data spanning 31 days were downloaded and analyzed offline. More than 1,000 types of object tags were grouped into 10 categories: Text, Indoor, Art, Human, Electronics, Outdoor, Food, Animal, Plant, and Others. Data collection and analysis were conducted separately for app users who had at least one iOS vision accessibility option (e.g. VoiceOver, color inversion) toggled on. It was assumed that these accessibility users had more severe vision loss than the other users.
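
As a rough sketch of the pipeline described above, the Python snippet below shows how a single captured frame might be sent to the Azure Computer Vision tagging endpoint and how the returned tags could be mapped into the 10 broad categories. The API version, credential names, and the tag-to-category mapping are illustrative assumptions; the abstract does not disclose these details.

```python
import requests

# Placeholder endpoint and key; the actual Azure resource used by the
# SuperVision Magnifier app is not disclosed in the abstract.
AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
AZURE_KEY = "<subscription-key>"

def tag_image(image_bytes: bytes) -> list:
    """Send one camera frame to the Azure Computer Vision 'tag' operation
    and return the recognized object tag names (e.g. 'text', 'person')."""
    resp = requests.post(
        f"{AZURE_ENDPOINT}/vision/v3.2/tag",
        headers={
            "Ocp-Apim-Subscription-Key": AZURE_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    resp.raise_for_status()
    return [t["name"] for t in resp.json()["tags"]]

# Hypothetical tag-to-category mapping; the study grouped more than 1,000
# tag types into 10 categories, but the full mapping is not published.
CATEGORY_MAP = {
    "text": "Text",
    "person": "Human",
    "child art": "Art",
    "laptop": "Electronics",
    "tree": "Plant",
}

def categorize(tags):
    """Map raw object tags to broad categories, defaulting to 'Others'."""
    return {CATEGORY_MAP.get(t, "Others") for t in tags}
```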

Results: In total, 152,819 images from about 25,000 users were successfully processed by the Azure server. Textual targets appeared in 41.1% of the images for accessibility users and 29.8% for non-accessibility users. Among the non-textual targets, the top four categories were Indoor scene (31.3% and 37.7%), Art (7.4% and 7.4%), Human (6.5% and 10.3%), and Electronics (5.7% and 6.0%) for accessibility and non-accessibility users, respectively. When testing whether one non-textual category appeared more often than another, the two user groups differed in only 2 of the 36 pairwise category comparisons. According to the proportion test, the difference was not statistically significant (p=0.08, z=1.43).
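
The abstract reports only the outcome of the proportion test (z = 1.43, p = 0.08); a two-sided p-value for z = 1.43 would be roughly 0.15, so the reported figure is consistent with a one-sided test. Below is a minimal sketch of such a two-proportion z-test using hypothetical counts, since the per-category counts behind the reported statistic are not given.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts for one pairwise comparison between the two user
# groups; placeholders only, since the abstract reports percentages and
# the test statistic but not the underlying counts.
count = [430, 390]   # images in which the category appeared, per group
nobs = [1000, 1000]  # images examined, per group

# One-sided test of whether the first group's proportion is larger.
z, p = proportions_ztest(count, nobs, alternative="larger")
print(f"z = {z:.2f}, one-sided p = {p:.3f}")
```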

Conclusions: The vision assistance app was used for reading text in about 30 to 40 percent of cases, and people with more severe vision loss needed help with text reading more frequently. The majority of visual targets were non-textual, for which the visual demands may be similar across users with different severities of vision loss when targets are grouped broadly.

This is a 2021 ARVO Annual Meeting abstract.
