Abstract
Purpose:
Many visually impaired people use smartphone magnification apps to help them see fine details, but the visual tasks they perform with these vision assistance apps in daily life are largely unknown. Analytics studies of the visual targets viewed by users can provide valuable insights into their visual demands.
Methods:
The SuperVision Magnifier iOS app, which is free to the public, was used to collect data from people using the app in their daily lives. The images captured by the phone cameras were processed by the Azure Computer Vision cloud service for object recognition. Only one image was processed per app launch. The images were neither saved nor visually reviewed. The app received the object tags (e.g. text, person, child art) and uploaded them to the Umeng analytics server for tallying in an aggregated manner, without saving any individually identifiable information. Data collected over 31 days were downloaded and analyzed offline. More than 1000 types of object tags were grouped into 10 categories: Text, Indoor, Art, Human, Electronics, Outdoor, Food, Animal, Plant, and Others. The data collection and analysis were conducted separately for app users who had at least one iOS vision accessibility option (e.g. VoiceOver, inverted colors) toggled on. It was assumed that these accessibility users had more severe vision loss than the other users.
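As a rough illustration of the tag-grouping step, the sketch below tallies per-image object tags into the 10 broad categories. The tag-to-category mapping shown is a small, hypothetical subset chosen for illustration; the actual study mapping covered more than 1000 tag types and is not listed in this abstract.

# Minimal sketch of grouping object tags into broad categories (Python).
# The keyword lists below are illustrative assumptions, not the study's mapping.
from collections import Counter

CATEGORY_KEYWORDS = {
    "Text":        {"text", "handwriting", "letter", "document", "menu"},
    "Indoor":      {"indoor", "wall", "furniture", "table", "floor"},
    "Art":         {"child art", "drawing", "painting", "cartoon"},
    "Human":       {"person", "human face", "man", "woman"},
    "Electronics": {"screen", "computer", "mobile phone", "television"},
    "Outdoor":     {"outdoor", "sky", "building", "street", "grass"},
    "Food":        {"food", "fruit", "dish", "vegetable"},
    "Animal":      {"animal", "dog", "cat", "bird"},
    "Plant":       {"plant", "flower", "tree", "houseplant"},
}

def categorize(tags):
    """Return the set of broad categories present in one image's tag list."""
    categories = set()
    for tag in tags:
        name = tag.lower()
        matched = False
        for category, keywords in CATEGORY_KEYWORDS.items():
            if name in keywords:
                categories.add(category)
                matched = True
        if not matched:
            categories.add("Others")
    return categories

def tally(images_tags):
    """Count, over many images, how often each category appears (one tag list per app launch)."""
    counts = Counter()
    for tags in images_tags:
        counts.update(categorize(tags))
    return counts

if __name__ == "__main__":
    example = [["text", "document"], ["person", "indoor", "wall"], ["dog", "grass"]]
    print(tally(example))  # e.g. Counter({'Text': 1, 'Human': 1, 'Indoor': 1, ...})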
Results:
In total, 152,819 images from about 25,000 users were successfully processed by the Azure server. Textual targets appeared in 41.1% of the images for the accessibility users and in 29.8% for the non-accessibility users. Among the non-textual targets, the top 4 categories were Indoor scene (31.3% and 37.7%), Art (7.4% and 7.4%), Human (6.5% and 10.3%), and Electronics (5.7% and 6.0%) for accessibility and non-accessibility users, respectively. When testing whether one non-textual category appeared more often than another, the two groups of users differed in only 2 out of 36 pairwise category comparisons. According to the proportion test, the difference was not statistically significant (p=0.08, z=1.43).
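The proportion test referenced here is presumably a standard two-proportion z-test; the abstract does not report the underlying counts or whether a one- or two-sided test was used, so the sketch below uses placeholder numbers purely to illustrate the computation.

# Minimal sketch of a two-proportion z-test (Python).
# The counts are hypothetical and do not reproduce the study data.
import math

def two_proportion_z(success1, n1, success2, n2):
    """Return the z statistic and two-sided p-value for comparing two proportions."""
    p1, p2 = success1 / n1, success2 / n2
    p_pool = (success1 + success2) / (n1 + n2)            # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical example: a category seen in 300 of 1000 images in one group
# versus 270 of 1000 images in the other.
z, p = two_proportion_z(300, 1000, 270, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")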
Conclusions:
The vision assistance app was used for reading text in about 30 to 40 percent of cases, and people with more severe vision loss needed help with text reading more frequently. The majority of visual targets were non-textual; for these, the visual demands may be similar for users with different severities of vision loss when the targets are grouped broadly.
This is a 2021 ARVO Annual Meeting abstract.