Volume 59, Issue 9
Open Access
ARVO Annual Meeting Abstract | July 2018
Gaze-contingent screen magnification control
Author Affiliations & Notes
  • Roberto Manduchi
    UC Santa Cruz, Santa Cruz, California, United States
  • Susana T L Chung
    UC Berkeley, Berkeley, California, United States
  • Footnotes
    Commercial Relationships: Roberto Manduchi, None; Susana Chung, None
    Support: Reader’s Digest/Research to Prevent Blindness
Investigative Ophthalmology & Visual Science July 2018, Vol. 59, 635.

Roberto Manduchi, Susana T L Chung; Gaze-contingent screen magnification control. Invest. Ophthalmol. Vis. Sci. 2018;59(9):635.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose: People with low vision often use partial- or full-screen magnification software. In either case, the user must move the center of magnification with a mouse or trackpad to frame the desired portion of the screen content. Manual scrolling often results in slow reading and can be challenging for those who do not have full motor control of their hands. The goal of this project is to develop a gaze-directed, hands-free controller for screen magnification that affords a more natural experience when reading onscreen content. Specifically, we aim to develop a software system that enables gaze-directed control of the magnification focus. This presentation describes a data collection study, as well as initial research work aimed at predicting the desired mouse motion from the time series of gaze points.
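To make the interaction concrete, here is a minimal sketch in Python of how a magnification focus could follow gaze. The Magnifier class, its set_center method, and the smoothing constant are hypothetical placeholders for illustration only, not the system described in this abstract.

```python
# Sketch: gaze-contingent magnification control (hypothetical API).
# The magnification center is pulled toward the latest gaze point with
# exponential smoothing to damp eye-tracker jitter.

from dataclasses import dataclass

@dataclass
class Magnifier:
    """Stand-in for a screen magnifier whose focus can be moved."""
    cx: float = 0.0   # current center of magnification, screen pixels
    cy: float = 0.0

    def set_center(self, x: float, y: float) -> None:
        self.cx, self.cy = x, y

def update_focus(mag: Magnifier, gaze_x: float, gaze_y: float,
                 alpha: float = 0.15) -> None:
    """Move the magnification center a fraction `alpha` toward the gaze point."""
    mag.set_center(mag.cx + alpha * (gaze_x - mag.cx),
                   mag.cy + alpha * (gaze_y - mag.cy))

if __name__ == "__main__":
    mag = Magnifier(cx=960, cy=540)        # start at screen center
    gaze_samples = [(400, 300)] * 30       # simulated 30 Hz gaze stream (1 s)
    for gx, gy in gaze_samples:
        update_focus(mag, gx, gy)
    print(f"focus after 1 s: ({mag.cx:.0f}, {mag.cy:.0f})")
```

The smoothing constant embodies the central design trade-off: a small alpha keeps the magnified view stable during fixations, while a larger one tracks saccades more aggressively at the cost of amplifying tracker noise.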

Methods: We developed a software application that records mouse tracks, gaze-point tracks, and image sequences from the computer’s camera, with all data time-stamped. A Tobii Pro X2-30 eye tracker was used for gaze-point recording. Six subjects with low vision operated this system while reading two paragraphs each from two documents (one single-column and one three-column) using two types of magnification system. A Matlab application was used to visualize the whole recorded session, including the images collected by the camera, the mouse location, and the locations of the last fixation and of the current gaze point (see figure).
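For illustration, one way such time-stamped multi-stream recording could be organized is sketched below; the field names, CSV layout, and single-loop polling are assumptions for the sketch, not the authors’ actual application (which would typically record each device on its own thread).

```python
# Sketch: interleaving mouse and gaze samples into one time-stamped log.
# Field names and file format are assumptions, not the authors' format.

import csv
import time

def log_session(mouse_stream, gaze_stream, path="session_log.csv"):
    """Write mouse and gaze (x, y) samples to a CSV with a shared timestamp."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp_s", "source", "x", "y"])
        for (mx, my), (gx, gy) in zip(mouse_stream, gaze_stream):
            now = time.monotonic()           # monotonic clock for ordering
            writer.writerow([f"{now:.4f}", "mouse", mx, my])
            writer.writerow([f"{now:.4f}", "gaze", gx, gy])

if __name__ == "__main__":
    mouse = [(100 + i, 200) for i in range(5)]   # simulated samples
    gaze = [(110 + i, 205) for i in range(5)]
    log_session(mouse, gaze)
```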

Results: Data collection was successful for all but one subject, who sat too close to the screen for his eye gaze to be measured and was therefore unable to complete the experiment. We plan to make these data publicly available (with the subjects’ approval in the IRB consent form) for access by other researchers. Initial experiments with an LSTM (Long Short-Term Memory) recurrent neural network, trained to predict the motion of the mouse given the past time series of gaze points, have shown promising results.
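As a sketch of the kind of model described, an LSTM mapping a window of past gaze points to the next mouse displacement could look like the following (written in PyTorch; the architecture, window length, and input/output layout are assumptions for illustration, not the authors’ network):

```python
# Sketch: LSTM regressor from a gaze-point window to mouse displacement.
# Hidden size, window length, and feature layout are assumptions.

import torch
import torch.nn as nn

class GazeToMouseLSTM(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        # input: one (x, y) gaze coordinate per time step
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size,
                            batch_first=True)
        # output: predicted mouse displacement (dx, dy)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, gaze_seq: torch.Tensor) -> torch.Tensor:
        # gaze_seq: (batch, seq_len, 2)
        out, _ = self.lstm(gaze_seq)
        return self.head(out[:, -1, :])   # predict from the last hidden state

if __name__ == "__main__":
    model = GazeToMouseLSTM()
    window = torch.randn(8, 30, 2)        # batch of 8 windows of 30 samples
    pred = model(window)                  # (8, 2) predicted (dx, dy)
    loss = nn.functional.mse_loss(pred, torch.zeros_like(pred))
    loss.backward()                       # standard regression training step
    print(pred.shape, loss.item())
```

A window of 30 samples corresponds to roughly one second of data at the 30 Hz rate of the Tobii Pro X2-30 used in the study.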

Conclusions: This project aims to facilitate the use of screen magnification. Our goal is to increase acceptance of this technology among potential users who are currently reluctant to adopt it because of the difficulty of manually controlling the magnifier. The study described in this presentation is the first of its kind, and is instrumental for the system we are envisioning.

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.

This figure shows a screenshot from the Matlab application used to visualize the whole recorded session.
