Abstract
Purpose:
The purpose of this study was to test our 'Object Localization and Tracking System' (OLTS), which assists blind and low-vision users in reaching for and grasping objects. The main goal was to explore how varying the visual angle of a computer-generated feedback mechanism alters the accuracy of reaching and grasping in object localization tasks.
Methods:
The OLTS consisted of a wide-angle (~100 degree) monocular camera (Tanguay & Sahin), a central processing unit (CPU), and bone conduction headphones. Computer vision algorithms (Context Tracker; Dinh & Medioni) running on the CPU parsed and processed the camera input to determine object position. The bone conduction headphones delivered verbal auditory feedback to the blind test subject based on that position: an object left of the camera's center of vision elicited a spoken "Left," while an object within the "central region" of the field of view elicited "Center." Once the object was centered within the camera's field of view, the test subject was asked to reach out and touch it. Two blind test subjects evaluated the device. Four "central-region" visual angles were tested for the feedback algorithm (7.8, 15.6, 23.4, and 31 degrees), with three experiments conducted per angle. Each experiment consisted of localizing and grasping an object.
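The directional cueing described above can be sketched as follows. This is a minimal illustration, not the OLTS implementation: the function name, parameters, and the linear pixel-to-angle mapping (which ignores wide-angle lens distortion) are all assumptions for clarity.

```python
def feedback_cue(object_x, frame_width, fov_deg=100.0, central_region_deg=15.6):
    """Map a tracked object's horizontal pixel position to a spoken cue.

    object_x:           x-coordinate of the object's center, in pixels
    frame_width:        camera frame width, in pixels
    fov_deg:            horizontal field of view of the camera (~100 degrees)
    central_region_deg: width of the "central-region" visual angle being tested
    """
    # Horizontal offset from the frame center, as a fraction of the half-width
    offset = (object_x - frame_width / 2) / (frame_width / 2)
    # Approximate angular offset from the camera's center of vision
    # (assumes a simple linear mapping; a real wide-angle lens would need
    # distortion correction)
    angle = offset * (fov_deg / 2)
    if abs(angle) <= central_region_deg / 2:
        return "Center"
    return "Left" if angle < 0 else "Right"
```

Widening `central_region_deg` (the independent variable in this study) makes the system report "Center" over a larger band of angular offsets, which is the manipulation whose effect on grasping accuracy the experiments measure.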
Results:
Subject O-N grasped the object on the first "reach" in 1/3, 1/3, 1/3, and 2/3 of trials for visual angles of 7.8, 15.6, 23.4, and 31 degrees, respectively. The average number of "reaches" required to grasp the object was 3, 2.33, 1.67, and 1.33 for those angles, respectively. Subject R-T grasped the object on the first "reach" in 0/3, 0/3, 2/3, and 0/3 of trials for the same angles. The average number of reaches required was 3.33, 3, 1.33, and 1.33, respectively. Object tracking paths, time to grasp, and video and audio data were also recorded for each experiment.
Conclusions:
The experiments conducted provide initial indicators toward an optimal visual angle for our sound-guided feedback mechanism. Specifically, the initial results show that increasing the visual angle decreases the number of attempts required to grasp the object.