ARVO Annual Meeting Abstract | September 2016
Visuohaptic Integration During the Performance of a Precision Grasping & Placement Task
Author Affiliations & Notes
  • Naime Tugac
    Kinesiology, University of Waterloo, Waterloo, Ontario, Canada
  • Dave A Gonzalez
    Kinesiology, University of Waterloo, Waterloo, Ontario, Canada
  • Ewa Niechwiej-Szwedo
    Kinesiology, University of Waterloo, Waterloo, Ontario, Canada
  • Footnotes
    Commercial Relationships   Naime Tugac, None; Dave Gonzalez, None; Ewa Niechwiej-Szwedo, None
  • Footnotes
    Support  Banting Discovery Grant (ENS); Senate Graduate Scholarship (NT)
Investigative Ophthalmology & Visual Science September 2016, Vol.57, 1517. doi:

Citation: Naime Tugac, Dave A Gonzalez, Ewa Niechwiej-Szwedo; Visuohaptic Integration During the Performance of a Precision Grasping & Placement Task. Invest. Ophthalmol. Vis. Sci. 2016;57(12):1517.

Abstract

Purpose: Binocular vision provides the most accurate and precise depth information; however, many people have impairments in binocular visual function. It is currently unknown whether depth information from another modality can improve depth perception during action planning and execution. We tested the hypothesis that haptic input would improve target localization during the performance of a precision grasping and placement task.

Methods: Visuohaptic integration was examined in 25 visually normal participants while they performed a bead-threading task with their right hand during binocular and monocular viewing. Upper limb kinematics and eye movements were recorded using the Optotrak and EyeLink 2 while participants picked up the beads and placed them on a vertical needle. In Study 1, haptic and visual feedback provided input about needle location (i.e., participants could see their left hand holding the needle). In Study 2, only haptic feedback was provided (i.e., the view of the left hand holding the needle was blocked). The main outcome variable was placement time, defined as the interval during which the hand was placing the bead on the needle. A repeated-measures analysis of variance with two factors, Viewing Condition (binocular/monocular) and Modality (vision/haptic), was used to test the hypothesis.
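The design described above is a 2 x 2 repeated-measures ANOVA. The following is a minimal sketch, not the authors' analysis code, of how such a model could be fit in Python; the data layout, column names, and synthetic placement times are illustrative assumptions only.

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(0)

    # Long-format table: one placement time (ms) per participant per
    # Viewing Condition x Modality cell. Values are synthetic placeholders.
    rows = []
    for participant in range(1, 26):                   # 25 participants
        for viewing in ("binocular", "monocular"):     # factor 1
            for modality in ("vision", "haptic"):      # factor 2
                base = 720.0 if viewing == "binocular" else 904.0
                rows.append({"participant": participant,
                             "viewing": viewing,
                             "modality": modality,
                             "placement_time": base + rng.normal(0.0, 110.0)})
    df = pd.DataFrame(rows)

    # Two-way repeated-measures ANOVA: main effects of Viewing Condition and
    # Modality, plus their interaction (the effect of interest here).
    result = AnovaRM(df, depvar="placement_time", subject="participant",
                     within=["viewing", "modality"]).fit()
    print(result)

In such an analysis, the hypothesized effect of haptic input would show up in the interaction row of the printed ANOVA table rather than in the main effects alone.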

Results: As expected, results from Study 1 showed that placement time was significantly shorter (p=0.002) during binocular (720±120 ms) than during monocular (904±110 ms) viewing. Most importantly, and in accordance with our hypothesis, there was a significant interaction between viewing condition and modality for placement time (p=0.013): haptic feedback shortened placement time during monocular viewing (haptic: 857±110 ms; no haptic: 951±100 ms), but did not affect placement time during binocular viewing. Results from Study 2 were not in accordance with our hypothesis, as there was no effect of modality on placement time. This indicates that haptic feedback presented without corresponding visual feedback of the limb does not facilitate target localization during goal-directed movements.

Conclusions: This study demonstrates that depth information from another modality can improve depth perception if concurrent visual feedback is also present. Thus, binocular vision may be the most important modality for target localization during goal-directed movements.

This is an abstract that was submitted for the 2016 ARVO Annual Meeting, held in Seattle, Wash., May 1-5, 2016.
