Abstract
Purpose:
Perceptual learning (PL) has shown great promise as a rehabilitative strategy for low vision. PL paradigms that engage multiple sensory modalities can improve visual sensitivity more than visual-only paradigms. However, the visual stimulus features (e.g., contrast) and tasks (e.g., detection) for which visual learning may benefit from multisensory facilitation remain largely unexplored. This study tested the hypothesis that learning to detect visual stimulus contrast benefits from multisensory facilitation by a temporally correlated auditory stimulus.
Methods:
Two groups of participants (N=10 each) were recruited and trained to detect the contrast of a counterphase-flickering (~21 Hz) Gabor patch embedded in varying levels of external noise. Both groups performed a two-alternative forced-choice (2AFC) contrast detection task without feedback in the pre-training (days 1-2) and post-training (days 9-10) sessions. During the training sessions (days 3-8), the visual-only training (VOT) group performed the task with feedback, while the audiovisual training (AVT) group performed the task with feedback and a concurrent auditory white-noise stimulus. The auditory stimulus was amplitude-modulated at the same frequency and for the same duration as the visual stimulus in order to enhance temporal binding. Contrast detection thresholds at each external noise level were measured using an adaptive staircase procedure. The perceptual template model (PTM) was applied to investigate whether different mechanisms underlie the two perceptual training paradigms.
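The abstract does not specify the staircase rule used; a common choice for 2AFC threshold measurement is a 3-down/1-up staircase, which converges near the 79.4%-correct point of the psychometric function. The following is a minimal sketch, assuming a hypothetical simulated Weibull observer (the function names, step size, and stopping rule are illustrative assumptions, not details from the study):

```python
import math
import random

def simulate_trial(contrast, true_threshold, slope=3.0):
    """Hypothetical 2AFC observer: Weibull psychometric function
    rising from 50% (chance) toward 100% correct."""
    p = 1.0 - 0.5 * math.exp(-((contrast / true_threshold) ** slope))
    return random.random() < p

def staircase_threshold(true_threshold=0.05, start_contrast=0.5,
                        step=1.26, n_reversals=10):
    """3-down/1-up staircase: contrast is divided by `step` after
    3 consecutive correct responses and multiplied by `step` after
    each error. The threshold estimate is the geometric mean of the
    last 6 reversal contrasts."""
    contrast = start_contrast
    correct_run = 0
    last_dir = 0  # -1 = last change made the task harder, +1 = easier
    reversals = []
    while len(reversals) < n_reversals:
        if simulate_trial(contrast, true_threshold):
            correct_run += 1
            if correct_run < 3:
                continue
            correct_run = 0
            new_dir = -1          # 3 correct in a row -> lower contrast
            contrast /= step
        else:
            correct_run = 0
            new_dir = +1          # error -> raise contrast
            contrast *= step
        if last_dir and new_dir != last_dir:
            reversals.append(contrast)  # direction change = reversal
        last_dir = new_dir
    tail = reversals[-6:]
    return math.exp(sum(math.log(c) for c in tail) / len(tail))

random.seed(0)
est = staircase_threshold()
print(f"estimated contrast threshold: {est:.3f}")
```

In practice one staircase of this kind would be run per external noise level, yielding the threshold-versus-noise function that the PTM is fit to.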
Results:
While both groups improved with practice, the AVT group showed a significantly greater threshold reduction than the VOT group, both during the training sessions, when the auditory stimulus was present for AVT (VOT: -9.0±1.0%; AVT: -19.8±1.9%, p<0.001), and after training (VOT: -7.5±4.7%; AVT: -20.3±3.1%, p=0.035). PTM analysis revealed that AVT training reduced thresholds at both low and high external noise levels during and after training, reflecting stimulus amplification and perceptual template retuning mechanisms, respectively, whereas VOT training reduced thresholds only at high external noise levels.
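The mechanism inference above follows from the standard PTM threshold equation (the abstract does not spell it out; this is the commonly used Lu–Dosher form, and the symbols are assumptions rather than values from the study):

```latex
d' = \frac{(\beta c)^{\gamma}}
          {\sqrt{N_{\mathrm{ext}}^{2\gamma}
                 + N_{\mathrm{mul}}^{2}(\beta c)^{2\gamma}
                 + N_{\mathrm{add}}^{2}}}
\quad\Longrightarrow\quad
c_{\tau} = \frac{1}{\beta}
  \left[\frac{d'^{2}\left(N_{\mathrm{ext}}^{2\gamma}
                          + N_{\mathrm{add}}^{2}\right)}
             {1 - d'^{2} N_{\mathrm{mul}}^{2}}\right]^{\frac{1}{2\gamma}}
```

Here $c_{\tau}$ is the threshold contrast at criterion $d'$, $N_{\mathrm{ext}}$ the external noise contrast, $N_{\mathrm{add}}$ and $N_{\mathrm{mul}}$ the internal additive and multiplicative noises, and $\beta$, $\gamma$ the template gain and nonlinearity. When $N_{\mathrm{ext}}$ is small, $c_{\tau}$ is dominated by $N_{\mathrm{add}}$, so low-noise threshold reductions indicate stimulus amplification (reduced internal additive noise); when $N_{\mathrm{ext}}$ is large, it dominates $c_{\tau}$, so high-noise reductions indicate external noise exclusion via template retuning.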
Conclusions:
Our results suggest that, compared with visual-only PL, multisensory PL paradigms are more effective for visual learning and may engage distinct mechanisms; they could thus provide a powerful new set of rehabilitative tools in the quest to improve visual function in patients with low vision.
This is a 2020 ARVO Annual Meeting abstract.