Abstract
The sensitivity of clinical stereoacuity tests to interocular differences in retinal images was examined in 50 adults with normal binocular vision. Interocular differences in retinal image size (aniseikonia), clarity (anisometropic blur) and brightness, as well as differences in absolute and relative pupil size (anisocoria), were induced in small steps over a large range to determine their effect on threshold levels of stereopsis. Stereoacuity was measured with both a contour stereogram (Titmus test) and a random dot stereogram (Randot test). For both types of stereogram, stereoacuity declined in a curvilinear manner under aniseikonic and anisometropic test conditions; monocular blur degraded stereoacuity more rapidly than induced aniseikonia. Under induced aniseikonia and anisometropia, stereoacuity measured with the contour stereogram declined about 1.8 times faster than that measured with the random dot stereogram. This differential sensitivity suggests that, in clinical screening for vision abnormalities, the Titmus test would detect small interocular differences in retinal images more effectively than the Randot test. Both tests, however, can miss clinically significant amounts of aniseikonia and anisometropia, and neither differentiates the cause of reduced stereopsis. Interocular differences in retinal image brightness and in pupil size within the normal physiologic range did not reduce stereopsis to clinically unacceptable levels.