Purpose: To determine the effect of induced monocular blur on stereoacuity measured with real depth and random dot tests.

Methods: Monocular visual acuity deficits (range, 20/15 to 20/1600) were induced with 7 different Bangerter filters (<0.1, 0.1, 0.2, 0.3, 0.4, 0.8, and 1.0) in 15 visually normal adults. Stereoacuity was measured with Frisby and Frisby Davis Distance (FD2) real depth tests and with Preschool Randot (PSR) and Distance Randot (DR) random dot tests. Stereoacuity results were grouped as "fine" (≤60 arcsec), "moderate" (>60 and ≤200 arcsec), or "coarse/nil" (>200 arcsec to nil) stereo.

Results: Across visual acuity deficits, stereoacuity was more severely degraded with random dot (PSR, DR) than with real depth (Frisby, FD2) tests. Degradation to worse-than-fine stereoacuity consistently occurred at 0.7 logMAR (20/100) or worse for Frisby, 0.1 logMAR (20/25) or worse for PSR, and 0.1 logMAR (20/25) or worse for FD2. There was no meaningful threshold for the DR because worse-than-fine stereoacuity occurred even at -0.1 logMAR (20/15). Coarse/nil stereoacuity was consistently associated with 1.2 logMAR (20/320) or worse for Frisby, 0.8 logMAR (20/125) or worse for PSR, 1.1 logMAR (20/250) or worse for FD2, and 0.5 logMAR (20/63) or worse for DR.

Conclusions: Stereoacuity thresholds are more easily degraded by reduced monocular visual acuity when measured with random dot tests (PSR and DR) than with real depth tests (Frisby and FD2). We have defined levels of monocular visual acuity degradation associated with fine and nil stereoacuity. These findings have important implications for testing stereoacuity in clinical populations.