June 2017
Volume 58, Issue 8
Open Access
ARVO Annual Meeting Abstract  |   June 2017
Comparing reading with classic versus computer-generated MNREAD sentences
Author Affiliations & Notes
  • Nilsu Atilgan
    Psychology, University of Minnesota, Minneapolis, Minnesota, United States
  • Gordon E Legge
    Psychology, University of Minnesota, Minneapolis, Minnesota, United States
  • John Stephen Mansfield
    Psychology, State University of New York College at Plattsburgh, Plattsburgh, New York, United States
  • Footnotes
    Commercial Relationships   Nilsu Atilgan, None; Gordon Legge, None; John Stephen Mansfield, None
  • Footnotes
    Support  NIH grant EY002934
Investigative Ophthalmology & Visual Science June 2017, Vol.58, 3275.

Nilsu Atilgan, Gordon E Legge, John Stephen Mansfield; Comparing reading with classic versus computer-generated MNREAD sentences. Invest. Ophthalmol. Vis. Sci. 2017;58(8):3275.

Purpose : The MNREAD Acuity chart measures three parameters of reading performance: reading acuity, critical print size, and reading speed. Each of the five versions of the chart has 19 sentences spanning a wide range of print sizes. These 95 "Classic" sentences are standardized to meet linguistic and typographic constraints. There is a need to enlarge the corpus of MNREAD testing material for research requiring repeated testing and for evaluating effects of text variables other than print size. An algorithmic sentence generator has been developed that produces thousands of "Generator sentences" that fit the MNREAD constraints. Here, we ask how reading performance measured with the Generator sentences compares with performance measured with the Classic MNREAD sentences.
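The abstract does not enumerate the generator's linguistic and typographic constraints. As a rough illustration only, a generator might screen candidate sentences with a check like the sketch below; the 60-character limit reflects the published MNREAD sentence format, while the vocabulary filter is a hypothetical stand-in for the generator's actual linguistic rules, which are not detailed here.

```python
def fits_mnread_constraints(sentence, max_chars=60, lexicon=None):
    """Screen a candidate sentence against MNREAD-style constraints.

    max_chars: standard MNREAD sentences are 60 characters long
    (including spaces), so every sentence occupies the same area
    at a given print size.
    lexicon: optional set of allowed words, a stand-in for the
    generator's real vocabulary constraints (hypothetical here).
    """
    if len(sentence) > max_chars:
        return False
    words = sentence.rstrip(".").lower().split()
    if lexicon is not None and any(word not in lexicon for word in words):
        return False
    return True

# Hypothetical candidate sentence, not an actual MNREAD item:
candidate = "The boy ran down the street to see his friend at the park."
print(fits_mnread_constraints(candidate))
```

A real generator would also control word frequency, syntax, and line-break positions; this sketch only shows the filtering idea.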

Methods : Nine normally sighted subjects were tested on a computerized version of the MNREAD test in three sentence conditions: Classic sentences, Generator sentences, and Unordered sentences (i.e., Classic sentences with scrambled word order). Testing was repeated in three blur conditions: No Blur, Mild Blur (digital low-pass filtering with an effective acuity of 20/80), and Severe Blur (effective acuity of 20/320). For each subject and each testing condition, curve fitting was used to estimate the three parameters of reading.
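The abstract does not specify the curve-fitting procedure. As a minimal sketch, the code below fits a two-limb ("broken-stick") model, one common way to summarize MNREAD data: log reading speed is flat at the maximum reading speed above the critical print size (CPS) and declines linearly below it. The function and parameter names, the synthetic data, and the choice of model are all illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.optimize import curve_fit

def mnread_model(print_size, max_speed, cps, slope):
    # Log reading speed is constant at max_speed for print sizes at or
    # above the critical print size (CPS), and falls linearly below it.
    return np.where(print_size >= cps,
                    max_speed,
                    max_speed - slope * (cps - print_size))

# Synthetic data standing in for one subject's measurements:
# print size in logMAR, reading speed as log10 words per minute.
rng = np.random.default_rng(0)
print_sizes = np.linspace(-0.2, 1.3, 16)
true_max_speed, true_cps, true_slope = 2.2, 0.3, 2.0
speeds = mnread_model(print_sizes, true_max_speed, true_cps, true_slope)
speeds = speeds + rng.normal(0.0, 0.02, size=print_sizes.size)

# Fit the three-parameter model; p0 is a rough starting guess.
params, _ = curve_fit(mnread_model, print_sizes, speeds, p0=(2.0, 0.5, 1.0))
fit_max_speed, fit_cps, fit_slope = params
print(f"max reading speed (log10 wpm): {fit_max_speed:.2f}")
print(f"critical print size (logMAR): {fit_cps:.2f}")
```

Reading acuity would then be estimated from where the fitted curve crosses a criterion speed; the exact criterion used in the study is not stated in the abstract.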

Results : No significant differences were found between the Classic and Generator sentences in estimates of reading acuity, critical print size and reading speed at any level of blur. Unordered sentences yielded reduced maximum reading speed, and Mild and Severe blur yielded larger values for reading acuity and critical print size.

Conclusions : Measuring visual reading performance with the Generator sentences yields findings comparable to those obtained with the Classic MNREAD sentences. The results with Mild and Severe blur support the use of the Generator sentences in testing individuals with reduced acuity.

This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.

