May 2007, Volume 48, Issue 13
ARVO Annual Meeting Abstract
Non-Linear Correction of Eye Movements for Scanning Laser Ophthalmoscope Imagery
Author Affiliations & Notes
  • V. P. Nourrit
    Faculty of Life Sciences, University of Manchester, Moffat Building, Manchester M60 1QD, United Kingdom
  • B. Vohnsen
    Optics Laboratory (LOUM), CIOyN, Campus de Espinardo, Universidad de Murcia, 30071 Murcia, Spain
  • P. Artal
    Optics Laboratory (LOUM), CIOyN, Campus de Espinardo, Universidad de Murcia, 30071 Murcia, Spain
  • Footnotes
    Commercial Relationships V.P. Nourrit, None; B. Vohnsen, None; P. Artal, None.
  • Footnotes
    Support Research Fellowship from City University (London). Ministerio de Ciencia y Tecnología, Spain (grant FIS2004-02153 and Ramón y Cajal research contract RYC2002-006337).
Investigative Ophthalmology & Visual Science, May 2007, Vol. 48, 2765.
Purpose:: Adaptive optics allows one to overcome the limitations imposed by the optics of the eye on the resolution of images obtained with scanning laser ophthalmoscopes. However, since each image is built up by scanning, at a frame rate of typically 10-20 Hz, involuntary eye movements become a non-negligible limitation as the field covered by a single frame decreases and higher resolution is sought. This in turn degrades image alignment and the summation process needed to enhance the overall signal-to-noise ratio, as well as further processing techniques such as deconvolution. Hardware solutions have been investigated, but they typically rely on complex set-ups; for this reason, relatively simple software solutions have usually been implemented.
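The alignment-and-summation step mentioned above can be sketched as follows. This is a minimal, illustrative implementation (not the authors' code) of the simplest rigid case: each frame's global translation is estimated against a reference frame by FFT-based cross-correlation, the frame is shifted back, and the registered frames are averaged to raise the signal-to-noise ratio.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer-pixel shift of `frame` relative to `ref`
    via FFT-based circular cross-correlation (location of the peak)."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak positions beyond half the image size to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def align_and_sum(frames):
    """Register each frame to the first one and average them,
    boosting the signal-to-noise ratio of the summed image."""
    ref = frames[0]
    total = np.zeros_like(ref, dtype=float)
    for f in frames:
        dy, dx = estimate_shift(ref, f)
        total += np.roll(f, (dy, dx), axis=(0, 1))
    return total / len(frames)
```

In practice sub-pixel refinement and windowing are usually added, but the core idea, locating the correlation peak and undoing the shift before summation, is the same.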

Methods:: In this work, we investigated how numerical correction for eye movements influenced the quality of the summation process on images covering less than 2 degrees of visual angle, recorded with a custom-built scanning laser ophthalmoscope. The standard approach is to assume that eye movements shift, rotate or scale the image. For high-resolution images, however, local disruptions can also degrade the quality within a single frame. For this reason, we also considered piecewise linear and non-linear transformations. The difference between manual and automatic tag detection, as well as the influence of the number of reference points, was also taken into account.
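The shift/rotate/scale model described above amounts to fitting a global affine transform to pairs of matching tags. As an illustrative sketch (the abstract does not specify the fitting procedure), the six affine parameters can be recovered from at least three non-collinear tag correspondences by linear least squares:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src tags to dst tags.
    src, dst: (N, 2) arrays of matching landmark ('tag') coordinates;
    at least 3 non-collinear pairs are needed for the 6 parameters."""
    n = len(src)
    # Homogeneous design matrix with rows [x, y, 1].
    A = np.hstack([src, np.ones((n, 1))])
    # Solve A @ M ~= dst for the 3x2 parameter matrix M.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine(M, pts):
    """Map points through the estimated affine transform."""
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M
```

Piecewise linear variants apply the same fit independently on a triangulation of the tags, which is what allows local, spatially varying corrections at the cost of needing many more reliable tags.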

Results:: Preliminary results suggest that non-linear methods can outperform linear transformations but require a larger number of tags, which reduces the practical appeal of the method. The presence of clearly identifiable tags is an important element of the correction. In noisy images, manual detection can outperform automatic detection (cross-correlation), but this gain vanishes (due to operator fatigue) when a large number of images and tags are required.
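The automatic, cross-correlation-based tag detection referred to above is commonly implemented as normalized cross-correlation template matching; the sketch below (an assumption about the general technique, not the authors' implementation) uses a brute-force search, which makes the normalization explicit at the cost of speed:

```python
import numpy as np

def match_tag(image, template):
    """Locate a tag template in an image by normalized cross-correlation.
    Returns the top-left corner of the best match and its NCC score in [-1, 1]."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = float((wz * t).sum() / denom)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best
```

The normalization makes the score insensitive to local brightness and contrast, but in noisy frames the peak can still land on the wrong structure, which is consistent with manual detection doing better on noisy data. FFT-based formulations give the same result far faster on full frames.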

Conclusions:: Linear transformations offer a good compromise between results, robustness and computational effort, despite relatively strong assumptions. The non-linear corrections we investigated offer only a limited benefit. This limitation is partly due to the fact that they require a large number of tags, which are difficult to detect automatically. An additional explanation is that our transformations can only correct the deformation of a 2D image, whereas the retina is a 3D structure. In this context, we plan to improve our tag-detection method and to investigate other elastic deformation methods that have been used successfully in other fields.

Keywords: image processing • imaging methods (CT, FA, ICG, MRI, OCT, RTA, SLO, ultrasound) • eye movements 