June 2020
Volume 61, Issue 7
Open Access
ARVO Annual Meeting Abstract  |   June 2020
Robotically Aligned OCT for Subject Eye Translation and Rotation Compensation
Author Affiliations & Notes
  • Pablo Ortiz
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Mark Draelos
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Ryan P McNabb
    Ophthalmology, Duke University Medical Center, Durham, North Carolina, United States
  • Ruobing Qian
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Christian Viehland
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Anthony N Kuo
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Joseph A Izatt
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Footnotes
    Commercial Relationships   Pablo Ortiz, None; Mark Draelos, None; Ryan McNabb, Leica Microsystems (P); Ruobing Qian, None; Christian Viehland, None; Anthony Kuo, Leica Microsystems (P); Joseph Izatt, Carl Zeiss Meditec (P), Carl Zeiss Meditec (R), Leica Microsystems (P), Leica Microsystems (R), St. Jude Medical (P), St. Jude Medical (R)
  • Footnotes
    Support  NIH U01EY028079, R01EY029302, NIH R21EY029877
Investigative Ophthalmology & Visual Science June 2020, Vol.61, 2575. doi:
      Pablo Ortiz, Mark Draelos, Ryan P McNabb, Ruobing Qian, Christian Viehland, Anthony N Kuo, Joseph A Izatt; Robotically Aligned OCT for Subject Eye Translation and Rotation Compensation. Invest. Ophthalmol. Vis. Sci. 2020;61(7):2575.

      © ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose : Optical Coherence Tomography (OCT) systems used in the clinic require patient stabilization and gaze fixation to reduce motion artifacts and align the scanner with the region of interest in the retina. This limits the use of OCT for uncooperative patients, such as unconscious patients undergoing surgery. By tracking eye location and gaze, and mounting a retinal OCT scanner to a robot, we can automatically align the scanner with the subject’s eye. We demonstrate the capabilities of our system with retinal OCT images of freestanding subjects.

Methods : For retinal scanning, we utilized a 100 kHz swept-source OCT engine. The system’s scanner (Fig. 1), which was designed for a 16-degree field of view, was mounted on a 6 degree-of-freedom robot (UR3, Universal Robots) along with three cameras and an IR LED illumination ring for high-resolution pupil and gaze tracking. The IR LEDs illuminated the pupil and produced corneal reflections that were captured in real-time by the cameras. For pupil tracking, the centroid of the pupil's image at each camera was segmented and, through linear triangulation, used to calculate the pupil center in space. For gaze tracking, the images of the corneal reflections were segmented in real-time and used to calculate the optical axis of the eye. This axis was used to adjust the robot orientation during the imaging session such that the mounted scanner aligned with the region of interest at the retina. We tested tracking performance using a mechanically translated and rotated eye model, and demonstrated in vivo operation by tracking and imaging the optic nerve and fovea of a freestanding human subject, consented under an IRB-approved protocol, without a fixation target (Fig. 2).
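The pupil-tracking step above triangulates the 3D pupil center from its segmented 2D centroids in the calibrated cameras. A minimal sketch of that linear (DLT) triangulation is below; the abstract does not give the system's calibration or camera models, so the projection matrices, function name, and test geometry here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def triangulate_point(proj_mats, centroids):
    """Linear (DLT) triangulation of a 3D point from its 2D projections
    in two or more calibrated cameras.

    proj_mats: list of 3x4 camera projection matrices (world -> image)
    centroids: list of (u, v) pupil-centroid coordinates, one per camera
    Returns the estimated 3D point in the shared world frame.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, centroids):
        # Each view contributes two linear constraints on the homogeneous
        # point X: u * (P[2] @ X) = P[0] @ X and v * (P[2] @ X) = P[1] @ X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # Least-squares solution is the right singular vector associated
    # with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Illustrative check with two synthetic cameras: identity intrinsics,
# second camera offset 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return (x[0] / x[2], x[1] / x[2])

X_true = np.array([0.2, -0.1, 5.0])
X_est = triangulate_point([P1, P2],
                          [project(P1, X_true), project(P2, X_true)])
```

With noisy centroids the SVD still yields the algebraic least-squares point, which is why DLT is a common first stage before any nonlinear refinement.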

Results : Gaze tracking tests demonstrate 0.337° accuracy and 0.236° precision within a 28° range. Pupil tracking tests demonstrate 19.5 μm lateral and 34.2 μm axial accuracy along with 5.9 μm lateral and 6.1 μm axial precision within a 300 mm lateral and 150 mm axial range. Human imaging demonstrates stabilization of images automatically tracked to subject retinal features.

Conclusions : We demonstrated OCT imaging (Fig. 2) in freestanding subjects with arbitrary gaze angles.

This is a 2020 ARVO Annual Meeting abstract.

 

a) OCT scanner mounted on a robot, b) cross section of the scanner design


 

En face projections of OCT volumes (a and b) and B-scans (c and d) automatically tracked to a selected eye of a freestanding subject's optic nerve head and fovea, respectively.

