Investigative Ophthalmology & Visual Science
June 2024
Volume 65, Issue 7
Open Access
ARVO Annual Meeting Abstract | June 2024
Real-time tracked intraoperative OCT using synthetically trained machine learning models
Author Affiliations & Notes
  • Harvey Shi
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
    Duke University School of Medicine, Durham, North Carolina, United States
  • Pablo Ortiz
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Jianwei D Li
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Amit Narawane
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Robert Michael Trout
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Yuan Tian
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Mark Draelos
    Robotics, University of Michigan, Ann Arbor, Michigan, United States
    Ophthalmology, University of Michigan, Ann Arbor, Michigan, United States
  • Ryan McNabb
    Ophthalmology, Duke University School of Medicine, Durham, North Carolina, United States
  • Anthony N Kuo
    Ophthalmology, Duke University School of Medicine, Durham, North Carolina, United States
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Joseph A. Izatt
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
    Ophthalmology, Duke University School of Medicine, Durham, North Carolina, United States
  • Footnotes
    Commercial Relationships   Harvey Shi None; Pablo Ortiz None; Jianwei Li None; Amit Narawane None; Robert Trout None; Yuan Tian None; Mark Draelos Horizon Surgical Systems, Code C (Consultant/Contractor); Ryan McNabb Johnson & Johnson Vision, Code F (Financial Support), Leica Microsystems, Code P (Patent), Leica Microsystems, Code R (Recipient); Anthony Kuo Johnson & Johnson Vision, Code F (Financial Support), Leica Microsystems, Code P (Patent), Leica Microsystems, Code R (Recipient); Joseph Izatt Alcon, Code C (Consultant/Contractor), Leica Microsystems, Code P (Patent), Leica Microsystems, Code R (Recipient)
  • Footnotes
    Support  NIH T32-GM145449, NIH U01-EY028079
Investigative Ophthalmology & Visual Science June 2024, Vol.65, 5898. doi:
      Harvey Shi, Pablo Ortiz, Jianwei D Li, Amit Narawane, Robert Michael Trout, Yuan Tian, Mark Draelos, Ryan McNabb, Anthony N Kuo, Joseph A. Izatt; Real-time tracked intraoperative OCT using synthetically trained machine learning models. Invest. Ophthalmol. Vis. Sci. 2024;65(7):5898.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : Intraoperative optical coherence tomography (iOCT) visualization of ophthalmic surgeries is limited by the narrow field of view (FOV) of OCT relative to that of the surgical microscope, requiring precise manual repositioning of the scan by a trained operator. Prior approaches to automatic ophthalmic instrument tracking require markers on the surgical instruments, rely on specialized imaging systems, or are designed for postoperative analysis. We created a computer vision system that uses the surgical microscope video feed to track ophthalmic surgical instruments in real time, based on a model trained on an automatically annotated machine learning dataset.

Methods : Our system uses a machine learning model trained on a synthetic image dataset that we generated by 3D rendering models of surgical instruments (Fig. 1). This pipeline allows us to generate many distinct training images and to annotate their features automatically. We selected cataract surgery as an example procedure with high-velocity tool movement relative to the size of the surgical field. The model was trained on 20,000 synthetic images (640×640 pixels) of 4 tool classes: phacoemulsification (phaco) probes, knives, forceps, and needles. The trained model was integrated into a clinical iOCT system: the model output was used to steer the OCT galvo scanners and acquire OCT volumes at the tool location.
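The abstract does not name the detection architecture or label format, but for a bounding-box tool detector of this kind, a renderer that knows which pixels belong to each instrument can emit training labels directly from its foreground masks. The sketch below is a hypothetical illustration of that automatic annotation step (`mask_to_yolo_label` and the toy mask are our assumptions, not the authors' pipeline); it converts a binary instrument mask into a normalized YOLO-style bounding-box label with no manual labeling:

```python
import numpy as np

def mask_to_yolo_label(mask: np.ndarray, class_id: int) -> str:
    """Convert a binary instrument mask from a synthetic render into a
    YOLO-format label: "class cx cy w h", all normalized to [0, 1]."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("mask contains no foreground pixels")
    h, w = mask.shape
    x0, x1 = xs.min(), xs.max() + 1   # tight box around the rendered tool
    y0, y1 = ys.min(), ys.max() + 1
    cx = (x0 + x1) / 2 / w            # normalized box center
    cy = (y0 + y1) / 2 / h
    bw = (x1 - x0) / w                # normalized box size
    bh = (y1 - y0) / h
    return f"{class_id} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}"

# Toy 640x640 render: a hypothetical "phaco probe" (class 0) region.
mask = np.zeros((640, 640), dtype=bool)
mask[100:200, 300:500] = True
print(mask_to_yolo_label(mask, class_id=0))  # -> 0 0.625000 0.234375 0.312500 0.156250
```

Because the renderer supplies an exact mask for every instrument it draws, each of the 20,000 synthetic training images can be annotated this way without any human in the loop.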

Results : In testing with a phantom eye and real cataract surgery tools (Fig. 2), the system correctly tracked the tool in 89.5% of the 8384 frames in which a tool was visible. Over the course of 115 tracking sessions (3.6 hours) with phantom and porcine eyes, the model's per-frame processing time had a median (interquartile range) of 15 (12-21) ms, corresponding to a tracking rate of 67 (48-83) Hz, which exceeds the volume acquisition rate of the OCT system.
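The reported tracking rate follows directly from the per-frame processing times: the rate is the reciprocal of the frame time, so the quartiles swap order (the slowest frames bound the lowest rate). A quick check of the reported numbers:

```python
# Reported per-frame processing times from the abstract:
# median 15 ms, interquartile range 12-21 ms.
median_ms, q1_ms, q3_ms = 15, 12, 21

def rate_hz(frame_time_ms: float) -> float:
    """Tracking rate implied by a per-frame processing time."""
    return 1000.0 / frame_time_ms

# The slow quartile (21 ms) gives the low end of the rate IQR, and vice versa.
print(f"median rate: {rate_hz(median_ms):.0f} Hz")                 # -> 67 Hz
print(f"rate IQR: {rate_hz(q3_ms):.0f}-{rate_hz(q1_ms):.0f} Hz")   # -> 48-83 Hz
```

These reciprocals reproduce the 67 (48-83) Hz figure quoted above.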

Conclusions : We demonstrate the successful implementation of a real-time surgical instrument tracking system using a machine learning model trained on synthetic microscope views. Our synthetic data approach allows us to rapidly update the training set and provides additional flexibility for future automated tool tracking.

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.

 

Figure 1. Overview of the synthetic data pipeline and tracking system. The model outputs the tracked microscope stream with an iris (blue box), a primary tool (pink box), and OCT target coordinates (orange cross).

 

Figure 2. Microscope and OCT images showing a successful tracking session with a model eye phantom and all 4 classes of tool.
