June 2022
Volume 63, Issue 7
Open Access
ARVO Annual Meeting Abstract  |   June 2022
Automated detection of the spatial location of vitreoretinal instruments from retinal images using Deep Learning methods
Author Affiliations & Notes
  • Marialejandra Diaz Ibarra
    Ophthalmology, University of California System, Irvine, California, United States
  • Josiah K To
    Ophthalmology, University of California System, Irvine, California, United States
  • Junze Liu
    Computer Science, University of California System, Irvine, California, United States
  • Sherif Abdelkarim
    Computer Science, University of California System, Irvine, California, United States
  • Anjali Herekar
    Ophthalmology, University of California System, Irvine, California, United States
  • Baruch D Kuppermann
    Ophthalmology, University of California System, Irvine, California, United States
  • Pierre Baldi
    Computer Science, University of California System, Irvine, California, United States
  • Andrew Browne
    Ophthalmology, University of California System, Irvine, California, United States
  • Footnotes
    Commercial Relationships   Marialejandra Diaz Ibarra None; Josiah To None; Junze Liu None; Sherif Abdelkarim None; Anjali Herekar None; Baruch Kuppermann None; Pierre Baldi None; Andrew Browne None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2022, Vol.63, 210 – F0057. doi:

      Marialejandra Diaz Ibarra, Josiah K To, Junze Liu, Sherif Abdelkarim, Anjali Herekar, Baruch D Kuppermann, Pierre Baldi, Andrew Browne; Automated detection of the spatial location of vitreoretinal instruments from retinal images using Deep Learning methods. Invest. Ophthalmol. Vis. Sci. 2022;63(7):210 – F0057.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose: To create and assess deep learning methods for automated detection of the location and depth of vitreoretinal instruments in retinal surgery videos.

Methods: In vitro surgical videos were recorded using a custom 3D eye model, vitrectomy instrumentation, and an ophthalmic surgery microscope. Videos of instrument manipulation throughout the surgical field and at different instrument depths (far, intermediate, and near the retina) were acquired. Video frames (n=26,460) were extracted, labeled for instrument location, and used to train machine learning (ML) models employing convolutional neural networks (CNNs), including ResNet-18. The CNN model was adapted with two outputs predicting the location and depth of surgical instruments.
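
The abstract does not include implementation details; purely as an illustration, and assuming a PyTorch/torchvision setup, a ResNet-18 backbone adapted with two output heads (one regressing the instrument's x, y location and one classifying depth as far, intermediate, or near) might be sketched as follows. The class name InstrumentNet, the head dimensions, and the framework choice are assumptions, not the authors' implementation.

# Illustrative sketch only: ResNet-18 backbone with two output heads,
# one regressing instrument (x, y) location and one classifying depth
# (far / intermediate / near). Names and dimensions are assumptions;
# the abstract does not specify the authors' implementation.
import torch
import torch.nn as nn
from torchvision import models

class InstrumentNet(nn.Module):
    def __init__(self, num_depth_classes: int = 3):
        super().__init__()
        backbone = models.resnet18(weights=None)       # pretrained weights optional
        num_features = backbone.fc.in_features         # 512 for ResNet-18
        backbone.fc = nn.Identity()                    # strip the original classifier
        self.backbone = backbone
        self.location_head = nn.Linear(num_features, 2)               # (x, y) coordinates
        self.depth_head = nn.Linear(num_features, num_depth_classes)  # depth class logits

    def forward(self, frames: torch.Tensor):
        features = self.backbone(frames)               # (batch, 512)
        return self.location_head(features), self.depth_head(features)

# Example usage on a batch of video frames resized to 224x224:
model = InstrumentNet()
frames = torch.randn(4, 3, 224, 224)
location_pred, depth_logits = model(frames)
print(location_pred.shape, depth_logits.shape)  # torch.Size([4, 2]) torch.Size([4, 3])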

Results: The resulting algorithm was validated with high reproducibility, achieving 99% accuracy in detecting the location and depth of vitreoretinal instruments in retinal images.

Conclusions: Deep learning methods achieved high accuracy for detecting the location and depth of vitreoretinal instruments in retinal surgical videos and demonstrate the utility of real-time surgical video analysis to aid vitrectomy. Despite current advances in vitreoretinal (VR) surgery instrumentation and visualization systems, VR surgery remains challenging for trainees and surgeons. Deep learning is an advanced subfield of artificial intelligence that may aid and improve surgical performance.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.

 
