Open Access
ARVO Annual Meeting Abstract | June 2024
Advancing Intraocular Surgery Analysis: A Tailored Video Annotation Tool for Enhanced Grading and AI Training
Author Affiliations & Notes
  • James Xu
    University of California Irvine Gavin Herbert Eye Institute, Irvine, California, United States
  • Josiah K To
    Cook County Health, Chicago, Illinois, United States
  • Dakari Harris
    Cornell University, Ithaca, New York, United States
  • Andrew Browne
    University of California Irvine Gavin Herbert Eye Institute, Irvine, California, United States
  • Footnotes
    Commercial Relationships   James Xu None; Josiah To None; Dakari Harris None; Andrew Browne Jcyte, Alimera, JeniVision, Code C (Consultant/Contractor), United States patent US20200336638, United States patent US20200163737, United States patent US10295526, Code P (Patent)
  • Footnotes
    Support  Gavin Herbert Eye Institute 20/20 Society Pilot Research, BrightFocus Foundation, NIH/NEI 1K08EY034912-01, The Retina Society Research and International Retina Research Foundation, Unrestricted grant to the UC Irvine Department of Ophthalmology from Research to Prevent Blindness
Investigative Ophthalmology & Visual Science June 2024, Vol. 65, Issue 7, 913.
Abstract

Purpose : This research focuses on developing custom video annotation software for ophthalmic surgery. The objective was to create a user-friendly tool enabling graders to select and annotate surgical instrument types, surgical instrument tip locations in 3D space, chapters denoting surgical maneuvers, and ocular anatomy at high resolution from individual video frames. The software aims to address the limitations of existing tools, providing a foundation for developing artificial intelligence systems for objective data extraction from surgical videos.

Methods : The software was developed in Python, with PyQt5 used to construct the graphical user interface and ffmpeg-python used for efficient video processing. A systematic, logical architecture stores the annotation data in a human-readable format. The graphical user interface centers on a video window surrounded by customizable annotation buttons, which together provide a comprehensive label set for the surgical video. Cursor tracking was incorporated to facilitate precise labeling of surgical instrument tip locations, and scrolling bars allow graders to record instrument depth.
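
The abstract does not go beyond naming these dependencies, but a minimal, hypothetical PyQt5 sketch of such an interface is given below: a central video area, customizable instrument buttons, click-based capture of the tip's XY position, and a slider standing in for the depth (Z) scrolling bar. The widget layout, instrument names, and JSON output format are illustrative assumptions, not the authors' implementation.

    # Minimal, assumed sketch of an annotation window like the one described above.
    import json
    import sys

    from PyQt5.QtCore import Qt
    from PyQt5.QtWidgets import (QApplication, QHBoxLayout, QLabel, QMainWindow,
                                 QPushButton, QSlider, QVBoxLayout, QWidget)

    class AnnotationWindow(QMainWindow):
        def __init__(self):
            super().__init__()
            self.setWindowTitle("Surgical Video Annotation (sketch)")
            self.annotations = []      # one record per labeled frame
            self.current_tool = None   # instrument chosen via the buttons
            self.current_frame = 0     # would be driven by the video player

            # Central "video" area; a real tool would draw decoded frames here.
            self.video_label = QLabel("video frame")
            self.video_label.setMinimumSize(640, 360)
            self.video_label.setAlignment(Qt.AlignCenter)
            self.video_label.mousePressEvent = self.record_tip  # cursor tracking

            # Slider standing in for the depth (Z) scrolling bar.
            self.depth_slider = QSlider(Qt.Horizontal)
            self.depth_slider.setRange(0, 100)

            # Customizable instrument-label buttons surrounding the video window.
            buttons = QHBoxLayout()
            for tool in ("forceps", "vitrector", "light pipe"):  # example labels
                button = QPushButton(tool)
                button.clicked.connect(lambda _, t=tool: self.set_tool(t))
                buttons.addWidget(button)

            layout = QVBoxLayout()
            layout.addLayout(buttons)
            layout.addWidget(self.video_label)
            layout.addWidget(self.depth_slider)
            container = QWidget()
            container.setLayout(layout)
            self.setCentralWidget(container)

        def set_tool(self, tool):
            self.current_tool = tool

        def record_tip(self, event):
            # Store the clicked tip location (XY) plus the slider's depth (Z).
            self.annotations.append({
                "frame": self.current_frame,
                "tool": self.current_tool,
                "x": event.pos().x(),
                "y": event.pos().y(),
                "z": self.depth_slider.value(),
            })

        def closeEvent(self, event):
            # Persist annotations in a human-readable JSON file on exit.
            with open("annotations.json", "w") as fh:
                json.dump(self.annotations, fh, indent=2)
            event.accept()

    if __name__ == "__main__":
        app = QApplication(sys.argv)
        window = AnnotationWindow()
        window.show()
        sys.exit(app.exec_())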

Results : The video annotation tool correctly displays the graphical interface, featuring a central video frame surrounded by annotation controls. The software successfully records the desired annotations of surgical tools in the XY plane and along the Z axis. Beta testing demonstrated efficient surgical video annotation, with subjective feedback indicating greater ease of use and a smoother workflow than existing video annotation software.
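
As an illustration of how such recordings could feed downstream AI training (a step not detailed in this abstract), the hypothetical sketch below uses ffmpeg-python to extract each annotated frame as an image so it can be paired with its stored XY/Z label. The file names and JSON layout follow the assumed schema in the sketch above.

    # Illustrative only: export each annotated frame for model training.
    import json
    import os

    import ffmpeg

    def export_annotated_frames(video_path, annotations_path, out_dir):
        """Write one PNG per annotated frame, keyed by frame index."""
        os.makedirs(out_dir, exist_ok=True)
        with open(annotations_path) as fh:
            annotations = json.load(fh)

        for record in annotations:
            frame_index = record["frame"]
            out_path = os.path.join(out_dir, f"frame_{frame_index:06d}.png")
            (
                ffmpeg
                .input(video_path)
                .filter("select", f"eq(n,{frame_index})")  # keep only frame n
                .output(out_path, vframes=1, vsync="vfr")
                .run(quiet=True, overwrite_output=True)
            )
        return annotations  # labels stay paired with the exported images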

Conclusions : The development of this beta software for surgical video annotation has yielded a tool that is subjectively more user-friendly for surgical video graders than existing options. The software is platform-independent and adaptable to labeling any surgical video. Anticipated to be a valuable asset, this tool lays the foundation for the future development of artificial intelligence systems that extract objective data from surgical videos, contributing to advancements in surgical research, surgical safety tools, and education.

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.

 

Figure: Screenshot of the graphical user interface for the video annotation program

