Investigative Ophthalmology & Visual Science
June 2024, Volume 65, Issue 7
Open Access
ARVO Annual Meeting Abstract  |   June 2024
Assessing vitreoretinal surgical training experience by leveraging instrument maneuvers and visual attention with deep learning neural networks
Author Affiliations & Notes
  • Rogerio Nespolo
    Biomedical Engineering, University of Illinois Chicago, Chicago, Illinois, United States
    Ophthalmology & Visual Sciences, University of Illinois Hospital & Health Sciences System, Chicago, Illinois, United States
  • George R Nahass
    Biomedical Engineering, University of Illinois Chicago, Chicago, Illinois, United States
    Ophthalmology & Visual Sciences, University of Illinois Hospital & Health Sciences System, Chicago, Illinois, United States
  • Mahtab Faraji
    Biomedical Engineering, University of Illinois Chicago, Chicago, Illinois, United States
    Ophthalmology & Visual Sciences, University of Illinois Hospital & Health Sciences System, Chicago, Illinois, United States
  • Darvin Yi
    Ophthalmology & Visual Sciences, University of Illinois Hospital & Health Sciences System, Chicago, Illinois, United States
  • Yannek Isaac Leiderman
    Ophthalmology & Visual Sciences, University of Illinois Hospital & Health Sciences System, Chicago, Illinois, United States
  • Footnotes
    Commercial Relationships: Rogerio Nespolo, None; George R Nahass, None; Mahtab Faraji, None; Darvin Yi, None; Yannek Isaac Leiderman, None
    Support: Research to Prevent Blindness
Investigative Ophthalmology & Visual Science June 2024, Vol.65, 900. doi:
      Rogerio Nespolo, George R Nahass, Mahtab Faraji, Darvin Yi, Yannek Isaac Leiderman; Assessing vitreoretinal surgical training experience by leveraging instrument maneuvers and visual attention with deep learning neural networks. Invest. Ophthalmol. Vis. Sci. 2024;65(7):900.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : To develop a platform that uses deep-learning neural networks to distinguish the experience level of vitreoretinal surgeons by analyzing their instrument maneuvers and areas of visual attention (via gaze tracking) while they perform standardized tasks with a surgical simulator.

Methods : Four attending surgeons, three fellows, and two resident surgeons were invited to perform a series of ophthalmic surgical tasks using a surgical simulator. These tasks included membrane peeling, hyaloid manipulation, endolaser photocoagulation, general vitrector use, and retinal detachment repair. An instance segmentation neural network was trained to track instrument maneuvers and to extract the gaze position provided by an eye-tracking bar. A second spatio-temporal neural network (CNN + LSTM) was trained to classify the level of experience of each subject by analyzing the acquired instrument maneuvers and areas of visual attention.
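To illustrate the windowing step that feeds the spatio-temporal classifier (panel B of the figure), the sketch below groups per-frame instrument and gaze coordinates into overlapping sequential windows. The frame layout, window length, and stride here are hypothetical choices for illustration, not the parameters used in the study.

```python
# Hypothetical sketch: group per-frame (instrument_x, instrument_y, gaze_x, gaze_y)
# feature tuples into overlapping sequential windows, as in panel (B) of the figure.
# Window length and stride are illustrative, not the study's actual settings.

def sliding_windows(frames, window=4, stride=2):
    """Return overlapping windows of `window` consecutive frames, stepped by `stride`."""
    return [frames[i:i + window]
            for i in range(0, len(frames) - window + 1, stride)]

# Ten toy frames of (instrument_x, instrument_y, gaze_x, gaze_y) features.
frames = [(float(t), float(t) * 0.5, 100.0 + t, 200.0 - t) for t in range(10)]

windows = sliding_windows(frames, window=4, stride=2)
print(len(windows))   # 4 overlapping windows of 4 frames each
print(windows[0][0])  # first frame of the first window
```

Each window would then be passed to the CNN + LSTM model as one input sequence, so the classifier sees short, overlapping segments of the surgical time series rather than the full recording at once.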

Results : Combining instrument maneuvers and gaze behavior proved most effective in discerning surgeons' experience levels, particularly within core vitrectomy and membrane peeling tasks (M = 0.983, SD = 0.017). Notably, endolaser tasks exhibited lower efficacy (M = 0.32, SD = 0.159). Cross-task validation models successfully identified surgeons' experience (M = 0.733, SD = 0.216). Exclusive reliance on instrument maneuvers for training and evaluation outperformed gaze behavior assessment in predicting surgical experience (M = 0.456, SD = 0.319 vs. M = 0.254, SD = 0.241). Membrane peeling task models consistently demonstrated superior performance across all scenarios: combined maneuvers with gaze (M = 0.938, SD = 0.051), maneuvers alone (M = 0.707, SD = 0.284), and gaze alone (M = 0.242, SD = 0.277).

Conclusions : Vitreoretinal surgeons' experience levels can be distinguished by analyzing their surgical maneuvers and gaze behavior using deep-learning neural networks. Combining assessment of instrument maneuvers with gaze behavior was the most effective approach.

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.

 

Assessment of surgical skills via neural networks. (A) Time series with instrument maneuvers and spatial position of anatomic elements are extracted from surgical cases; (B) Multiple sequential windows are used as the input of the model (C). (D) Output of the classifier.


 

F1-Scores for surgeons’ experience classification

