ARVO Annual Meeting Abstract | June 2022
Volume 63, Issue 7 | Open Access
A hierarchical optical coherence tomography annotation workflow with crowds and medical experts
Author Affiliations & Notes
  • Miao Zhang
    Early Clinical Development Informatics, Genentech Inc, South San Francisco, California, United States
  • Simon S Gao
    Clinical Imaging Group, Genentech Inc, South San Francisco, California, United States
  • Verena Steffen
    Data and Statistical Sciences, Genentech Inc, South San Francisco, California, United States
  • Zhichao Wu
    Centre for Eye Research Australia Ltd, East Melbourne, Victoria, Australia
    Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Victoria, Australia
  • Theodore Leng
    Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California, United States
  • Mahdi Abbaspour Tehrani
    Early Clinical Development Informatics, Genentech Inc, South San Francisco, California, United States
  • Seyyedeh Qazale Mirsharif
    Clinical Imaging Group, Genentech Inc, South San Francisco, California, United States
  • Nagamurali Movva
    Early Clinical Development Informatics, Genentech Inc, South San Francisco, California, United States
  • Mohsen Hejrati
    Early Clinical Development Informatics, Genentech Inc, South San Francisco, California, United States
  • Hao Chen
    Early Clinical Development, Genentech Inc, South San Francisco, California, United States
  • Footnotes
    Commercial Relationships   Miao Zhang Genentech, Code E (Employment); Simon Gao Genentech, Code E (Employment); Verena Steffen Genentech, Code E (Employment); Zhichao Wu Genentech, Code C (Consultant/Contractor); Theodore Leng Genentech, Code E (Employment); Mahdi Abbaspour Tehrani Genentech, Code E (Employment); Seyyedeh Qazale Mirsharif Genentech, Code C (Consultant/Contractor); Nagamurali Movva Genentech, Code C (Consultant/Contractor); Mohsen Hejrati Genentech, Code E (Employment); Hao Chen Genentech, Code E (Employment)
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2022, Vol. 63, 3013 – F0283.
      Miao Zhang, Simon S Gao, Verena Steffen, Zhichao Wu, Theodore Leng, Mahdi Abbaspour Tehrani, Seyyedeh Qazale Mirsharif, Nagamurali Movva, Mohsen Hejrati, Hao Chen; A hierarchical optical coherence tomography annotation workflow with crowds and medical experts. Invest. Ophthalmol. Vis. Sci. 2022;63(7):3013 – F0283.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose: Annotating clinical images is burdensome because medical experts' time is costly and scarce. To address this problem, we proposed a hierarchical annotation workflow in which medical experts review aggregated crowdsourced annotations, using dense annotation of optical coherence tomography (OCT) images from patients with age-related macular degeneration (AMD) as an example.

Methods: Annotation was performed on a cloud-based platform (Labelbox) that distributed images to remote crowds and medical experts. OCT B-scans with ≥ 9 averages from 20x20° volume scans of AMD patients were randomly selected. Two medical experts annotated 25 representative B-scans with rich pathology. In a training session, 27 labellers read through an annotation guideline and practiced on 15 of the B-scans. B-scans with color-coded agreements and disagreements were then presented to the crowd, visualizing their discrepancies from the expert annotations. Performance relative to the medical experts' annotations was measured quantitatively by a weighted and adaptively normalized mean intersection over union (IOU) across the annotated structures (WANI). The top 10 performers on the remaining 10 OCT B-scans (test session, Fig. 1, left) were selected for larger batches of annotation work. An additional 897 B-scans were each labeled by 5 labellers. Simultaneous truth and performance level estimation (STAPLE) was applied to aggregate the 5 crowd labels per B-scan. Medical experts qualitatively reviewed the STAPLE results to identify common mistakes and further train the crowd. The crowd and experts annotated another 8 representative B-scans in a follow-up analysis session (Fig. 1, right).
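The aggregation and scoring steps above can be sketched in code. This is a minimal illustration, not the authors' implementation: true STAPLE iteratively estimates per-labeller sensitivity and specificity via expectation-maximization, whereas this sketch substitutes simple per-pixel majority voting; the masks, function names, and values are hypothetical.

```python
# Sketch: majority-vote aggregation (a simplified stand-in for STAPLE)
# and per-structure intersection over union (IOU) against an expert mask.
# Binary masks are modeled as sets of (row, col) pixel coordinates.

def majority_vote(crowd_masks):
    """Keep a pixel if more than half of the labellers marked it."""
    counts = {}
    for mask in crowd_masks:
        for px in mask:
            counts[px] = counts.get(px, 0) + 1
    threshold = len(crowd_masks) / 2
    return {px for px, c in counts.items() if c > threshold}

def iou(pred, truth):
    """Intersection over union of two pixel sets."""
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return len(pred & truth) / len(pred | truth)

# Example: 5 crowd annotations of one anatomic structure vs. an expert mask
crowd = [{(0, 0), (0, 1)}, {(0, 0)}, {(0, 0), (1, 1)},
         {(0, 0), (0, 1)}, {(0, 1)}]
expert = {(0, 0), (0, 1)}
aggregated = majority_vote(crowd)
print(round(iou(aggregated, expert), 2))  # prints 1.0
```

In the abstract's workflow, one such IOU would be computed per anatomic structure and per B-scan, then combined into the weighted, adaptively normalized mean (WANI); the weighting scheme is not specified here, so it is omitted from the sketch.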

Results: STAPLE results showed above-average IOUs for individual anatomic structures and outperformed all individual crowd labellers in WANI, though they remained below expert performance. On average, experts took 13.5 minutes to annotate one B-scan, compared with 23.8 seconds to review one STAPLE result. STAPLE WANI improved in the follow-up session as the crowd gained experience and was further trained on expert feedback.

Conclusions: The proposed hierarchical annotation workflow with crowds and medical experts could reduce the burden on medical experts in extensive clinical annotation tasks.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.

 

Figure 1. IOUs for individual anatomic features and WANI score in test and follow-up sessions

