July 2019
Volume 60, Issue 9
Open Access
ARVO Annual Meeting Abstract  |   July 2019
A cost-effective and semi-automated annotation framework for OCT scans.
Author Affiliations & Notes
  • SANDIPAN CHAKROBORTY
    CARIn, Carl Zeiss India (Bangalore) Pvt. Ltd, ZEISS GROUP, Bangalore, India
  • Krunalkumar Ramanbhai Patel
    CARIn, Carl Zeiss India (Bangalore) Pvt. Ltd, ZEISS GROUP, Bangalore, India
  • Ashish Kumar Modi
    CARIn, Carl Zeiss India (Bangalore) Pvt. Ltd, ZEISS GROUP, Bangalore, India
  • Footnotes
    Commercial Relationships   SANDIPAN CHAKROBORTY, Carl Zeiss India (Bangalore) Pvt. Ltd, ZEISS GROUP (E); Krunalkumar Ramanbhai Patel, Carl Zeiss India (Bangalore) Pvt. Ltd, ZEISS GROUP (E); Ashish Modi, Carl Zeiss India (Bangalore) Pvt. Ltd, ZEISS GROUP (E)
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science July 2019, Vol.60, 1515. doi:
Abstract

Purpose : Most Machine Learning (ML) algorithms are data-driven, supervised, and computationally intensive, and they rely on annotated data to train the model. In the ophthalmology context, this annotation process demands time from busy doctors and involves considerable cost to incentivize them. In this work, we propose a novel annotation framework that divides the work between a specialist and either a General Ophthalmologist or an Optometrist (OP).

Methods : Given an Optical Coherence Tomography (OCT) databank, an OP is first asked to identify only the normal cubes, a task they can perform confidently. It is assumed that all B-scans within a normal cube are normal. All normal B-scans are then used to construct an ML model (see block 103 in Fig. 1), which functions as an outlier detector. Using this detector, all abnormal B-scans can be detected as anomalies. In the second step, the specialist is engaged to annotate only the abnormal B-scans. A second ML model can then be constructed from the annotated abnormal B-scans plus the B-scans from the normal volumes. Hence the specialist's time and cost in the second step are offset by the pre-filtering performed in the first step. A set of B-scans from PRIMUS™ 200 (ZEISS, Dublin, CA) and CIRRUS™ 5000 (ZEISS, Dublin, CA) devices, together with their distribution, is shown in Fig. 1; these scans were used to train and test a one-class model.
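
As an illustration of this two-stage workflow, the sketch below trains a one-class outlier detector on thickness features from OP-verified normal B-scans and queues only the flagged B-scans for the specialist. This is a minimal sketch under stated assumptions: the feature extractor, the use of scikit-learn's OneClassSVM as the one-class model, and all names, data, and parameters are illustrative stand-ins, not the implementation behind Fig. 1.

```python
# Hedged sketch of the two-stage annotation pipeline (assumed components).
import numpy as np
from sklearn.svm import OneClassSVM

def extract_thickness_vector(b_scan: np.ndarray) -> np.ndarray:
    """Placeholder feature extractor: reduce a B-scan to a 1 x 512 thickness profile."""
    return b_scan.mean(axis=0)[:512]

# Stage 1: the OP labels whole cubes as normal; all their B-scans train the detector.
normal_scans = [np.random.rand(1024, 512) for _ in range(100)]   # stand-in data
X_normal = np.stack([extract_thickness_vector(s) for s in normal_scans])
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_normal)

# Stage 2: run the detector over the unlabelled pool; only B-scans flagged as
# outliers (-1) are queued for the Retina Specialist to annotate.
pool = [np.random.rand(1024, 512) for _ in range(500)]           # stand-in data
X_pool = np.stack([extract_thickness_vector(s) for s in pool])
flags = detector.predict(X_pool)                                  # +1 inlier, -1 outlier

to_specialist = [scan for scan, flag in zip(pool, flags) if flag == -1]
print(f"{len(to_specialist)} of {len(pool)} B-scans queued for specialist annotation")
```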

Results : An ML model (Fig. 2) was developed using retinal thickness vectors from 12,765 normal B-scans, each of dimension 1 x 512. Both sensitivity and specificity increase with codebook size, and the model begins to saturate beyond a codebook size of 2048. However, a lower sensitivity is not critical, since only the detected abnormal scans are included in the secondary model, and false positives can be handled by allowing the Retina Specialist to mark a flagged B-scan as normal.
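
A codebook-size sweep of this kind could be reproduced along the lines of the sketch below, which fits a k-means codebook on normal thickness vectors, scores each scan by its distance to the nearest codeword, and reports sensitivity and specificity per codebook size. The scoring rule, the 95th-percentile threshold, the codebook algorithm, and the stand-in data are assumptions; only the 1 x 512 feature size, the 12,765 normal training scans, and the 2048 saturation point come from the abstract.

```python
# Hedged sketch of a codebook-based one-class evaluation (assumed method).
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
X_train = rng.normal(size=(12765, 512))        # stand-in normal thickness vectors
X_test = rng.normal(size=(2000, 512))          # stand-in test vectors
y_test = rng.integers(0, 2, size=2000)         # stand-in labels: 1 = abnormal, 0 = normal

for codebook_size in (512, 1024, 2048, 4096):
    km = MiniBatchKMeans(n_clusters=codebook_size, batch_size=4096,
                         random_state=0).fit(X_train)

    # Score = distance to the nearest codeword; large distances suggest anomaly.
    train_dist = km.transform(X_train).min(axis=1)
    threshold = np.percentile(train_dist, 95)  # assumed 95th-percentile cut-off

    pred_abnormal = km.transform(X_test).min(axis=1) > threshold
    sensitivity = (pred_abnormal & (y_test == 1)).sum() / max((y_test == 1).sum(), 1)
    specificity = (~pred_abnormal & (y_test == 0)).sum() / max((y_test == 0).sum(), 1)
    print(f"codebook={codebook_size:5d}  sens={sensitivity:.3f}  spec={specificity:.3f}")
```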

Conclusions : The proposed system saves a considerable portion of the cost involved in the annotation process, as the Retina Specialist does not need to annotate normal B-scans; the specialist is engaged only to annotate abnormal B-scans, conserving cost and time across the entire clinical workflow.

This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.

 

Fig 1. The overall architecture of the system (Top) (Source: Block 107 https://arxiv.org/pdf/1610.03628v1.pdf) and subject and scan tables (Bottom)

Fig 2. One-Class classifier model (Top) and associated results (Bottom)
