ARVO Annual Meeting Abstract  |   June 2021
Volume 62, Issue 8
Open Access
Semantic segmentation of gonioscopic images exploiting adaptive ROI localization and uncertainty estimation
Author Affiliations & Notes
  • Andrea Peroni
    VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, United Kingdom
  • Mauro Campigotto
    NIDEK Technologies Srl., Albignasego, Italy
  • Anna Paviotti
    NIDEK Technologies Srl., Albignasego, Italy
  • Emanuele Trucco
    VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, United Kingdom
  • Footnotes
    Commercial Relationships   Andrea Peroni, NIDEK Technologies Srl. (F), NIDEK Technologies Srl. (P); Mauro Campigotto, NIDEK Technologies Srl. (E), NIDEK Technologies Srl. (P); Anna Paviotti, NIDEK Technologies Srl. (E), NIDEK Technologies Srl. (P); Emanuele Trucco, NIDEK Technologies Srl. (F), NIDEK Technologies Srl. (P)
    Support  None
Investigative Ophthalmology & Visual Science June 2021, Vol.62, 382. doi:
      Andrea Peroni, Mauro Campigotto, Anna Paviotti, Emanuele Trucco; Semantic segmentation of gonioscopic images exploiting adaptive ROI localization and uncertainty estimation. Invest. Ophthalmol. Vis. Sci. 2021;62(8):382.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : To develop a semantic segmentation algorithm highlighting anatomical structures of interest in digital gonioscopic images acquired by a semi-automatic ophthalmic device, and to overcome ground truth limitations (e.g. missing or incomplete target annotations) by adaptive identification of the most informative region of interest (ROI) in the data and estimation of spatial prediction uncertainty.

Methods : In gonioscopic acquisitions, only the central part of each image is well illuminated, and focus varies with the selected focal plane; therefore, only part of each image can be reliably annotated. A dataset of 274 irido-corneal sector images was annotated by four experienced ophthalmologists and used to train (202), validate (41) and test (31) a custom Dense-Unet from scratch. The network is trained with two simultaneous, complementary aims: exploiting the ground truths to maximise segmentation accuracy within the annotated region of each image, and learning to assess the informative value of local regions, based mainly on sharpness and lightness, in order to locate a ROI. The ROI is then used to filter out uncertain segmentation outputs. Moreover, keeping dropout active during inference makes it possible to generate multiple predictions, enabling uncertainty estimation via the pixel-wise variance of the predicted probabilities.
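The inference scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `TinySegNet`, its layer sizes, the number of classes, and the ROI threshold are all hypothetical stand-ins for the custom two-decoder Dense-Unet; only the general pattern (two decoder heads, Monte Carlo dropout, pixel-wise variance, ROI masking) reflects the abstract.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the two-decoder Dense-Unet in the abstract:
# one head outputs class logits, the other an ROI confidence map.
class TinySegNet(nn.Module):
    def __init__(self, n_classes=4):  # number of classes is an assumption
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.5))
        self.seg_head = nn.Conv2d(8, n_classes, 1)  # segmentation decoder
        self.roi_head = nn.Conv2d(8, 1, 1)          # ROI-localization decoder

    def forward(self, x):
        f = self.backbone(x)
        return self.seg_head(f), torch.sigmoid(self.roi_head(f))

def mc_dropout_segment(model, image, n_samples=20, roi_thresh=0.5):
    """Monte Carlo dropout inference: average class probabilities over
    stochastic forward passes, estimate pixel-wise variance, and mask
    out predictions that fall outside the predicted ROI."""
    model.train()  # keeps Dropout2d stochastic at inference time
    with torch.no_grad():
        runs = [model(image) for _ in range(n_samples)]
        probs = torch.stack([torch.softmax(l, dim=1) for l, _ in runs])
        roi = torch.stack([r for _, r in runs]).mean(0)  # (B, 1, H, W)
    mean_p = probs.mean(0)
    uncertainty = probs.var(0).sum(dim=1)            # (B, H, W)
    labels = mean_p.argmax(dim=1)                    # (B, H, W)
    labels[roi.squeeze(1) < roi_thresh] = -1         # -1 = outside ROI
    return labels, uncertainty

image = torch.rand(1, 3, 64, 64)
labels, unc = mc_dropout_segment(TinySegNet(), image)
```

Calling `model.train()` (rather than `eval()`) is what keeps the dropout layers active, so each forward pass samples a different sub-network; the variance across passes then serves as the per-pixel uncertainty map.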

Results : On a test set representative of relevant clinical cases and uncorrelated with the training and validation sets, we obtain an overall pixel-wise classification accuracy above 90% within the annotated area of the ground truth data. The automatic ROI identification locates the most informative region in every test image, and the uncertainty estimation effectively highlights most of the incorrectly predicted pixel subsets.

Conclusions : The proposed system maximises the information learnt from ground truth annotations and combines it with effective ROI localization to provide an accurate segmentation of irido-corneal angle layers. Uncertainty estimation can aid the interpretation of model predictions and may provide valuable support in clinical applications.

This is a 2021 ARVO Annual Meeting abstract.

 

Fig. 1: An overview of the designed segmentation model: one decoder performs the semantic segmentation; the other locates the ROI.
