Tatiana Fountoukidou, Stefanos Apostolopoulos, Sebastian Wolf, Raphael Sznitman; Automatic assessment of time-lapse OCT for dosimetry control of selective retina therapy. Invest. Ophthalmol. Vis. Sci. 2017;58(8):4867.
Selective retina therapy (SRT), a laser treatment for eye diseases associated with disorders of the retinal pigment epithelium (RPE), produces no directly visible effect of the applied treatment until the tissue is overtreated. In this work, we examine a system that integrates optical coherence tomography (OCT) with SRT to provide visual feedback on the RPE during treatment and to assess treatment energy levels. With the goal of providing automatic dosimetry control during treatment, we explore how deep-learning image processing methods can be used to detect overtreated SRT laser applications.
SRT pulses were delivered with a frequency-doubled Nd:YLF laser (pulse width 250 ns, pulse repetition rate 100 Hz, applied in trains of 30 pulses). The OCT system has a line-scan frequency of 70 kHz and a spectral bandwidth of 170 nm centered at 830 nm. We acquired time-lapse OCT recordings (M-scans) of 153 enucleated porcine eyes and 14 treated patients. We divided each M-scan into identical time blocks and labeled each block according to whether a visible SRT laser effect was present. We trained a convolutional neural network (CNN) exclusively on the porcine data to automatically determine whether a block contained a laser pulse effect. As a baseline, we then tested the resulting model on the human data. In addition, we fine-tuned the model with the human data to measure the effect of model adaptation.
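The division of each M-scan into identical time blocks can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `split_mscan_into_blocks` and the block length are assumptions, since the abstract does not specify the block duration.

```python
import numpy as np

def split_mscan_into_blocks(mscan, block_len):
    """Divide an M-scan (depth x time array) into equal-length time blocks.

    `mscan` has one column per A-scan; trailing A-scans that do not fill a
    complete block are dropped. Returns an array of shape
    (n_blocks, depth, block_len), one sample per time block.
    """
    depth, n_ascans = mscan.shape
    n_blocks = n_ascans // block_len
    trimmed = mscan[:, : n_blocks * block_len]
    # Split the time axis into consecutive blocks, then move the block
    # index to the front so each blocks[i] is a (depth, block_len) patch.
    return trimmed.reshape(depth, n_blocks, block_len).transpose(1, 0, 2)
```

Each resulting block is then paired with a binary label (laser effect visible or not) to form one training sample for the CNN.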
A deep convolutional neural network was trained on 80% of the porcine data and tested on the remaining 20%, achieving a test accuracy of 97.7%. Evaluated with this same model, the patient data yielded an accuracy of 82.7%. The model was then fine-tuned on 80% of the patient data and tested on the remaining 20%, resulting in an accuracy of 98.6%. Another model, trained solely on 80% of the human data and evaluated on the remaining 20%, overfitted because of the limited dataset size, reaching an accuracy of only 70.5%.
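The evaluation protocol above (an 80/20 split followed by accuracy measurement on held-out blocks) can be sketched as below. The helper names and the fixed random seed are illustrative assumptions; the abstract does not state how the splits were drawn.

```python
import numpy as np

def split_train_test(n_samples, train_frac=0.8, seed=0):
    """Randomly partition sample indices into train and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(round(train_frac * n_samples))
    return idx[:n_train], idx[n_train:]

def accuracy(predictions, labels):
    """Fraction of block-level predictions matching the manual labels."""
    return float(np.mean(np.asarray(predictions) == np.asarray(labels)))
```

Fine-tuning then amounts to continuing training of the porcine-trained network on the human training split before scoring `accuracy` on the human test split.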
These results show that the model can automatically identify a laser effect in a short M-scan block and can therefore serve as a first step toward assessing the applied SRT energy level during treatment. Furthermore, the model is robust and generalizes from porcine to human eyes, indicating that data collected on ex-vivo porcine eyes remain useful for developing automatic dosimetry algorithms.
This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.