Investigative Ophthalmology & Visual Science
Volume 65, Issue 7, June 2024
Open Access
ARVO Annual Meeting Abstract  |   June 2024
An “Eye-widening” Autoencoder for Corneal Topography Extrapolation
Author Affiliations & Notes
  • Kin Ho Chan
    The Hong Kong Polytechnic University School of Optometry, Kowloon, Hong Kong
  • Tsz Wing Leung
    The Hong Kong Polytechnic University School of Optometry, Kowloon, Hong Kong
    The Hong Kong Polytechnic University Centre for Myopia Research, Hong Kong, Hong Kong
  • Chea-Su Kee
    The Hong Kong Polytechnic University School of Optometry, Kowloon, Hong Kong
    The Hong Kong Polytechnic University Centre for Myopia Research, Hong Kong, Hong Kong
  • Footnotes
    Commercial Relationships   Kin Ho Chan None; Tsz Wing Leung None; Chea-Su Kee GOOD Vision Technologies Co. Limited, Code O (Owner)
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2024, Vol.65, 3389. doi:

      Kin Ho Chan, Tsz Wing Leung, Chea-Su Kee; An “Eye-widening” Autoencoder for Corneal Topography Extrapolation. Invest. Ophthalmol. Vis. Sci. 2024;65(7):3389.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : Corneal topography is essential for contact lens fitting, but acquiring a full-coverage map in children can be challenging. Our study aimed to use machine learning to reconstruct the eyelid-obstructed areas of such maps, thereby assisting pediatric contact lens practice.

Methods : An autoencoder with 32 latent dimensions was trained to reconstruct synthetic tangential power maps that had been purposefully corrupted to mimic eyelid obstruction. These maps were synthesized by combining an ellipsoid with 7th-order Zernike polynomials. After training, the model was applied, without further fine-tuning, to actual tangential power maps collected from Chinese children (6 to 11 years) at the university optometry clinic. 239 maps with 8-mm coverage, accounting for 10.5% of the actual dataset, were selected as the ground truth. Masking each of these maps with the coverage of the smallest 50% of maps in the dataset (n = 1129) generated 269,831 input-ground-truth pairs for evaluation. The model's performance was evaluated with two key metrics. First, we calculated the mean absolute errors (MAE) of the apical radii (R0) and asphericities (Q) along the steep and flat meridians, as estimated from the extrapolated maps. Second, we evaluated the success rate of deriving, from the extrapolated maps, the parameters of a Paragon CRT-style lens to within one step size. For comparison, the same estimates were also calculated without our model, simply by ignoring the missing values in the masked maps.
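To make the training setup concrete, the sketch below shows one way a pipeline of this kind could be assembled: conic-based synthetic tangential power maps, a lid-shaped mask, and a small convolutional autoencoder with a 32-dimensional latent code trained to reconstruct the unmasked map. It is an illustrative assumption rather than the authors' implementation; the grid size, meridian blending, mask shape, network layout, normalisation, and the smooth perturbation standing in for the 7th-order Zernike terms are all hypothetical choices (PyTorch and NumPy assumed).

    # Hypothetical sketch of a synthetic-map + masked-autoencoder pipeline.
    # Grid size, lid-mask shape, sin^2 meridian blend, and network layout
    # are illustrative assumptions, not the authors' implementation.
    import numpy as np
    import torch
    import torch.nn as nn

    GRID = 64        # assumed map resolution over an 8 mm zone
    DIAM_MM = 8.0

    def synthetic_tangential_map(rng):
        """Conic-based tangential power map (D) with a smooth perturbation
        standing in as a placeholder for higher-order Zernike terms."""
        r0_flat  = rng.uniform(7.4, 8.2)            # apical radius, flat meridian (mm)
        r0_steep = r0_flat - rng.uniform(0.0, 0.4)  # steeper meridian
        q_flat, q_steep = rng.uniform(-0.6, 0.0, size=2)
        axis = rng.uniform(0, np.pi)

        xs = np.linspace(-DIAM_MM / 2, DIAM_MM / 2, GRID)
        x, y = np.meshgrid(xs, xs)
        rho, theta = np.hypot(x, y), np.arctan2(y, x)
        blend = np.sin(theta - axis) ** 2           # crude steep/flat interpolation
        r0 = r0_flat + (r0_steep - r0_flat) * blend
        q  = q_flat  + (q_steep  - q_flat)  * blend

        r_sag = np.sqrt(np.maximum(r0**2 - q * rho**2, 1e-6))  # sagittal radius of a conic
        r_tan = r_sag**3 / r0**2                                # tangential radius of a conic
        power = 337.5 / r_tan                                   # keratometric diopters
        # Smooth random term as a stand-in for the 7th-order Zernike perturbation.
        power += 0.15 * rng.standard_normal() * np.cos(2 * theta) * (rho / 4) ** 2
        return power.astype(np.float32)

    def lid_mask(rng):
        """Binary mask: 1 = measured, 0 = obstructed by upper/lower lid (assumed shape)."""
        ys = np.linspace(-1, 1, GRID)[:, None] * np.ones((1, GRID))
        upper = rng.uniform(0.3, 0.8)   # fraction of the zone kept above centre
        lower = rng.uniform(0.5, 0.9)
        return ((ys < upper) & (ys > -lower)).astype(np.float32)

    class MapAutoencoder(nn.Module):
        """Small convolutional autoencoder with a 32-dimensional latent code."""
        def __init__(self, latent=32):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(2, 16, 4, 2, 1), nn.ReLU(),   # input: masked map + mask channel
                nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
                nn.Flatten(), nn.Linear(64 * 8 * 8, latent))
            self.dec = nn.Sequential(
                nn.Linear(latent, 64 * 8 * 8), nn.ReLU(),
                nn.Unflatten(1, (64, 8, 8)),
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 4, 2, 1))
        def forward(self, x):
            return self.dec(self.enc(x))

    def train(steps=1000, batch=32, seed=0):
        rng, model = np.random.default_rng(seed), MapAutoencoder()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(steps):
            maps  = np.stack([synthetic_tangential_map(rng) for _ in range(batch)])
            masks = np.stack([lid_mask(rng) for _ in range(batch)])
            clean = torch.from_numpy((maps - 43.0) / 3.0).unsqueeze(1)  # rough normalisation
            mask  = torch.from_numpy(masks).unsqueeze(1)
            inp   = torch.cat([clean * mask, mask], dim=1)
            loss  = nn.functional.mse_loss(model(inp), clean)           # reconstruct full map
            opt.zero_grad(); loss.backward(); opt.step()
        return model

Because the training data are generated on the fly, a model of this kind never sees a real map during training, consistent with the abstract's deployment on clinical maps without further fine-tuning.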

Results : Our model notably improved the accuracy of the estimates. The MAE for R0 was reduced from 0.004 mm to 0.001 mm along the flat meridian and from 0.012 mm to 0.004 mm along the steep meridian. Similarly, the MAE for Q decreased from 0.022 to 0.003 and from 0.087 to 0.029 along these two meridians. Regarding the derivation of contact lens parameters, our model improved the success rate from 96.45% to 99.69%, meaning the likelihood of failing to derive the lens parameters was roughly ten times lower with our model (1 in 320) than without it (1 in 28).
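The abstract does not state how R0 and Q were estimated from the extrapolated maps or how the lens parameters were derived. Purely as an illustration, the sketch below fits a conic tangential-power model to one meridional profile by least squares and applies a within-one-step check; the formula follows from the tangential radius of a conic, r_t = r_s^3 / R0^2 with r_s^2 = R0^2 - Q*rho^2, and a keratometric index of 1.3375. The function names, initial guess, noise level, and the 0.1 mm step size are hypothetical.

    # Hypothetical illustration of estimating R0 and Q from a meridional
    # tangential power profile; the fitting procedure is an assumption.
    import numpy as np
    from scipy.optimize import curve_fit

    KERATOMETRIC = 337.5   # (1.3375 - 1) * 1000; power in D for radii in mm

    def conic_tangential_power(rho_mm, r0, q):
        # Tangential power of a conic: P = 337.5 * R0^2 / (R0^2 - Q*rho^2)^(3/2)
        return KERATOMETRIC * r0**2 / (r0**2 - q * rho_mm**2) ** 1.5

    def fit_meridian(rho_mm, power_d):
        # Least-squares estimate of (R0, Q) from one meridional power profile.
        (r0, q), _ = curve_fit(conic_tangential_power, rho_mm, power_d, p0=(7.8, -0.2))
        return r0, q

    def within_one_step(derived, truth, step=0.1):
        # Illustrative success criterion: derived parameter within one step size.
        return abs(derived - truth) <= step

    if __name__ == "__main__":
        rho = np.linspace(0.5, 4.0, 36)                 # semi-chord (mm) over an 8 mm zone
        true_r0, true_q = 7.80, -0.25
        profile = conic_tangential_power(rho, true_r0, true_q)
        profile += 0.02 * np.random.default_rng(0).standard_normal(rho.size)  # noise
        r0_hat, q_hat = fit_meridian(rho, profile)
        print(f"|dR0| = {abs(r0_hat - true_r0):.4f} mm, |dQ| = {abs(q_hat - true_q):.4f}")
        print("within one 0.1 mm step:", within_one_step(r0_hat, true_r0))

Averaging such per-map errors over the 269,831 evaluation pairs would yield MAE figures of the kind reported above, and the failure rates quoted correspond directly to the success rates: 1 - 0.9969 is about 1 in 320, and 1 - 0.9645 is about 1 in 28.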

Conclusions : Our study demonstrates that a model trained exclusively on synthetic corneas can be applied in a clinical setting to extrapolate corneal topography maps. Our model has the potential to assist contact lens fitting in children, especially those who may involuntarily squeeze their eyes due to unfamiliarity or discomfort with the measurement.

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.
