Diabetic retinopathy (DR) is a leading cause of severe visual loss worldwide, and diabetic macular edema (DME) and proliferative diabetic retinopathy (PDR) in particular often result in visual disturbances.1–4 Although several therapeutic strategies have been developed for DME, including photocoagulation, steroids, anti-VEGF drugs, and vitrectomy,5–10 DME is often refractory to treatment and has a poor visual prognosis. Diabetes disrupts the blood-retinal barrier (BRB) in the retinal vasculature, leading to edematous changes in the neuroglial tissue of the macula and concomitant visual loss.11–15
DME was previously diagnosed subjectively as macular thickening seen on biomicroscopy, color fundus photography, and fluorescein angiography (FA) images in the Early Treatment Diabetic Retinopathy Study (ETDRS).5 In contrast, optical coherence tomography (OCT) enables objective measurement of macular thickness, and the Diabetic Retinopathy Clinical Research Network (DRCRnet) has proposed center-involved DME, defined according to the averaged macular thickness determined by OCT.16,17 The DRCRnet further reported a modest correlation between macular thickness and visual acuity (VA) in DME,18 suggesting the clinical relevance of macular thickness on OCT as well as the presence of other mechanisms causing visual disturbances, including ischemia and neuroglial degeneration in the macula. OCT also provides qualitative information about macular pathomorphologies, including cystoid macular edema, serous retinal detachment (SRD), and spongelike retinal swelling.19 Recent advances in OCT technology provide better delineation of fine physiologic and pathologic structures and are encouraging clinicians to investigate the association between VA and foveal photoreceptor damage, represented by the external limiting membrane (ELM) and the junction between the inner and outer segments (IS/OS), in DME.20–26 Pathology at the vitreomacular interface and hyperreflective foci have also been demonstrated on OCT images and have clinical relevance in DME.27–30
The increasing number of OCT parameters has made models for classification or prediction more complex, and clinicians cannot integratively understand DME from OCT images alone. A self-organizing map, an artificial neural network algorithm, produces low-dimensional, topology-preserving representations of high-dimensional input data.31 Unsupervised machine learning by this algorithm yields a two-dimensional lattice map on which the mathematical similarity between nodes is represented by their geographic distances. Subsequent clustering of the map is a heuristic method that can suggest novel classifications or segmentations of multidimensional data. Recent medical and biologic advances often provide high-throughput data with numerous parameters, which can be visualized and more readily understood after application of a self-organizing map.32–34 Makinen et al.32 reported an association between metabolic phenotypes and vascular complications in type 1 diabetes based on a self-organizing map. The Encyclopedia of DNA Elements Project also applied the self-organizing map to human genome informatics to interpret gene expression profiles.33 These studies derived novel knowledge by combining this unsupervised machine learning with clustering. However, it remains to be evaluated how well the self-organizing map represents the macular morphologic patterns described by multidimensional OCT parameters in DME.
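For illustration, the following is a minimal sketch of a self-organizing map in Python with NumPy, assuming each row of `data` is one eye's vector of OCT parameters; the grid size, learning rate, and neighborhood radius are illustrative assumptions rather than the settings used in this study.

```python
import numpy as np

def train_som(data, grid_h=10, grid_w=10, n_iters=5000,
              lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal self-organizing map on the rows of `data`.

    Returns node weights of shape (grid_h, grid_w, n_features), in which
    geographically close nodes hold similar weight vectors.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = data.shape
    # Initialize node weights randomly within the range of the data.
    weights = rng.uniform(data.min(0), data.max(0),
                          (grid_h, grid_w, n_features))
    # Lattice coordinates, used by the Gaussian neighborhood function.
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]

    for t in range(n_iters):
        x = data[rng.integers(n_samples)]            # random input vector
        # Best-matching unit: the node whose weights are closest to x.
        dist = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(dist.argmin(), dist.shape)
        # Exponentially decaying learning rate and neighborhood radius.
        frac = t / n_iters
        lr = lr0 * np.exp(-frac)
        sigma = sigma0 * np.exp(-frac)
        # Gaussian neighborhood on the lattice, centered at the BMU.
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        # Pull every node's weights toward x, scaled by the neighborhood.
        weights += lr * h[..., None] * (x - weights)
    return weights
```

After training, each eye can be mapped to its best-matching node, so that eyes with similar OCT parameter profiles land on nearby positions of the two-dimensional lattice.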
In the current study, the self-organizing map enabled us to integrate multiple OCT findings onto two-dimensional maps to visualize the local associations and dissociations between the individual parameters and to objectively classify all 336 eyes of patients with DR into five macular morphologic patterns.
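The subsequent clustering step could be sketched as follows, assuming the hypothetical `train_som` helper above and scikit-learn's KMeans; the choice of five clusters mirrors the five patterns reported here, but the feature names and pipeline are illustrative assumptions rather than the study's actual method.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_eyes(data, weights, n_patterns=5, seed=0):
    """Group SOM nodes into `n_patterns` clusters and assign each eye
    (row of `data`) the cluster label of its best-matching node."""
    grid_h, grid_w, n_features = weights.shape
    flat = weights.reshape(-1, n_features)
    # Cluster the node weight vectors; neighboring nodes tend to share a cluster.
    node_labels = KMeans(n_clusters=n_patterns, n_init=10,
                         random_state=seed).fit_predict(flat)
    labels = np.empty(len(data), dtype=int)
    for i, x in enumerate(data):
        bmu = np.linalg.norm(flat - x, axis=1).argmin()  # best-matching unit
        labels[i] = node_labels[bmu]
    return labels

# Hypothetical usage: 336 eyes described by OCT parameters.
# features = np.column_stack([central_thickness, srd_height, elm_status])
# weights = train_som(features)
# patterns = classify_eyes(features, weights, n_patterns=5)
```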