Abstract
Purpose:
Embedded computing systems with CPU-GPU shared memory can deliver a new generation of real-time algorithms by exploiting the advantages of both processor types in data-processing pipelines. This technology is particularly promising for ophthalmic instrumentation due to its low cost, small footprint, and support for high-level compiled and interpreted programming languages. Here we demonstrate pupil tracking in two such systems, for eventual real-time optical compensation of involuntary eye movement in scanning ophthalmoscopes.
Methods:
Pupil trackers were built using the embedded computers Jetson AGX Xavier and Jetson Orin Nano, both by Nvidia (Santa Clara, CA, USA), running Ubuntu Linux 20.04. Two cameras with USB interfaces, the BFS-U3-20S4M-C by Teledyne FLIR (Wilsonville, OR, USA) and the acA640-750um by Basler AG (Ahrensburg, Germany), were tested in a custom Scheimpflug off-axis optical setup with two 940 nm light-emitting diodes for illumination. The off-axis optical setup facilitates integration with existing ophthalmoscopes, while the 940 nm illumination mitigates photoreceptor responses for compatibility with psychophysical experiments and functional retinal imaging. A Python application with C++ modules was developed for real-time tracking of pupil location, size, and orientation, using previously described algorithms (PMCID: PMC8548015).
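As a rough illustration only (the published algorithms are those in PMCID: PMC8548015 and are not reproduced here), a single-frame pupil measurement of this kind might look like the following OpenCV-based sketch; the threshold value and largest-blob heuristic are assumptions made for the example.

```python
# Illustrative sketch of one pupil measurement step; NOT the published
# algorithm (PMCID: PMC8548015). Threshold and blob heuristic are assumed.
import cv2
import numpy as np

def measure_pupil(frame_gray: np.ndarray):
    """Return pupil (center, axes, angle) from one grayscale frame, or None."""
    # Under off-axis 940 nm illumination the pupil appears dark, so an
    # inverted binary threshold isolates it (threshold value assumed).
    _, mask = cv2.threshold(frame_gray, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)  # assume largest dark blob
    if len(pupil) < 5:  # cv2.fitEllipse requires at least 5 points
        return None
    # A single ellipse fit yields location, size, and orientation together.
    (cx, cy), (major, minor), angle = cv2.fitEllipse(pupil)
    return (cx, cy), (major, minor), angle
```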
Results:
The camera application programming interfaces (APIs) allow image access only after each frame is fully downloaded to RAM, forcing sequential, rather than overlapping, image download and processing. This results in longer total latencies than those of an FPGA-CPU computing platform (PMCID: PMC8548015) performing the exact same calculations (see Table 1). The new computing hardware performs parallel calculations without copying data between processor-specific RAM, reducing the calculation time of operations such as filtering manyfold.
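The abstract does not name the GPU libraries used, but the shared-memory benefit can be sketched with CuPy on a Jetson-class device, assuming CUDA managed memory so that the same buffer is addressable by CPU and GPU without an explicit host-to-device copy:

```python
# Hedged sketch of GPU filtering on a shared-memory Jetson; the library
# choice (CuPy) and all parameters are assumptions, not the authors' code.
import cupy as cp
from cupyx.scipy import ndimage as cnd

# Route CuPy allocations through cudaMallocManaged: on Jetson, CPU and GPU
# then address the same physical DRAM, avoiding host<->device copies.
cp.cuda.set_allocator(cp.cuda.MemoryPool(cp.cuda.malloc_managed).malloc)

frame = cp.random.randint(0, 256, (600, 800)).astype(cp.float32)  # stand-in frame
smoothed = cnd.gaussian_filter(frame, sigma=3.0)  # filtering runs as GPU kernels
cp.cuda.Stream.null.synchronize()  # wait for the kernels to finish
```

On a workstation with a discrete GPU, the same filter call would first require transferring the frame across PCIe; the shared-memory design removes that transfer from the latency budget.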
Conclusions:
Computing platforms substantially lower in cost ($1,000 AGX Xavier and $500 Orin Nano) than a previous FPGA + CPU combination ($8,000) were evaluated, together with cameras with USB interfaces. Although the total pupil-tracking latency is limited by the camera APIs, a latency of 2-3 ms was achieved, with potential for reducing calculation times through further algorithm optimization.
This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.