Abstract
Purpose:
Virtual reality (VR) head-mounted displays (HMDs) are an attractive method for displaying intrasurgical optical coherence tomography (OCT) volumes because they free surgeons from microscope oculars. We demonstrate real-time interactive manipulation and viewing of static and live OCT volumes in a commercial HTC Vive® immersive VR system.
Methods:
We designed a VR rendering pipeline built with CUDA, OpenGL, and OpenVR that supports interactive translation, rotation, and scaling of volumes, as well as volume sectioning with multiple cut planes. We modified previously reported raycasting techniques to generate rays from arbitrary projection matrices and to extensively exploit spatially cached texture memory in order to meet the Vive’s native refresh rate of 90 Hz. We used custom OpenGL shaders to composite the volumetric render image into a standard 3D scene using ray propagation depth. To allow full GPU occupancy for volumetric rendering without degrading the VR frame rate, we dedicated an NVIDIA GTX 1080 to raycasting and used an NVIDIA GTX 670 for compositing. We tested the pipeline using both static and live imaging.
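As an illustration of the ray-generation step described above, the sketch below unprojects each pixel through an arbitrary inverse view-projection matrix, which is what allows the raycaster to accept the per-eye asymmetric-frustum matrices an HMD runtime supplies. This is a minimal CPU-side sketch under assumed conventions (row-major matrices, clip-space z in [-1, 1]); the names `Mat4`, `Ray`, and `pixelToRay` are illustrative, not from the original pipeline.

```cpp
#include <cmath>

// Minimal 4-vector and row-major 4x4 matrix helpers (illustrative types,
// not from the original pipeline).
struct Vec4 { float x, y, z, w; };
struct Mat4 {
    float m[16]; // row-major
    Vec4 mul(const Vec4& v) const {
        return { m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
                 m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
                 m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
                 m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w };
    }
};
struct Ray { float ox, oy, oz; float dx, dy, dz; };

// Unproject a pixel through an arbitrary inverse view-projection matrix:
// map the pixel center to normalized device coordinates, lift points on
// the near and far clip planes back into world space, and take their
// difference as the ray direction. Works for any invertible projection,
// including asymmetric per-eye HMD frusta.
Ray pixelToRay(const Mat4& invViewProj, int px, int py, int width, int height) {
    float ndcX = 2.0f * (px + 0.5f) / width - 1.0f;
    float ndcY = 1.0f - 2.0f * (py + 0.5f) / height; // flip so +y is up
    Vec4 nearH = invViewProj.mul({ndcX, ndcY, -1.0f, 1.0f}); // near plane
    Vec4 farH  = invViewProj.mul({ndcX, ndcY,  1.0f, 1.0f}); // far plane
    float nx = nearH.x / nearH.w, ny = nearH.y / nearH.w, nz = nearH.z / nearH.w;
    float fx = farH.x / farH.w,   fy = farH.y / farH.w,   fz = farH.z / farH.w;
    float dx = fx - nx, dy = fy - ny, dz = fz - nz;
    float len = std::sqrt(dx*dx + dy*dy + dz*dz);
    return { nx, ny, nz, dx/len, dy/len, dz/len };
}
```

In a GPU implementation, one such ray per pixel would then be marched through the OCT volume stored in a 3D texture, where hardware texture caching provides the spatial locality the abstract exploits.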
Results:
Our VR rendering pipeline operated at 90 Hz without frame drops for a 1024x1327x128 static volume. Stereo raycasting at a resolution of 512x512 pixels had median and maximum runtimes of 7.3 ms and 10.3 ms, respectively, over 15,000 consecutive frames. For live imaging, concurrent raycasting and updating of the volume did not degrade rendering performance. The immersed user was able to view the volume from any perspective by changing head orientation and by walking around or through the volume. Using the interactive features, the user was able to readily apply cut planes and manipulate the volume’s pose and scale.
Conclusions:
We have demonstrated the viability of HMDs for real-time visualization of OCT volumes and developed an interactive VR OCT volume viewer. VR OCT viewing improves upon intrasurgical heads-up displays with interactivity, full field of view display, and unrestricted head position and orientation.
This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.