Application of 3D Gaussian Splatting for Cinematic Anatomy on Consumer Class Devices

Simon Niedermayr1, Christoph Neuhauser1, Kaloian Petkov2, Klaus Engel2, Rüdiger Westermann1
1Technical University of Munich, 2Siemens Healthineers

Abstract

Interactive photorealistic rendering of 3D anatomy is used in medical education to explain the structure of the human body. It is currently restricted to frontal teaching scenarios: even with a powerful GPU and high-speed access to the large storage device hosting the data set, interactive demonstrations can hardly be achieved. We present the use of novel view synthesis via compressed 3D Gaussian Splatting (3DGS) to overcome this restriction, and even to enable students to perform cinematic anatomy on lightweight mobile devices.
Our proposed pipeline first finds a set of camera poses that captures all potentially seen structures in the data. High-quality images are then generated with path tracing and converted into a compact 3DGS representation, consuming < 70 MB even for data sets of multiple GBs. This allows for real-time photorealistic novel view synthesis that recovers structures up to the voxel resolution and is almost indistinguishable from the path-traced images.

Results

Real-time novel view synthesis results for different scenes and presets.

Method

Our pipeline starts by reading a medical dataset, for which the user then selects one or more so-called presets. A preset includes the transfer function setting as well as material classifications and fixed clip planes that are used to reveal certain anatomical structures. For each preset, a set of views capturing all potentially seen structures in the data at varying resolution is computed. In this way, even structures that are not visible from camera positions on a surrounding sphere are recovered in the final object representation.
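As a minimal illustration of the view-selection step, the sketch below distributes camera positions evenly on a surrounding sphere (using a Fibonacci spiral, which is an assumption; the paper's method additionally places views to capture structures not visible from the sphere) and orients each camera toward the volume center:

```python
import numpy as np

def fibonacci_sphere_views(n: int, radius: float = 1.0) -> np.ndarray:
    """Distribute n camera positions quasi-uniformly on a sphere."""
    i = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    y = 1.0 - 2.0 * (i + 0.5) / n          # latitude in (-1, 1)
    r = np.sqrt(1.0 - y * y)               # radius of the latitude circle
    theta = golden_angle * i
    pts = np.stack([r * np.cos(theta), y, r * np.sin(theta)], axis=1)
    return radius * pts

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
    """World-to-camera rotation whose -z axis points from eye to target."""
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    return np.stack([right, true_up, -fwd])  # rows: camera axes

views = fibonacci_sphere_views(256, radius=2.5)
rotations = [look_at(v) for v in views]
```

Each pose (position plus rotation) would then be handed to the path tracer as a render request for the active preset.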

These views are handed over to a physically-based renderer, i.e., a volumetric path tracer, which renders one image for every view using the corresponding preset. Once all images for a selected preset have been rendered, 3DGS is used to generate a set of 3D Gaussians with shape and appearance attributes such that their rendering matches the given images. After the Gaussians have been optimized via differentiable rendering, they are compressed using sensitivity-aware vector quantization and entropy encoding.

The final compressed 3DGS representation is rendered with WebGPU using GPU sorting and rasterization of projected 2D splats, with a pixel shader that evaluates and blends the 2D projections in image space.
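A CPU reference for that rasterization step is sketched below: sort the projected 2D splats front-to-back by view depth, then alpha-blend each Gaussian footprint in image space (the WebGPU version performs the sort on the GPU and the blending in a pixel shader). This is an illustrative loop over full-image footprints, not the tiled GPU implementation:

```python
import numpy as np

def render_splats(means2d, covs2d, colors, opacities, depths, H, W):
    """Blend projected 2D Gaussian splats front-to-back into an image."""
    order = np.argsort(depths)           # front-to-back by view depth
    img = np.zeros((H, W, 3))
    trans = np.ones((H, W))              # per-pixel accumulated transmittance
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys], axis=-1).astype(float)
    for i in order:
        d = pix - means2d[i]             # pixel offsets from the splat mean
        inv = np.linalg.inv(covs2d[i])
        # 2D Gaussian falloff: exp(-0.5 * d^T Sigma^{-1} d)
        q = (d @ inv * d).sum(-1)
        alpha = np.clip(opacities[i] * np.exp(-0.5 * q), 0.0, 0.999)
        img += (trans * alpha)[..., None] * colors[i]
        trans *= 1.0 - alpha
    return img

# single red splat centered at pixel (4, 4)
img = render_splats(
    means2d=np.array([[4.0, 4.0]]),
    covs2d=np.array([[[2.0, 0.0], [0.0, 2.0]]]),
    colors=np.array([[1.0, 0.0, 0.0]]),
    opacities=np.array([0.9]),
    depths=np.array([1.0]),
    H=8, W=8,
)
```

Front-to-back compositing with an explicit transmittance buffer mirrors the order-dependent blending the pixel shader performs after the GPU sort.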


Interactive Web Demo

Examples

Image comparisons between the path-traced images and our reconstruction. All images are from the test set.

Ours      Ground Truth
69 MB     36.4 GB
31 MB     3.6 GB
7.8 MB    200 MB
2.1 MB    64 MB

BibTeX

@misc{niedermayr2024novel,
    title={Application of 3D Gaussian Splatting for Cinematic Anatomy on Consumer Class Devices},
    author={Simon Niedermayr and Christoph Neuhauser and Kaloian Petkov and Klaus Engel and Rüdiger Westermann},
    year={2024},
    eprint={2404.11285},
    archivePrefix={arXiv},
    primaryClass={cs.GR}
}