Immersive Light Field Video
with a Layered Mesh Representation


SIGGRAPH 2020 Technical Paper

Michael Broxton*, John Flynn*, Ryan Overbeck*, Daniel Erickson*, Peter Hedman, Matthew DuVall, Jason Dourgarian, Jay Busch, Matt Whalen, Paul Debevec

* - Denotes equal contribution.

Google LLC

Download .pdf

Click to download a PDF of the paper.

Abstract

We present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. We record immersive light fields using a custom array of 46 time-synchronized cameras distributed on the surface of a hemispherical, 92 cm diameter dome. From this data we produce 6DOF volumetric videos with a wide 80 cm viewing baseline, 10 pixels per degree angular resolution, and a wide field of view (>220 degrees), at 30 fps video frame rates. Even though the cameras are placed 18 cm apart on average, our system can reconstruct objects as close as 20 cm to the camera rig. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells which are better suited for representing panoramic light field content. We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final, compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser.
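As a rough illustration of how a layered representation like this is rendered, each layer contributes RGBA values that are blended back-to-front with the standard "over" operator. The sketch below (NumPy; the `composite_layers` helper is hypothetical, not the paper's actual renderer, which resamples textured spherical meshes per view) shows the compositing step in isolation.

```python
import numpy as np

def composite_layers(rgba_layers):
    """Back-to-front "over" compositing of RGBA layers.

    rgba_layers: list of (H, W, 4) float arrays with premultiplication
    not applied, ordered from farthest to nearest layer, as in an
    MPI/MSI-style layered scene representation.
    """
    h, w, _ = rgba_layers[0].shape
    out = np.zeros((h, w, 3))
    for layer in rgba_layers:  # far -> near
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        # "Over" operator: the nearer layer occludes what is behind it
        # in proportion to its alpha.
        out = rgb * alpha + out * (1.0 - alpha)
    return out
```

In the real system this blend happens on the GPU while rasterizing the layered mesh, but the math per pixel is the same.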

A single frame from a video light field showing a geometrically complex workshop scene with reflections and sparks.
Scroll down to the example scenes at the bottom of this page to view the full video light field in your browser.

Example Light Fields

Below are several light field videos that can be explored interactively in your web browser. Click on a scene's thumbnail to view the light field in motion. Additionally, the links below the thumbnail allow you to explore the intermediate scene representations used in our light field video processing pipeline. These include:

MSI: a Multi-Sphere Image.
LM: a Layered Mesh with individual layer textures.
LM-TA: a Layered Mesh with a texture atlas.

Please see the paper manuscript for more information about these representations.
Note: in order to reduce download times the examples below are at lower resolution than the main results shown in our paper. Please see our video for full resolution results.

All examples are available in the following resolutions:

Low-Res: a web and mobile friendly resolution.
High-Res: better suited to workstations and laptops with powerful GPUs and a high-bandwidth internet connection.

BibTeX
@article{broxton2020immersive,
  title = {Immersive Light Field Video with a Layered Mesh Representation},
  author = {Michael Broxton and John Flynn and Ryan Overbeck and Daniel Erickson and Peter Hedman and Matthew DuVall and Jason Dourgarian and Jay Busch and Matt Whalen and Paul Debevec},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
  publisher = {ACM},
  volume = {39},
  number = {4},
  pages = {86:1--86:15},
  year = {2020}
}