Immersive Light Field Video
with a Layered Mesh Representation


SIGGRAPH 2020 Technical Paper

Michael Broxton*, John Flynn*, Ryan Overbeck*, Daniel Erickson*, Peter Hedman, Matthew DuVall, Jason Dourgarian, Jay Busch, Matt Whalen, Paul Debevec

* Denotes equal contribution.

Google LLC

Download .pdf

Click to download a PDF of the paper.

Abstract

We present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. We record immersive light fields using a custom array of 46 time-synchronized cameras distributed on the surface of a hemispherical, 92cm diameter dome. From this data we produce 6DOF volumetric videos with a wide 80-cm viewing baseline, 10 pixels per degree angular resolution, and a wide field of view (>220 degrees), at 30fps video frame rates. Even though the cameras are placed 18cm apart on average, our system can reconstruct objects as close as 20cm to the camera rig. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells which are better suited for representing panoramic light field content. We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final, compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser.
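
To make the layered representation concrete, here is a minimal sketch (ours, not code from the paper's pipeline) of the standard back-to-front "over" compositing that combines a stack of RGBA layer samples along a single viewing ray. The sample values and the far-to-near ordering convention are illustrative assumptions.

// Sketch only: back-to-front "over" compositing of RGBA layer samples
// along a single viewing ray. Assumes `samples` is ordered from the
// farthest layer to the nearest, with straight (non-premultiplied) alpha.
function compositeOver(samples) {
  let r = 0, g = 0, b = 0;
  for (const s of samples) {
    // Each nearer sample partially covers what has accumulated behind it.
    r = s.r * s.a + r * (1 - s.a);
    g = s.g * s.a + g * (1 - s.a);
    b = s.b * s.a + b * (1 - s.a);
  }
  return { r, g, b };
}

// Example: a semi-transparent near layer over an opaque far layer.
console.log(compositeOver([
  { r: 0.2, g: 0.4, b: 0.9, a: 1.0 },  // far shell (opaque background)
  { r: 1.0, g: 0.6, b: 0.1, a: 0.25 }, // near shell (semi-transparent)
]));

On the GPU, the same blend is obtained by rasterizing the textured layers far-to-near with standard alpha blending enabled.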

A single frame from a video light field showing a geometrically complex workshop scene with reflections and sparks.
Scroll down to the example scenes at the bottom of this page to view the full video light field in your browser.

VR Headset Demo

Our VR headset demo is currently available for PC-based VR headsets (e.g. Oculus Rift, Oculus Rift S, Oculus Quest using Oculus Link, HTC Vive, or Valve Index).

Click to download the VR demo.

To run the demo, extract the .zip file and execute the contained DeepViewVideo.exe.

Be sure to extract to a directory whose path contains no spaces: we have received reports that spaces in the directory path cause the demo to render only a black screen.

Web Examples

Below are several light field videos that can be explored interactively in your web browser. Click on a scene's thumbnail to view the light field in motion. Additionally, the links below the thumbnail allow you to explore the intermediate scene representations used in our light field video processing pipeline. These include:

MSI: a Multi-Sphere Image.
LM: a Layered Mesh with individual layer textures.
LM-TA: a Layered Mesh with a texture atlas.

Please see the paper manuscript for more information about these representations.
Note: to reduce download times, the examples below are at a lower resolution than the main results shown in our paper. Please see our video for full-resolution results.
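
As an aside on the LM vs. LM-TA distinction: with a texture atlas, every layer's texture becomes a tile inside one shared image, so a layer-local UV coordinate must be remapped into atlas space. The sketch below assumes a simple uniform grid of equal-size tiles, a hypothetical layout chosen for illustration; see the paper for the atlas packing we actually use.

// Sketch only: remap a layer-local UV into texture-atlas UV space,
// assuming (hypothetically) that layer textures are packed as equal-size
// tiles in a `cols` x `rows` grid, in layer-index order.
function layerUvToAtlasUv(layerIndex, u, v, cols, rows) {
  const col = layerIndex % cols;
  const row = Math.floor(layerIndex / cols);
  return {
    u: (col + u) / cols, // shift into this tile's horizontal span
    v: (row + v) / rows, // shift into this tile's vertical span
  };
}

// Example: the center of layer 5 in a 4x2 atlas.
console.log(layerUvToAtlasUv(5, 0.5, 0.5, 4, 2)); // { u: 0.375, v: 0.75 }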

All examples are available in the following resolutions:

Low-Res: a web- and mobile-friendly resolution.
High-Res: better for workstations and laptops with beefy GPUs and a high-bandwidth internet connection.

For each scene we have also made the raw video data and camera models from our 46-camera rig available to download. Click here to learn more. If you want to learn how to write your own JavaScript viewer for our layered mesh format, take a look at this Simple Viewer. In the Chrome web browser, open the developer console to browse and live-edit the JavaScript code.
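
If you would rather start from scratch than from the Simple Viewer, the sketch below shows the general shape of an MSI-style renderer in three.js: concentric, alpha-blended spheres textured from video, drawn back to front. The shell radii, per-shell video files, and file names are illustrative assumptions; our actual format packs all layers into a single texture-atlas video and replaces the full spheres with per-layer meshes.

// Sketch only: an MSI-style viewer using three.js. Assumes one
// alpha-capable video per shell (hypothetical URLs); the real layered
// mesh format instead packs all layers into one texture-atlas video.
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
    75, window.innerWidth / window.innerHeight, 0.01, 100);

// Hypothetical shell radii (meters) and per-shell video files.
const shells = [
  { radius: 8.0, src: 'shell_far.webm' },
  { radius: 2.0, src: 'shell_mid.webm' },
  { radius: 0.5, src: 'shell_near.webm' },
];

for (const shell of shells) {
  const video = document.createElement('video');
  video.src = shell.src;
  video.loop = true;
  video.muted = true; // required for autoplay in most browsers
  video.play();

  const mesh = new THREE.Mesh(
      new THREE.SphereGeometry(shell.radius, 64, 32),
      new THREE.MeshBasicMaterial({
        map: new THREE.VideoTexture(video),
        transparent: true,   // blend shells using their alpha channels
        depthWrite: false,   // let near shells blend over far ones
        side: THREE.BackSide // the spheres are viewed from inside
      }));
  mesh.renderOrder = -shell.radius; // draw farthest shells first
  scene.add(mesh);
}

renderer.setAnimationLoop(() => renderer.render(scene, camera));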

We presented Brutus, a newer, higher-resolution 24-camera hemispherical light field camera rig, at the CVPR 2021 Workshop on Computational Cameras and Displays; the two-page abstract is here:

Jay Busch, Peter Hedman, Matthew DuVall, Matt Whalen, Michael Broxton, John Flynn, Ryan Overbeck, Daniel Erickson and Paul Debevec. Brutus: A Mid-Range Multi-Camera Array for Immersive Light Field Video Capture. CVPR Workshop on Computational Cameras and Displays, June 2021.

BibTeX
@article{broxton2020immersive,
  title     = {Immersive Light Field Video with a Layered Mesh Representation},
  author    = {Michael Broxton and John Flynn and Ryan Overbeck and Daniel Erickson and Peter Hedman and Matthew DuVall and Jason Dourgarian and Jay Busch and Matt Whalen and Paul Debevec},
  journal   = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
  publisher = {ACM},
  volume    = {39},
  number    = {4},
  pages     = {86:1--86:15},
  year      = {2020}
}