Paper (PDF)
Supplemental (PDF)
Poster (PDF)
Network diagram (PDF)
Testing data from our paper is coming soon.
Abstract
We present a learning-based method to infer plausible high dynamic range (HDR), omnidirectional illumination given an unconstrained, low dynamic range (LDR) image from a mobile phone camera with a limited field of view (FOV). For training data, we collect videos of various reflective spheres placed within the camera's FOV, leaving most of the background unoccluded, leveraging the fact that materials with diverse reflectance functions reveal different lighting cues in a single exposure. We train a deep neural network to regress from the LDR background image to HDR lighting by matching the LDR ground truth sphere images to those rendered with the predicted illumination using image-based relighting, which is differentiable. Our inference runs at interactive frame rates on a mobile device, enabling realistic rendering of virtual objects into real scenes for mobile mixed reality. Training on automatically exposed and white-balanced videos, we improve the realism of rendered objects compared to state-of-the-art methods for both indoor and outdoor scenes.
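The training signal hinges on image-based relighting being linear and differentiable: because a sphere's appearance is linear in the incident lighting, the sphere rendered under the predicted HDR environment is a weighted sum of basis images, one per discrete light direction, and the loss compares that rendering against the LDR ground-truth photograph of the sphere. Below is a minimal NumPy sketch of that idea; the names, shapes, clipping, and gamma value are illustrative assumptions rather than the paper's exact formulation, and a real training pipeline would express these operations in an autodiff framework (e.g. TensorFlow or JAX) so gradients flow back to the network.

```python
import numpy as np

# Hypothetical shapes for illustration only.
NUM_DIRS = 32 * 32   # assumed number of discrete light directions
H = W = 64           # assumed resolution of the sphere crop

def render_sphere_ibl(pred_hdr, basis_images):
    """Image-based relighting: the sphere's appearance under the
    predicted HDR environment is a per-channel weighted sum of its
    appearance under each basis light direction.

    pred_hdr:     (NUM_DIRS, 3)        predicted HDR intensity per direction
    basis_images: (NUM_DIRS, H, W, 3)  sphere lit by one basis light each
    """
    # Sum over directions d, matching color channels c.
    return np.einsum('dc,dhwc->hwc', pred_hdr, basis_images)

def ldr_matching_loss(pred_hdr, basis_images, ldr_sphere_gt, gamma=2.2):
    """Compare the relit sphere to the LDR ground-truth photo.
    Clipping and gamma encoding approximate the camera's LDR capture;
    every step here is differentiable, so in an autodiff framework
    gradients would reach pred_hdr and hence the network."""
    rendered = render_sphere_ibl(pred_hdr, basis_images)
    rendered_ldr = np.clip(rendered, 0.0, 1.0) ** (1.0 / gamma)
    return float(np.mean(np.abs(rendered_ldr - ldr_sphere_gt)))
```

Expressing the renderer as a single weighted sum is what keeps inference and training cheap: no ray tracing is needed at train time, only a linear combination of precomputed basis appearances.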
Example Images
Some images from the paper, shown at higher resolution.
Example 01 (Diffuse): unseen input, real object, ground truth HDR IBL, ours, Hold-Geoffroy et al.
Example 01 (Metallic): unseen input, real object, ground truth HDR IBL, ours, Hold-Geoffroy et al.
Example 02 (Diffuse): unseen input, real object, ground truth HDR IBL, ours, Hold-Geoffroy et al.
Example 02 (Metallic): unseen input, real object, ground truth HDR IBL, ours, Hold-Geoffroy et al.
Example 03 (Diffuse): unseen input, real object, ground truth HDR IBL, ours, Gardner et al.
Example 03 (Metallic): unseen input, real object, ground truth HDR IBL, ours, Gardner et al.
DeepLight at work
Google I/O talk: "Increasing AR Realism with Lighting" with Wan-Chun Ma and Konstantine Tsotsos.