Paul Debevec
Survey of Video Diffusion Models: Foundations, Implementations, and Applications
Recent advances in diffusion models have revolutionized video generation, offering superior temporal consistency and visual quality compared to traditional generative adversarial network (GAN)-based approaches. While this emerging field shows t…
FlashDepth: Real-time Streaming Video Depth Estimation at 2K Resolution
A versatile video depth estimation model should (1) be accurate and consistent across frames, (2) produce high-resolution depth maps, and (3) support real-time streaming. We propose FlashDepth, a method that satisfies all three requirement…
Self-Calibrating Gaussian Splatting for Large Field of View Reconstruction
In this paper, we present a self-calibrating framework that jointly optimizes camera parameters, lens distortion and 3D Gaussian representations, enabling accurate and efficient scene reconstruction. In particular, our technique enables hi…
Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped Noise
Generative modeling aims to transform random noise into structured outputs. In this work, we enhance video diffusion models by allowing motion control via structured latent noise sampling. This is achieved by just a change in data: we pre-…
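For illustration only, the PyTorch sketch below shows what warping a per-frame noise tensor along a dense optical-flow field could look like. The function name, tensor shapes, and flow convention are assumptions, and the paper's real-time noise warping is designed to keep the warped noise properly Gaussian-distributed, which this naive bilinear resampling does not guarantee.

```python
# Illustrative sketch only: bilinearly resampling a Gaussian noise tensor
# along a flow field. Not the paper's distribution-preserving algorithm.
import torch
import torch.nn.functional as F

def warp_noise(noise, flow):
    """noise: (1, C, H, W) Gaussian noise; flow: (1, 2, H, W) in pixels (dx, dy)."""
    _, _, H, W = noise.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    # Build a sampling grid in grid_sample's normalized [-1, 1] coordinates.
    grid_x = (xs + flow[0, 0]) / (W - 1) * 2 - 1
    grid_y = (ys + flow[0, 1]) / (H - 1) * 2 - 1
    grid = torch.stack([grid_x, grid_y], dim=-1).unsqueeze(0)  # (1, H, W, 2)
    return F.grid_sample(noise, grid, mode="bilinear", align_corners=True)
```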
DifFRelight: Diffusion-Based Facial Performance Relighting
We present a novel framework for free-viewpoint facial performance relighting using diffusion-based image-to-image translation. Leveraging a subject-specific dataset containing diverse facial expressions captured under various lighting…
Fitting Spherical Gaussians to Dynamic HDRI Sequences
We present a technique for fitting high dynamic range illumination (HDRI) sequences using anisotropic spherical Gaussians (ASGs) while preserving temporal consistency in the compressed HDRI maps. Our approach begins with an optimization…
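As a rough illustration of the kind of fitting involved, the PyTorch sketch below fits a small mixture of isotropic spherical Gaussians to a single equirectangular HDR frame by gradient descent, with a quadratic penalty toward the previous frame's parameters as a stand-in for temporal consistency. The paper uses anisotropic spherical Gaussians and its own optimization, so the function names, lobe count, and weights here are assumptions, not the published method.

```python
# Minimal sketch: fit isotropic spherical Gaussians to one HDRI frame.
import torch

def sphere_dirs(H, W):
    """Unit direction for each texel of an equirectangular map."""
    theta = (torch.arange(H) + 0.5) / H * torch.pi          # polar angle
    phi = (torch.arange(W) + 0.5) / W * 2 * torch.pi        # azimuth
    t, p = torch.meshgrid(theta, phi, indexing="ij")
    return torch.stack([torch.sin(t) * torch.cos(p),
                        torch.sin(t) * torch.sin(p),
                        torch.cos(t)], dim=-1)               # (H, W, 3)

def render_sg(dirs, mu, lam, amp):
    """Evaluate a mixture of isotropic spherical Gaussian lobes."""
    mu = torch.nn.functional.normalize(mu, dim=-1)           # (K, 3) lobe axes
    cos = torch.einsum("hwc,kc->hwk", dirs, mu)              # (H, W, K)
    w = torch.exp(lam.clamp(min=1.0) * (cos - 1.0))          # lobe falloff
    return torch.einsum("hwk,kc->hwc", w, amp)               # (H, W, 3)

def fit_frame(env, prev=None, K=16, steps=500, t_weight=0.1):
    """env: (H, W, 3) HDR frame; prev: parameters from the previous frame."""
    H, W, _ = env.shape
    dirs = sphere_dirs(H, W)
    mu = torch.randn(K, 3, requires_grad=True)
    lam = torch.full((K,), 10.0, requires_grad=True)
    amp = torch.rand(K, 3, requires_grad=True)
    opt = torch.optim.Adam([mu, lam, amp], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        loss = (render_sg(dirs, mu, lam, amp) - env).abs().mean()
        if prev is not None:  # simple temporal-consistency penalty
            loss = loss + t_weight * sum((a - b).pow(2).mean()
                                         for a, b in zip((mu, lam, amp), prev))
        loss.backward()
        opt.step()
    return [p.detach() for p in (mu, lam, amp)]
```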
Magenta Green Screen: Spectrally Multiplexed Alpha Matting with Deep Colorization
We introduce Magenta Green Screen, a novel machine learning-enabled matting technique for recording the color image of a foreground actor and a simultaneous high-quality alpha channel without requiring a special camera or manual keying te…
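The spectral idea can be pictured with a tiny sketch (not the paper's pipeline): if the actor is lit only by red and blue (magenta) light in front of a green-emitting background, the camera's green channel records essentially only the background, so a matte can be read off from it, and a learned colorization step would then restore the foreground's missing green channel. The clean-plate input and function name below are assumptions for illustration.

```python
# Illustrative sketch only, not the paper's matting pipeline.
import numpy as np

def magenta_green_alpha(frame, clean_plate_green, eps=1e-6):
    """frame: (H, W, 3) linear RGB of a magenta-lit actor against the green
    background; clean_plate_green: (H, W) green channel of the empty
    background (an assumed auxiliary capture)."""
    g = frame[..., 1]
    # Where the actor occludes the green background, the green channel drops,
    # so the occlusion of the background gives the alpha directly.
    alpha = 1.0 - np.clip(g / np.maximum(clean_plate_green, eps), 0.0, 1.0)
    return alpha
```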
Jointly Optimizing Color Rendition and In-Camera Backgrounds in an RGB Virtual Production Stage
While the LED panels used in virtual production systems can display vibrant imagery with a wide color gamut, they produce problematic color shifts when used as lighting due to their peaky spectral output from narrow-band red, green, and bl…
HDR Lighting Dilation for Dynamic Range Reduction on Virtual Production Stages
We present a technique to reduce the dynamic range of an HDRI lighting environment map in an efficient, energy-preserving manner by spreading out the light of concentrated light sources. This allows us to display a reasonable approximation…
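One plausible way to picture such an energy-preserving dilation is the toy sketch below: texels above a display ceiling are clipped, and the clipped energy is redistributed to their neighborhood with a blur, repeating until everything fits. This is an assumption-laden illustration rather than the paper's algorithm; among other things it ignores the varying solid angle of equirectangular texels and treats the map as fully periodic.

```python
# Toy sketch of energy-preserving dilation; not the paper's algorithm.
import numpy as np
from scipy.ndimage import gaussian_filter

def dilate_hdr(env, ceiling, sigma=15.0, iters=50):
    """env: (H, W, 3) linear HDR environment map; ceiling: peak value the
    LED wall can display. The sum of pixel values is preserved at each step."""
    out = env.astype(np.float64).copy()
    for _ in range(iters):
        excess = np.maximum(out - ceiling, 0.0)     # energy above the limit
        if excess.sum() < 1e-6:
            break
        out = np.minimum(out, ceiling)              # clip the peaks
        # Spread the clipped energy over neighboring texels (channel axis untouched).
        out += gaussian_filter(excess, sigma=(sigma, sigma, 0.0), mode="wrap")
    return out
```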
Baking Neural Radiance Fields for Real-Time View Synthesis
Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved vi…
Total relighting
We propose a novel system for portrait relighting and background replacement, which maintains high-frequency boundary details and accurately synthesizes the subject's appearance as lit by novel illumination, thereby producing realistic com…
NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination
We address the problem of recovering the shape and spatially-varying reflectance of an object from multi-view images (and their camera poses) of an object illuminated by one unknown lighting condition. This enables the rendering of novel v…
A New Dimension in Testimony: Relighting Video with Reflectance Field Exemplars
We present a learning-based method for estimating the 4D reflectance field of a person given video footage of the same subject illuminated in a flat-lit environment. For training data, we use one light at a time to illuminate the subject an…
Learning Illumination from Diverse Portraits
We present a learning-based technique for estimating high dynamic range (HDR), omnidirectional illumination from a single low dynamic range (LDR) portrait image captured under arbitrary indoor or outdoor lighting conditions. We train our m…
Light Stage Super-Resolution: Continuous High-Frequency Relighting
The light stage has been widely used in computer graphics for the past two decades, primarily to enable the relighting of human faces. By capturing the appearance of the human subject under different light sources, one obtains the light tr…
Immersive light field video with a layered mesh representation
We present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlyin…
Deep Relightable Textures: Volumetric Performance Capture with Neural Rendering
The increasing demand for 3D content in augmented and virtual reality has motivated the development of volumetric performance capture systems such as the Light Stage. Recent advances are pushing free viewpoint relightable videos of dynamic…
The relightables
We present "The Relightables", a volumetric capture system for photorealistic and high quality relightable full-body performance capture. While significant progress has been made on volumetric capture systems, focusing on 3D geometric reco…
Deep reflectance fields
We present a novel technique to relight images of human faces by learning a model of facial reflectance from a database of 4D reflectance field data of several subjects in a variety of expressions and viewpoints. Using our learned model, a…
Single image portrait relighting
Lighting plays a central role in conveying the essence and depth of the subject in a portrait photograph. Professional photographers will carefully control the lighting in their studio to manipulate the appearance of their subject, while c…
DeepView: View Synthesis with Learned Gradient Descent
We present a novel approach to view synthesis using multiplane images (MPIs). Building on recent advances in learned gradient descent, our algorithm generates an MPI from a set of sparse camera viewpoints. The resulting method incorporates…
DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality
We present a learning-based method to infer plausible high dynamic range (HDR), omnidirectional illumination given an unconstrained, low dynamic range (LDR) image from a mobile phone camera with a limited field of view (FOV). For training …
A system for acquiring, processing, and rendering panoramic light field stills for virtual reality
We present a system for acquiring, processing, and rendering panoramic light field still photography for display in Virtual Reality (VR). We acquire spherical light field datasets with two novel light field camera rigs designed for portabl…