Peter Hedman
4-LEGS: 4D Language Embedded Gaussian Splatting
The emergence of neural representations has revolutionized our means for digitally viewing a wide range of 3D scenes, enabling the synthesis of photorealistic images rendered from novel views. Recently, several techniques have been propose…
EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering. Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive-based representation al…
Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering
State-of-the-art techniques for 3D reconstruction are largely based on volumetric scene representations, which require sampling multiple points to compute the color arriving along a ray. Using these representations for more general inverse…
SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration
Recent techniques for real-time view synthesis have rapidly advanced in fidelity and speed, and modern methods are capable of rendering near-photorealistic scenes at interactive frame rates. At the same time, a tension has arisen between e…
InterNeRF: Scaling Radiance Fields via Parameter Interpolation
Neural Radiance Fields (NeRFs) have unmatched fidelity on large, real-world scenes. A common approach for scaling NeRFs is to partition the scene into regions, each of which is assigned its own parameters. When implemented naively, such an…
NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections
Neural Radiance Fields (NeRFs) typically struggle to reconstruct and render highly specular objects, whose appearance varies quickly with changes in viewpoint. Recent works have improved NeRF's ability to render detailed specular appearanc…
Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis
While surface-based view synthesis algorithms are appealing due to their low computational requirements, they often struggle to reproduce thin structures. In contrast, more expensive methods that model the scene's geometry as a volumetric …
Inpaint3D: 3D Scene Content Generation using 2D Inpainting Diffusion
This paper presents a novel approach to inpainting 3D regions of a scene, given masked multi-view images, by distilling a 2D diffusion model into a learned 3D scene representation (e.g. a NeRF). Unlike 3D generative methods that explicitly…
BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis
We present a method for reconstructing high-quality meshes of large unbounded real-world scenes suitable for photorealistic novel view synthesis. We first optimize a hybrid neural volume-surface scene representation designed to have well-b…
Eclipse: Disambiguating Illumination and Materials using Unintended Shadows
Decomposing an object's appearance into representations of its materials and the surrounding illumination is difficult, even when the object's 3D shape is known beforehand. This problem is especially challenging for diffuse objects: it is …
Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
Neural Radiance Field training can be accelerated through the use of grid-based representations in NeRF's learned mapping from spatial coordinates to colors and volumetric density. However, these grid-based approaches lack an explicit unde…
Vox-E: Text-guided Voxel Editing of 3D Objects
Large scale text-guided diffusion models have garnered significant attention due to their ability to synthesize diverse images that convey complex visual concepts. This generative power has more recently been leveraged to perform text-to-3…
MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes
Neural radiance fields enable state-of-the-art photorealistic view synthesis. However, existing radiance field representations are either too compute-intensive for real-time rendering or require too much memory to scale to large scenes. We…
AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training
Neural Radiance Fields (NeRFs) are a powerful representation for modeling a 3D scene as a continuous function. Though NeRF is able to render complex 3D scenes with view-dependent effects, few efforts have been devoted to exploring its limi…
MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures
Neural Radiance Fields (NeRFs) have demonstrated amazing ability to synthesize images of 3D scenes from novel views. However, they rely upon specialized volumetric rendering algorithms based on ray marching that are mismatched to the capab…
Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields
Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at…
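The abstract above describes NeRF's core representation: MLPs that output volume density and view-dependent radiance, which are composited along camera rays. As background (this is the standard NeRF quadrature, not the Ref-NeRF method itself; the function and argument names below are illustrative), a minimal NumPy sketch:

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """Alpha-composite density/color samples along one ray.

    Standard NeRF quadrature:
      alpha_i  = 1 - exp(-sigma_i * delta_i)
      T_i      = prod_{j<i} (1 - alpha_j)   (transmittance)
      pixel    = sum_i T_i * alpha_i * c_i
    sigmas: (N,) densities, colors: (N, 3) RGB, deltas: (N,) segment lengths.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# A single fully opaque red sample dominates the ray.
pixel = composite(np.array([1e9]), np.array([[1.0, 0.0, 0.0]]), np.array([1.0]))
```

Samples behind an opaque one receive near-zero transmittance, which is why the `cumprod` runs over `1 - alphas` of the preceding samples only.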
NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images
Neural Radiance Fields (NeRF) is a technique for high quality novel view synthesis from a collection of posed input images. Like most view synthesis methods, NeRF uses tonemapped low dynamic range (LDR) images as input; these images have been pro…
Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist …
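One way unbounded scenes are made tractable is the scene contraction described in the Mip-NeRF 360 paper, which maps all of space into a ball of radius 2. A minimal NumPy sketch of that mapping (the function name is illustrative; this is background, not the full method):

```python
import numpy as np

def contract(x: np.ndarray) -> np.ndarray:
    """Contract unbounded 3D points into a ball of radius 2.

    Points inside the unit ball are unchanged; points outside are
    squashed as (2 - 1/||x||) * (x/||x||), so infinity maps to radius 2.
    Assumes x is nonzero on the outside branch.
    """
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    return np.where(norm <= 1.0, x, (2.0 - 1.0 / norm) * (x / norm))

far = contract(np.array([100.0, 0.0, 0.0]))   # pulled in near radius 2
near = contract(np.array([0.5, 0.0, 0.0]))    # left unchanged
```

Because distant content is compressed smoothly rather than clipped, a fixed-resolution grid over the contracted ball can represent both foreground and background.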
Baking Neural Radiance Fields for Real-Time View Synthesis
Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved vi…
Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at…
HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields
Neural Radiance Fields (NeRF) are able to reconstruct scenes with unprecedented fidelity, and various recent works have extended NeRF to handle dynamic scenes. A common approach to reconstruct such non-rigid scenes is through the use of a …
Immersive light field video with a layered mesh representation
We present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlyin…
Image-Based Rendering of Cars using Semantic Labels and Approximate Reflection Flow
Image-Based Rendering (IBR) has made impressive progress towards highly realistic, interactive 3D navigation for many scenes, including cityscapes. However, cars are ubiquitous in such scenes; multi-view stereo reconstruction provides prox…
Deep blending for free-viewpoint image-based rendering
Free-viewpoint image-based rendering (IBR) is a standing challenge. IBR methods combine warped versions of input photos to synthesize a novel view. The image quality of this combination is directly affected by geometric inaccuracies of mul…
Instant 3D photography
We present an algorithm for constructing 3D panoramas from a sequence of aligned color-and-depth image pairs. Such sequences can be conveniently captured using dual lens cell phone cameras that reconstruct depth maps from synchronized ster…