Tim Dockhorn
FLUX.1 Kontext: Flow Matching for In-Context Image Generation and Editing in Latent Space
We present evaluation results for FLUX.1 Kontext, a generative flow matching model that unifies image generation and editing. The model generates novel output views by incorporating semantic context from text and image inputs. Using a simp…
Stable Video Diffusion
We present Stable Video Diffusion - a latent video diffusion model for high-resolution, state-of-the-art text-to-video and image-to-video generation. Recently, latent diffusion models trained for 2D image synthesis have been turned into ge…
Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation
Diffusion models are the main driver of progress in image and video synthesis, but suffer from slow inference speed. Distillation methods, like the recently introduced adversarial diffusion distillation (ADD), aim to shift the model from ma…
Scaling Rectified Flow Transformers for High-Resolution Image Synthesis
Diffusion models create data from noise by inverting the forward paths of data towards noise and have emerged as a powerful generative modeling technique for high-dimensional, perceptual data such as images and videos. Rectified flow is a …
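Rectified flow, as described here, connects data and noise along straight paths. A minimal scalar sketch of the interpolant and the constant velocity target the network regresses onto (a toy illustration, not the paper's transformer training setup):

```python
def rectified_flow_pair(x0, eps, t):
    """Straight-line interpolant between data x0 and noise eps at time t,
    plus the velocity target for the flow-matching regression.
    Scalar sketch only."""
    xt = (1.0 - t) * x0 + t * eps   # point on the straight path
    v_target = eps - x0             # velocity is constant along the path
    return xt, v_target

x0, eps = 2.0, -1.0
xt, v = rectified_flow_pair(x0, eps, 0.5)  # midpoint of the path
```

Because the path is straight, the target velocity is the same at every `t`, which is what makes few-step sampling attractive.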
SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis
We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: The increase of model parameters is mainly due to more attention b…
Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models
Latent Diffusion Models (LDMs) enable high-quality image synthesis while avoiding excessive compute demands by training a diffusion model in a compressed lower-dimensional latent space. Here, we apply the LDM paradigm to high-resolution vi…
Latent Space Diffusion Models of Cryo-EM Structures
Cryo-electron microscopy (cryo-EM) is unique among tools in structural biology in its ability to image large, dynamic protein complexes. Key to this ability is image processing algorithms for heterogeneous cryo-EM reconstruction, including…
Differentially Private Diffusion Models
While modern machine learning models rely on increasingly large training datasets, data is often limited in privacy-sensitive domains. Generative models trained with differential privacy (DP) on sensitive data can sidestep this challenge, …
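Training generative models with differential privacy typically builds on the DP-SGD recipe of Abadi et al.: clip each per-example gradient, sum, and add calibrated Gaussian noise. A generic 1-D sketch of that aggregation step (not the paper's diffusion-specific modifications):

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm, noise_mult, rng):
    """Clip each per-example gradient to L2 norm <= clip_norm, sum,
    add Gaussian noise with std noise_mult * clip_norm, and average.
    A sketch of the standard DP-SGD aggregation."""
    dim = len(per_example_grads[0])
    total = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(v * v for v in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i, v in enumerate(g):
            total[i] += v * scale
    sigma = noise_mult * clip_norm
    n = len(per_example_grads)
    return [(v + rng.gauss(0.0, sigma)) / n for v in total]

# With noise_mult = 0 the step reduces to averaging the clipped gradients:
step = dp_sgd_step([[3.0, 4.0], [0.3, 0.4]], 1.0, 0.0, random.Random(0))
```

The clipping bounds each example's influence, which is what lets the added noise yield a formal (epsilon, delta) guarantee.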
GENIE: Higher-Order Denoising Diffusion Solvers
Denoising diffusion models (DDMs) have emerged as a powerful class of generative models. A forward diffusion process slowly perturbs the data, while a deep model learns to gradually denoise. Synthesis amounts to solving a differential equa…
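"Solving a differential equation" is the expensive part of DDM sampling, and higher-order solvers cut the step count by using more derivative information per step. GENIE learns the required higher-order terms with a network; the classical numerical idea it builds on is sketched below with a second-order Heun step:

```python
import math

def heun_step(f, x, t, dt):
    """One second-order Heun step for dx/dt = f(x, t): an Euler
    predictor followed by a trapezoidal corrector."""
    k1 = f(x, t)
    k2 = f(x + dt * k1, t + dt)
    return x + 0.5 * dt * (k1 + k2)

# Toy ODE dx/dt = -x with exact solution x(t) = x0 * exp(-t).
x, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    x = heun_step(lambda x, t: -x, x, t, dt)
    t += dt
err = abs(x - math.exp(-1.0))  # second-order accurate, so err is small
```

With only 10 steps the second-order method already tracks the exact solution closely, which is the intuition behind few-step diffusion solvers.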
Score-Based Generative Modeling with Critically-Damped Langevin Diffusion
Score-based generative models (SGMs) have demonstrated remarkable synthesis quality. SGMs rely on a diffusion process that gradually perturbs the data towards a tractable distribution, while the generative model learns to denoise. The comp…
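The diffusion process here augments the data $x_t$ with an auxiliary velocity $v_t$ and runs coupled Langevin dynamics; a sketch of the SDE, with notation ($M$ mass, $\Gamma$ friction, $\beta$ time rescaling) assumed from the paper, where the critical-damping choice gives the method its name:

```latex
dx_t = M^{-1} v_t \,\beta\, dt, \qquad
dv_t = -x_t \,\beta\, dt - \Gamma M^{-1} v_t \,\beta\, dt
       + \sqrt{2\Gamma\beta}\, dW_t, \qquad
\Gamma^2 = 4M \;\; \text{(critical damping)}.
```

Noise enters only through the velocity channel, so the model needs to learn the score of the conditional velocity distribution rather than of the data directly.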
Demystifying and Generalizing BinaryConnect
BinaryConnect (BC) and its many variations have become the de facto standard for neural network quantization. However, our understanding of the inner workings of BC is still quite limited. We attempt to close this gap in four different asp…
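The BC scheme the paper analyzes keeps full-precision "latent" weights, binarizes them in the forward pass, and applies gradients computed with the binarized weights directly back to the latents (the straight-through trick). A minimal sketch:

```python
def bc_forward(w_latent):
    """BinaryConnect forward pass: use the sign of each latent weight."""
    return [1.0 if w >= 0 else -1.0 for w in w_latent]

def bc_update(w_latent, grad_wrt_binary, lr):
    """BinaryConnect update: gradients w.r.t. the binarized weights are
    applied straight through to the full-precision latents. Sketch only."""
    return [w - lr * g for w, g in zip(w_latent, grad_wrt_binary)]

wb = bc_forward([0.3, -0.2])
w_new = bc_update([0.3, -0.2], [1.0, -1.0], 0.1)
```

The latents accumulate small gradient signals that the binarized weights alone would discard, which is why the latent copy is essential.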
Density Deconvolution with Normalizing Flows
Density deconvolution is the task of estimating a probability density function given only noise-corrupted samples. We can fit a Gaussian mixture model to the underlying density by maximum likelihood if the noise is normally distributed, bu…
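The Gaussian-noise case mentioned here is tractable because convolving each mixture component with Gaussian noise simply inflates its variance, so the likelihood of a noisy sample under the clean-density GMM has a closed form. A 1-D sketch:

```python
import math

def noisy_gmm_loglik(y, weights, means, variances, noise_var):
    """Log-likelihood of a noise-corrupted observation y when the clean
    density is a 1-D Gaussian mixture and the noise is N(0, noise_var):
    convolution of Gaussians just adds variances. Sketch only."""
    total = 0.0
    for w, mu, var in zip(weights, means, variances):
        s = var + noise_var  # inflated component variance
        total += w * math.exp(-(y - mu) ** 2 / (2 * s)) / math.sqrt(2 * math.pi * s)
    return math.log(total)

# One N(0, 1) component plus unit noise is exactly N(0, 2):
ll = noisy_gmm_loglik(0.0, [1.0], [0.0], [1.0], 1.0)
expected = math.log(1.0 / math.sqrt(4 * math.pi))  # N(0, 2) density at 0
```

Maximizing this likelihood in the component parameters fits the underlying (clean) mixture directly from noisy samples; for non-Gaussian noise no such closed form exists, which motivates the flow-based approach.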
Generative Modeling with Neural Ordinary Differential Equations
Neural ordinary differential equations (NODEs) (Chen et al., 2018) are ordinary differential equations (ODEs) with their dynamics modeled by neural networks. Continuous normalizing flows (CNFs) (Chen et al., 2018; Grathwohl et al., 2018), …
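A CNF tracks the log-density alongside the state via the instantaneous change-of-variables formula, d(log p)/dt = -tr(df/dx); in 1-D the trace is just df/dx. A minimal Euler sketch (a real CNF would parameterize f with a neural network):

```python
def cnf_euler(f, dfdx, x0, steps, dt):
    """Jointly integrate the state and log-density change of a 1-D
    continuous normalizing flow: dx/dt = f(x), d(log p)/dt = -df/dx.
    Euler integration; sketch only."""
    x, delta_logp = x0, 0.0
    for _ in range(steps):
        delta_logp -= dfdx(x) * dt
        x += f(x) * dt
    return x, delta_logp

# Linear dynamics f(x) = x over t in [0, 1]: x -> e * x0 and the
# log-density change is exactly -1.
x, dlp = cnf_euler(lambda z: z, lambda z: 1.0, 1.0, 100, 0.01)
```

Training maximizes log p(x) through this joint integration, so the solver's cost and accuracy directly affect the likelihood estimate.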
A Discussion on Solving Partial Differential Equations using Neural Networks
Can neural networks learn to solve partial differential equations (PDEs)? We investigate this question for two (systems of) PDEs, namely, the Poisson equation and the steady Navier--Stokes equations. The contributions of this paper are fiv…
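A common way to train a network to solve a PDE is to minimize the squared equation residual at collocation points. A sketch for the 1-D Poisson equation u''(x) = f(x), with u'' estimated by central differences (in the neural approach, `u` would be a network and this residual, summed over points, the loss):

```python
def poisson_residual(u, f, x, h=1e-3):
    """Squared residual of the 1-D Poisson equation u''(x) = f(x) at a
    collocation point, with u'' by central differences. Sketch only."""
    u_xx = (u(x + h) - 2.0 * u(x) + u(x - h)) / (h * h)
    return (u_xx - f(x)) ** 2

# u(x) = x^2 solves u'' = 2, so its residual vanishes; u(x) = x^3 does not.
r_good = poisson_residual(lambda x: x * x, lambda x: 2.0, 1.0)
r_bad = poisson_residual(lambda x: x ** 3, lambda x: 2.0, 1.0)
```

Boundary conditions are typically enforced with an additional penalty term or built into the network's output parameterization.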