Nick Johnston
Advancing the Rate-Distortion-Computation Frontier for Neural Image Compression
The rate-distortion performance of neural image compression models has exceeded the state-of-the-art for non-learned codecs, but neural codecs are still far from widespread deployment and adoption. The largest obstacle is having efficie…
The Need for Medically Aware Video Compression in Gastroenterology
Compression is essential to storing and transmitting medical videos, but the effect of compression on downstream medical tasks is often ignored. Furthermore, systems in practice rely on standard video codecs, which naively allocate bits be…
LVAC: Learned Volumetric Attribute Compression for Point Clouds using Coordinate Based Networks
We consider the attributes of a point cloud as samples of a vector-valued volumetric function at discrete positions. To compress the attributes given the positions, we compress the parameters of the volumetric function. We model the volume…
Neural Video Compression using GANs for Detail Synthesis and Propagation
We present the first neural video compression method based on generative adversarial networks (GANs). Our approach significantly outperforms previous neural and non-neural video compression methods in a user study, setting a new state-of-t…
Towards Generative Video Compression
We present a neural video compression method based on generative adversarial networks (GANs) that outperforms previous neural video compression methods and is comparable to HEVC in a user study. We propose a technique to mitigate temporal …
Nonlinear Transform Coding
We review a class of methods that can be collected under the name nonlinear transform coding (NTC), which over the past few years have become competitive with the best linear transform codecs for images, and have superseded them in terms o…
End-to-End Learning of Compressible Features
Pre-trained convolutional neural networks (CNNs) are powerful off-the-shelf feature generators and have been shown to perform very well on a variety of tasks. Unfortunately, the generated features are high dimensional and expensive to stor…
Public Support for Renewables
The extent to which renewables gain public support and are able to attract adequate private or public investment is key to their further uptake. Although individuals and some groups have expressed concerns about specific renewable energy p…
Computationally Efficient Neural Image Compression
Image compression using neural networks has reached or exceeded non-neural methods (such as JPEG, WebP, BPG). While these networks are state of the art in rate-distortion performance, computational feasibility of these models remains a cha…
Table-Based Neural Units: Fully Quantizing Networks for Multiply-Free Inference
In this work, we propose to quantize all parts of standard classification networks and replace the activation-weight multiply step with a simple table-based lookup. This approach results in networks that are free of floating-point operati…
Neural Image Decompression: Learning to Render Better Image Previews
A rapidly increasing portion of Internet traffic is dominated by requests from mobile devices with limited- and metered-bandwidth constraints. To satisfy these requests, it has become standard practice for websites to transmit small and ex…
No Multiplication? No Floating Point? No Problem! Training Networks for Efficient Inference
For successful deployment of deep neural networks on highly resource-constrained devices (hearing aids, earbuds, wearables), we must simplify the types of operations and the memory/power resources used during inference. Completely avoidin…
Towards a Semantic Perceptual Image Metric
We present a full-reference perceptual image metric based on VGG-16, an artificial neural network trained on object classification. We fit the metric to a new database based on 140k unique images annotated with ground truth by human rater…
Variational image compression with a scale hyperprior
We describe an end-to-end trainable model for image compression based on variational autoencoders. The model incorporates a hyperprior to effectively capture spatial dependencies in the latent representation. This hyperprior relates to sid…
Spatially adaptive image compression using a tiled deep network
Deep neural networks represent a powerful class of function approximators that can learn to compress and reconstruct images. Existing image compression algorithms based on neural networks learn quantized representations with a constant spa…
Full Resolution Image Compression with Recurrent Neural Networks
This paper presents a set of full-resolution lossy image compression methods based on neural networks. Each of the architectures we describe can provide variable compression rates during deployment without requiring retraining of the netwo…
The athlete monitoring cycle: a practical guide to interpreting and applying training monitoring data
Given the relationships among athlete workloads, injury [1] and performance, [2] athlete monitoring has become critical in the high-performance sporting environment. Sports medicine and science staff have a suite of monitoring tools available to…
Target-Quality Image Compression with Recurrent, Convolutional Neural Networks
We introduce a stop-code tolerant (SCT) approach to training recurrent convolutional neural networks for lossy image compression. Our methods introduce a multi-pass training method to combine the training goals of high-quality reconstructi…
Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks
We propose a method for lossy image compression based on recurrent, convolutional neural networks that outperforms BPG (4:2:0), WebP, JPEG2000, and JPEG as measured by MS-SSIM. We introduce three improvements over previous research that l…
What’s Cookin’? Interpreting Cooking Videos using Text, Speech and Vision
We present a novel method for aligning a sequence of instructions to a video of someone carrying out a task. In particular, we focus on the cooking domain, where the instructions correspond to the recipe. Our technique relies on an HMM to …