Jonathan Schwarz
Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding
Power-law scaling indicates that large-scale training with uniform sampling is prohibitively slow. Active learning methods aim to increase data efficiency by prioritizing learning on the most relevant examples. Despite their appeal, these …
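The snippet cuts off before the method is described. As a rough illustration of the general idea of learnability-based online data selection (not necessarily this paper's exact criterion): score each candidate example by the learner's loss minus a cheap reference model's loss, and train only on the top-scoring fraction of each batch. The PyTorch sketch below is hypothetical; all names are illustrative.

```python
import torch

def select_batch(learner, reference, x, y, loss_fn, keep_frac=0.25):
    """Score a candidate batch and keep the most 'learnable' examples.

    Learnability score: learner loss minus reference-model loss, so
    examples the learner still gets wrong but a weaker, cheaper reference
    model finds easy are prioritised. `loss_fn` must be per-example
    (e.g. constructed with reduction='none').
    """
    with torch.no_grad():
        learner_loss = loss_fn(learner(x), y)
        reference_loss = loss_fn(reference(x), y)
    scores = learner_loss - reference_loss
    k = max(1, int(keep_frac * x.shape[0]))
    idx = scores.topk(k).indices      # train only on the top-k examples
    return x[idx], y[idx]
```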
Quantum State Assignment Flows
This paper introduces assignment flows for density matrices as state spaces for representation and analysis of data associated with vertices of an underlying weighted graph. Determining an assignment flow by geometric integration of the de…
Spatial Functa: Scaling Functa to ImageNet Classification and Generation
Neural fields, also known as implicit neural representations, have emerged as a powerful means to represent complex signals of various modalities. Based on this, Dupont et al. (2022) introduce a framework that views neural fields as data, t…
Designing Urban Participation Platforms – Model for Goal-oriented Classification of Participation Mechanisms
Citizens are increasingly shaping their cities in a self-determined way. To do so, they use digital platforms to start projects, raise awareness, or raise funds. These and other participation mechanisms enable citizens to participate in many different ways. W…
Powerpropagation: A sparsity inducing weight reparameterisation
The training of sparse neural networks is becoming an increasingly important tool for reducing the computational footprint of models at training and evaluation, as well as enabling the effective scaling up of models. Whereas much work ove…
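The abstract is truncated before the method itself. The core of Powerpropagation is a weight reparameterisation w = v·|v|^(α−1): the gradient with respect to v scales with |v|^(α−1), so low-magnitude weights receive ever smaller updates, stay small, and leave the trained network highly amenable to magnitude pruning. A minimal PyTorch sketch (initialisation and training details are simplified):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PowerpropLinear(nn.Module):
    """Linear layer with the Powerpropagation reparameterisation.

    The effective weight is w = v * |v|**(alpha - 1); alpha = 1 recovers
    an ordinary linear layer, alpha > 1 induces the 'rich get richer'
    dynamics that concentrate mass on few weights.
    """
    def __init__(self, in_features, out_features, alpha=2.0):
        super().__init__()
        self.v = nn.Parameter(torch.randn(out_features, in_features) * in_features ** -0.5)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.alpha = alpha

    def forward(self, x):
        w = self.v * self.v.abs() ** (self.alpha - 1.0)
        return F.linear(x, w, self.bias)
```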
On the Correspondence between Replicator Dynamics and Assignment Flows
Assignment flows are smooth dynamical systems for data labeling on graphs. Although they exhibit structural similarities with the well-studied class of replicator dynamics, it is nontrivial to apply existing tools to their analysis. We pro…
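For readers unfamiliar with the comparison: replicator dynamics evolve a point on the probability simplex by rescaling each component's fitness relative to the population average, and assignment flows couple one such system per graph vertex through a similarity map. In the standard notation (not quoted from the paper):

```latex
% Replicator dynamics on the simplex \Delta^{n-1}: component x_i grows
% at its fitness f_i(x) relative to the population mean fitness.
\dot{x}_i \;=\; x_i \Bigl( f_i(x) - \sum_{j} x_j f_j(x) \Bigr),
\qquad x \in \Delta^{n-1}.
```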
Behavior Priors for Efficient Reinforcement Learning
As we deploy reinforcement learning agents to solve increasingly challenging problems, methods that allow us to inject prior knowledge about the structure of the world and effective solution strategies become increasingly important. In th…
Functional Regularisation for Continual Learning with Gaussian Processes
We introduce a framework for Continual Learning (CL) based on Bayesian inference over the function space rather than the parameters of a deep neural network. This method, referred to as functional regularisation for Continual Learning, avo…
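The truncation hides the mechanism: each finished task is summarised by a small set of inducing inputs together with a Gaussian-process posterior over the function values there, and future training is regularised against that summary. Below is a deliberately simplified function-space penalty assuming a stored diagonal-Gaussian predictive; `stored_mean` and `stored_var` are hypothetical names, and the real method derives this term as a KL divergence between GP posteriors.

```python
import torch

def function_space_penalty(model, inducing_x, stored_mean, stored_var):
    """Simplified function-space regulariser in the spirit of FRCL.

    Penalises the current network's outputs at the stored inducing inputs
    for drifting away from the predictive distribution memorised after
    the previous task (here a diagonal Gaussian surrogate).
    """
    f = model(inducing_x)
    return (((f - stored_mean) ** 2) / stored_var).mean()
```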
Information asymmetry in KL-regularized RL
Many real-world tasks exhibit rich structure that is repeated across different parts of the state space or in time. In this work we study the possibility of leveraging such repeated structure to speed up and regularize learning. We start f…
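The setting the abstract leads into is usually written as return maximisation regularised towards a learned default policy π₀, with the "information asymmetry" consisting in conditioning π₀ on only part of the state. Schematically (notation assumed, not quoted from the paper):

```latex
% KL-regularised objective: the agent \pi trades return against staying
% close to a default policy \pi_0 that only sees a restricted view x_t
% of the full state s_t (the information asymmetry).
J(\pi) \;=\; \mathbb{E}_{\pi}\Bigl[ \sum_{t} \gamma^{t}
  \bigl( r_t - \alpha \, \mathrm{KL}\bigl[ \pi(\cdot \mid s_t) \,\Vert\, \pi_0(\cdot \mid x_t) \bigr] \bigr) \Bigr].
```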
Meta-Learning surrogate models for sequential decision making
We introduce a unified probabilistic framework for solving sequential decision making problems ranging from Bayesian optimisation to contextual bandits and reinforcement learning. This is accomplished by a probabilistic model-based approac…
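One way to make the truncated description concrete: with any surrogate that supports posterior sampling (the paper uses meta-learned probabilistic models in this role), Thompson sampling reduces each decision to acting greedily on a single posterior draw. The interface below (`surrogate.sample_function`) is hypothetical.

```python
import numpy as np

def thompson_step(surrogate, candidate_x, observed_x, observed_y):
    """One sequential-decision step via Thompson sampling.

    `surrogate.sample_function(context)` is assumed to draw one function
    from the model's posterior given the observations so far; acting
    greedily on that single sample is Thompson sampling.
    """
    f = surrogate.sample_function(context=(observed_x, observed_y))
    values = f(candidate_x)        # sampled function evaluated on candidates
    return candidate_x[int(np.argmax(values))]
```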
Functional Regularisation for Continual Learning
We introduce a framework for continual learning based on Bayesian inference over the function space rather than the parameters of a deep neural network. This method, referred to as functional regularisation for continual learning, avoids f…
Attentive Neural Processes
Neural Processes (NPs) (Garnelo et al., 2018a,b) approach regression by learning to map a context set of observed input-output pairs to a distribution over regression functions. Each function models the distribution of the output given an in…
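The part the snippet cuts off is the attention mechanism itself: rather than mean-pooling the context into one global vector as in plain NPs, each target input attends over the context points, yielding a target-specific representation. A minimal sketch with assumed dimensions (r_dim must be divisible by the head count):

```python
import torch
import torch.nn as nn

class CrossAttentionAggregator(nn.Module):
    """Target-specific context aggregation in the style of Attentive NPs.

    Queries come from target inputs, keys from context inputs, values from
    embeddings of the context (x, y) pairs. Sizes are illustrative.
    """
    def __init__(self, x_dim, r_dim):
        super().__init__()
        self.q = nn.Linear(x_dim, r_dim)
        self.k = nn.Linear(x_dim, r_dim)
        self.attn = nn.MultiheadAttention(r_dim, num_heads=4, batch_first=True)

    def forward(self, x_target, x_context, r_context):
        # r_context: embedded context pairs, shape (B, Nc, r_dim)
        out, _ = self.attn(self.q(x_target), self.k(x_context), r_context)
        return out   # one representation per target point, shape (B, Nt, r_dim)
```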
Experience Replay for Continual Learning
Interacting with a complex world involves continual learning, in which tasks and data distributions change over time. A continual learning system should demonstrate both plasticity (acquisition of new knowledge) and stability (preservation…
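As a concrete companion to the stability/plasticity framing: replay systems of this kind keep a bounded, approximately uniform sample of the whole experience stream, for which reservoir sampling is the standard tool. A sketch of that general technique only; the paper's full method additionally mixes on-policy learning with behavioural-cloning losses on replayed experience.

```python
import random

class ReservoirBuffer:
    """Fixed-size buffer holding a uniform random sample of everything
    seen so far (reservoir sampling), so early tasks stay represented
    without storing the full stream."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = random.randrange(self.seen)   # keep with prob capacity/seen
            if j < self.capacity:
                self.items[j] = item

    def sample(self, n):
        return random.sample(self.items, min(n, len(self.items)))
```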
The NarrativeQA Reading Comprehension Challenge
Reading comprehension (RC)—in contrast to information retrieval—requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC abili…
Experience Replay for Continual Learning
Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster. Neural networks trained by stochastic gradient descent often degrad…
Neural Processes
A neural network (NN) is a parameterised function that can be tuned via gradient descent to approximate a labelled collection of data with high precision. A Gaussian process (GP), on the other hand, is a probabilistic model that defines a …
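To make the NN/GP contrast concrete, the deterministic core of a neural process is simply: embed each context pair, average the embeddings, and decode the result together with a target input into a predictive Gaussian. A stripped-down sketch (the latent-variable path and training objective are omitted; sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralProcess(nn.Module):
    """Deterministic neural-process skeleton: encode, mean-pool, decode."""
    def __init__(self, x_dim=1, y_dim=1, r_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
                                     nn.Linear(r_dim, r_dim))
        self.decoder = nn.Sequential(nn.Linear(x_dim + r_dim, r_dim), nn.ReLU(),
                                     nn.Linear(r_dim, 2 * y_dim))

    def forward(self, x_context, y_context, x_target):
        # (B, Nc, x+y) -> (B, Nc, r) -> mean-pool over context -> (B, r)
        r = self.encoder(torch.cat([x_context, y_context], dim=-1)).mean(dim=1)
        r = r.unsqueeze(1).expand(-1, x_target.shape[1], -1)
        out = self.decoder(torch.cat([x_target, r], dim=-1))
        mean, raw_sigma = out.chunk(2, dim=-1)
        return mean, 0.1 + 0.9 * F.softplus(raw_sigma)  # bounded-below std
```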
Progress & Compress: A scalable framework for continual learning
We introduce a conceptually simple and scalable framework for continual learning domains where tasks are learned sequentially. Our method is constant in the number of parameters and is designed to preserve performance on previously encount…
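The truncated abstract stops before the two phases that give the method its name: a "progress" phase in which an active column learns the new task (with lateral connections into the knowledge base), and a "compress" phase in which the active column is distilled back into the knowledge base under an EWC-style penalty protecting earlier tasks. A sketch of the compress-phase loss; variable names and the single-penalty ("online EWC") form are illustrative.

```python
import torch
import torch.nn.functional as F

def compress_loss(kb_logits, active_logits, kb_params, old_params,
                  fisher, lam=1.0, T=2.0):
    """Compress-phase objective in the spirit of Progress & Compress.

    The knowledge base (student) is distilled towards the just-trained
    active column (teacher), while a quadratic penalty weighted by a
    running Fisher estimate anchors parameters important for old tasks.
    """
    distill = F.kl_div(F.log_softmax(kb_logits / T, dim=-1),
                       F.softmax(active_logits / T, dim=-1),
                       reduction='batchmean') * T * T
    ewc = sum((f * (p - p_old) ** 2).sum()
              for p, p_old, f in zip(kb_params, old_params, fisher))
    return distill + lam * ewc
```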
A Recurrent Variational Autoencoder for Human Motion Synthesis
We propose a novel generative model of human motion that can be trained using a large motion capture dataset, and allows users to produce animations from high-level control signals. As previous architectures struggle to predict motions far…