Henry Gouk
Model Merging is Secretly Certifiable: Non-Vacuous Generalisation Bounds for Low-Shot Learning
Certifying the IID generalisation ability of deep networks is the first of many requirements for trusting AI in high-stakes applications from medicine to security. However, when instantiating generalisation bounds for deep networks it rema…
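For context on the kind of certificate involved, here is a standard PAC-Bayes bound (Maurer's form) of the sort such work instantiates; the paper's exact bound may differ. With probability at least 1 - \delta over an i.i.d. sample of size n, for any data-independent prior P and every posterior Q over hypotheses,

    \mathbb{E}_{h \sim Q}[L(h)] \le \mathbb{E}_{h \sim Q}[\hat{L}(h)] + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{2n}}

where L is the true risk and \hat{L} the empirical risk. For large deep networks the KL term typically dwarfs n, which is why such bounds are usually vacuous and why non-vacuous instantiations are noteworthy.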
Model Diffusion for Certifiable Few-shot Transfer Learning
In contemporary deep learning, a prevalent and effective workflow for solving low-data problems is adapting powerful pre-trained foundation models (FMs) to new tasks via parameter-efficient fine-tuning (PEFT). However, while empirically ef…
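As a hedged illustration of the PEFT workflow the abstract refers to, here is a generic LoRA-style low-rank adapter; the class name, rank, and initialisation are illustrative and not necessarily the parameterisation studied in the paper.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Frozen pre-trained linear layer plus a trainable low-rank update.
        # Only A and B are trained, so the trainable parameter count scales
        # with the rank rather than with the full weight matrix.
        def __init__(self, base: nn.Linear, rank: int = 8):  # rank is illustrative
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # freeze the foundation-model weights
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))

        def forward(self, x):
            # y = Wx + B(Ax); B starts at zero, so the adapted model
            # initially matches the pre-trained one exactly.
            return self.base(x) + x @ self.A.t() @ self.B.t()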
Strategic Classification with Randomised Classifiers
We consider the problem of strategic classification, where a learner must build a model to classify agents based on features that have been strategically modified. Previous work in this area has concentrated on the case when the learner is…
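In the standard strategic classification setup (a sketch of the usual formulation, which may differ in detail from the paper's model), an agent with true features x observes the published decision rule f and reports

    x' \in \arg\max_{z} \left[ f(z) - c(x, z) \right]

where c(x, z) is the cost of manipulating x into z. When the learner randomises, f(z) is replaced by its expectation over the distribution of classifiers, which changes the agents' best responses.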
Selecting Pre-trained Models for Transfer Learning with Data-centric Meta-features
Is Scaling Learned Optimizers Worth It? Evaluating The Value of VeLO's 4000 TPU Months
We analyze VeLO (versatile learned optimizer), the largest scale attempt to train a general purpose "foundational" optimizer to date. VeLO was trained on thousands of machine learning tasks using over 4000 TPU months with the goal of produ…
Evaluating the Evaluators: Are Current Few-Shot Learning Benchmarks Fit for Purpose?
Numerous benchmarks for Few-Shot Learning have been proposed in the last decade. However, all of these benchmarks focus on performance averaged over many tasks, and the question of how to reliably evaluate and tune models trained for indivi…
Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn
Meta-learning and other approaches to few-shot learning are widely studied for image recognition, and are increasingly applied to other vision tasks such as pose estimation and dense prediction. This naturally raises the question of whethe…
Effectiveness of Debiasing Techniques: An Indigenous Qualitative Analysis
An indigenous perspective on the effectiveness of debiasing techniques for pre-trained language models (PLMs) is presented in this paper. The current techniques used to measure and debias PLMs are skewed towards US racial biases and re…
Amortised Invariance Learning for Contrastive Self-Supervision
Contrastive self-supervised learning methods famously produce high quality transferable representations by learning invariances to different data augmentations. Invariances established during pre-training can be interpreted as strong induc…
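For readers unfamiliar with the mechanism, here is a minimal sketch of the augmentation-invariance objective that contrastive methods optimise (an InfoNCE/NT-Xent-style loss; the paper builds on such objectives rather than introducing this one, and the temperature value is illustrative).

    import torch
    import torch.nn.functional as F

    def info_nce(z1, z2, temperature=0.5):
        # z1, z2: (batch, dim) embeddings of two random augmentations of the
        # same images. Matching rows are positives; all others are negatives.
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature                    # pairwise similarities
        targets = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
        return F.cross_entropy(logits, targets)

Minimising this loss makes the representation invariant to the chosen augmentations, which is exactly the inductive bias the abstract discusses.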
Attacking Adversarial Defences by Smoothing the Loss Landscape
This paper investigates a family of methods for defending against adversarial attacks that owe part of their success to creating a noisy, discontinuous, or otherwise rugged loss landscape that adversaries find difficult to navigate. A comm…
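One generic countermeasure, sketched below under the assumption of a gradient-based attack, is to smooth the loss surface the attacker sees by averaging gradients over random perturbations (in the spirit of expectation-over-transformation); the specific smoothing studied in the paper may differ, and sigma and n_samples are illustrative.

    import torch

    def smoothed_grad(loss_fn, x, sigma=0.01, n_samples=8):
        # Estimate the gradient of a smoothed loss E_eps[loss(x + eps)].
        # Averaging over Gaussian perturbations flattens a rugged landscape,
        # letting gradient attacks make progress where the raw gradient is
        # noisy or discontinuous.
        grad = torch.zeros_like(x)
        for _ in range(n_samples):
            xp = (x + sigma * torch.randn_like(x)).requires_grad_(True)
            grad += torch.autograd.grad(loss_fn(xp), xp)[0]
        return grad / n_samples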
HyperInvariances: Amortizing Invariance Learning
Providing invariances in a given learning task conveys a key inductive bias that can lead to sample-efficient learning and good generalisation, if correctly specified. However, the ideal invariances for many problems of interest are often …
Lessons learned from the NeurIPS 2021 MetaDL challenge: Backbone fine-tuning without episodic meta-learning dominates for few-shot learning image classification
Although deep neural networks are capable of achieving performance superior to humans on various tasks, they are notorious for requiring large amounts of data and computing resources, restricting their success to domains where such resourc…
Self-Supervised Representation Learning: Introduction, advances, and challenges
Self-supervised representation learning methods aim to provide powerful deep feature learning without the requirement of large annotated datasets, thus alleviating the annotation bottleneck that is one of the main barriers to practical …
Experiments in cross-domain few-shot learning for image classification
Cross-domain few-shot learning has many practical applications. This paper attempts to shed light on suitable configurations of feature extractors and 'shallow' classifiers in this machine learning setting. We apply ResNet-based feature extr…
Meta Mirror Descent: Optimiser Learning for Fast Convergence
Optimisers are an essential component for training machine learning models, and their design influences learning speed and generalisation. Several studies have attempted to learn more effective gradient-descent optimisers via solving a bi-…
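For reference, the mirror descent update this line of work builds on can be written (in its standard form, not necessarily the paper's exact parameterisation) as

    w_{t+1} = \arg\min_{w} \; \eta \langle \nabla L(w_t), w \rangle + B_\psi(w, w_t), \qquad B_\psi(w, u) = \psi(w) - \psi(u) - \langle \nabla\psi(u), w - u \rangle,

where \psi is a strictly convex mirror map and B_\psi its Bregman divergence. Taking \psi(w) = \tfrac{1}{2}\|w\|_2^2 recovers ordinary gradient descent, so learning \psi strictly generalises learning a step size.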
On the Limitations of General Purpose Domain Generalisation Methods
We investigate the fundamental performance limitations of learning algorithms in several Domain Generalisation (DG) settings. Motivated by the difficulty with which previously proposed methods have in reliably outperforming Empirical Risk …
Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks
Self-supervised learning is a powerful paradigm for representation learning on unlabelled images. A wealth of effective new methods based on instance matching rely on data-augmentation to drive learning, and these have reached a rough agre…
Active Altruism Learning and Information Sufficiency for Autonomous Driving
Safe interaction between vehicles requires the ability to choose actions that reveal the preferences of the other vehicles. Since exploratory actions often do not directly contribute to their objective, an interactive vehicle must also abl…
Shallow Bayesian Meta Learning for Real-World Few-Shot Recognition
Many state-of-the-art few-shot learners focus on developing effective training procedures for feature representations, before using simple (e.g., nearest centroid) classifiers. We take an approach that is agnostic to the features used, and…
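For concreteness, the kind of simple classifier the abstract mentions, as a minimal sketch (a prototypical-network-style nearest-centroid rule over fixed features; the function and variable names are illustrative).

    import numpy as np

    def nearest_centroid_predict(support_x, support_y, query_x):
        # Few-shot prediction: assign each query to the class whose mean
        # support embedding (centroid) is closest in Euclidean distance.
        # support_x: (n_support, dim) features from a frozen extractor,
        # support_y: (n_support,) labels, query_x: (n_query, dim) features.
        classes = np.unique(support_y)
        centroids = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
        dists = ((query_x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        return classes[dists.argmin(axis=1)]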
Experiments in Cross-Domain Few-Shot Learning for Image Classification Reproducibility Package
Reproducibility package for the paper Experiments in Cross-Domain Few-Shot Learning for Image Classification
Resolving Conflict in Decision-Making for Autonomous Driving
Recent work on decision making and planning for autonomous driving has made use of game theoretic methods to model interaction between agents. We demonstrate that methods based on the Stackelberg game formulation of this problem are suscep…
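For reference, the Stackelberg formulation in question is the generic bilevel template in which a leader commits to an action and a follower best-responds:

    a_F^*(a_L) \in \arg\max_{a_F} U_F(a_L, a_F), \qquad a_L^* \in \arg\max_{a_L} U_L\big(a_L, a_F^*(a_L)\big).

This is the standard game-theoretic form rather than the paper's specific driving model.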
How Well Do Self-Supervised Models Transfer?
Self-supervised visual representation learning has seen huge progress recently, but no large scale evaluation has compared the many models now available. We evaluate the transfer performance of 13 top self-supervised models on 40 downstrea…
Searching for Robustness: Loss Learning for Noisy Classification Tasks
We present a "learning to learn" approach for automatically constructing white-box classification loss functions that are robust to label noise in the training data. We parameterize a flexible family of loss functions using Taylor polynomi…
Shallow Bayesian Meta Learning for Real-World Few-Shot Recognition
Current state-of-the-art few-shot learners focus on developing effective training procedures for feature representations, before using simple, e.g. nearest centroid, classifiers. In this paper, we take an orthogonal approach that is agnost…
Regularisation of neural networks by enforcing Lipschitz continuity
A Stochastic Neural Network for Attack-Agnostic Adversarial Robustness
Stochastic Neural Networks (SNNs) that inject noise into their hidden layers have recently been shown to achieve strong robustness against adversarial attacks. However, existing SNNs are usually heuristically motivated, and further rely on…
Weight-Covariance Alignment for Adversarially Robust Neural Networks
Stochastic Neural Networks (SNNs) that inject noise into their hidden layers have recently been shown to achieve strong robustness against adversarial attacks. However, existing SNNs are usually heuristically motivated, and often rely on a…
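A minimal sketch of the noise-injection mechanism shared by such SNNs, using a learned per-unit Gaussian scale for illustration; the paper's contribution, aligning the noise covariance with the layer's weights, is a refinement of this basic pattern.

    import torch
    import torch.nn as nn

    class NoisyLinear(nn.Module):
        # Linear layer that injects Gaussian noise into its pre-activations.
        # Here the noise scale is a learned per-unit standard deviation; the
        # weight-covariance-aligned variant instead ties the noise covariance
        # to the weight matrix.
        def __init__(self, in_features, out_features):
            super().__init__()
            self.linear = nn.Linear(in_features, out_features)
            self.log_sigma = nn.Parameter(torch.zeros(out_features))

        def forward(self, x):
            h = self.linear(x)
            if self.training:
                h = h + torch.exp(self.log_sigma) * torch.randn_like(h)
            return h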
Altruistic Decision-Making for Autonomous Driving with Sparse Rewards
In order to drive effectively, a driver must be aware of how they can expect other vehicles' behaviour to be affected by their decisions, and also how they are expected to behave by other drivers. One common family of methods for addressin…