David Klindt
DIET-CP: Lightweight and Data Efficient Self Supervised Continued Pretraining
Continued pretraining offers a promising solution for adapting foundation models to a new target domain. However, in specialized domains, available datasets are often very small, limiting the applicability of SSL methods developed for larg…
Position: An Empirically Grounded Identifiability Theory Will Accelerate Self-Supervised Learning Research
Self-Supervised Learning (SSL) powers many current AI systems. As research interest and investment grow, the SSL design space continues to expand. The Platonic view of SSL, following the Platonic Representation Hypothesis (PRH), suggests t…
From superposition to sparse codes: interpretable representations in neural networks
Understanding how information is represented in neural networks is a fundamental challenge in both neuroscience and artificial intelligence. Despite their nonlinear architectures, recent evidence suggests that neural networks encode featur…
Latent computing by biological neural networks: A dynamical systems framework
Although individual neurons and neural populations exhibit the phenomenon of representational drift, perceptual and behavioral outputs of many neural circuits can remain stable across time scales over which representational drift is substa…
Compute Optimal Inference and Provable Amortisation Gap in Sparse Autoencoders
A recent line of work has shown promise in using sparse autoencoders (SAEs) to uncover interpretable features in neural network representations. However, the simple linear-nonlinear encoding mechanism in SAEs limits their ability to perfor…
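For context on the linear-nonlinear encoding mechanism mentioned above, a minimal PyTorch sketch of a standard sparse autoencoder follows. It is an illustrative baseline, not the method proposed in the paper; the layer sizes and L1 coefficient are arbitrary assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    # One-step linear-nonlinear encoder followed by a linear decoder.
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_dict)   # z = ReLU(W_e x + b_e)
        self.dec = nn.Linear(d_dict, d_model)   # x_hat = W_d z + b_d

    def forward(self, x):
        z = F.relu(self.enc(x))                 # single feed-forward step, no iterative inference
        return self.dec(z), z

# Toy usage: reconstruction loss plus an L1 penalty that encourages sparse codes.
x = torch.randn(32, 512)                        # batch of network activations (hypothetical size)
sae = SparseAutoencoder(d_model=512, d_dict=4096)
x_hat, z = sae(x)
loss = F.mse_loss(x_hat, x) + 1e-3 * z.abs().mean()
loss.backward()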
Cross-Entropy Is All You Need To Invert the Data Generating Process
Supervised learning has become a cornerstone of modern machine learning, yet a comprehensive theory explaining its effectiveness remains elusive. Empirical phenomena, such as neural analogy-making and the linear representation hypothesis, …
A chromatic feature detector in the retina signals visual context changes
The retina transforms patterns of light into visual feature representations supporting behaviour. These representations are distributed across various types of retinal ganglion cells (RGCs), whose spatial and temporal tuning properties hav…
Towards interpretable Cryo-EM: disentangling latent spaces of molecular conformations
Molecules are essential building blocks of life and their different conformations (i.e., shapes) crucially determine the functional role that they play in living organisms. Cryogenic Electron Microscopy (cryo-EM) allows for acquisition of …
Uncovering 2-D toroidal representations in grid cell ensemble activity during 1-D behavior
Minimal experiments, such as head-fixed wheel-running and sleep, offer experimental advantages but restrict the amount of observable behavior, making it difficult to classify functional cell types. Arguably, the grid cell, and its striking…
Occam's Razor for Self Supervised Learning: What is Sufficient to Learn Good Representations?
Deep Learning is often depicted as a trio of data-architecture-loss. Yet, recent Self Supervised Learning (SSL) solutions have introduced numerous additional design choices, e.g., a projector network, positive views, or teacher-student net…
Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning
While the impressive performance of modern neural networks is often attributed to their capacity to efficiently extract task-relevant features from data, the mechanisms underlying this rich feature learning regime remain elusive, with much…
Author response: A chromatic feature detector in the retina signals visual context changes
Identifying Interpretable Visual Features in Artificial and Biological Neural Systems
Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features. However, many neurons exhibit mixed selectivity, i.e., they represent multiple unrelated features. A r…
Efficient coding of natural scenes improves neural system identification
Neural system identification aims at learning the response function of neurons to arbitrary stimuli using experimentally recorded data, but typically does not leverage normative principles such as efficient coding of natural environments. …
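As a point of reference for the system-identification task described above, the sketch below fits a simple linear-nonlinear Poisson model to synthetic stimulus-response data. It is a toy baseline under assumed data shapes, not the models studied in the paper.

import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
n_samples, n_pixels = 5000, 100
stimuli = rng.normal(size=(n_samples, n_pixels))       # white-noise stimuli
true_filter = rng.normal(size=n_pixels) / np.sqrt(n_pixels)
rates = np.exp(stimuli @ true_filter)                  # linear filter + exponential nonlinearity
spikes = rng.poisson(rates)                            # simulated spike counts

ln_model = PoissonRegressor(alpha=1e-3, max_iter=300)  # ridge-regularised Poisson GLM
ln_model.fit(stimuli, spikes)
recovery = np.corrcoef(ln_model.coef_, true_filter)[0, 1]
print(f"filter recovery correlation: {recovery:.2f}")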
Uncovering 2-D toroidal representations in grid cell ensemble activity during 1-D behavior
Neuroscience is pushing toward studying the brain during naturalistic behaviors with open-ended tasks. Grid cells are a classic example, where free behavior was key to observing their characteristic spatial representations in two-dimension…
Understanding Neural Coding on Latent Manifolds by Sharing Features and Dividing Ensembles
Systems neuroscience relies on two complementary views of neural data, characterized by single neuron tuning curves and analysis of population activity. These two perspectives combine elegantly in neural latent variable models that constra…
geomstats/challenge-iclr-2022: Published algorithms (final version)
GitHub repository for the ICLR Computational Geometry & Topology Challenge 2022
Removing Inter-Experimental Variability from Functional Data in Systems Neuroscience
Integrating data from multiple experiments is common practice in systems neuroscience but it requires inter-experimental variability to be negligible compared to the biological signal of interest. This requirement is rarely fulfilled; syst…
Score-Based Generative Classifiers
The tremendous success of generative models in recent years raises the question whether they can also be used to perform classification. Generative models have been used as adversarially robust classifiers on simple datasets such as MNIST,…
Natural environment statistics in the upper and lower visual field are reflected in mouse retinal specializations
Mouse retinal specializations reflect knowledge of natural environment statistics
Pressures for survival drive sensory circuit adaptation to a species’ habitat, making it essential to statistically characterise natural scenes. Mice, a prominent visual system model, are dichromatic with enhanced sensitivity to gree…
System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina
Visual processing in the retina has been studied in great detail at all levels such that a comprehensive picture of the retina’s cell types and the many neural circuits they form is emerging. However, the currently best performing models o…
Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding
We construct an unsupervised learning model that achieves nonlinear disentanglement of underlying factors of variation in naturalistic videos. Previous work suggests that representations can be disentangled if all but a few factors in the …
Natural Sprites
This CSV consists of (x-position, y-position, area) tuples for three views (left, middle, right) of binary masks downscaled to 64 x 128 with the aspect ratio kept, taken from the 2019 YouTube-VIS challenge, which can be found at https://competitions.co…
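A minimal loading sketch for the CSV described above follows; the file name and column layout are assumptions and should be adjusted to the released data.

import pandas as pd

# Hypothetical file name; per-view columns for x-position, y-position and area are assumed.
df = pd.read_csv("natural_sprites.csv")
print(df.columns.tolist())   # inspect the actual column names before relying on them
print(df.describe())         # summary statistics of positions and areas across views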