Jeffrey S. Bowers
Does distilling Bayesian priors into language models support rapid language learning? Comment on McCoy and Griffiths (2025)
Language models (LMs) are typically trained on orders of magnitude more language data than children are exposed to. Although there have been several attempts to show that LMs with no language-specific priors can learn core aspects of lang…
The successes and failures of Artificial Neural Networks (ANNs) highlight the importance of innate linguistic priors for human language acquisition
Artificial Neural Networks (ANNs) equipped with general learning algorithms, but no linguistic knowledge, can learn to associate words with objects in naturalistic scenes when trained on head-mounted video recordings from a single child’s …
Centaur: A model without a theory
Binz et al. (2025, doi:10.1038/s41586-025-09215-4) developed a large language model (LLM) called Centaur that better predicts trial-by-trial human responses in 159 of 160 behavioural experiments compared to existing cognitive models. The a…
On the misuse of LLMs as models of mind: A case study of Centaur
In the field of NeuroAI, many strong claims are made about the similarity between artificial neural networks (ANNs) and the human mind. A recent study by Binz et al. (2025) is a case in point. The authors transcribed a large database of ps…
Successes and Limitations of Object-centric Models at Compositional Generalisation
In recent years, it has been shown empirically that standard disentangled latent variable models do not support robust compositional learning in the visual domain. Indeed, in spite of being designed with the goal of factorising datasets in…
Adapting to time: Why nature may have evolved a diverse set of neurons
Brains have evolved diverse neurons with varying morphologies and dynamics that impact temporal information processing. In contrast, most neural network models use homogeneous units that vary only in spatial parameters (weights and biases)…
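The contrast the abstract draws between homogeneous and temporally diverse units can be sketched with a toy simulation (a hypothetical illustration, not the paper's code): leaky integrators that differ only in their time constant retain an input pulse over very different timescales, whereas a homogeneous population responds identically.

```python
import numpy as np

# Hypothetical illustration (not the paper's model): a population of leaky
# integrators, x' = (-x + input) / tau, where each unit has its own time
# constant tau, contrasted with the homogeneous case of a single shared tau.
def simulate(taus, inputs, dt=1.0):
    """Euler-integrate each unit's state over the input sequence."""
    x = np.zeros(len(taus))
    trace = []
    for u in inputs:
        x = x + dt * (-x + u) / taus
        trace.append(x.copy())
    return np.array(trace)

# A brief input pulse followed by silence.
inputs = np.concatenate([np.ones(5), np.zeros(45)])

homogeneous = simulate(np.full(4, 10.0), inputs)            # all units decay alike
heterogeneous = simulate(np.array([2.0, 5.0, 20.0, 50.0]), inputs)

# After the pulse, fast units (small tau) have forgotten it while slow
# units (large tau) still carry a trace: a spectrum of memory timescales.
print(heterogeneous[-1].round(4))
```

The point of the sketch is that temporal diversity comes for free from a single extra per-unit parameter, which is what makes it an appealing hypothesis about why brains evolved heterogeneous neurons.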
MindSet: Vision. A toolbox for testing DNNs on key psychological experiments
Multiple benchmarks have been developed to assess the alignment between deep neural networks (DNNs) and human vision. In almost all cases these benchmarks are observational in the sense they are composed of behavioural and brain responses …
Visual Reasoning in Object-Centric Deep Neural Networks: A Comparative Cognition Approach
Achieving visual reasoning is a long-term goal of artificial intelligence. In the last decade, several studies have applied deep neural networks (DNNs) to the task of learning visual relations from images, with modest results in terms of g…
There is still little or no evidence that systematic phonics is more effective than common alternative methods of reading instruction: Response to Brooks (2023)
Brooks (2023) rejects Bowers' (2020) conclusion that there is little or no evidence that systematic phonics is more effective than alternative teaching methods common in schools. He makes his case based on challenging my analysis of 4 or t…
Mixed Evidence for Gestalt Grouping in Deep Neural Networks
Gestalt psychologists have identified a range of conditions in which humans organize elements of a scene into a group or whole, and perceptual grouping principles play an essential role in scene perception and object identification. Recent…
Convolutional Neural Networks Trained to Identify Words Provide a Surprisingly Good Account of Visual Form Priming Effects
A wide variety of orthographic coding schemes and models of visual word identification have been developed to account for masked priming data that provide a measure of orthographic similarity between letter strings. These models tend to in…
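As a hypothetical toy illustration of what an orthographic coding scheme quantifies (not any of the models discussed here), a position-specific "slot coding" overlap score shows both the basic idea and its well-known weakness: it penalises transposed-letter primes that human readers treat as highly similar to the target.

```python
# Hypothetical toy example (none of the models under discussion): slot
# coding scores orthographic similarity as the fraction of letter
# positions shared between two equal-length strings.
def slot_overlap(prime: str, target: str) -> float:
    """Fraction of positions at which prime and target share a letter."""
    assert len(prime) == len(target), "slot coding assumes equal lengths"
    return sum(p == t for p, t in zip(prime, target)) / len(target)

# Identity priming is captured, but a transposed-letter prime ("jugde")
# is scored as quite dissimilar, contrary to masked priming data.
print(slot_overlap("judge", "judge"))  # 1.0
print(slot_overlap("jugde", "judge"))  # 0.6
```

Richer coding schemes (and, per the abstract, CNNs trained on words) aim to produce similarity gradients that track priming effects better than this position-rigid baseline.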
On the importance of severely testing deep learning models of cognition
Researchers studying the correspondences between Deep Neural Networks (DNNs) and humans often give little consideration to severe testing when drawing conclusions from empirical findings, and this is impeding progress in building better mo…
Clarifying status of DNNs as models of human vision
On several key issues we agree with the commentators. Perhaps most importantly, everyone seems to agree that psychology has an important role to play in building better models of human vision, and (most) everyone agrees (including us) that…
Introducing the MindSet benchmark for comparing DNNs to human vision
We describe the MindSet benchmark designed to facilitate the testing of DNNs against controlled experiments reported in psychology. MindSet will focus on a range of low-, middle-, and high-level visual findings that provide important const…
The role of object-centric representations, guided attention, and external memory on generalizing visual relations
Visual reasoning is a long-term goal of vision research. In the last decade, several works have attempted to apply deep neural networks (DNNs) to the task of learning visual relations from images, with modest results in terms of the genera…
Successes and critical failures of neural networks in capturing human-like speech recognition
Natural and artificial audition can in principle acquire different solutions to a given problem. The constraints of the task, however, can nudge the cognitive science and engineering of audition to qualitatively converge, suggesting that a…
The role of capacity constraints in Convolutional Neural Networks for learning random versus natural data
Convolutional neural networks (CNNs) are often described as promising models of human vision, yet they show many differences from human abilities. We focus on a superhuman capacity of top-performing CNNs, namely, their ability to learn ver…
Convolutional Neural Networks Trained to Identify Words Provide a Good Account of Visual Form Priming Effects
A wide variety of orthographic coding schemes and models of visual word identification have been developed to account for masked priming data that provide a measure of orthographic similarity between letter strings. These models tend to in…
Can deep convolutional neural networks support relational reasoning in the same-different task?
Same-different visual reasoning is a basic skill central to abstract combinatorial thought. This fact has led neural network researchers to test same-different classification on deep convolutional neural networks (DCNNs), which has resul…
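The same-different task described above can be sketched as a stimulus generator (a hypothetical setup, not the paper's actual stimuli): each trial shows two patches side by side, labelled by whether the halves are identical, with held-out shapes reserved to probe whether a classifier learned the abstract relation or memorised specific items.

```python
import numpy as np

# Hypothetical sketch (not the paper's stimuli): same-different trials
# built from random binary patches standing in for abstract shapes.
rng = np.random.default_rng(0)

def make_shape(size=8):
    """A random binary patch standing in for one abstract shape."""
    return rng.integers(0, 2, size=(size, size))

def make_trial(shapes, same):
    """Concatenate two shapes side by side; label 1 if identical, else 0."""
    i = int(rng.integers(len(shapes)))
    if same:
        j = i
    else:
        j = i
        while j == i:  # ensure a genuinely different shape
            j = int(rng.integers(len(shapes)))
    image = np.concatenate([shapes[i], shapes[j]], axis=1)
    return image, int(same)

train_shapes = [make_shape() for _ in range(10)]
held_out_shapes = [make_shape() for _ in range(10)]  # for generalisation tests

image, label = make_trial(train_shapes, same=True)
print(image.shape, label)  # (8, 16) 1
```

Evaluating on trials built from `held_out_shapes` rather than `train_shapes` is what separates relational generalisation from shape-specific memorisation.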