Sashank Varma
Graphical Perception: Alignment of Vision-Language Models to Human Performance
A Neural Network Model of Complementary Learning Systems: Pattern Separation and Completion for Continual Learning
Learning new information without forgetting prior knowledge is central to human intelligence. In contrast, neural network models suffer from catastrophic forgetting: a significant degradation in performance on previously learned tasks when…
Modeling Understanding of Story-Based Analogies Using Large Language Models
Recent advancements in Large Language Models (LLMs) have brought them closer to matching human cognition across a variety of tasks. How well do these models align with human performance in detecting and mapping analogies? Prior research ha…
Computer Vision Models Show Human-Like Sensitivity to Geometric and Topological Concepts
With the rapid improvement of machine learning (ML) models, cognitive scientists are increasingly asking about their alignment with how humans think. Here, we ask this question for computer vision models and human sensitivity to geometric …
Alignment of CNN and Human Judgments of Geometric and Topological Concepts
AI and ML are poised to provide new insights into mathematical cognition and development. Here, we focus on the domains of geometry and topology (GT). According to one prominent developmental perspective, infants possess core knowledge of …
The potential -- and the pitfalls -- of using pre-trained language models as cognitive science theories
Many studies have evaluated the cognitive alignment of Pre-trained Language Models (PLMs), i.e., their correspondence to adult performance across a range of cognitive domains. Recently, the focus has expanded to the developmental alignment…
Machine Learning in Government Applications: A Review
You Shall Know a Number by the Company it Keeps: Neighborhood Effects in Number Processing
What Makes a Good Theory? Interdisciplinary Perspectives
Understanding Graphical Perception in Data Visualization through Zero-shot Prompting of Vision-Language Models
Vision Language Models (VLMs) have been successful at many chart comprehension tasks that require attending to both the images of charts and their accompanying textual descriptions. However, it is not well established how VLM performance p…
Natural Mitigation of Catastrophic Interference: Continual Learning in Power-Law Learning Environments
Neural networks often suffer from catastrophic interference (CI): performance on previously learned tasks drops off significantly when learning a new task. This contrasts strongly with humans, who can continually learn new tasks without ap…
Development of Cognitive Intelligence in Pre-trained Language Models
Recent studies show evidence for emergent cognitive abilities in Large Pre-trained Language Models (PLMs). The increasing cognitive alignment of these models has made them candidates for cognitive science theories. Prior research into the …
Incremental Comprehension of Garden-Path Sentences by Large Language Models: Semantic Interpretation, Syntactic Re-Analysis, and Attention
When reading temporarily ambiguous garden-path sentences, misinterpretations sometimes linger past the point of disambiguation. This phenomenon has traditionally been studied in psycholinguistic experiments using online measures such as re…
How Well Do Deep Learning Models Capture Human Concepts? The Case of the Typicality Effect
How well do representations learned by ML models align with those of humans? Here, we consider concept representations learned by deep learning models and evaluate whether they show a fundamental behavioral signature of human concepts, the…
Towards a Path Dependent Account of Category Fluency
Category fluency is a widely studied cognitive phenomenon, yet two conflicting accounts have been proposed as the underlying retrieval mechanism -- an optimal foraging process deliberately searching through memory (Hills et al., 2012) and …
The psychological reality of the learned “p < .05” boundary
The .05 boundary within Null Hypothesis Statistical Testing (NHST) “has made a lot of people very angry and been widely regarded as a bad move” (to quote Douglas Adams). Here, we move past meta-scientific arguments and ask an empirical que…
Cobweb: An Incremental and Hierarchical Model of Human-Like Category Learning
Cobweb, a human-like category learning system, differs from most cognitive science models in incrementally constructing hierarchically organized tree-like structures guided by the category utility measure. Prior studies have shown that Cob…
Executive function predictors of science achievement in middle-school students
Cognitive flexibility as measured by the Wisconsin Card Sort Task (WCST) has long been associated with frontal lobe function. More recently, this construct has been associated with executive function (EF), which shares overlapping neural c…
Understanding the Countably Infinite: Neural Network Models of the Successor Function and its Acquisition
As children enter elementary school, their understanding of the ordinal structure of numbers transitions from a memorized count list of the first 50-100 numbers to knowing the successor function and understanding the countably infinite. We…
Pre-training LLMs using human-like development data corpus
Pre-trained Large Language Models (LLMs) have shown success in a diverse set of language inference and understanding tasks. The pre-training stage of LLMs looks at a large corpus of raw textual data. The BabyLM shared task compares LLM pre…
The role of executive function abilities in interleaved vs. blocked learning of science concepts
This study investigated the relative efficacy of interleaved versus blocked instruction and the role of executive function in governing learning from these instructional sequences. Eighth grade students learned about three rock concepts (i…
The adventures of John Bransford: In memoriam
Human Behavioral Benchmarking: Numeric Magnitude Comparison Effects in Large Language Models
Large Language Models (LLMs) do not differentially represent numbers, which are pervasive in text. In contrast, neuroscience research has identified distinct neural representations for numbers and words. In this work, we investigate how we…
Numeric Magnitude Comparison Effects in Large Language Models
Large Language Models (LLMs) do not differentially represent numbers, which are pervasive in text. In contrast, neuroscience research has identified distinct neural representations for numbers and words. In this work, we investigate how we…
Competing numerical magnitude codes in decimal comparison: whole number and rational number distance both impact performance
A critical difference between decimal and whole numbers is that among whole numbers the number of digits provides reliable information about the size of the number, e.g., double-digit numbers are larger than single-digit numbers. However, …
Graduate Students’ Effect Size Category Boundaries
Statisticians increasingly decry ritualistic categorizations of statistical measures. The interpretation of effect sizes is often guided by benchmarks, e.g., Cohen’s d = .2 represents a ‘small’ effect size; .5 represents a ‘medium’ effect …
A Place for Neuroscience in Teacher Knowledge and Education
The foundational contributions from neuroscience regarding how learning occurs in the brain reside within one of Shulman's seven components of teacher knowledge, Knowledge of Students. While Knowledge of Students combines inputs from multi…
Decoding fact fluency and strategy flexibility in solving one-step algebra problems: An individual differences analysis
Algebraic thinking and strategy flexibility are essential to advanced mathematical thinking. Early algebra instruction uses ‘missing-operand’ problems (e.g., x – 7 = 2) solvable via two typical strategies: 1) direct retrieval of arithmetic…