Maria K. Eckstein
Automated Discovery of Sparse and Interpretable Cognitive Equations
Discovering computational models that explain human cognition and behavior remains a central goal of cognitive science, yet the reliance on hand-crafted equations limits the range of cognitive mechanisms that can be uncovered. We introduce…
Reasoning with programs in replay.
Reasoning flexibly composes known elements to solve novel problems. Recent theories suggest the brain uses the axis of time to compose elements for reasoning. In this view, elements are packaged into fast neural sequences, with each sequen…
Low dimensional latent structure underlying the choices of mice
An impressive wealth of cognitive neuroscience tasks involves combining perceptual information with an estimate of the latent state of the environment to make a decision. Such tasks have driven the development of theoretically motivated co…
Hybrid Neural-Cognitive Models Reveal Flexible Context-Dependent Information Processing in Reversal Learning
Reversal learning tasks provide a key paradigm for studying behavioral flexibility, requiring individuals to update choices in response to shifting reward contingencies. While reinforcement learning (RL) models have been widely used to pro…
Nucleus accumbens dopamine release reflects Bayesian inference during instrumental learning
Dopamine release in the nucleus accumbens has been hypothesized to signal the difference between observed and predicted reward, known as reward prediction error, suggesting a biological implementation for reinforcement learning. Rigorous t…
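The reward-prediction-error hypothesis mentioned in this abstract can be illustrated with a minimal Rescorla-Wagner-style update, where the error term delta is the quantity dopamine is hypothesized to track. This is a generic textbook sketch, not the model fitted in the paper; the variable names and learning rate are illustrative only.

```python
def rw_update(value, reward, alpha=0.1):
    """One trial of prediction-error learning (illustrative sketch)."""
    delta = reward - value          # reward prediction error
    return value + alpha * delta, delta

# Example: value estimate drifts toward the mean of an outcome sequence
value = 0.0
for reward in [1, 1, 0, 1]:
    value, delta = rw_update(value, reward)
```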
A foundation model to predict and capture human cognition
Establishing a unified theory of cognition has been an important goal in psychology [1,2]. A first step towards such a theory is to create a computational model that can predict human behaviour in a wide range of settings. Here we introduce…
Subgoals in Hierarchical Reinforcement Learning
Hierarchy is an important feature of efficient cognition, including learning and reasoning. In the hierarchical reinforcement learning framework, complex learning problems are decomposed into sub-components that lead to subgoals; subgoals …
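The decomposition described in this abstract can be sketched in a toy form: a corridor task is split into subgoal-bounded subtasks, each solved by its own low-level policy while the high level sequences the subgoals. This is a generic illustration of the framework, not the paper's task or model; all names are made up.

```python
def subtask_policy(state, target):
    """Low-level policy for one subtask: step toward the subtask's target."""
    return 1 if target > state else -1

def run_episode(start, subgoals, goal):
    """High level: pursue each subgoal in turn, then the final goal."""
    state, trace = start, [start]
    for target in subgoals + [goal]:
        while state != target:
            state += subtask_policy(state, target)
            trace.append(state)
    return trace

# run_episode(0, [3], 6) reaches the goal at 6 via the subgoal at 3
```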
Discovering Symbolic Cognitive Models from Human and Animal Behavior
Symbolic models play a key role in cognitive science, expressing computationally precise hypotheses about how the brain implements a cognitive process. Identifying an appropriate model typically requires a great deal of effort and ingenuit…
Centaur: a foundation model of human cognition
Establishing a unified theory of cognition has been a major goal of psychology. While there have been previous attempts to instantiate such theories by building computational models, we currently do not have one model that captures the hum…
Hybrid Neural-Cognitive Models Reveal How Memory Shapes Human Reward Learning
Human reward-guided learning is typically modeled with simple reinforcement learning algorithms. These models assume that choices depend on a handful of incrementally learned variables that summarize previous outcomes. Here, we scrutinize …
Cognitive Model Discovery via Disentangled RNNs
Computational cognitive models are a fundamental tool in behavioral neuroscience. They instantiate in software precise hypotheses about the cognitive mechanisms underlying a particular behavior. Constructing these models is typically a dif…
Predictive and Interpretable: Combining Artificial Neural Networks and Classic Cognitive Models to Understand Human Learning and Decision Making
Quantitative models of behavior are a fundamental tool in cognitive science. Typically, models are hand-crafted to implement specific cognitive mechanisms. Such “classic” models are interpretable by design, but may provide poor fit to expe…
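The hybrid idea gestured at here can be sketched minimally: a tiny recurrent update summarizes reward history into a hidden state, and an interpretable softmax choice rule of the kind used in classic RL models maps that state to choice probabilities. This is a hedged illustration of the general approach, not the authors' architecture; all weights and names are invented for the sketch.

```python
import numpy as np

def rnn_step(h, reward, w_h=0.8, w_r=0.5):
    """Recurrent update: hidden state summarizes reward history."""
    return np.tanh(w_h * h + w_r * reward)

def softmax_choice(h, beta=3.0):
    """Classic interpretable readout: softmax over [act, other]."""
    logits = np.array([beta * h, 0.0])
    p = np.exp(logits - logits.max())
    return p / p.sum()

h = 0.0
for r in [1, 1, 0]:          # example reward history
    h = rnn_step(h, r)
p_act = softmax_choice(h)[0]  # probability of the previously rewarded action
```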
The role of subgoals in hierarchical reinforcement learning
The interpretation of computational model parameters depends on the context
Reinforcement Learning (RL) models have revolutionized the cognitive and brain sciences, promising to explain behavior from simple conditioning to complex problem solving, to shed light on developmental and individual differences, and to a…
Author response: The interpretation of computational model parameters depends on the context
How the Mind Creates Structure: Hierarchical Learning of Action Sequences.
Humans have the astonishing capacity to quickly adapt to varying environmental demands and reach complex goals in the absence of extrinsic rewards. Part of what underlies this capacity is the ability to flexibly reuse and recombine previou…
Modeling changes in probabilistic reinforcement learning during adolescence
In the real world, many relationships between events are uncertain and probabilistic. Uncertainty is also likely to be a more common feature of daily experience for youth because they have less experience to draw from than adults. Some stu…
What do Reinforcement Learning Models Measure? Interpreting Model Parameters in Cognition and Neuroscience
Reinforcement learning (RL) is a concept that has been invaluable to research fields including machine learning, neuroscience, and cognitive science. However, what RL entails partly differs between fields, leading to difficulties when inte…
Computational evidence for hierarchically structured reinforcement learning in humans
Humans have the fascinating ability to achieve goals in a complex and constantly changing world, still surpassing modern machine-learning algorithms in terms of flexibility and learning speed. It is generally accepted that a crucial factor…
Learning under uncertainty changes during adolescence
As we transition from child to adult, we navigate the world differently. In this world, many of the relationships between events are unclear or uncertain because they are probabilistic in nature. We wanted to know how learning about probabili…
Corrigendum to “Disentangling the systems contributing to changes in learning during adolescence” [Dev. Cogn. Neurosci. 41, 2020, 100732]
Reinforcement Learning and Bayesian Inference Provide Complementary Models for the Unique Advantage of Adolescents in Stochastic Reversal
During adolescence, youth venture out, explore the wider world, and are challenged to learn how to navigate novel and uncertain environments. We investigated whether adolescents are uniquely adapted to this transition, compared to younger …
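The Bayesian-inference account named in this title can be sketched for a stochastic reversal task: the learner tracks the probability that option A is currently correct, combining a likelihood step for the observed outcome with a hazard step for a possible unsignaled reversal. The reward probability and hazard rate below are illustrative placeholders, not the paper's fitted values.

```python
def bayes_update(belief, chose_a, rewarded, p_reward=0.75, hazard=0.1):
    """One trial of belief updating about which option is correct."""
    # Likelihood of the outcome under "A correct" vs "B correct"
    lik_a = p_reward if (chose_a == rewarded) else 1 - p_reward
    lik_b = 1 - lik_a
    post = belief * lik_a / (belief * lik_a + (1 - belief) * lik_b)
    # Account for a possible unsignaled reversal before the next trial
    return (1 - hazard) * post + hazard * (1 - post)

# Example: starting uncertain (0.5), a rewarded choice of A raises the belief
belief = bayes_update(0.5, chose_a=True, rewarded=True)
```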
Computational Models of Learning and Hierarchy
The aim of this thesis is to create precise computational models of how humans create and use hierarchical representations when solving complex problems. In the process, the thesis aims to understand human learning more generally, and inve…