Jane X. Wang
Can foundation models actively gather information in interactive environments to test hypotheses?
Foundation models excel at single-turn reasoning but struggle with multi-turn exploration in dynamic environments, a requirement for many real-world challenges. We evaluated these models on their ability to learn from experience, adapt, an…
Scaling Instructable Agents Across Many Simulated Worlds
Building embodied AI systems that can follow arbitrary language instructions in any 3D environment is a key challenge for creating general AI. Accomplishing this goal requires learning to ground language in perception and embodied actions,…
CogBench: a large language model walks into a psychology lab
Large language models (LLMs) have significantly advanced the field of artificial intelligence. Yet, evaluating them comprehensively remains challenging. We argue that this is partly due to the predominant focus on performance metrics in mo…
Zero-shot compositional reasoning in a reinforcement learning setting
People can easily evoke previously learned concepts, compose them, and apply the result to solve novel tasks on the first attempt. The aim of this paper is to improve our understanding of how people make such zero-shot compositional infere…
Passive learning of active causal strategies in agents and language models
What can be learned about causality and experimentation from passive data? This question is salient given recent successes of passively-trained language models in interactive domains such as tool use. Passive learning is inherently limited…
Meta-in-context learning in large language models
Large language models have shown tremendous performance in a variety of tasks. In-context learning -- the ability to improve at a task after being provided with a number of demonstrations -- is seen as one of the main contributors to their…
Meta-Learned Models of Cognition
Meta-learning is a framework for learning learning algorithms through repeated interactions with an environment as opposed to designing them by hand. In recent years, this framework has established itself as a promising tool for building m…
Data Distributional Properties Drive Emergent In-Context Learning in Transformers
Large transformer-based models are able to perform in-context few-shot learning, without being explicitly trained for it. This observation raises the question: what aspects of the training regime lead to this emergent behavior? Here, we sh…
Semantic Exploration from Language Abstractions and Pretrained Representations
Effective exploration is a challenge in reinforcement learning (RL). Novelty-based exploration methods can suffer in high-dimensional state spaces, such as continuous partially-observable 3D environments. We address this challenge by defin…
Can language models learn from explanations in context?
Language Models (LMs) can perform new tasks by adapting to a few in-context examples. For humans, explanations that connect examples to task principles can improve learning. We therefore investigate whether explanations of few-shot example…
Tell me why! Explanations support learning relational and causal structure
Inferring the abstract relational and causal structure of the world is a major challenge for reinforcement-learning (RL) agents. For humans, language--particularly in the form of explanations--plays a considerable role in overcoming this c…
Alchemy: A benchmark and analysis toolkit for meta-reinforcement learning agents
There has been rapidly growing interest in meta-learning as a method for increasing the flexibility and sample efficiency of reinforcement learning. One problem in this area of research, however, has been a scarcity of adequate benchmark t…
Alchemy: A structured task distribution for meta-reinforcement learning
There has been rapidly growing interest in meta-learning as a method for increasing the flexibility and sample efficiency of reinforcement learning. One problem in this area of research, however, has been a scarcity of adequate benchmark t…
Temporal Difference Uncertainties as a Signal for Exploration
An effective approach to exploration in reinforcement learning is to rely on an agent's uncertainty over the optimal policy, which can yield near-optimal exploration strategies in tabular settings. However, in non-tabular settings that inv…
Amortized learning of neural causal representations
Causal models can compactly and efficiently encode the data-generating process under all interventions and hence may generalize better under changes in distribution. These models are often represented as Bayesian networks and learning them…
Structural and Functional MRI Evidence for Distinct Medial Temporal and Prefrontal Roles in Context-dependent Relational Memory
Declarative memory is supported by distributed brain networks in which the medial-temporal lobes (MTLs) and pFC serve as important hubs. Identifying the unique and shared contributions of these regions to successful memory performance is a…
Meta-learning of Sequential Strategies
In this report we review memory-based meta-learning as a tool for building sample-efficient strategies that learn from past experience to adapt to any task within a target class. Our goal is to equip the reader with the conceptual foundati…
Reinforcement Learning, Fast and Slow
Deep reinforcement learning (RL) methods have driven impressive advances in artificial intelligence in recent years, exceeding human performance in domains ranging from Atari to Go to no-limit poker. This progress has drawn the attention o…
Causal Reasoning from Meta-reinforcement Learning
Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents. Here we explore whether causal reasoning can emerge via meta-reinforcement learning. We train a recurrent network with model-…
Evolving intrinsic motivations for altruistic behavior
Multi-agent cooperation is an important feature of the natural world. Many tasks involve individual incentives that are misaligned with the common good, yet a wide range of organisms from bacteria to insects and humans are able to overcome…
Been There, Done That: Meta-Learning with Episodic Recall
Meta-learning agents excel at rapidly learning new tasks from open-ended task distributions; yet, they forget what they learn about each task as soon as the next begins. When tasks reoccur - as they do in natural environments - metalearnin…
Prefrontal Cortex as a Meta-Reinforcement Learning System
Over the past twenty years, neuroscience research on reward-based learning has converged on a canonical model, under which the neurotransmitter dopamine ‘stamps in’ associations between situations, actions and rewards by modulating the str…
Learning to reinforcement learn
In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A…
Competition and Cooperation among Relational Memory Representations
Mnemonic processing engages multiple systems that cooperate and compete to support task performance. Exploring these systems' interaction requires memory tasks that produce rich data with multiple patterns of performance sensitive to diffe…
High-Reproducibility and High-Accuracy Method for Automated Topic Classification
Much of human knowledge sits in large databases of unstructured text. Leveraging this knowledge requires algorithms that extract and record metadata on unstructured text documents. Assigning topics to documents will enable intelligent sear…
Hippocampal contribution to implicit configuration memory expressed via eye movements during scene exploration
Although hippocampus unequivocally supports explicit/declarative memory, fewer findings have demonstrated its role in implicit expressions of memory. We tested for hippocampal contributions to an implicit expression of configural/relationa…