Kenneth O. Stanley
Evolution and The Knightian Blindspot of Machine Learning
This paper claims that machine learning (ML) largely overlooks an important facet of general intelligence: robustness to a qualitatively unknown future in an open world. Such robustness relates to Knightian uncertainty (KU) in economics, i…
Automating the Search for Artificial Life with Foundation Models
With the recent Nobel Prize awarded for radical advances in protein discovery, foundation models (FMs) for exploring large combinatorial spaces promise to revolutionize many scientific fields. Artificial Life (ALife) has not yet integrated…
Quality-Diversity through AI Feedback
In many text-generation problems, users may prefer not only a single response, but a diverse range of high-quality outputs from which to choose. Quality-diversity (QD) search algorithms aim at such outcomes, by continually improving and di…
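The quality-diversity idea described above can be sketched as a minimal MAP-Elites-style loop. This is an illustrative toy, not the paper's AI-feedback method: the `behavior` and `quality` functions here are invented for the sketch, standing in for the descriptors and quality signals that the paper derives from AI feedback.

```python
import random

def behavior(x):
    # Illustrative 1-D behavior descriptor: which of 10 bins x falls into.
    return min(int(abs(x) * 10), 9)

def quality(x):
    # Illustrative quality measure: prefer solutions near 0.5.
    return -(x - 0.5) ** 2

def map_elites(iterations=2000, seed=0):
    """Minimal MAP-Elites-style QD loop: keep the best solution per niche."""
    rng = random.Random(seed)
    archive = {}  # niche index -> (quality, solution)
    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            _, parent = rng.choice(list(archive.values()))
            x = parent + rng.gauss(0, 0.1)   # mutate an existing elite
        else:
            x = rng.uniform(0, 1)            # occasional random restart
        niche = behavior(x)
        if niche not in archive or quality(x) > archive[niche][0]:
            archive[niche] = (quality(x), x)
    return archive

archive = map_elites()
```

The archive is the point: instead of one best solution, the loop returns up to ten elites, one per behavior niche, each continually improved.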
OMNI: Open-endedness via Models of human Notions of Interestingness
Open-ended algorithms aim to learn new, interesting behaviors forever. That requires a vast environment search space, but there are thus infinitely many possible tasks. Even after filtering for tasks the current agent can learn (i.e., lear…
Evolution through Large Models
This paper pursues the insight that large language models (LLMs) trained to generate code can vastly improve the effectiveness of mutation operators applied to programs in genetic programming (GP). Because such LLMs benefit from training d…
Towards Consistent Predictive Confidence through Fitted Ensembles
Deep neural networks are behind many of the recent successes in machine learning applications. However, these models can produce overconfident decisions while encountering out-of-distribution (OOD) examples or making a wrong prediction. Th…
Deep Innovation Protection: Confronting the Credit Assignment Problem in Training Heterogeneous Neural Architectures
Deep reinforcement learning approaches have shown impressive results in a variety of different domains; however, more complex heterogeneous architectures such as world models require the different neural components to be trained separately…
The AMALTHEA REU Program: Activities, Experiences, and Outcomes of a Collaborative Summer Research Experience in Machine Learning
NOTE: The first page of text has been automatically extracted and included below in lieu of an abstract…
Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search
Neural Architecture Search (NAS) explores a large space of architectural motifs -- a compute-intensive process that often involves ground-truth evaluation of each motif by instantiating it within a large network, and training and evaluatin…
The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities
Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: Artificial organisms evolving in computat…
Fiber: A Platform for Efficient Development and Distributed Training for Reinforcement Learning and Population-Based Methods
Recent advances in machine learning are consistently enabled by increasing amounts of computation. Reinforcement learning (RL) and population-based methods in particular pose unique challenges for efficiency and flexibility to the underlyi…
Enhanced POET: Open-Ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions
Creating open-ended algorithms, which generate their own never-ending stream of novel and appropriately challenging learning opportunities, could help to automate and accelerate progress in machine learning. A recent step in this direction…
Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity
The impressive lifelong learning in animal brains is primarily enabled by plastic changes in synaptic connectivity. Importantly, these changes are not passive, but are actively controlled by neuromodulation, which is itself under the contr…
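The neuromodulated plasticity described above can be sketched roughly as follows. This is a simplified toy: in the actual model the modulatory signal is produced by the network itself and the whole system is trained end to end by backpropagation, whereas here `m` is supplied externally and nothing is trained.

```python
import numpy as np

def plastic_step(x, w, alpha, hebb, m, eta=0.1):
    """One forward step of a neuromodulated plastic layer (simplified):
    effective weights = fixed component w + plasticity coefficients alpha
    times a Hebbian trace; the trace update is gated by a modulatory
    signal m (supplied externally in this toy)."""
    y = np.tanh((w + alpha * hebb) @ x)
    hebb = np.clip(hebb + m * eta * np.outer(y, x), -1.0, 1.0)
    return y, hebb

x = np.ones(3)
w = np.array([[0.5, -0.2, 0.1], [0.3, 0.4, -0.1]])
alpha = np.full((2, 3), 0.5)
y0, h0 = plastic_step(x, w, alpha, np.zeros((2, 3)), m=0.0)  # no modulation
y1, h1 = plastic_step(x, w, alpha, np.zeros((2, 3)), m=1.0)  # modulated
```

The point of the gate: with `m = 0` the Hebbian trace is frozen, so the network itself can decide when its connections change.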
Learning to Continually Learn
Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it. Much work has gone towards preventing the default tendency of machine l…
Improving Deep Neuroevolution via Deep Innovation Protection
Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels. This paper presents a method called Deep Innovation Protec…
Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data
This paper investigates the intriguing question of whether we can create learning algorithms that automatically generate training data, learning environments, and curricula in order to help AI agents rapidly learn. We show that such algori…
Evolvability ES: Scalable and Direct Optimization of Evolvability
Designing evolutionary algorithms capable of uncovering highly evolvable representations is an open challenge; such evolvability is important because it accelerates evolution and enables fast adaptation to changing circumstances. This pape…
Benchmarking open-endedness in minimal criterion coevolution
Minimal criterion coevolution (MCC) was recently introduced to show that a very simple criterion can lead to an open-ended expansion of two coevolving populations. Inspired by the simplicity of striving to survive and reproduce in nature, …
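The minimal-criterion idea can be sketched in a toy scalar domain (entirely invented for illustration; the actual MCC work coevolves mazes and maze-solving agents): offspring in either population survive only if they satisfy a minimal criterion with respect to the other population, with no fitness ranking at all.

```python
import random

def mcc_generation(agents, envs, rng, cap=10):
    """One generation of a minimal-criterion-coevolution toy. Agents and
    environments are single numbers: an agent 'solves' an environment if
    its skill >= the environment's difficulty. An agent child survives if
    it solves some environment; an environment child survives if some
    agent solves it. No fitness, only the minimal criterion."""
    child_agents = [a + rng.gauss(0, 0.3) for a in agents]
    child_envs = [e + rng.gauss(0, 0.3) for e in envs]
    agents += [a for a in child_agents if any(a >= e for e in envs)]
    envs += [e for e in child_envs if any(a >= e for a in agents)]
    del agents[:-cap], envs[:-cap]   # bounded populations, oldest removed
    return agents, envs

rng = random.Random(0)
agents, envs = [0.0], [0.0]
for _ in range(100):
    agents, envs = mcc_generation(agents, envs, rng)
```

Because environments only persist when some agent can solve them, the two populations stay coupled as they drift, which is the mechanism the benchmark paper probes for open-ended expansion.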
POET
How can progress in machine learning and reinforcement learning be automated to generate its own never-ending curriculum of challenges without human intervention? The recent emergence of quality diversity (QD) algorithms offers a glimpse o…
Deep neuroevolution of recurrent and discrete world models
Neural architectures inspired by our own human cognitive system, such as the recently introduced world models, have been shown to outperform traditional deep reinforcement learning (RL) methods in a variety of different domains. Instead of…
Go-Explore: a New Approach for Hard-Exploration Problems
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games,…
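The core "remember promising states, return to them, then explore" loop behind Go-Explore can be sketched on a toy deterministic environment. This is illustrative only: the real system maps Atari frames to downscaled cells and returns via state restoration or a trained policy, whereas here a cell is just the state itself and the return is an exact action replay.

```python
import random

def step(state, action):
    # Toy deterministic chain environment: move left/right along 0..20.
    return max(0, min(20, state + action))

def go_explore(goal=20, seed=0):
    """Minimal Go-Explore exploration phase: archive every cell reached
    together with the action trajectory that reaches it, repeatedly pick
    a cell, return to it by replaying its trajectory, then explore."""
    rng = random.Random(seed)
    archive = {0: []}  # cell (here: the state itself) -> action trajectory
    while goal not in archive:
        cell = rng.choice(list(archive))      # select a cell to return to
        state, traj = 0, list(archive[cell])
        for a in traj:                        # deterministic return
            state = step(state, a)
        for _ in range(5):                    # short random exploration
            a = rng.choice([-1, 1])
            state = step(state, a)
            traj.append(a)
            if state not in archive:          # first visit: remember it
                archive[state] = list(traj)
    return archive[goal]

path = go_explore()
```

Separating returning from exploring is the key trick: exploration effort is never wasted re-discovering how to get back to the frontier.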
Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions
While the history of machine learning so far largely encompasses a series of problems posed by researchers and algorithms that learn their solutions, an important question is whether the problems themselves can be generated by the algorith…
The Emergence of Canalization and Evolvability in an Open-Ended, Interactive Evolutionary System
Many believe that an essential component for the discovery of the tremendous diversity in natural organisms was the evolution of evolvability, whereby evolution speeds up its ability to innovate by generating a more adaptive pool of offspr…
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Safe mutations for deep and recurrent neural networks through output gradients
While neuroevolution (evolving neural networks) has a successful track record across a variety of domains from reinforcement learning to artificial life, it is rarely applied to large, deep neural networks. A central reason is that while r…
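The idea of scaling mutations by output sensitivity can be illustrated for a linear model, where the gradient of output j with respect to weight W[j, k] is simply input feature x[k]. This is a toy sketch of the principle, not the paper's exact SM-G formulation for deep networks.

```python
import numpy as np

def safe_mutate(W, X, sigma=0.1, seed=0):
    """Safe-mutation sketch for a linear model y = W @ x: each column's
    perturbation is divided by that input feature's RMS magnitude over a
    batch X, so weights the output is sensitive to get smaller mutations."""
    rng = np.random.default_rng(seed)
    sens = np.sqrt((X ** 2).mean(axis=0))   # per-feature sensitivity (RMS)
    sens = np.maximum(sens, 1e-8)           # guard against division by zero
    return W + rng.normal(0, sigma, W.shape) / sens

# Feature 0 is ~100x larger than feature 3, so its weights mutate far less.
X = np.random.default_rng(1).normal(0, 1, (32, 4)) * np.array([10.0, 1.0, 1.0, 0.1])
W2 = safe_mutate(np.zeros((2, 4)), X)
```

The effect is that a single mutation step perturbs the network's *output* by a roughly uniform amount regardless of which weight it hits, which is what makes large mutations "safe".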
ES is more than just a traditional finite-difference approximator
An evolution strategy (ES) variant based on a simplification of a natural evolution strategy recently attracted attention because it performs surprisingly well in challenging deep reinforcement learning domains. It searches for neural netw…
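The ES variant in question estimates the gradient of a noise-smoothed objective from perturbed evaluations. A minimal sketch with antithetic sampling follows (illustrative only, on a hand-picked quadratic objective; the real use case is distributed evaluation of deep RL policies):

```python
import numpy as np

def es_gradient(f, theta, sigma=0.1, n=50, seed=0):
    """ES gradient estimate: perturb parameters with Gaussian noise and
    combine antithetic evaluation pairs into a finite-difference-like
    gradient of the noise-smoothed objective."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0, 1, (n, theta.size))
    diffs = np.array([f(theta + sigma * e) - f(theta - sigma * e) for e in eps])
    return (diffs[:, None] * eps).sum(axis=0) / (2 * n * sigma)

f = lambda x: -np.sum(x ** 2)            # maximize; optimum at the origin
theta = np.array([3.0, -2.0])
for _ in range(200):
    theta = theta + 0.1 * es_gradient(f, theta)
```

The paper's point is that this is *not* merely a finite-difference scheme: because the estimate targets the Gaussian-smoothed objective, ES can behave qualitatively differently from following the exact gradient, especially in rugged landscapes.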
VINE: An Open Source Interactive Data Visualization Tool for Neuroevolution
Recent advances in deep neuroevolution have demonstrated that evolutionary algorithms, such as evolution strategies (ES) and genetic algorithms (GA), can scale to train deep neural networks to solve difficult reinforcement learning (RL) pr…