Patrick Ferber
Abstraction Heuristics for Factored Tasks
One of the strongest approaches for optimal classical planning is A* search with heuristics based on abstractions of the planning task. Abstraction heuristics are well studied in planning formalisms without conditional effects such as SAS+…
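The abstract above refers to A* search guided by heuristics derived from abstractions of the planning task. As general background only (this is not the paper's implementation), a minimal A* sketch in Python, where the heuristic `h` stands in for any admissible abstraction heuristic:

```python
import heapq
import itertools

def astar(start, goal_test, successors, h):
    """A* search ordered by f(n) = g(n) + h(n).

    successors(state) yields (next_state, cost) pairs; h is an
    admissible heuristic, e.g. one computed from an abstraction
    of the task (hypothetical interface, for illustration only).
    """
    counter = itertools.count()  # tie-breaker so states are never compared
    frontier = [(h(start), next(counter), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        if g > best_g.get(state, float("inf")):
            continue  # stale queue entry; a cheaper path was found already
        for succ, cost in successors(state):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(
                    frontier, (g2 + h(succ), next(counter), g2, succ, path + [succ])
                )
    return None  # no plan exists
```

With an admissible `h`, the first goal state popped from the queue is reached by a cost-optimal path, which is why the quality of the abstraction heuristic directly determines search effort.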
Neural Network Heuristic Functions: Taking Confidence into Account
Neural networks (NN) are increasingly investigated in AI Planning, and are used successfully to learn heuristic functions. NNs commonly output not only a predicted value but also a confidence in that prediction. From the perspective of heur…
Learning and Exploiting Progress States in Greedy Best-First Search
Previous work introduced the concept of progress states. After expanding a progress state, a greedy best-first search (GBFS) will only expand states with lower heuristic values. Current methods can identify progress states only for a singl…
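As a background sketch of the idea described above (not the paper's actual method), a greedy best-first search in Python that exploits a given set of progress states: once a progress state is expanded, any queued state whose heuristic value is not strictly lower can be pruned. The `progress_states` argument is a hypothetical interface for illustration.

```python
import heapq
import itertools

def gbfs(start, goal_test, successors, h, progress_states=frozenset()):
    """Greedy best-first search ordered purely by h.

    After expanding a state in progress_states, GBFS only expands
    states with strictly lower h, so queued states at or above that
    h-value can be pruned safely.
    """
    counter = itertools.count()  # tie-breaker for the priority queue
    frontier = [(h(start), next(counter), start, [start])]
    seen = {start}
    bound = float("inf")  # h-bound set by the last expanded progress state
    while frontier:
        hval, _, state, path = heapq.heappop(frontier)
        if hval >= bound:
            continue  # pruned: a progress state with this h was expanded
        if goal_test(state):
            return path
        if state in progress_states:
            bound = min(bound, hval)
        for succ, _cost in successors(state):
            if succ not in seen:
                seen.add(succ)
                heapq.heappush(frontier, (h(succ), next(counter), succ, path + [succ]))
    return None
```

The pruning never changes which plan is found, only how many states are expanded, which is the practical benefit of identifying progress states.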
Explainable Planner Selection for Classical Planning
Since no classical planner consistently outperforms all others, it is important to select a planner that works well for a given classical planning task. The two strongest approaches for planner selection use image and graph convolutional n…
Code, benchmarks and experiment data for the AAAI 2019 paper "Deep Learning for Cost-Optimal Planning: Task-Dependent Planner Selection"
This bundle contains code, scripts and benchmarks for reproducing all experiments reported in the paper. It also contains the data generated for the paper. Except for the code base, it contains the same files as the Zenodo entry for our IP…
Code, Benchmarks and Experiment Data for the ICAPS 2022 HSDIP workshop paper "A Comparison of Abstraction Heuristics for Rubik's Cube"
This is a collection of code, data, and benchmarks for reproducing all experiments reported in the paper. `buechner-et-al-icaps2022wshsdip-code.zip` contains the implementation, which is based on Scorpion (in turn based on Fast Downward 21…
Neural Network Heuristic Functions for Classical Planning: Bootstrapping and Comparison to Other Methods
How can we train neural network (NN) heuristic functions for classical planning, using only states as the NN input? Prior work addressed this question by (a) per-instance imitation learning and/or (b) per-domain learning. The former limits…
Debugging a Policy: Automatic Action-Policy Testing in AI Planning
Testing is a promising way to gain trust in neural action policies π. Previous work on policy testing in sequential decision making targeted environment behavior leading to failure conditions. But if the failure is unavoidable given that b…
Neural Network Heuristic Functions for Classical Planning: Reinforcement Learning and Comparison to Other Methods
How can we train neural network (NN) heuristic functions for classical planning, using only states as the NN input? Prior work addressed this question by (a) supervised learning and/or (b) per-domain learning generalizing over problem in- …
Online Planner Selection with Graph Neural Networks and Adaptive Scheduling
Automated planning is one of the foundational areas of AI. Since no single planner can work well for all tasks and domains, portfolio-based techniques have become increasingly popular in recent years. In particular, deep learning emerges a…
Reinforcement Learning for Planning Heuristics
Informed heuristics are essential for the success of heuristic search algorithms. But it is difficult to develop a new heuristic which is informed on various tasks. Instead, we propose a framework that trains a neural network as heurist…
Neural Network Heuristics for Classical Planning: A Study of Hyperparameter Space
Neural networks (NN) have been shown to be powerful state-value predictors in several complex games. Can similar successes be achieved in classical planning? Towards a systematic exploration of that question, we contribute a study of h…
Explainable Planner Selection
Since no classical planner consistently outperforms all others, it is important to select a planner that works well for a given classical planning task. The two strongest approaches for planner selection use image and graph convolutional …
Simplified Planner Selection
There exists no planning algorithm that outperforms all others. Therefore, it is important to know which algorithm works well on a task. A recently published approach uses either image or graph convolutional neural networks to solve th…
Deep Learning for Cost-Optimal Planning: Task-Dependent Planner Selection
As classical planning is known to be computationally hard, no single planner is expected to work well across many planning domains. One solution to this problem is to use online portfolio planners that select a planner for a given task. Th…
IPC: A Benchmark Data Set for Learning with Graph-Structured Data
Benchmark data sets are an indispensable ingredient of the evaluation of graph-based machine learning methods. We release a new data set, compiled from International Planning Competitions (IPC), for benchmarking graph classification, regre…