Learning stochastic decision trees
Related Concepts
Decision tree
Tree (set theory)
Mathematics
Bayes' theorem
Set (abstract data type)
Combinatorics
Noise (video)
Algorithm
Discrete mathematics
Computer science
Artificial intelligence
Statistics
Bayesian probability
Image (mathematics)
Programming language
Guy Blanc, Jane Lange, Li-Yang Tan · 2021 · Open Access
DOI: https://doi.org/10.48550/arxiv.2105.03594 · OA: W3161760939
We give a quasipolynomial-time algorithm for learning stochastic decision trees that is optimally resilient to adversarial noise. Given an $\eta$-corrupted set of uniform random samples labeled by a size-$s$ stochastic decision tree, our algorithm runs in time $n^{O(\log(s/\varepsilon)/\varepsilon^2)}$ and returns a hypothesis whose error is within an additive $2\eta + \varepsilon$ of the Bayes optimal error. An additive $2\eta$ is the information-theoretic minimum. Previously, no non-trivial algorithm with a guarantee of $O(\eta) + \varepsilon$ was known, even for weaker noise models. Our algorithm is furthermore proper, returning a hypothesis that is itself a decision tree; previously no such algorithm was known even in the noiseless setting.
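
The abstract fixes a precise learning model: uniform random inputs labeled by a size-$s$ stochastic decision tree, an $\eta$ fraction of corrupted labels, and error measured against the Bayes optimal predictor. The Python sketch below is not the paper's algorithm; it only instantiates that data model so the $2\eta$ slack in the target guarantee is concrete. Every class name and parameter is an illustrative assumption, and the random label flipping stands in for the adversarial corruption considered in the paper.

import random

class Leaf:
    def __init__(self, p_one):
        self.p_one = p_one          # leaf outputs label 1 with probability p_one

class Node:
    def __init__(self, var, left, right):
        self.var = var              # index of the queried input coordinate
        self.left, self.right = left, right

def sample_label(tree, x, rng):
    """Follow x down the tree; at a leaf, draw the stochastic label."""
    while isinstance(tree, Node):
        tree = tree.right if x[tree.var] else tree.left
    return 1 if rng.random() < tree.p_one else 0

def leaf_prob(tree, x):
    """Pr[label = 1 | x], the conditional mean used by the Bayes predictor."""
    while isinstance(tree, Node):
        tree = tree.right if x[tree.var] else tree.left
    return tree.p_one

def bayes_error(tree, xs):
    """Empirical Bayes optimal error: E_x[min(p(x), 1 - p(x))]."""
    return sum(min(leaf_prob(tree, x), 1 - leaf_prob(tree, x)) for x in xs) / len(xs)

rng = random.Random(0)
n, m, eta = 8, 10_000, 0.05         # dimension, sample count, corruption rate

# A toy size-3 stochastic decision tree querying x_0 and x_1 (illustrative).
tree = Node(0, Leaf(0.1), Node(1, Leaf(0.8), Leaf(0.3)))

# Uniform random samples labeled by the stochastic tree.
xs = [[rng.randint(0, 1) for _ in range(n)] for _ in range(m)]
ys = [sample_label(tree, x, rng) for x in xs]

# "eta-corrupted" samples: here the corruption is random label flipping,
# a stand-in for the adversarial corruption in the paper's model.
for i in rng.sample(range(m), int(eta * m)):
    ys[i] ^= 1

print(f"Bayes optimal error ~ {bayes_error(tree, xs):.3f}")
print(f"target guarantee: Bayes + 2*eta + eps = Bayes + {2 * eta:.2f} + eps")

Under this model, no learner can beat the Bayes optimal error, and the corrupted labels can cost up to an additional $2\eta$, which is why the paper's $2\eta + \varepsilon$ guarantee is described as information-theoretically optimal.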