Janne Leppä-aho
Quotient Normalized Maximum Likelihood Criterion for Learning Bayesian Network Structures
We introduce an information theoretic criterion for Bayesian network structure learning which we call quotient normalized maximum likelihood (qNML). In contrast to the closely related factorized normalized maximum likelihood criterion, qNM…
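As background, here is a hedged sketch of the idea behind a quotient-style NML score; the notation is mine and not quoted from the paper, so consult the article for the exact form. The local score of each variable is the log-quotient of two normalized maximum likelihood (NML) distributions, one over the variable together with its parents and one over the parents alone:

```latex
% Sketch under my own notation: D is the data, Pa_i the parent set of variable i in a DAG.
\[
  s^{\mathrm{qNML}}_i(D) \;=\;
  \log \frac{P_{\mathrm{NML}}\!\left(D_{\{i\}\cup \mathrm{Pa}_i}\right)}
            {P_{\mathrm{NML}}\!\left(D_{\mathrm{Pa}_i}\right)},
  \qquad
  P_{\mathrm{NML}}(D) \;=\;
  \frac{P\!\left(D \mid \hat\theta(D)\right)}
       {\sum_{D'} P\!\left(D' \mid \hat\theta(D')\right)},
\]
% where \hat\theta(D) is the maximum likelihood estimate, and the score of a
% candidate network is the sum of the local scores over all variables.
```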
Methods for Learning Directed and Undirected Graphical Models
Probabilistic graphical models provide a general framework for modeling relationships between multiple random variables. The main tool in this framework is a mathematical object called a graph, which visualizes the assertions of conditional i…
Bayesian network Fisher kernel for categorical feature spaces
We address the problem of defining similarity between vectors of possibly dependent categorical variables by deriving formulas for the Fisher kernel for Bayesian networks. While both Bayesian networks and Fisher kernels are established tec…
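For orientation, the generic Fisher kernel for a parametric model p(x | θ) is a standard construction (this is general background, not the paper's Bayesian-network-specific derivation):

```latex
% Standard Fisher kernel definitions (general background, notation mine):
\[
  U_x \;=\; \nabla_{\theta} \log p(x \mid \theta),
  \qquad
  K(x, x') \;=\; U_x^{\top} \, \mathcal{I}^{-1} \, U_{x'},
  \qquad
  \mathcal{I} \;=\; \mathbb{E}_{x}\!\left[ U_x U_x^{\top} \right],
\]
% Per its abstract, the paper derives these quantities for the case where
% p(x | theta) is a Bayesian network over categorical variables.
```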
Learning non-parametric Markov networks with mutual information
We propose a method for learning Markov network structures for continuous data without invoking any assumptions about the distribution of the variables. The method makes use of previous work on a non-parametric estimator for mutual informa…
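The abstract is truncated here, so the following is only a rough, hypothetical illustration of the ingredient it names: estimating mutual information non-parametrically (here with scikit-learn's kNN-based estimator) and using the estimates to decide which edges to keep. This is a naive pairwise screen, not the authors' actual structure-learning procedure; `mi_edge_screen` and its threshold are made-up names for illustration.

```python
# Hedged illustration only: a naive pairwise-MI edge screen, NOT the method of the
# paper listed above.
import numpy as np
from sklearn.feature_selection import mutual_info_regression  # kNN-based MI estimator


def mi_edge_screen(X, threshold=0.05, n_neighbors=3):
    """Return candidate undirected edges (i, j) whose estimated MI exceeds `threshold`.

    X: (n_samples, n_vars) array of continuous data; no distributional assumptions
    beyond what the kNN estimator itself requires.
    """
    n_vars = X.shape[1]
    edges = []
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            mi = mutual_info_regression(X[:, [i]], X[:, j], n_neighbors=n_neighbors)[0]
            if mi > threshold:
                edges.append((i, j))
    return edges


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.normal(size=500)
    X = np.column_stack([z, z + 0.1 * rng.normal(size=500), rng.normal(size=500)])
    print(mi_edge_screen(X))  # expect the strongly dependent pair (0, 1)
```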
On the inconsistency of ℓ1-penalised sparse precision matrix estimation
Our results demonstrate that ℓ1-penalised undirected network structure learning methods are unable to reliably learn many sparse bipartite graph structures, which arise often in gene expression data. Users of such methods should be aware …
On the inconsistency of ℓ1-penalised sparse precision matrix estimation
Various ℓ1-penalised estimation methods such as graphical lasso and CLIME are widely used for sparse precision matrix estimation. Many of these methods have been shown to be consistent under various quantitative assumptions about the…
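For readers unfamiliar with the estimators named in the abstract, here is a minimal, hedged usage sketch of one of them (graphical lasso) via scikit-learn. It only shows how such an ℓ1-penalised precision matrix estimate is typically obtained; it does not reproduce or bear on the paper's inconsistency results.

```python
# Usage sketch: fitting an L1-penalised precision matrix with scikit-learn's GraphicalLasso.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.multivariate_normal(
    mean=np.zeros(4),
    cov=np.array([[1.0, 0.5, 0.0, 0.0],
                  [0.5, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.3],
                  [0.0, 0.0, 0.3, 1.0]]),
    size=1000,
)

model = GraphicalLasso(alpha=0.05).fit(X)   # alpha is the L1 penalty strength
precision = model.precision_                # estimated (sparse) precision matrix
support = np.abs(precision) > 1e-4          # implied undirected graph structure
print(np.round(precision, 2))
print(support)
```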