Runar Helin
A Methodology for Transparent Logic-Based Classification Using a Multi-Task Convolutional Tsetlin Machine
The Tsetlin Machine (TM) is a novel machine learning paradigm that employs finite-state automata for learning and utilizes propositional logic to represent patterns. Due to their simplistic approach, TMs are inherently more interpretable tha…
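As an illustration of the general idea (not the multi-task convolutional variant studied in this article), a TM clause is a conjunction of Boolean literals, and the class decision is a thresholded sum of clause votes. The features, clauses, and weights below are hypothetical:

```python
import numpy as np

def clause_output(x, include_pos, include_neg):
    """x: Boolean feature vector; include_pos / include_neg select which
    positive literals (x_k) and negated literals (NOT x_k) the clause uses."""
    pos_ok = np.all(x[include_pos] == 1)   # all included positive literals are true
    neg_ok = np.all(x[include_neg] == 0)   # all included negated literals are true
    return int(pos_ok and neg_ok)

x = np.array([1, 0, 1, 1])                 # a binarized input

# Two hypothetical learned clauses with their votes (+1 / -1):
clauses = [
    (np.array([0, 2]), np.array([1]), +1), # IF x0 AND x2 AND NOT x1 THEN vote +1
    (np.array([3]),    np.array([0]), -1), # IF x3 AND NOT x0 THEN vote -1
]

votes = sum(w * clause_output(x, p, n) for p, n, w in clauses)
print("summed votes:", votes, "-> class:", int(votes >= 0))
```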
The Tsetlin Machine Goes Deep: Logical Learning and Reasoning With Graphs
Pattern recognition with concise and flat AND-rules makes the Tsetlin Machine (TM) both interpretable and efficient, while the power of Tsetlin automata enables accuracy comparable to deep learning on an increasing number of datasets. We i…
Uncertainty Quantification in the Tsetlin Machine
Data modeling using Tsetlin machines (TMs) is all about building logical rules from the data features. The decisions of the model are based on a combination of these logical rules. Hence, the model is fully transparent and it is possible t…
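Because the rules are explicit, every prediction can be traced back to the clauses that fired. The sketch below uses made-up rules to show this tracing and the raw vote margin; it does not reproduce the uncertainty quantification method of the article:

```python
# Hypothetical learned rules (name, condition, vote); in a trained model these
# are read directly from the clauses.
rules = [
    ("x0 AND NOT x1", lambda x: x[0] == 1 and x[1] == 0, +1),
    ("x2 AND x3",     lambda x: x[2] == 1 and x[3] == 1, +1),
    ("NOT x0 AND x1", lambda x: x[0] == 0 and x[1] == 1, -1),
]

x = [1, 0, 1, 1]
fired = [(name, vote) for name, cond, vote in rules if cond(x)]
margin = sum(vote for _, vote in fired)

print("rules that fired:", [name for name, _ in fired])  # full decision trace
print("vote margin:", margin)                            # crude proxy for decision strength
```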
Non-linear shrinking of linear model errors
The paper presents a residual modelling scheme using modern neural network architectures. Furthermore, two novel extensions of residual modelling for classification tasks are proposed. The study is seen as a step towards explainable AI, wi…
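A minimal sketch of residual modelling in a regression setting, assuming a linear base model whose residuals are fitted by a small neural network (the data and model sizes are placeholders, not the paper's setup):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + np.sin(3 * X[:, 0]) + rng.normal(scale=0.1, size=500)

base = LinearRegression().fit(X, y)
residuals = y - base.predict(X)                    # what the linear model misses

shrinker = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
shrinker.fit(X, residuals)                         # model the non-linear remainder

y_hat = base.predict(X) + shrinker.predict(X)      # combined prediction
print("linear-only MSE:", round(float(np.mean(residuals ** 2)), 4))
print("combined MSE:   ", round(float(np.mean((y - y_hat) ** 2)), 4))
```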
On the possible benefits of deep learning for spectral preprocessing
Preprocessing is a mandatory step in most types of spectroscopy and spectrometry. The choice of preprocessing method depends on the data being analysed, and to get the preprocessing right, domain knowledge or trial and error is required. G…
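As a concrete example of such a preprocessing step, the sketch below applies the standard normal variate (SNV) transform to synthetic spectra; SNV is a common baseline, not necessarily the method examined in the article:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row)."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

spectra = np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=(10, 200))  # synthetic spectra
corrected = snv(spectra)
print(corrected.mean(axis=1)[:3])  # ~0 for every spectrum
print(corrected.std(axis=1)[:3])   # ~1 for every spectrum
```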
Multiblock-Networks: A Neural Network Analog to Component Based Methods for Multi-Source Data
Training predictive models on datasets from multiple sources is a common, yet challenging setup in applied machine learning. Even though model interpretation has attracted more attention in recent years, many modeling approaches still focu…
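A minimal sketch of a multiblock network in this spirit, assuming two input blocks that each get their own sub-network before a shared head combines them (layer sizes and block dimensions are illustrative, not the architecture from the paper):

```python
import torch
import torch.nn as nn

class MultiblockNet(nn.Module):
    def __init__(self, dim_block1, dim_block2, latent=8, n_outputs=1):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(dim_block1, latent), nn.ReLU())  # sub-network for block 1
        self.enc2 = nn.Sequential(nn.Linear(dim_block2, latent), nn.ReLU())  # sub-network for block 2
        self.head = nn.Linear(2 * latent, n_outputs)                         # shared predictive head

    def forward(self, x1, x2):
        z = torch.cat([self.enc1(x1), self.enc2(x2)], dim=1)                 # combine block representations
        return self.head(z)

model = MultiblockNet(dim_block1=20, dim_block2=5)
x1, x2 = torch.randn(32, 20), torch.randn(32, 5)                             # two data sources, same samples
print(model(x1, x2).shape)                                                   # torch.Size([32, 1])
```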
Ranking Feature-Block Importance in Artificial Multiblock Neural Networks
In artificial neural networks, understanding the contributions of input features to the prediction fosters model explainability and delivers relevant information about the dataset. While typical setups for feature importance ranking assess…
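One simple way to rank feature-block importance is block-wise permutation: shuffle all features of a block jointly and measure the resulting drop in performance. The sketch below applies this generic approach with a linear model; it is not necessarily the ranking method proposed in the article:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = 2 * X[:, 0] + 0.1 * X[:, 5] + rng.normal(scale=0.1, size=300)
blocks = {"block_A": [0, 1, 2, 3], "block_B": [4, 5, 6, 7]}   # hypothetical feature blocks

model = Ridge().fit(X, y)
baseline = r2_score(y, model.predict(X))

for name, cols in blocks.items():
    X_perm = X.copy()
    shuffled_rows = rng.permutation(len(X_perm))
    X_perm[:, cols] = X[shuffled_rows][:, cols]               # shuffle the whole block jointly
    drop = baseline - r2_score(y, model.predict(X_perm))
    print(f"{name}: importance (R^2 drop) = {drop:.3f}")
```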
Existence and detectability of very fast oscillations in spiking network models
Extremely high-frequency oscillations are present in the dynamics of different simulated neuronal networks but have not yet been observed in experimental recordings. This raises the question of the origin of these high-frequency oscillations. Fo…
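A typical way to look for such oscillations in a simulated network is to bin the population activity into a rate signal and inspect its power spectrum, for example with Welch's method; the synthetic signal below stands in for actual simulation output:

```python
import numpy as np
from scipy.signal import welch

fs = 10_000                                   # sampling rate of the binned rate signal (Hz)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
rate = 5 + np.sin(2 * np.pi * 300 * t) + rng.normal(scale=0.5, size=t.size)  # stand-in with a 300 Hz component

freqs, psd = welch(rate, fs=fs, nperseg=4096)
mask = freqs > 100                            # ignore the slow components
peak = freqs[mask][np.argmax(psd[mask])]
print(f"dominant fast oscillation near {peak:.0f} Hz")
```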