Tabea E. Röber
Why we do need explainable AI for healthcare (Open Access)
The recent uptake in certified Artificial Intelligence (AI) tools for healthcare applications has renewed the debate around their adoption. Explainable AI, the sub-discipline promising to render AI devices more transparent and trustworthy,…
Rule generation for classification: Scalability, interpretability, and fairness (Open Access)
We introduce a new rule-based optimization method for classification with constraints. The proposed method leverages column generation for linear programming, and hence, is scalable to large datasets. The resulting pricing subproblem is sh…
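For context, a generic column-generation setup for rule selection can be sketched as follows; the notation (rule weights w_j, coverage indicators a_{ij}, duals \lambda_i) is standard textbook notation and not necessarily the exact formulation used in the paper. The restricted master problem optimizes over the rules generated so far,

\min_{w \ge 0} \sum_{j \in \mathcal{R}'} c_j w_j \quad \text{s.t.} \quad \sum_{j \in \mathcal{R}'} a_{ij} w_j \ge 1 \quad \forall i,

where \mathcal{R}' is the current pool of rules, c_j is a rule cost, and a_{ij} = 1 if rule j covers sample i. Given the duals \lambda_i of the covering constraints, the pricing subproblem searches for a new rule with negative reduced cost \bar{c}_j = c_j - \sum_i a_{ij}\lambda_i; if no such rule exists, the linear program is solved over the full (exponentially large) rule set without ever enumerating it.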
Clinicians' Voice: Fundamental Considerations for XAI in Healthcare (Open Access)
Explainable AI (XAI) holds the promise of advancing the implementation and adoption of AI-based tools in practice, especially in high-stakes environments like healthcare. However, most of the current research lacks input from end users, an…
Fixing confirmation bias in feature attribution methods via semantic match (Open Access)
Feature attribution methods have become a staple method to disentangle the complex behavior of black box models. Despite their success, some scholars have argued that such methods suffer from a serious flaw: they do not allow a reliable in…
Finding Regions of Counterfactual Explanations via Robust Optimization (Open Access)
Counterfactual explanations play an important role in detecting bias and improving the explainability of data-driven classification models. A counterfactual explanation (CE) is a minimal perturbed data point for which the decision of the m…
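As a rough sketch, using the standard formulation rather than the paper's specific model, a counterfactual explanation x' for an instance x under a classifier f solves

\min_{x'} \; d(x, x') \quad \text{s.t.} \quad f(x') \neq f(x), \quad x' \in \mathcal{X},

where d is a chosen distance (for example a weighted \ell_1 norm) and \mathcal{X} encodes feasibility constraints on the features. Per its title, this paper is concerned with finding whole regions of such counterfactual points via robust optimization rather than a single perturbed instance.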
Semantic match: Debugging feature attribution methods in XAI for healthcare (Open Access)
The recent spike in certified Artificial Intelligence (AI) tools for healthcare has renewed the debate around adoption of this technology. One thread of such debate concerns Explainable AI (XAI) and its promise to render AI devices more tr…
Counterfactual Explanations Using Optimization With Constraint Learning (Open Access)
To increase the adoption of counterfactual explanations in practice, several criteria that these should adhere to have been put forward in the literature. We propose counterfactual explanations using optimization with constraint learning (…
Why we do need Explainable AI for Healthcare (Open Access)
The recent spike in certified Artificial Intelligence (AI) tools for healthcare has renewed the debate around adoption of this technology. One thread of such debate concerns Explainable AI and its promise to render AI devices more transpar…
Rule Generation for Classification: Scalability, Interpretability, and Fairness (Open Access)
We introduce a new rule-based optimization method for classification with constraints. The proposed method leverages column generation for linear programming, and hence, is scalable to large datasets. The resulting pricing subproblem is sh…