Marcel Robeer
Explainable AI Is No Silver Bullet: Towards a Contextual Understanding of Appropriate Reliance on AI in Law Enforcement
Police officers increasingly rely on AI recommendations in their decision-making processes. This chapter delves into the potential and challenges of AI in law enforcement, focusing on the role of Explainable AI (XAI) on appropriate relianc…
The Explabox: Model-Agnostic Machine Learning Transparency & Analysis
We present the Explabox: an open-source toolkit for transparent and responsible machine learning (ML) model development and usage. Explabox aids in achieving explainable, fair and robust models by employing a four-step strategy: explore, e…
‘Just like I thought’: Street‐level bureaucrats trust AI recommendations if they confirm their professional judgment
Artificial Intelligence is increasingly used to support and improve street‐level decision‐making, but empirical evidence on how street‐level bureaucrats' work is affected by AI technologies is scarce. We investigate how AI recommendations …
Generating Realistic Natural Language Counterfactuals
Counterfactuals are a valuable means for understanding decisions made by ML systems. However, the counterfactuals generated by the methods currently available for natural language text are either unrealistic or introduce imperceptible chan…
Contrastive Explanations with Local Foil Trees
Recent advances in interpretable Machine Learning (iML) and eXplainable AI (XAI) construct explanations based on the importance of features in classification tasks. However, in a high-dimensional feature space this approach may become unfe…
Contrastive Explanation for Machine Learning
Recent advances in Interpretable Machine Learning (iML) and Explainable Artificial Intelligence (XAI) have shown promising approaches that are able to provide human-understandable explanations. However, these approaches have …
Extracting conceptual models from user stories with Visual Narrator
Extracting conceptual models from natural language requirements can help identify dependencies, redundancies, and conflicts between requirements via a holistic and easy-to-understand view that is generated from lengthy textual specificatio…