Elize Herrewijnen
The Explabox: Model-Agnostic Machine Learning Transparency & Analysis
We present the Explabox: an open-source toolkit for transparent and responsible machine learning (ML) model development and usage. Explabox aids in achieving explainable, fair and robust models by employing a four-step strategy: explore, e…
Requirements and Attitudes towards Explainable AI in Law Enforcement
Decision-making aided by Artificial Intelligence in high-stakes domains such as law enforcement must be informed and accountable. Thus, designing explainable artificial intelligence (XAI) for such settings is a key social concern. Yet, expla…
Human-annotated rationales and explainable text classification: a survey
Asking annotators to explain “why” they labeled an instance yields annotator rationales: natural language explanations that provide reasons for classifications. In this work, we survey the collection and use of annotator rationales. Human-…
Machine-translated texts from English to Polish show a potential for typological explanations in Source Language Identification
This work presents a case study investigating (1) the achievability of extracting typological features from Polish texts, and (2) their contrastive power to discriminate between machine-translated texts from English. The findings indic…
Machine-annotated rationales: faithfully explaining machine learning models for text classification
Artificial intelligence is not always interpretable to humans at first sight. Especially machine learning models with hidden states or high complexity remain difficult to understand. Explanations for such machine learning models can be f…