Oliver Eberle
Position: We Need An Algorithmic Understanding of Generative AI
What algorithms do LLMs actually learn and use to solve problems? Studies addressing this question are sparse, as research priorities are focused on improving performance through scale, leaving a theoretical and empirical gap in understand…
Trick or Neat: Adversarial Ambiguity and Language Model Evaluation
Detecting ambiguity is important for language understanding, including uncertainty estimation, humour detection, and processing garden path sentences. We assess language models' sensitivity to ambiguity by introducing an adversarial ambigu…
Historical insights at scale: A corpus-wide machine learning analysis of early modern astronomic tables
Understanding the evolution and dissemination of human knowledge over time faces challenges due to the abundance of historical materials and limited specialist resources. However, the digitization of historical archives presents an opportu…
Comparing zero-shot self-explanations with human rationales in text classification
Instruction-tuned LLMs are able to provide an explanation about their output to users by generating self-explanations. These do not require gradient computations or the application of possibly complex XAI methods. In this paper, we analyse…
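As a rough illustration of what a "self-explanation" means here: it is elicited by prompting alone, with no gradients or separate XAI method. The sketch below shows one way such a prompt could be built; the prompt wording and the generic `generate` callable are assumptions for illustration, not the setup used in the paper.

```python
# Minimal sketch: eliciting a zero-shot self-explanation by prompting alone.
# The prompt wording and the generic `generate` callable are illustrative
# assumptions; no gradients or external XAI method are involved.

def build_self_explanation_prompt(text: str, labels: list[str]) -> str:
    """Ask the model to classify `text` and to list the input words
    that most influenced its decision (its self-explanation)."""
    return (
        f"Classify the following text as one of {labels}.\n"
        f"Text: {text}\n"
        "Answer with the label, then list the words from the text "
        "that were most important for your decision."
    )

def self_explain(generate, text: str, labels: list[str]) -> str:
    """`generate` is any callable mapping a prompt string to model output,
    e.g. a thin wrapper around a chat-completion API."""
    return generate(build_self_explanation_prompt(text, labels))

# Example with a stand-in model:
if __name__ == "__main__":
    dummy = lambda prompt: "positive; important words: 'wonderful', 'loved'"
    print(self_explain(dummy, "A wonderful film, I loved it.", ["positive", "negative"]))
```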
MambaLRP: Explaining Selective State Space Sequence Models
Recent sequence modeling approaches using selective state space sequence models, referred to as Mamba models, have seen a surge of interest. These models allow efficient processing of long sequences in linear time and are rapidly being ado…
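For context, layer-wise relevance propagation (LRP) redistributes the model output backwards through the network so that relevance is conserved. The sketch below shows the basic LRP-epsilon rule for a single linear layer and checks conservation; it illustrates the general principle only, not the Mamba-specific propagation rules derived in the paper.

```python
import numpy as np

# Minimal sketch of the LRP-epsilon rule for one linear layer, illustrating
# relevance conservation. This shows the general LRP principle only, not the
# Mamba-specific propagation rules proposed in the paper.

def lrp_linear(x, W, b, R_out, eps=1e-6):
    """Redistribute relevance R_out at the layer output back onto the inputs x."""
    z = x @ W + b                       # forward pre-activations, shape (out,)
    s = R_out / (z + eps * np.sign(z))  # stabilised ratio
    R_in = x * (W @ s)                  # input relevance, shape (in,)
    return R_in

rng = np.random.default_rng(0)
x = rng.normal(size=8)
W = rng.normal(size=(8, 4))
b = np.zeros(4)
R_out = x @ W + b                       # start from the layer output itself
R_in = lrp_linear(x, W, b, R_out)
print(R_out.sum(), R_in.sum())          # approximately equal -> conservation
```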
xMIL: Insightful Explanations for Multiple Instance Learning in Histopathology
Multiple instance learning (MIL) is an effective and widely used approach for weakly supervised machine learning. In histopathology, MIL models have achieved remarkable success in tasks like tumor detection, biomarker prediction, and outco…
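To make the MIL setting concrete, the sketch below shows a generic attention-based MIL classifier: a bag of instance embeddings (e.g. tissue patches) is pooled with learned attention weights into one bag representation, which is then classified. This is a standard attention-MIL example for illustration, not the specific models or the explanation method (xMIL) studied in the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of attention-based multiple instance learning (MIL).
# Generic example only, not the paper's model or its explanation method.

class AttentionMIL(nn.Module):
    def __init__(self, in_dim=256, hidden=128, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(in_dim, n_classes)

    def forward(self, bag):                            # bag: (n_instances, in_dim)
        a = torch.softmax(self.attention(bag), dim=0)  # (n_instances, 1) weights
        z = (a * bag).sum(dim=0)                       # weighted bag embedding
        return self.classifier(z), a.squeeze(-1)       # logits, instance weights

bag = torch.randn(50, 256)                             # one bag of 50 patch features
logits, weights = AttentionMIL()(bag)
print(logits.shape, weights.shape)                     # torch.Size([2]) torch.Size([50])
```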
Explaining Text Similarity in Transformer Models
As Transformers have become state-of-the-art models for natural language processing (NLP) tasks, the need to understand and explain their predictions is increasingly apparent. Especially in unsupervised applications, such as information re…
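As a toy illustration of the kind of attribution this line of work aims for: a dot-product similarity between two mean-pooled sentence embeddings decomposes exactly into token-pair contributions. The sketch below shows that identity on random embeddings; the paper's method instead propagates relevance through the full Transformer.

```python
import numpy as np

# Minimal sketch: a dot-product similarity of mean-pooled token embeddings
# decomposes exactly into token-pair contributions. Illustration only; the
# paper's method propagates relevance through the full Transformer.

rng = np.random.default_rng(0)
E1 = rng.normal(size=(5, 16))          # token embeddings of sentence 1
E2 = rng.normal(size=(7, 16))          # token embeddings of sentence 2

sim = E1.mean(0) @ E2.mean(0)          # similarity of mean-pooled embeddings
contrib = (E1 @ E2.T) / (E1.shape[0] * E2.shape[0])  # token-pair contributions

print(np.isclose(sim, contrib.sum()))  # True: contributions sum to the score
```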
Evaluating Webcam-based Gaze Data as an Alternative for Human Rationale Annotations
Rationales in the form of manually annotated input spans usually serve as ground truth when evaluating explainability methods in NLP. They are, however, time-consuming and often biased by the annotation process. In this paper, we debate wh…
Rather a Nurse than a Physician -- Contrastive Explanations under Investigation
Contrastive explanations, where one decision is explained in contrast to another, are supposed to be closer to how humans explain a decision than non-contrastive explanations, where the decision is not necessarily referenced to an alternat…
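One common way to construct a contrastive explanation is to attribute the difference between the target-class logit and the foil-class logit to the input. The sketch below shows that gradient-based construction with a stand-in classifier; it is one typical instance of the idea, not necessarily the exact set of methods compared in the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of a gradient-based contrastive explanation: attribute the
# difference between the target logit ("nurse") and the foil logit ("physician")
# to the input features. Stand-in model; one common construction only.

model = nn.Linear(10, 3)               # stand-in classifier with 3 classes
x = torch.randn(10, requires_grad=True)

target, foil = 1, 2
logits = model(x)
(logits[target] - logits[foil]).backward()

contrastive_saliency = x.grad          # why `target` rather than `foil`
print(contrastive_saliency)
```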
Insightful analysis of historical sources at scales beyond human capabilities using unsupervised Machine Learning and XAI
Historical materials are abundant. Yet, piecing together how human knowledge has evolved and spread both diachronically and synchronically remains a challenge that can so far only be very selectively addressed. The vast volume of materials…
Explainability and transparency in the realm of digital humanities: toward a historian XAI
The recent advancements in the field of Artificial Intelligence (AI) translated to an increased adoption of AI technology in the humanities, which is often challenged by the limited amount of annotated data, as well as its heterogeneity. D…
An Ever-Expanding Humanities Knowledge Graph: The Sphaera Corpus at the Intersection of Humanities, Data Management, and Machine Learning
The Sphere project stands at the intersection of the humanities and information sciences. The project aims to better understand the evolution of knowledge in the early modern period by studying a collection of 359 textbook editions publish…
Do Transformer Models Show Similar Attention Patterns to Task-Specific Human Gaze?
Learned self-attention functions in state-of-the-art NLP models often correlate with human attention. We investigate whether self-attention in large-scale pre-trained language models is as predictive of human eye fixation patterns during t…
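Comparing model importance with gaze data typically comes down to correlating per-token scores. The sketch below illustrates this with a rank correlation on hypothetical attention and fixation vectors; in the paper these are derived from pre-trained Transformers and eye-tracking corpora.

```python
import numpy as np
from scipy.stats import spearmanr

# Minimal sketch: correlate per-token model importance (e.g. averaged attention)
# with per-token human fixation durations. The vectors below are hypothetical.

attention_importance = np.array([0.05, 0.20, 0.02, 0.35, 0.10, 0.28])
fixation_duration_ms = np.array([120, 310, 90, 420, 150, 380])

rho, p = spearmanr(attention_importance, fixation_duration_ms)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```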
XAI for Transformers: Better Explanations through Conservative Propagation
Transformers have become an important workhorse of machine learning, with numerous applications. This necessitates the development of reliable methods for increasing their transparency. Multiple interpretability methods, often based on gra…
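"Conservative" here refers to the property that per-token attributions should sum to the output score being explained. The sketch below only shows how that property can be checked for any attribution; the numbers are hypothetical, and the paper's contribution is adjusting gradient-based propagation through attention and LayerNorm so that the property holds.

```python
import numpy as np

# Minimal sketch of the conservation property: an explanation is conservative
# if the per-token relevance scores sum (approximately) to the explained
# model output. Hypothetical numbers for illustration.

def conservation_gap(relevances, output_score):
    return abs(relevances.sum() - output_score)

relevances = np.array([0.4, -0.1, 0.3, 0.2])       # hypothetical token attributions
output_score = 0.8                                 # hypothetical explained logit
print(conservation_gap(relevances, output_score))  # 0.0 -> conservative
```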
Sacrobosco Tables Dataset
This dataset contains images of pages containing tables from the Sphaera Corpus. Each image is named according to the following standard: bookID_author_partOfTitle_publicationYear_page.jpg. Selected metadata is located in the sphaera_tables…
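The stated naming convention can be parsed directly. The sketch below assumes the individual fields themselves contain no underscores; the example filename is hypothetical.

```python
from dataclasses import dataclass
from pathlib import Path

# Minimal sketch for parsing the stated naming convention
# bookID_author_partOfTitle_publicationYear_page.jpg, assuming the individual
# fields contain no underscores. The example filename is hypothetical.

@dataclass
class TablePage:
    book_id: str
    author: str
    part_of_title: str
    publication_year: int
    page: int

def parse_filename(name: str) -> TablePage:
    book_id, author, title, year, page = Path(name).stem.split("_")
    return TablePage(book_id, author, title, int(year), int(page))

print(parse_filename("101_Sacrobosco_Sphaera_1550_42.jpg"))
```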
Higher-Order Explanations of Graph Neural Networks via Relevant Walks
Graph Neural Networks (GNNs) are a popular approach for predicting graph structured data. As GNNs tightly entangle the input graph into the neural network structure, common explainable AI approaches are not applicable. To a large extent, G…
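The remark that GNNs "tightly entangle the input graph into the neural network structure" is easy to see in a single GCN-style layer: the (normalised) adjacency matrix multiplies the hidden state at every layer, which is why the paper attributes predictions to walks (sequences of edges) rather than to single features. The sketch below shows one such generic layer, not the paper's models or its walk-relevance computation.

```python
import numpy as np

# Minimal sketch of one GCN-style message-passing layer, making concrete why
# the input graph is "entangled" in the network: the normalised adjacency
# matrix A_hat enters every layer. Generic layer, not the paper's method.

def gcn_layer(A_hat, H, W):
    """H' = relu(A_hat @ H @ W): each node aggregates its neighbours' features."""
    return np.maximum(A_hat @ H @ W, 0)

A = np.array([[0, 1, 0],               # 3-node example graph
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_hat = (A + np.eye(3)) / (A + np.eye(3)).sum(1, keepdims=True)  # self-loops, row-normalised

H = np.eye(3)                          # one-hot node features
W = np.random.default_rng(0).normal(size=(3, 4))
print(gcn_layer(A_hat, H, W).shape)    # (3, 4): new node representations
```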
Building and Interpreting Deep Similarity Models
Many learning algorithms such as kernel machines, nearest neighbors, clustering, or anomaly detection, are based on distances or similarities. Before similarities are used for training an actual machine learning model, we would like to ver…
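As a toy version of the kind of second-order decomposition this line of work builds on: for a similarity defined on linear embeddings, y = (W x1) · (W x2), the score decomposes exactly into contributions of input-feature pairs. The sketch below verifies this identity; the paper generalises such decompositions to deep similarity models.

```python
import numpy as np

# Minimal sketch: for a similarity on linear embeddings, y = (W @ x1) . (W @ x2),
# the score decomposes exactly into feature-pair contributions
# C[i, j] = x1[i] * (W.T @ W)[i, j] * x2[j]. Toy illustration only.

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 10))
x1, x2 = rng.normal(size=10), rng.normal(size=10)

y = (W @ x1) @ (W @ x2)
C = np.outer(x1, x2) * (W.T @ W)       # feature-pair contributions

print(np.isclose(y, C.sum()))          # True: contributions sum to the score
```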
XAI for Graphs: Explaining Graph Neural Network Predictions by Identifying Relevant Walks
Graph Neural Networks (GNNs) are a popular approach for predicting graph structured data. As GNNs tightly entangle the input graph into the neural network structure, common explainable AI (XAI) approaches are not applicable. To a large ext…