Christopher J. Anders
Software for dataset-wide XAI: From local explanations to global insights with Zennit, CoRelAy, and ViRelAy
The predictive capabilities of Deep Neural Networks (DNNs) are well-established, yet the underlying mechanisms driving these predictions often remain opaque. The advent of Explainable Artificial Intelligence (XAI) has introduced novel meth…
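As a concrete illustration of the local-explanation end of this toolchain, below is a minimal attribution sketch following Zennit's documented composite/attributor pattern; the untrained VGG-16, random input, and target class 437 are placeholders, and the exact class names may differ between Zennit versions.

```python
import torch
from torchvision.models import vgg16
from zennit.composites import EpsilonPlusFlat
from zennit.attribution import Gradient

# placeholders: an untrained VGG-16 and one random image stand in for a real model/dataset
model = vgg16().eval()
data = torch.randn(1, 3, 224, 224, requires_grad=True)

# the composite assigns LRP rules to the layers; Gradient runs the modified backward pass
composite = EpsilonPlusFlat()
with Gradient(model=model, composite=composite) as attributor:
    # the one-hot output vector selects which class score is attributed to the input
    output, relevance = attributor(data, torch.eye(1000)[[437]])

print(relevance.shape)  # (1, 3, 224, 224): one relevance value per input pixel/channel
```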
Umbilical venous catheter and peripherally inserted central catheter malposition and tip migration in neonates: A mixed methods cost analysis
The migration and malposition of peripherally inserted central catheters and umbilical venous catheters have significant costs and consequences. These should be targeted for evidence-based and innovative solutions to improve neonatal vascul…
Bayesian Parameter Shift Rule in Variational Quantum Eigensolvers
Parameter shift rules (PSRs) are key techniques for efficient gradient estimation in variational quantum eigensolvers (VQEs). In this paper, we propose a Bayesian variant of the PSR, where Gaussian processes with appropriate kernels are used to est…
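For context, the standard (non-Bayesian) parameter shift rule that the paper builds on yields the exact gradient from two shifted energy evaluations per parameter; the Bayesian variant replaces these point estimates with Gaussian-process estimates. A minimal sketch of the standard rule, with a toy cosine energy as a stand-in for a real VQE objective:

```python
import numpy as np

def parameter_shift_gradient(energy, theta, shift=np.pi / 2):
    """Standard (non-Bayesian) parameter shift rule: the exact gradient of the
    VQE energy for gates generated by Pauli operators, using two evaluations
    per parameter.  `energy` is any callable mapping theta -> expectation value."""
    theta = np.asarray(theta, dtype=float)
    grad = np.empty_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = shift
        grad[i] = (energy(theta + e) - energy(theta - e)) / 2.0
    return grad

# toy single-qubit "energy" E(theta) = cos(theta_0); the rule recovers -sin(theta_0)
energy = lambda t: np.cos(t[0])
print(parameter_shift_gradient(energy, [0.3]), -np.sin(0.3))
```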
Adaptive Observation Cost Control for Variational Quantum Eigensolvers
The objective to be minimized in the variational quantum eigensolver (VQE) has a restricted form, which allows a specialized sequential minimal optimization (SMO) that requires only a few observations in each iteration. However, the SMO it…
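The "restricted form" refers to the fact that, as a function of any single parameter (for Pauli-generated gates), the VQE energy is an exact sinusoid, so each SMO coordinate step needs only three observations and has a closed-form minimizer. Below is a sketch of one such coordinate update with a toy cosine energy; it illustrates plain SMO, not the paper's adaptive observation-cost control.

```python
import numpy as np

def smo_coordinate_update(energy, theta, i):
    """One coordinate step of sequential minimal optimization (SMO) for VQE:
    along a single parameter the energy is exactly a + A*cos(theta_i - phi),
    so three observations determine the sinusoid and the coordinate minimizer
    follows in closed form."""
    theta = np.asarray(theta, dtype=float).copy()
    t0 = theta[i]

    def eval_at(shift):
        t = theta.copy()
        t[i] = t0 + shift
        return energy(t)

    e0, ep, em = eval_at(0.0), eval_at(np.pi / 2), eval_at(-np.pi / 2)
    a = (ep + em) / 2.0
    # phase such that E(theta_i) = a + A*cos(theta_i - phi) with A >= 0
    phi = t0 - np.arctan2((em - ep) / 2.0, e0 - a)
    theta[i] = phi + np.pi  # the cosine is minimal half a period away from its peak
    return theta

# toy check on E(theta) = cos(theta_0 - 1.0): the coordinate minimizer is 1.0 + pi
energy = lambda t: np.cos(t[0] - 1.0)
print(smo_coordinate_update(energy, [0.3], 0))  # ~ [1.0 + pi]
```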
From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space
Deep Neural Networks are prone to learning spurious correlations embedded in the training data, leading to potentially biased predictions. This poses risks when deploying these models for high-stakes decision-making, such as in medical appl…
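An illustrative (not necessarily the paper's exact) formulation of a latent-space gradient penalty: given a direction in some intermediate layer that encodes the spurious feature, penalize the sensitivity of the target logit to movements of the latent activations along that direction.

```python
import torch

def bias_gradient_penalty(logits, latent, bias_direction, target_class):
    """Illustrative latent-space gradient penalty (a sketch, not necessarily the
    paper's exact loss): discourage the target logit from being sensitive to
    movements of the intermediate activations along a known bias direction.
    `latent` must be an intermediate activation kept in the autograd graph
    (e.g. captured with a forward hook); `bias_direction` is a unit vector in
    that latent space, such as a concept activation vector of the artifact."""
    score = logits[:, target_class].sum()
    grads = torch.autograd.grad(score, latent, create_graph=True)[0]  # (B, D)
    projection = grads.flatten(1) @ bias_direction                    # (B,)
    return (projection ** 2).mean()

# tiny demo with a linear "head": in training this would be added to the task loss,
# e.g. loss = task_loss + lambda_bias * bias_gradient_penalty(logits, latent, v, y)
latent = torch.randn(4, 16, requires_grad=True)
head = torch.nn.Linear(16, 3)
v = torch.nn.functional.normalize(torch.randn(16), dim=0)
print(bias_gradient_penalty(head(latent), latent, v, target_class=0))
```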
Physics-Informed Bayesian Optimization of Variational Quantum Circuits
In this paper, we propose a novel and powerful method to harness Bayesian optimization for Variational Quantum Eigensolvers (VQEs) -- a hybrid quantum-classical protocol used to approximate the ground state of a quantum Hamiltonian. Specif…
NeuLat: a toolbox for neural samplers in lattice field theories
The application of normalizing flows for sampling in lattice field theory has garnered considerable attention in recent years. Despite the growing community at the intersection of machine learning (ML) and lattice field theory, there is cu…
Detecting and mitigating mode-collapse for flow-based sampling of lattice field theories
We study the consequences of mode-collapse of normalizing flows in the context of lattice field theory. Normalizing flows allow for independent sampling. For this reason, it is hoped that they can avoid the tunneling problem of local-updat…
Towards Fixing Clever-Hans Predictors with Counterfactual Knowledge Distillation
This paper introduces a novel technique called counterfactual knowledge distillation (CFKD) to detect and remove reliance on confounders in deep learning models with the help of human expert feedback. Confounders are spurious features that…
Machine Learning of Thermodynamic Observables in the Presence of Mode Collapse
Estimating the free energy, as well as other thermodynamic observables, is a key task in lattice field theories. Recently, it has been pointed out that deep generative models can be used in this context [1]. Crucially, these models allow f…
Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence
With a growing interest in understanding neural network prediction strategies, Concept Activation Vectors (CAVs) have emerged as a popular tool for modeling human-understandable concepts in the latent space. Commonly, CAVs are computed by …
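The common baseline the paper revisits computes a CAV as the normal of a linear classifier that separates layer activations of concept examples from non-concept examples; the paper's point is that this filter-based direction can diverge from the actual concept signal. A minimal sketch of that baseline, with toy random activations standing in for real layer outputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(acts_concept, acts_random):
    """Standard CAV baseline: the (normalized) weight vector of a linear
    classifier separating concept from non-concept activations.  This is the
    common recipe the paper revisits, not its proposed alternative."""
    X = np.vstack([acts_concept, acts_random])
    y = np.concatenate([np.ones(len(acts_concept)), np.zeros(len(acts_random))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    w = clf.coef_.ravel()
    return w / np.linalg.norm(w)

# toy activations: 512-dimensional layer outputs for 100 concept / 100 random images
rng = np.random.default_rng(0)
cav = compute_cav(rng.normal(1.0, 1.0, (100, 512)), rng.normal(0.0, 1.0, (100, 512)))
print(cav.shape)  # (512,)
```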
Towards Robust Explanations for Deep Neural Networks
Explanation methods shed light on the decision process of black-box classifiers such as deep neural networks. But their usefulness can be compromised because they are susceptible to manipulations. With this work, we aim to enhance the resi…
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
With the broader and highly successful usage of machine learning in industry and the sciences, there has been a growing demand for Explainable AI. Interpretability and explanation methods for gaining a better understanding about the pro…
Estimation of Thermodynamic Observables in Lattice Field Theories with Deep Generative Models
In this Letter, we demonstrate that applying deep generative machine learning models for lattice field theory is a promising route for solving problems where Markov chain Monte Carlo (MCMC) methods are problematic. More specifically, we sh…
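The key property exploited here is that a flow-based generative model provides its exact sampling density q(x), so the partition function can be estimated by importance weighting, Z = E_{x~q}[exp(-S(x))/q(x)], and the free energy as F = -log Z. A minimal sketch with a toy Gaussian "field" (conventions such as temperature factors are omitted):

```python
import numpy as np

def free_energy_from_flow(samples_action, samples_logq):
    """Importance-weighted estimator of the free energy F = -log Z from a
    generative model with tractable density: Z = E_q[ exp(-S(x)) / q(x) ].
    `samples_action` are S(x_i) and `samples_logq` are log q(x_i) for x_i ~ q.
    A minimal sketch; normalization conventions depend on the theory."""
    log_w = -np.asarray(samples_action) - np.asarray(samples_logq)  # log of exp(-S)/q
    m = log_w.max()                                                  # log-mean-exp for stability
    log_z = m + np.log(np.mean(np.exp(log_w - m)))
    return -log_z

# toy example: 1D "field" with action S(x) = x^2/2, sampled from q = N(0, 1)
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 100_000)
action = 0.5 * x**2
logq = -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
print(free_energy_from_flow(action, logq))  # ~ -0.5*log(2*pi) = -0.919
```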
Fairwashing Explanations with Off-Manifold Detergent
Explanation methods promise to make black-box classifiers more transparent. As a result, it is hoped that they can act as proof for a sensible, fair and trustworthy decision-making process of the algorithm and thereby increase its acceptan…
Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond
With the broader and highly successful usage of machine learning in industry and the sciences, there has been a growing demand for explainable AI. Interpretability and explanation methods for gaining a better understanding about the proble…
Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models
Contemporary learning models for computer vision are typically trained on very large (benchmark) datasets with millions of samples. These may, however, contain biases, artifacts, or errors that have gone unnoticed and are exploitable by…
Analyzing ImageNet with Spectral Relevance Analysis: Towards ImageNet un-Hans'ed
Today's machine learning models for computer vision are typically trained on very large (benchmark) data sets with millions of samples. These may, however, contain biases, artifacts, or errors that have gone unnoticed and are exploited by …
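A rough sketch of the Spectral Relevance Analysis (SpRAy) pipeline described here: compute a relevance heatmap (e.g. via LRP) per sample, downsize and flatten the heatmaps, and cluster them spectrally; large clusters of near-identical heatmaps often correspond to artifact-driven ("Clever Hans") strategies. Preprocessing details and the eigengap-based cluster selection are simplified, and random heatmaps stand in for real LRP outputs.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def spray_clusters(heatmaps, n_clusters=5, size=(32, 32)):
    """Simplified SpRAy: block-average each relevance heatmap down to `size`,
    flatten it to a feature vector, and apply spectral clustering.  Clusters of
    near-identical heatmaps are candidates for Clever Hans strategies."""
    h, w = heatmaps.shape[1:]
    bh, bw = h // size[0], w // size[1]
    small = heatmaps[:, : bh * size[0], : bw * size[1]]
    small = small.reshape(len(heatmaps), size[0], bh, size[1], bw).mean(axis=(2, 4))
    features = small.reshape(len(heatmaps), -1)
    labels = SpectralClustering(n_clusters=n_clusters, affinity="nearest_neighbors",
                                random_state=0).fit_predict(features)
    return labels

# toy input: 200 random 224x224 relevance maps stand in for real LRP heatmaps
rng = np.random.default_rng(2)
print(np.bincount(spray_clusters(rng.random((200, 224, 224)))))
```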
Explanations can be manipulated and geometry is to blame
Explanation methods aim to make neural networks more trustworthy and interpretable. In this paper, we demonstrate a property of explanation methods which is disconcerting for both of these purposes. Namely, we show that explanations can be…
Understanding Patch-Based Learning by Explaining Predictions
Deep networks are able to learn highly predictive models of video data. Due to video length, a common strategy is to train them on small video snippets. We apply the deep Taylor / LRP technique to understand the deep network's classificati…