Moritz Knolle
Sensitivity, Specificity, and Consistency: A Tripartite Evaluation of Privacy Filters for Synthetic Data Generation
The generation of privacy-preserving synthetic datasets is a promising avenue for overcoming data scarcity in medical AI research. Post-hoc privacy filtering techniques, designed to remove samples containing personally identifiable informa…
Heterogeneity-driven phenotypic plasticity and treatment response in branched-organoid models of pancreatic ductal adenocarcinoma
In patients with pancreatic ductal adenocarcinoma (PDAC), intratumoural and intertumoural heterogeneity increases chemoresistance and mortality rates. However, such morphological and phenotypic diversities are not typically captured by org…
Visual Privacy Auditing with Diffusion Models
Data reconstruction attacks on machine learning models pose a substantial threat to privacy, potentially leaking sensitive information. Although defending against such attacks using differential privacy (DP) provides theoretical guarantees…
SoK: Memorisation in machine learning
Quantifying the impact of individual data samples on machine learning models is an open research problem. This is particularly relevant when complex and high-dimensional relationships have to be learned from a limited sample of the data ge…
(Predictable) Performance Bias in Unsupervised Anomaly Detection
Background: With the ever-increasing amount of medical imaging data, the demand for algorithms to assist clinicians has amplified. Unsupervised anomaly detection (UAD) models promise to aid in the crucial first step of disease detection. W…
Bias-Aware Minimisation: Understanding and Mitigating Estimator Bias in Private SGD
Differentially private SGD (DP-SGD) holds the promise of enabling the safe and responsible application of machine learning to sensitive datasets. However, DP-SGD only provides a biased, noisy estimate of a mini-batch gradient. This renders…
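The biased estimate the abstract refers to can be illustrated with a minimal NumPy sketch (not the paper's implementation): DP-SGD clips each per-sample gradient before averaging and adding noise, and the clipping step alone already shifts the expected gradient away from the true mini-batch mean. All names below are illustrative.

```python
import numpy as np

def dp_sgd_gradient(per_sample_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each per-sample gradient to clip_norm, average, and add Gaussian noise.

    Returns the privatised mini-batch gradient estimate used in DP-SGD.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    # Per-sample clipping: scale down any gradient whose norm exceeds clip_norm.
    clipped = per_sample_grads * np.minimum(1.0, clip_norm / norms)
    mean_clipped = clipped.mean(axis=0)
    # Gaussian noise scaled to the clipping norm (the L2 sensitivity of the sum).
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_sample_grads),
                       size=mean_clipped.shape)
    return mean_clipped + noise

# Clipping alone biases the estimator: compare against the unclipped mean
# with the noise switched off entirely.
rng = np.random.default_rng(42)
grads = rng.normal(0.0, 3.0, size=(1000, 5))      # toy per-sample gradients
true_mean = grads.mean(axis=0)
clipped_mean = dp_sgd_gradient(grads, clip_norm=1.0, noise_multiplier=0.0)
print(np.linalg.norm(clipped_mean - true_mean))   # nonzero even without noise
```

The noise term averages to zero over many steps; the clipping-induced bias does not, which is why it is the harder component to reason about.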
A distinct stimulatory cDC1 subpopulation amplifies CD8+ T cell responses in tumors for protective anti-cancer immunity
Type 1 conventional dendritic cells (cDC1) can support T cell responses within tumors but whether this determines protective versus ineffective anti-cancer immunity is poorly understood. Here, we use imaging-based deep learning to identify…
Tumor-derived prostaglandin E2 programs cDC1 dysfunction to impair intratumoral orchestration of anti-cancer T cell responses
Type 1 conventional dendritic cells (cDC1s) are critical for anti-cancer immunity. Protective anti-cancer immunity is thought to require cDC1s to sustain T cell responses within tumors, but it is poorly understood how this function is regu…
How Do Input Attributes Impact the Privacy Loss in Differential Privacy?
Differential privacy (DP) is typically formulated as a worst-case privacy guarantee over all individuals in a database. More recently, extensions to individual subjects or their attributes have been introduced. Under the individual/per-in…
Unified Interpretation of the Gaussian Mechanism for Differential Privacy Through the Sensitivity Index
The Gaussian mechanism (GM) represents a universally employed tool for achieving differential privacy (DP), and a large body of work has been devoted to its analysis. We argue that the three prevailing interpretations of the GM, namely eps…
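As context for the abstract, here is a short Python sketch of the standard Gaussian mechanism calibration, together with a single-parameter summary of the form ψ = Δ₂/σ. The classical σ formula is the well-known Dwork–Roth bound; the `sensitivity_index` function is only my illustrative reading of the quantity the title alludes to, not necessarily the paper's exact definition.

```python
import math

def gaussian_mechanism_sigma(epsilon, delta, l2_sensitivity):
    """Classical Gaussian mechanism calibration (valid for epsilon < 1):
    sigma >= sqrt(2 * ln(1.25/delta)) * Delta_2 / epsilon."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * l2_sensitivity / epsilon

def sensitivity_index(l2_sensitivity, sigma):
    """Illustrative signal-to-noise summary psi = Delta_2 / sigma: how
    distinguishable two adjacent datasets are under the mechanism's
    Gaussian output distribution (smaller psi means stronger privacy)."""
    return l2_sensitivity / sigma

sigma = gaussian_mechanism_sigma(epsilon=0.5, delta=1e-5, l2_sensitivity=1.0)
print(sigma, sensitivity_index(1.0, sigma))
```

Note that ψ depends only on the ratio of sensitivity to noise scale, which is what makes it a natural single axis along which to compare the various (ε, δ) interpretations of the mechanism.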
Differentially private federated deep learning for multi-site medical image segmentation
Collaborative machine learning techniques such as federated learning (FL) enable the training of models on effectively larger datasets without data transfer. Recent initiatives have demonstrated that segmentation models trained with FL can…
An automatic differentiation system for the age of differential privacy
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML). Optimal noise calibration in this setting requires efficient Jacobian matrix computations and ti…
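The idea of deriving sensitivity bounds from derivatives can be sketched generically. The following is not Tritium's API (the abstract does not describe it); it is a toy illustration in which a finite-difference Jacobian stands in for automatic differentiation, and the per-record sensitivity of a smooth, bounded query is bounded by the largest per-record derivative times the record's value range.

```python
import numpy as np

def query(x):
    """Toy bounded query: mean of records clipped to [0, 1]."""
    return np.mean(np.clip(x, 0.0, 1.0))

def empirical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of a scalar query w.r.t. each record --
    a stand-in for the automatic differentiation a framework would provide."""
    base = f(x)
    grads = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        grads[i] = (f(xp) - base) / eps
    return grads

# Per-record sensitivity bound: |df/dx_i| * (range of record i).
# Records are kept in the interior of [0, 1] so the clip is inactive.
x = np.random.default_rng(0).uniform(0.1, 0.9, size=100)
grads = empirical_jacobian(query, x)
sensitivity_bound = np.max(np.abs(grads)) * 1.0   # record range is [0, 1]
print(sensitivity_bound)   # ~ 1/n = 0.01 for the clipped mean
```

For the (locally linear) clipped mean this derivative-based bound matches the exact sensitivity 1/n; for general nonlinear queries a derivative at one point is only a local estimate, which is precisely why efficient, sound Jacobian machinery is the hard part.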
Partial sensitivity analysis in differential privacy
Differential privacy (DP) allows the quantification of privacy loss when the data of individuals is subjected to algorithmic processing such as machine learning, as well as the provision of objective privacy guarantees. However, while tech…
Efficient, high-performance semantic segmentation using multi-scale feature extraction
The success of deep learning in recent years has arguably been driven by the availability of large datasets for training powerful predictive algorithms. In medical applications however, the sensitive nature of the data limits the collectio…
NeuralDP: Differentially private neural networks by design
The application of differential privacy to the training of deep neural networks holds the promise of allowing large-scale (decentralized) use of sensitive data while providing rigorous privacy guarantees to the individual. The predominant …
Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation
In recent years, formal methods of privacy protection such as differential privacy (DP), capable of deployment to data-driven tasks such as machine learning (ML), have emerged. Reconciling large-scale ML with the closed-form reasoning requ…
Differentially private training of neural networks with Langevin dynamics for calibrated predictive uncertainty
We show that differentially private stochastic gradient descent (DP-SGD) can yield poorly calibrated, overconfident deep learning models. This represents a serious issue for safety-critical applications, e.g. in medical diagnosis. We highl…