Meenatchi Sundaram Muthu Selva Annamalai
Beyond the Worst Case: Extending Differential Privacy Guarantees to Realistic Adversaries
Differential Privacy (DP) is a family of definitions that bound the worst-case privacy leakage of a mechanism. One important feature of the worst-case DP guarantee is that it naturally implies protections against adversaries with less prior inf…
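To make the worst-case guarantee concrete, here is a minimal sketch (not from the paper itself) of the classic Laplace mechanism: adding noise scaled to `sensitivity / epsilon` bounds the leakage for any pair of neighbouring datasets, however strong the adversary's prior. The function name and toy values are illustrative only.

```python
import numpy as np

def laplace_mechanism(true_value, epsilon, sensitivity=1.0, rng=None):
    """Release a numeric query answer with epsilon-DP by adding
    Laplace noise with scale sensitivity / epsilon."""
    rng = rng if rng is not None else np.random.default_rng()
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# Worst case: for ANY two datasets differing in one record, the output
# densities differ by at most a factor of exp(epsilon).
noisy_count = laplace_mechanism(42, epsilon=1.0, rng=np.random.default_rng(0))
```

The abstract's point is that this worst-case bound automatically covers weaker, more realistic adversaries as well, since it holds uniformly over all neighbouring dataset pairs.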
The Hitchhiker's Guide to Efficient, End-to-End, and Tight DP Auditing
In this paper, we systematize research on auditing Differential Privacy (DP) techniques, aiming to identify key insights and open challenges. First, we introduce a comprehensive framework for reviewing work in the field and establish three…
Understanding the Impact of Data Domain Extraction on Synthetic Data Privacy
Privacy attacks, particularly membership inference attacks (MIAs), are widely used to assess the privacy of generative models for tabular synthetic data, including those with Differential Privacy (DP) guarantees. These attacks often exploi…
Beyond the Crawl: Unmasking Browser Fingerprinting in Real User Interactions
Browser fingerprinting is a pervasive online tracking technique used increasingly often for profiling and targeted advertising. Prior research on the prevalence of fingerprinting heavily relied on automated web crawls, which inherently str…
To Shuffle or not to Shuffle: Auditing DP-SGD with Shuffling
The Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm supports the training of machine learning (ML) models with formal Differential Privacy (DP) guarantees. Traditionally, DP-SGD processes training data in batches usin…
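The core DP-SGD update the abstract refers to can be sketched as follows. This is a generic illustration, not code from the paper: per-example gradients are clipped to a fixed L2 norm, summed, perturbed with calibrated Gaussian noise, and applied as an ordinary gradient step. The toy gradients and hyperparameters are assumptions for the sketch.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_mult, lr, rng):
    """One DP-SGD update: clip each per-example gradient to L2 norm
    <= clip_norm, sum, add Gaussian noise scaled by noise_mult * clip_norm,
    then take a gradient step on the noisy average."""
    clipped = [
        g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        for g in per_example_grads
    ]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=params.shape
    )
    return params - lr * noisy_sum / len(per_example_grads)

rng = np.random.default_rng(0)
# Toy per-example gradients; in the standard privacy analysis the batch
# would be drawn by Poisson subsampling (each example included
# independently with probability q) rather than by shuffling.
batch = [rng.normal(size=3) for _ in range(8)]
params = dp_sgd_step(np.zeros(3), batch, clip_norm=1.0, noise_mult=1.0,
                     lr=0.1, rng=rng)
```

The shuffling-vs-Poisson distinction the title highlights lives entirely in how `batch` is formed; the update itself is unchanged.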
It's Our Loss: No Privacy Amplification for Hidden State DP-SGD With Non-Convex Loss
Differentially Private Stochastic Gradient Descent (DP-SGD) is a popular iterative algorithm used to train machine learning models while formally guaranteeing the privacy of users. However, the privacy analysis of DP-SGD makes the unrealis…
The Elusive Pursuit of Reproducing PATE-GAN: Benchmarking, Auditing, Debugging
Synthetic data created by differentially private (DP) generative models is increasingly used in real-world settings. In this context, PATE-GAN has emerged as one of the most popular algorithms, combining Generative Adversarial Networks (GA…
Nearly Tight Black-Box Auditing of Differentially Private Machine Learning
This paper presents an auditing procedure for the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm in the black-box threat model that is substantially tighter than prior work. The main intuition is to craft worst-case …
"What do you want from theory alone?" Experimenting with Tight Auditing of Differentially Private Synthetic Data Generation
Differentially private synthetic data generation (DP-SDG) algorithms are used to release datasets that are structurally and statistically similar to sensitive data while providing formal bounds on the information they leak. However, bugs i…
FP-Fed: Privacy-Preserving Federated Detection of Browser Fingerprinting
Browser fingerprinting often provides an attractive alternative to third-party cookies for tracking users across the web. In fact, the increasing restrictions on third-party cookies placed by common web browsers and recent regulations like …
Pool Inference Attacks on Local Differential Privacy: Quantifying the Privacy Guarantees of Apple's Count Mean Sketch in Practice
Behavioral data generated by users' devices, ranging from emoji use to pages visited, are collected at scale to improve apps and services. These data, however, contain fine-grained records and can reveal sensitive information about individ…
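Not Apple's Count Mean Sketch itself, but the simpler randomized-response mechanism illustrates the local-DP model this abstract studies: each device perturbs its own record before it ever leaves the device, and the server debiases the aggregate. All names and parameters below are a toy sketch.

```python
import numpy as np

def randomized_response(bit, epsilon, rng):
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise the flipped bit -- a basic epsilon-LDP mechanism."""
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

def estimate_frequency(reports, epsilon):
    """Unbiased estimate of the true fraction of 1s from noisy reports:
    E[mean(reports)] = (1 - p) + f * (2p - 1), solved for f."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return (np.mean(reports) - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(0)
true_bits = [1] * 300 + [0] * 700            # 30% of users hold the value
reports = [randomized_response(b, epsilon=2.0, rng=rng) for b in true_bits]
est = estimate_frequency(reports, epsilon=2.0)
```

Pool inference attacks of the kind the paper describes exploit the gap between this formal per-report guarantee and what an adversary can infer from a user's aggregate behaviour.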
A Linear Reconstruction Approach for Attribute Inference Attacks against Synthetic Data
Recent advances in synthetic data generation (SDG) have been hailed as a solution to the difficult problem of sharing sensitive data while protecting privacy. SDG aims to learn statistical properties of real data in order to generate "arti…
CoVnita: An End-to-end Privacy-preserving Framework for SARS-CoV-2 Classification
Classification of viral strains is essential in monitoring and managing the COVID-19 pandemic, but patient privacy and data security concerns often limit the extent of the open sharing of full viral genome sequencing data. We propose a fra…
Communication-Efficient Secure Federated Statistical Tests from Multiparty Homomorphic Encryption
The power and robustness of statistical tests are strongly tied to the amount of data available for testing. However, much of the collected data today is siloed amongst various data owners due to privacy concerns, thus limiting the utility…
Differentially Private, Federated Learning on Gene Expression Data for Tumour Classification
Over recent years, machine learning (ML) methods have enabled considerable progress to be made within a variety of data-rich research domains. Genomics is one prominent field, with ML-based approaches achieving exciting results in a range o…
Privacy-Preserving Collective Learning With Homomorphic Encryption
Deep learning models such as long short-term memory (LSTM) are valuable classifiers for time series data like hourly clinical statistics. However, access to health data is challenging due to privacy and legal issues. Homomorphic encryption…