Deepesh Data
Must the Communication Graph of MPC Protocols be an Expander?
A Generative Framework for Personalized Learning and Estimation: Theory, Algorithms, and Privacy
A distinguishing characteristic of federated learning is that the (local) client data could have statistical heterogeneity. This heterogeneity has motivated the design of personalized learning, where individual (personalized) models are tr…
Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning
We study privacy in a distributed learning framework, where clients collaboratively build a learning model iteratively through interactions with a server from whom we need privacy. Motivated by stochastic optimization and the federated lea…
On the Rényi Differential Privacy of the Shuffle Model
The central question studied in this paper is the Rényi Differential Privacy (RDP) guarantee for general discrete local mechanisms in the shuffle privacy model. In the shuffle model, each of the $n$ clients randomizes its response using a loc…
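The shuffle privacy model described above composes a discrete local randomizer with a uniformly random permutation of the clients' reports. A minimal sketch, using k-ary randomized response as a stand-in for a generic discrete local mechanism (an illustration of the model, not the paper's analysis):

```python
import math
import random

def k_rr(x, k, eps):
    # k-ary randomized response: keep the true value with probability
    # e^eps / (e^eps + k - 1), otherwise report a uniformly random other value.
    p_true = math.exp(eps) / (math.exp(eps) + k - 1)
    if random.random() < p_true:
        return x
    return random.choice([v for v in range(k) if v != x])

def shuffled_reports(client_values, k, eps):
    # Each client randomizes locally; a shuffler then applies a uniformly
    # random permutation, so the server only sees the multiset of reports.
    reports = [k_rr(x, k, eps) for x in client_values]
    random.shuffle(reports)
    return reports

print(shuffled_reports([random.randrange(4) for _ in range(20)], k=4, eps=1.0))
```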
Flexible Accuracy for Differential Privacy
Differential Privacy (DP) has become a gold standard in privacy-preserving data analysis. While it provides one of the most rigorous notions of privacy, there are many settings where its applicability is limited. Our main contribution is i…
SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization
In this paper, we propose and analyze SQuARM-SGD, a communication-efficient algorithm for decentralized training of large-scale machine learning models over a network. In SQuARM-SGD, each node performs a fixed number of local SGD steps usi…
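A rough sketch of one node's round under this description: a fixed number of local momentum-SGD steps, after which only a sparsified and quantized model difference is sent to neighbors. The compression operator and hyperparameters below are illustrative assumptions; the paper specifies the exact update and error-compensation rules.

```python
import numpy as np

def top_k_sign(v, k):
    # Keep the k largest-magnitude coordinates, encoded as sign * mean magnitude.
    idx = np.argsort(np.abs(v))[-k:]
    out = np.zeros_like(v)
    out[idx] = np.mean(np.abs(v[idx])) * np.sign(v[idx])
    return out

def local_round(x, momentum, grad_fn, lr=0.05, beta=0.9, local_steps=5, k=10):
    x_start = x.copy()
    for _ in range(local_steps):          # fixed number of local momentum-SGD steps
        momentum = beta * momentum + grad_fn(x)
        x = x - lr * momentum
    message = top_k_sign(x - x_start, k)  # compressed difference sent to neighbors
    return x, momentum, message
```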
QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning
Traditionally, federated learning (FL) aims to train a single global model while collaboratively using multiple clients and a server. Two natural challenges that FL algorithms face are heterogeneity in data across clients and collaboration…
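The "personalization via distillation" idea can be pictured with a standard distillation objective: each client fits its (possibly quantized) personalized model to its own labels while pulling its predictions toward the global model's soft outputs. A generic sketch with assumed temperature and weighting, not QuPeD's exact objective or quantization step:

```python
import torch
import torch.nn.functional as F

def personalized_distillation_loss(personal_logits, global_logits, labels,
                                   temperature=2.0, alpha=0.5):
    # Supervised loss on the client's own labels.
    ce = F.cross_entropy(personal_logits, labels)
    # KL term pulling the personalized model toward the global model's soft outputs.
    kd = F.kl_div(F.log_softmax(personal_logits / temperature, dim=1),
                  F.softmax(global_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    return alpha * ce + (1 - alpha) * kd
```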
A Field Guide to Federated Optimization
Federated learning and analytics are a distributed approach for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection. The distributed learning process can be formulated a…
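The canonical loop such formulations are organized around can be sketched as a FedAvg-style round; client sampling, weighting, and the server-side optimizer are all simplified away here, so this is only an illustration of the setup.

```python
import numpy as np

def fedavg_round(global_model, client_datasets, grad_fn, lr=0.1, local_steps=5):
    client_models = []
    for data in client_datasets:
        x = global_model.copy()
        for _ in range(local_steps):
            x -= lr * grad_fn(x, data)       # local SGD on this client's data
        client_models.append(x)
    return np.mean(client_models, axis=0)    # server averages the client models
```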
Byzantine-Resilient SGD in High Dimensions on Heterogeneous Data
We study distributed stochastic gradient descent (SGD) in the master-worker architecture under Byzantine attacks. We consider the heterogeneous data model, where different workers may have different local datasets, and we do not make any p…
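For intuition, the master can replace plain averaging of worker gradients with a robust aggregator so that a minority of arbitrarily corrupted workers cannot dominate the update. The coordinate-wise median below is a standard baseline shown purely for illustration; the paper's high-dimensional aggregation procedure is different.

```python
import numpy as np

def robust_aggregate(worker_grads):
    # worker_grads: (m, d) array of gradients; some rows may be arbitrarily corrupted.
    # Coordinate-wise median as a simple robust alternative to the mean.
    return np.median(worker_grads, axis=0)
```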
QuPeL: Quantized Personalization with Applications to Federated Learning
Traditionally, federated learning (FL) aims to train a single global model while collaboratively using multiple clients and a server. Two natural challenges that FL algorithms face are heterogeneity in data across clients and collaboration…
Data Encoding for Byzantine-Resilient Distributed Optimization
We study distributed optimization in the presence of Byzantine adversaries, where both data and computation are distributed among $m$ worker machines, $t$ of which may be corrupt. The compromised nodes may collaboratively and arbitrarily d…
Successive Refinement of Privacy
This work examines a novel question: how much randomness is needed to achieve local differential privacy (LDP)? A motivating scenario is providing multiple levels of privacy to multiple analysts, either for distribution or for heavy-…
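As a toy instance of the question, binary randomized response with parameter eps flips the input bit with probability 1/(e^eps + 1); the binary entropy of that flip probability is one simple proxy for the randomness consumed per report. This is only an illustration of the setting, not the paper's measure or bounds.

```python
import math
import random

def binary_rr(bit, eps):
    # eps-LDP randomized response: flip the bit with probability 1/(e^eps + 1).
    return bit ^ (random.random() < 1.0 / (math.exp(eps) + 1.0))

def flip_entropy_bits(eps):
    # Binary entropy of the flip probability: a crude proxy for randomness used.
    p = 1.0 / (math.exp(eps) + 1.0)
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_rr(1, eps=1.0), flip_entropy_bits(1.0))
```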
Shuffled Model of Federated Learning: Privacy, Communication and Accuracy Trade-offs
We consider a distributed empirical risk minimization (ERM) optimization problem with communication efficiency and privacy requirements, motivated by the federated learning (FL) framework. Unique challenges to the traditional ERM problem i…
Byzantine-Resilient High-Dimensional Federated Learning
We study stochastic gradient descent (SGD) with local iterations in the presence of malicious/Byzantine clients, motivated by federated learning. The clients, instead of communicating with the central server in every iteration, maintai…
Qsparse-Local-SGD: Distributed SGD With Quantization, Sparsification, and Local Computations
The communication bottleneck has been identified as a significant issue in distributed optimization of large-scale learning models. Recently, several approaches to mitigate this problem have been proposed, including different forms of gradient…
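The combination the title names, sparsification and quantization applied on top of local computations, can be sketched as a compression operator with an error-feedback memory. The operators and parameters below are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def stochastic_quantize(v, levels=4):
    # QSGD-style unbiased quantization of v onto a uniform grid with `levels` steps.
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    scaled = np.abs(v) / norm * levels
    lower = np.floor(scaled)
    q = lower + (np.random.rand(*v.shape) < (scaled - lower))
    return np.sign(v) * norm * q / levels

def compress_with_memory(local_update, memory, k=10):
    corrected = local_update + memory          # error feedback: re-add what was dropped
    idx = np.argsort(np.abs(corrected))[-k:]   # Top_k sparsification
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]
    message = stochastic_quantize(sparse)      # quantize the surviving coordinates
    return message, corrected - message        # transmitted message, updated memory
```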
SPARQ-SGD: Event-Triggered and Compressed Communication in Decentralized Stochastic Optimization
In this paper, we propose and analyze SPARQ-SGD, which is an event-triggered and compressed algorithm for decentralized training of large-scale machine learning models. Each node can locally compute a condition (event) which triggers a com…
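The event-triggering idea can be illustrated with a simple drift test: a node transmits a (compressed) update only when its parameters have moved far enough from the last value it shared. The threshold rule below is an illustrative placeholder rather than the paper's triggering condition.

```python
import numpy as np

def should_communicate(x_current, x_last_sent, threshold):
    # Trigger a (costly) communication only when the local model has drifted
    # sufficiently since the last transmitted copy.
    return np.linalg.norm(x_current - x_last_sent) > threshold
```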
Secure computation of randomized functions: Further results
We consider secure computation of randomized functions between two users, where both the users (Alice and Bob) have inputs, Alice sends a message to Bob over a rate-limited, noise-free link, and then Bob produces the output. We study two c…
Secure computation of randomized functions
Two user secure computation of randomized functions is considered, where only one user computes the output. Both the users are semi-honest; and computation is such that no user learns any additional information about the other user's in…
Communication and Randomness Lower Bounds for Secure Computation
In secure multiparty computation (MPC), mutually distrusting users collaborate to compute a function of their private data without revealing any additional information about their data to other users. While it is known that information …