Nirupam Gupta
Robust Federated Inference
Federated inference, in the form of one-shot federated learning, edge ensembles, or federated ensembles, has emerged as an attractive solution to combine predictions from multiple models. This paradigm enables each model to remain local an…
Generalization under Byzantine & Poisoning Attacks: Tight Stability Bounds in Robust Distributed Learning
Robust distributed learning algorithms aim to maintain good performance in distributed and federated settings, even in the presence of misbehaving workers. Two primary threat models have been studied: Byzantine attacks, where misbehaving w…
Revisiting Ensembling in One-Shot Federated Learning
Federated learning (FL) is an appealing approach to training machine learning models without sharing raw data. However, standard FL algorithms are iterative and thus induce a significant communication cost. One-shot federated learning (OFL…
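To make the contrast with iterative FL concrete, here is a rough sketch of the one-shot pattern (an illustration only, not the method proposed in this article): each client trains a model on its local data in a single round, and the server combines the uploaded models by averaging their predicted class probabilities. The toy nearest-centroid "model" and the synthetic data are assumptions made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_local_model(X, y):
    """Toy local 'model': one centroid per class (stand-in for real training)."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_proba(model, X):
    """Softmax over negative distances to the class centroids."""
    _, centroids = model
    scores = -np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Synthetic, heterogeneous local datasets for 5 clients (2 classes, 2-D features).
clients = []
for k in range(5):
    X0 = rng.normal(loc=[-1 - 0.2 * k, 0.0], size=(50, 2))
    X1 = rng.normal(loc=[+1 + 0.2 * k, 0.0], size=(50, 2))
    clients.append((np.vstack([X0, X1]), np.array([0] * 50 + [1] * 50)))

# One-shot round: every client trains once and uploads only its model.
local_models = [train_local_model(X, y) for X, y in clients]

# Server-side ensembling: average the clients' predicted class probabilities.
X_test = rng.normal(size=(10, 2))
ensemble_probs = np.mean([predict_proba(m, X_test) for m in local_models], axis=0)
print(ensemble_probs.argmax(axis=1))
```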
Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients
Federated learning (FL) is an appealing paradigm that allows a group of machines (a.k.a. clients) to learn collectively while keeping their data local. However, due to the heterogeneity between the clients' data distributions, the model ob…
Brief Announcement: A Case for Byzantine Machine Learning
The success of machine learning (ML) has been intimately linked with the availability of large amounts of data, typically collected from heterogeneous sources and processed on vast networks of computing devices (also called workers). Beyon…
Adaptive Gradient Clipping for Robust Federated Learning
Robust federated learning aims to maintain reliable performance despite the presence of adversarial or misbehaving workers. While state-of-the-art (SOTA) robust distributed gradient descent (Robust-DGD) methods were proven theoretically op…
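As a rough illustration of the general recipe (clip worker gradients, then aggregate robustly), here is a sketch in which the clipping threshold adapts to the received gradients. The specific threshold rule, a quantile of the gradient norms, and the coordinate-wise median aggregator are assumptions made for the example, not necessarily the scheme analyzed in this article.

```python
import numpy as np

def adaptive_clip(grads, quantile=0.5):
    """Scale each gradient so its norm is at most a data-dependent threshold."""
    norms = np.linalg.norm(grads, axis=1)
    tau = np.quantile(norms, quantile)                 # adaptive clipping threshold
    scale = np.minimum(1.0, tau / np.maximum(norms, 1e-12))
    return grads * scale[:, None]

def coordinate_wise_median(grads):
    """One standard robust aggregation rule used in Robust-DGD variants."""
    return np.median(grads, axis=0)

rng = np.random.default_rng(1)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 4))   # 8 honest worker gradients
byzantine = np.full((2, 4), 100.0)                     # 2 adversarial gradients
grads = np.vstack([honest, byzantine])

aggregate = coordinate_wise_median(adaptive_clip(grads))
print(np.round(aggregate, 2))                          # stays close to the honest mean
```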
On the Relevance of Byzantine Robust Optimization Against Data Poisoning
The success of machine learning (ML) has been intimately linked with the availability of large amounts of data, typically collected from heterogeneous sources and processed on vast networks of computing devices (also called workers).…
Byzantine-Robust Federated Learning: Impact of Client Subsampling and Local Updates
The possibility of adversarial (a.k.a. Byzantine) clients makes federated learning (FL) prone to arbitrary manipulation. The natural approach to robustify FL against adversarial clients is to replace the simple averaging operation a…
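A minimal sketch of that idea, with uniform client subsampling and a coordinate-wise trimmed mean standing in for the robust aggregation rule (both are illustrative choices rather than the exact setup studied in this article):

```python
import numpy as np

def trimmed_mean(updates, f):
    """Coordinate-wise mean after discarding the f smallest and f largest values."""
    s = np.sort(updates, axis=0)
    return s[f:len(updates) - f].mean(axis=0)

rng = np.random.default_rng(2)
n_clients, dim, f = 20, 3, 2
client_updates = rng.normal(size=(n_clients, dim))        # honest local updates
client_updates[:f] = 50.0                                  # f adversarial clients

sampled = rng.choice(n_clients, size=10, replace=False)    # client subsampling
aggregate = trimmed_mean(client_updates[sampled], f=f)
print(np.round(aggregate, 2))
```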
Chat GPT: From Natural Language Processing to Responsible AI - Implications, Challenges, and Future Developments
This research paper provides a comprehensive overview of Chat GPT, a cutting-edge natural language processing technology that has rapidly gained popularity recently. With the ability to generate human-like responses and a growing capacity…
Robust and Private Federated Learning on LLMs
Large Language Models (LLMs) have gained significant attention in recent years due to their potential to revolutionize various industries and sectors. However, scaling LLMs further requires access to substantial linguistic resources that a…
Robust Distributed Learning: Tight Error Bounds and Breakdown Point under Data Heterogeneity
The theory underlying robust distributed learning algorithms, designed to resist adversarial machines, matches empirical observations when data is homogeneous. Under data heterogeneity, however, which is the norm in practical scenarios, est…
Byzantine Machine Learning: A Primer
The problem of Byzantine resilience in distributed machine learning, a.k.a. Byzantine machine learning, consists of designing distributed algorithms that can train an accurate model despite the presence of Byzantine nodes—that is, nodes w…
On the Privacy-Robustness-Utility Trilemma in Distributed Learning
The ubiquity of distributed machine learning (ML) in sensitive public domain applications calls for algorithms that protect data privacy, while being robust to faults and adversarial behaviors. Although privacy and robustness have been ext…
Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity
Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines. Although this problem has received significant attention, prior works often assume the data held by the ma…
Byzantine Fault-Tolerance in Federated Local SGD Under $2f$-Redundancy
In this article, we study the problem of Byzantine fault-tolerance in a federated optimization setting, where there is a group of agents communicating with a centralized coordinator. We allow up to $f$ Byzantine-faulty agents, which may no…
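For a concrete feel of what the coordinator can do in this setting, the sketch below applies a simple norm-based filter: with at most $f$ faulty agents, it drops the $f$ largest-norm gradients and averages the rest. This is shown only as one possible robust rule; it is not necessarily the algorithm analyzed in this article.

```python
import numpy as np

def drop_largest_then_average(grads, f):
    """Discard the f largest-norm gradients, then average the remaining ones."""
    norms = np.linalg.norm(grads, axis=1)
    keep = np.argsort(norms)[: len(grads) - f]
    return grads[keep].mean(axis=0)

rng = np.random.default_rng(6)
n_agents, f, dim = 10, 2, 4
grads = rng.normal(loc=1.0, scale=0.1, size=(n_agents, dim))  # honest gradients
grads[:f] = 1000.0                                             # f faulty agents send huge vectors
print(np.round(drop_largest_then_average(grads, f), 2))        # close to the honest mean
```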
Impact of Redundancy on Resilience in Distributed Optimization and Learning
This paper considers the problem of resilient distributed optimization and stochastic learning in a server-based architecture. The system comprises a server and multiple agents, where each agent has its own local cost function. The agents …
Impact of Redundancy on Resilience in Distributed Optimization and Learning
This report considers the problem of resilient distributed optimization and stochastic learning in a server-based architecture. The system comprises a server and multiple agents, where each agent has its own local cost function. The agents…
On the Impossible Safety of Large AI Models
Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase impressive performance. However, they have been empirically found to pose serious security issues. This paper systematizes our know…
Robust Collaborative Learning with Linear Gradient Overhead
Collaborative learning algorithms, such as distributed SGD (or D-SGD), are prone to faulty machines that may deviate from their prescribed algorithm because of software or hardware bugs, poisoned data or malicious behaviors. While many sol…
Democratizing Machine Learning: Resilient Distributed Learning with Heterogeneous Participants
The increasing prevalence of personal devices motivates the design of algorithms that can leverage their computing power, together with the data they generate, in order to build privacy-preserving and effective machine learning models. How…
Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums
Byzantine resilience emerged as a prominent topic within the distributed machine learning community. Essentially, the goal is to enhance distributed optimization algorithms, such as distributed SGD, in a way that guarantees convergence des…
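A minimal sketch of the general recipe, assuming worker-side Polyak momentum and a coordinate-wise median as the server's resilient aggregation rule (illustrative choices, not necessarily the exact scheme of this article): each worker smooths its stochastic gradients into a momentum vector, and the server robustly aggregates the momentums instead of the raw gradients.

```python
import numpy as np

rng = np.random.default_rng(3)
n_workers, dim, beta, lr = 10, 5, 0.9, 0.1
x = np.zeros(dim)                          # model parameters held by the server
momentum = np.zeros((n_workers, dim))      # one momentum vector per worker

def stochastic_gradient(x, worker):
    """Honest workers see a noisy gradient of ||x - 1||^2; two workers are Byzantine."""
    if worker < 2:
        return rng.normal(scale=100.0, size=x.shape)
    return 2.0 * (x - 1.0) + rng.normal(scale=0.1, size=x.shape)

for step in range(200):
    for w in range(n_workers):
        g = stochastic_gradient(x, w)
        momentum[w] = beta * momentum[w] + (1.0 - beta) * g
    robust_direction = np.median(momentum, axis=0)   # resilient averaging of momentums
    x = x - lr * robust_direction

print(np.round(x, 2))                      # close to the honest optimum (all ones)
```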
On Preconditioning of Decentralized Gradient-Descent When Solving a System of Linear Equations
This article considers solving an overdetermined system of linear equations in peer-to-peer multiagent networks. The network is assumed to be synchronous and strongly connected. Each agent has a set of local data points, and their goal is …
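The sketch below runs plain decentralized gradient descent for such a system on a small ring network, with a Jacobi-style diagonal preconditioner applied to the local gradients as one illustrative choice; the preconditioning scheme analyzed in this article may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
n_agents, rows_per_agent, dim = 5, 8, 3
x_true = rng.normal(size=dim)
A = [rng.normal(size=(rows_per_agent, dim)) for _ in range(n_agents)]
b = [Ai @ x_true for Ai in A]                    # consistent overdetermined system

# Doubly stochastic mixing weights for a ring: each agent averages with its two neighbours.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))                    # each agent's local estimate
eta = 0.1
precond = [np.diag(1.0 / np.diag(Ai.T @ Ai)) for Ai in A]   # illustrative Jacobi preconditioner

for t in range(1000):
    mixed = W @ x                                # peer-to-peer consensus step
    for i in range(n_agents):
        grad = A[i].T @ (A[i] @ x[i] - b[i])     # gradient of the local least-squares cost
        x[i] = mixed[i] - eta * precond[i] @ grad

print(np.round(x - x_true, 4))                   # every agent's error shrinks toward zero
```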
Utilizing Redundancy in Cost Functions for Resilience in Distributed Optimization and Learning
This paper considers the problem of resilient distributed optimization and stochastic machine learning in a server-based architecture. The system comprises a server and multiple agents, where each agent has a local cost function. The agent…
Combining Differential Privacy and Byzantine Resilience in Distributed SGD
Privacy and Byzantine resilience (BR) are two crucial requirements of modern-day distributed machine learning. The two concepts have been extensively studied individually but the question of how to combine them effectively remains unans…
Combining Differential Privacy and Byzantine Resilience in Distributed SGD
Privacy and Byzantine resilience (BR) are two crucial requirements of modern-day distributed machine learning. The two concepts have been extensively studied individually but the question of how to combine them effectively remains unanswer…
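One common way to put the two requirements together, sketched below under assumed parameter choices (a clipping bound, a Gaussian noise scale, and coordinate-wise median aggregation), is worker-side clip-and-noise for privacy followed by robust aggregation at the server. This is an illustration of the design space rather than the exact protocol of this article.

```python
import numpy as np

rng = np.random.default_rng(5)

def privatize(grad, clip=1.0, sigma=0.5):
    """Norm-clip the gradient, then add Gaussian noise (DP-SGD-style local noising)."""
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    return g + rng.normal(scale=sigma * clip, size=g.shape)

n_workers, dim = 10, 4
honest_grad = np.ones(dim)                        # common honest direction
messages = []
for w in range(n_workers):
    if w < 2:                                     # two Byzantine workers
        messages.append(rng.normal(scale=100.0, size=dim))
    else:
        messages.append(privatize(honest_grad + rng.normal(scale=0.1, size=dim)))

aggregate = np.median(np.stack(messages), axis=0)  # robust aggregation at the server
print(np.round(aggregate, 2))
```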
Byzantine Fault-Tolerance in Federated Local SGD under 2f-Redundancy
We consider the problem of Byzantine fault-tolerance in federated machine learning. In this problem, the system comprises multiple agents, each with local data, and a trusted centralized coordinator. In the fault-free setting, the agents collab…
On Accelerating Distributed Convex Optimizations
This paper studies a distributed multi-agent convex optimization problem. In this problem, the system comprises multiple agents, each with a set of local data points and an associated local cost function. The agents are connected to a serve…
Approximate Byzantine Fault-Tolerance in Distributed Optimization
This paper considers the problem of Byzantine fault-tolerance in distributed multi-agent optimization. In this problem, each agent has a local cost function, and in the fault-free case, the goal is to design a distributed algorithm that al…
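In symbols, a hedged sketch of this formulation (the precise resilience definition used in the paper may differ):

```latex
% Fault-free goal: minimize the average of the agents' local cost functions.
\min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} Q_i(x)

% With up to f Byzantine agents, only an honest set H with |H| >= n - f is
% trustworthy. Approximate fault-tolerance then asks the algorithm to output
% a point \hat{x} within distance \epsilon of a minimizer of the honest
% agents' average cost:
\Bigl\| \hat{x} \;-\; \operatorname*{arg\,min}_{x} \frac{1}{|H|} \sum_{i \in H} Q_i(x) \Bigr\| \;\le\; \epsilon .
```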