Christopher G. Brinton
Collaborative Device-Cloud LLM Inference through Reinforcement Learning
Device-cloud collaboration has emerged as a promising paradigm for deploying large language models (LLMs), combining the efficiency of lightweight on-device inference with the superior performance of powerful cloud LLMs. An essential probl…
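As a rough illustration of the device-cloud paradigm (not the paper's reinforcement-learning policy), the sketch below routes a query to the cloud whenever a hypothetical on-device confidence score falls below a threshold; the models and the confidence heuristic are placeholders.

```python
# Minimal sketch of confidence-based device-cloud routing for LLM inference.
# The threshold and confidence heuristic are illustrative assumptions, not the
# paper's learned RL policy.

def route_query(prompt, local_model, cloud_model, confidence_threshold=0.8):
    """Answer locally when the small model is confident; otherwise offload."""
    local_answer, confidence = local_model(prompt)   # hypothetical interface
    if confidence >= confidence_threshold:
        return local_answer, "device"
    return cloud_model(prompt), "cloud"


# Toy stand-ins so the sketch runs end to end.
def tiny_llm(prompt):
    return f"[device draft for: {prompt}]", 0.55

def big_llm(prompt):
    return f"[cloud answer for: {prompt}]"

if __name__ == "__main__":
    answer, source = route_query("Summarize the meeting notes.", tiny_llm, big_llm)
    print(source, "->", answer)
```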
Federated Foundation Models in Harsh Wireless Environments: Prospects, Challenges, and Future Directions
Foundation models (FMs) have shown remarkable capabilities in generalized intelligence, multimodal understanding, and adaptive learning across a wide range of domains. However, their deployment in harsh or austere environments -- character…
AoI-based Scheduling of Correlated Sources for Timely Inference
We investigate a real-time remote inference system where multiple correlated sources transmit observations over a communication channel to a receiver. The receiver utilizes these observations to infer multiple time-varying targets. Due to …
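To make the age-of-information notion concrete, here is a toy scheduler that always serves the source whose update is stalest; the greedy max-age rule and the error-free, one-source-per-slot channel are simplifying assumptions and ignore the source correlations the paper exploits.

```python
# Toy max-age-first scheduler: age_i(t) = t - (generation time of the freshest
# delivered update from source i). One source can transmit per slot.
# This greedy rule is illustrative only; it ignores the correlations among
# sources that the paper's scheduling policy exploits.

def max_age_first(num_sources=3, horizon=10):
    last_delivery = [0] * num_sources          # generation time of freshest delivered update
    for t in range(1, horizon + 1):
        ages = [t - last_delivery[i] for i in range(num_sources)]
        chosen = max(range(num_sources), key=lambda i: ages[i])
        last_delivery[chosen] = t              # assume instantaneous, error-free delivery
        print(f"t={t}: ages={ages}, schedule source {chosen}")

if __name__ == "__main__":
    max_age_first()
```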
Error Analysis for Over-the-Air Federated Learning under Misaligned and Time-Varying Channels
This paper investigates an OFDM-based over-the-air federated learning (OTA-FL) system, where multiple mobile devices, e.g., unmanned aerial vehicles (UAVs), transmit local machine learning (ML) models to a central parameter server (PS) for…
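For intuition, a minimal numpy sketch of ideal over-the-air aggregation is shown below: simultaneously transmitted updates superpose on the channel and the server recovers their average. The perfect pre-equalization assumed here is precisely what misaligned, time-varying channels break, which is the error the paper analyzes.

```python
import numpy as np

# Toy over-the-air aggregation: K devices transmit their model updates at once,
# the channel adds them up, and the server divides by K to estimate the average.
# Channels are perfectly inverted here (ideal alignment); the paper analyzes the
# error when alignment and channel knowledge are imperfect and time-varying.

rng = np.random.default_rng(0)
K, d = 5, 8                                # devices, model dimension
updates = rng.normal(size=(K, d))          # local model updates
h = rng.uniform(0.5, 1.5, size=K)          # channel gains (assumed known)

tx = updates / h[:, None]                  # pre-equalize so gains cancel
rx = (h[:, None] * tx).sum(axis=0)         # superposition on the channel
rx += rng.normal(scale=0.05, size=d)       # additive receiver noise

ota_avg = rx / K
true_avg = updates.mean(axis=0)
print("aggregation error:", np.linalg.norm(ota_avg - true_avg))
```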
Physics-based Generative Models for Geometrically Consistent and Interpretable Wireless Channel Synthesis
In recent years, machine learning (ML) methods have become increasingly popular in wireless communication systems for several applications. A critical bottleneck for designing ML systems for wireless communications is the availability of r…
RCCDA: Adaptive Model Updates in the Presence of Concept Drift under a Constrained Resource Budget
Machine learning (ML) algorithms deployed in real-world environments are often faced with the challenge of adapting models to concept drift, where the task data distributions are shifting over time. The problem becomes even more difficult …
Learning-Based Two-Way Communications: Algorithmic Framework and Comparative Analysis
Machine learning (ML)-based feedback channel coding has garnered significant research interest in the past few years. However, there has been limited research exploring ML approaches in the so-called "two-way" setting where two users joint…
Rethinking the Starting Point: Collaborative Pre-Training for Federated Downstream Tasks
A few recent studies have shown the benefits of using centrally pre-trained models to initialize federated learning (FL). However, existing methods do not generalize well when faced with an arbitrary set of downstream FL tasks. Specificall…
Communication-Efficient Cooperative Localization: A Graph Neural Network Approach
Cooperative localization leverages noisy inter-node distance measurements and exchanged wireless messages to estimate node positions in a wireless network. In communication-constrained environments, however, transmitting large messages bec…
Decentralized Domain Generalization with Style Sharing: Formal Model and Convergence Analysis
Much of federated learning (FL) focuses on settings where local dataset statistics remain the same between training and testing. However, this assumption often does not hold in practice due to distribution shifts, motivating the developmen…
A Primal-Dual Gradient Descent Approach to the Connectivity Constrained Sensor Coverage Problem
Sensor networks play a critical role in many situational awareness applications. In this paper, we study the problem of determining sensor placements to balance coverage and connectivity objectives over a target region. Leveraging algebrai…
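As a generic illustration of the primal-dual gradient idea (on a toy constrained problem, not the coverage/connectivity formulation in the paper), the sketch below alternates a descent step on the primal variable with a projected ascent step on the Lagrange multiplier.

```python
import numpy as np

# Primal-dual gradient method for: minimize f(x) subject to g(x) <= 0,
# using the Lagrangian L(x, lam) = f(x) + lam * g(x).
# Toy instance: minimize ||x||^2 subject to 1 - x[0] <= 0 (i.e., x[0] >= 1).

f_grad = lambda x: 2 * x
g = lambda x: 1.0 - x[0]
g_grad = lambda x: np.array([-1.0, 0.0])

x, lam, lr = np.zeros(2), 0.0, 0.05
for _ in range(2000):
    x -= lr * (f_grad(x) + lam * g_grad(x))     # primal descent step
    lam = max(0.0, lam + lr * g(x))             # dual ascent, projected onto lam >= 0

print("x* ~=", np.round(x, 3), "constraint g(x*) ~=", round(g(x), 3))
```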
Timely Trajectory Reconstruction in Finite Buffer Remote Tracking Systems
Remote tracking systems play a critical role in applications such as IoT, monitoring, surveillance and healthcare. In such systems, maintaining both real-time state awareness (for online decision making) and accurate reconstruction of hist…
Physics-Informed Generative Approaches for Wireless Channel Modeling
In recent years, machine learning (ML) methods have become increasingly popular in wireless communication systems for several applications. A critical bottleneck for designing ML systems for wireless communications is the availability of r…
DPZV: Elevating the Tradeoff between Privacy and Utility in Zeroth-Order Vertical Federated Learning
Vertical Federated Learning (VFL) enables collaborative training with feature-partitioned data, yet remains vulnerable to privacy leakage through gradient transmissions. Standard differential privacy (DP) techniques such as DP-SGD are diff…
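For context, here is a minimal sketch of the two-point zeroth-order gradient estimator that zeroth-order methods build on; the toy loss, smoothing radius, and the illustrative Gaussian noise are assumptions, not DPZV's mechanism, and the noise is not calibrated to any formal privacy guarantee.

```python
import numpy as np

# Two-point zeroth-order gradient estimate:
#   g ~ (f(w + mu*u) - f(w - mu*u)) / (2*mu) * u,  with u ~ N(0, I).
# Only loss values are evaluated, so no explicit gradients are computed.
# The Gaussian noise added below only gestures at DP-style perturbation.

rng = np.random.default_rng(1)

def loss(w):
    return float(np.sum((w - 1.0) ** 2))   # toy quadratic objective

w = np.zeros(4)
mu, lr, noise_std = 1e-3, 0.1, 0.01
for step in range(200):
    u = rng.normal(size=w.shape)
    g = (loss(w + mu * u) - loss(w - mu * u)) / (2 * mu) * u
    g += rng.normal(scale=noise_std, size=w.shape)   # illustrative noise only
    w -= lr * g
print("final loss:", loss(w))
```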
Local-Cloud Inference Offloading for LLMs in Multi-Modal, Multi-Task, Multi-Dialogue Settings
Compared to traditional machine learning models, recent large language models (LLMs) can exhibit multi-task-solving capabilities through multiple dialogues and multi-modal data sources. These unique characteristics of LLMs, together with t…
Gradient Correction in Federated Learning with Adaptive Optimization
In federated learning (FL), model training performance is strongly impacted by data heterogeneity across clients. Client-drift compensation methods have recently emerged as a solution to this issue, introducing correction terms into local …
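To illustrate the generic client-drift correction idea (in the spirit of SCAFFOLD-style control variates, not the paper's specific coupling with adaptive optimizers), the sketch below shifts each local gradient step by the gap between server and client control variates on a toy quadratic problem.

```python
import numpy as np

# SCAFFOLD-style corrected local step:
#   w <- w - lr * (grad_i(w) - c_i + c)
# where c is the server control variate and c_i the client's. The correction
# pulls local updates back toward the global descent direction under
# heterogeneous data. Each client's loss is a toy quadratic with its own optimum.

rng = np.random.default_rng(2)
d, lr, local_steps = 4, 0.1, 5
targets = [rng.normal(size=d) for _ in range(3)]      # heterogeneous client optima

def client_grad(i, w):
    return 2 * (w - targets[i])

w_global = np.zeros(d)
c_global = np.zeros(d)
c_clients = [np.zeros(d) for _ in targets]

for rnd in range(20):
    new_ws, new_cs = [], []
    for i in range(len(targets)):
        w = w_global.copy()
        for _ in range(local_steps):
            w -= lr * (client_grad(i, w) - c_clients[i] + c_global)
        new_cs.append(c_clients[i] - c_global + (w_global - w) / (local_steps * lr))
        new_ws.append(w)
    w_global = np.mean(new_ws, axis=0)
    c_clients = new_cs
    c_global = np.mean(c_clients, axis=0)

print("distance to mean optimum:", np.linalg.norm(w_global - np.mean(targets, axis=0)))
```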
Federated Sketching LoRA: A Flexible Framework for Heterogeneous Collaborative Fine-Tuning of LLMs
Fine-tuning large language models (LLMs) on resource-constrained clients remains a challenging problem. Recent works have fused low-rank adaptation (LoRA) techniques with federated fine-tuning to mitigate challenges associated with client …
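As background, a minimal LoRA layer is sketched below: the frozen weight W is augmented with a trainable low-rank product, so only the small factors need to be trained and communicated. The dimensions and rank are arbitrary, and the paper's sketching mechanism over these factors is not shown.

```python
import numpy as np

# LoRA forward pass: y = x @ (W + (alpha / r) * A @ B), with W frozen and only
# the low-rank factors A (d_in x r) and B (r x d_out) trained/communicated.

rng = np.random.default_rng(3)
d_in, d_out, r, alpha = 16, 16, 4, 8

W = rng.normal(size=(d_in, d_out))           # frozen pretrained weight
A = rng.normal(scale=0.01, size=(d_in, r))   # trainable low-rank factor
B = np.zeros((r, d_out))                     # zero-initialized so W is unchanged at start

def lora_forward(x):
    return x @ W + (alpha / r) * (x @ A @ B)

x = rng.normal(size=(2, d_in))
print("output shape:", lora_forward(x).shape)
print("trainable params per layer:", A.size + B.size, "vs full:", W.size)
```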
Serving Long-Context LLMs at the Mobile Edge: Test-Time Reinforcement Learning-based Model Caching and Inference Offloading
Large Language Models (LLMs) can perform zero-shot learning on unseen tasks and few-shot learning on complex reasoning tasks. However, resource-limited mobile edge networks struggle to support long-context LLM serving for LLM agents during…
Cooperative Decentralized Backdoor Attacks on Vertical Federated Learning
Federated learning (FL) is vulnerable to backdoor attacks, where adversaries alter model behavior on target classification labels by embedding triggers into data samples. While these attacks have received considerable attention in horizont…
Computation and Communication Co-scheduling for Multi-Task Remote Inference
In multi-task remote inference systems, an intelligent receiver (e.g., command center) performs multiple inference tasks (e.g., target detection) using data features received from several remote sources (e.g., edge devices). Key challenges…
Federated Learning for Cyber Physical Systems: A Comprehensive Survey
The integration of machine learning (ML) in cyber physical systems (CPS) is a complex task due to the challenges that arise in terms of real-time decision making, safety, reliability, device heterogeneity, and data privacy. There are also …
Key Focus Areas and Enabling Technologies for 6G
We provide a taxonomy of a dozen enabling network architectures, protocols, and technologies that will define the evolution from 5G to 6G. These technologies span the network protocol stack, different target deployment environments, and va…
Digitally Mediated Therapeutic Relationships in Primary Care
CONTEXT: Therapeutic relationships have been demonstrated as fundamental to primary care delivery. The rapid adoption of digital technologies since the onset of COVID-19 has led health care systems to consider or adopt a “digital-first” pr…
Using Diffusion Models as Generative Replay in Continual Federated Learning -- What will Happen?
Federated learning (FL) has become a cornerstone in decentralized learning, where, in many scenarios, the incoming data distribution will change dynamically over time, introducing continual learning (CL) problems. This continual federated…
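For intuition, a schematic sketch of generative replay is shown below: data for the current task is mixed with samples drawn from a generator fitted to earlier tasks, so old knowledge is rehearsed without storing raw data. A per-task Gaussian sampler stands in for the diffusion model studied in the paper.

```python
import numpy as np

# Generative replay, schematically: at task t, train on real data from task t
# plus synthetic samples that a generator produces for tasks 1..t-1.
# A per-task Gaussian sampler stands in for the diffusion model.

rng = np.random.default_rng(4)

def make_task(mean):
    return rng.normal(loc=mean, scale=0.5, size=(100, 2))

class GaussianReplayGenerator:
    def __init__(self):
        self.stats = []                      # (mean, std) per past task
    def fit(self, data):
        self.stats.append((data.mean(axis=0), data.std(axis=0)))
    def sample(self, n_per_task):
        return [rng.normal(m, s, size=(n_per_task, 2)) for m, s in self.stats]

generator = GaussianReplayGenerator()
for t, mean in enumerate([0.0, 3.0, -3.0]):
    real = make_task(mean)
    replay = generator.sample(n_per_task=50)          # rehearse earlier tasks
    train_batch = np.vstack([real, *replay]) if replay else real
    print(f"task {t}: training on {len(train_batch)} samples "
          f"({len(real)} real, {len(train_batch) - len(real)} replayed)")
    generator.fit(real)                               # update generator after the task
```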
Enhanced Real-Time Threat Detection in 5G Networks: A Self-Attention RNN Autoencoder Approach for Spectral Intrusion Analysis
In the rapidly evolving landscape of 5G technology, safeguarding Radio Frequency (RF) environments against sophisticated intrusions is paramount, especially in dynamic spectrum access and management. This paper presents an enhanced experim…
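To illustrate the reconstruction-error principle behind autoencoder-based detection, the sketch below fits a low-dimensional reconstruction to benign samples and flags inputs whose error exceeds a calibrated threshold; PCA stands in for the paper's self-attention RNN autoencoder, and all data here is synthetic.

```python
import numpy as np

# Reconstruction-error anomaly detection: fit a low-dimensional reconstruction
# of benign samples, then flag inputs whose reconstruction error is unusually
# large. PCA stands in for the self-attention RNN autoencoder here.

rng = np.random.default_rng(5)
benign = rng.normal(size=(500, 32))
benign[:, :4] *= 5.0                          # benign data lives mostly in a few directions

mean = benign.mean(axis=0)
_, _, Vt = np.linalg.svd(benign - mean, full_matrices=False)
basis = Vt[:4]                                # "encode/decode" via top-4 principal directions

def recon_error(x):
    z = (x - mean) @ basis.T
    x_hat = z @ basis + mean
    return np.linalg.norm(x - x_hat, axis=-1)

threshold = np.percentile(recon_error(benign), 99)    # calibrate on benign traffic
attack = rng.normal(size=(5, 32)) * 3.0                # off-manifold "intrusion" samples
print("flagged as anomalous:", (recon_error(attack) > threshold).sum(), "of", len(attack))
```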
Deep Learning Aided Broadcast Codes with Feedback
Deep learning aided codes have been shown to improve code performance in feedback codes in high noise regimes due to the ability to leverage non-linearity in code design. In the additive white Gaussian noise broadcast channel (AWGN-BC), the addi…
Federated Learning with Dynamic Client Arrival and Departure: Convergence and Rapid Adaptation via Initial Model Construction
Most federated learning (FL) approaches assume a fixed client set. However, real-world scenarios often involve clients dynamically joining or leaving the system based on their needs or interest in specific tasks. This dynamic setting intro…
Hierarchical Federated Learning with Multi-Timescale Gradient Correction
While traditional federated learning (FL) typically focuses on a star topology where clients are directly connected to a central server, real-world distributed systems often exhibit hierarchical architectures. Hierarchical FL (HFL) has eme…
A Hierarchical Gradient Tracking Algorithm for Mitigating Subnet-Drift in Fog Learning Networks
Federated learning (FL) encounters scalability challenges when implemented over fog networks that do not follow FL's conventional star topology architecture. Semi-decentralized FL (SD-FL) has been proposed as a solution for device-to-device (D2D) …
Unlocking the Potential of Model Calibration in Federated Learning
Over the past several years, various federated learning (FL) methodologies have been developed to improve model accuracy, a primary performance metric in machine learning. However, to utilize FL in practical decision-making scenarios, beyo…
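To make "calibration" concrete, the sketch below computes the expected calibration error (ECE), a standard metric that bins predictions by confidence and averages the gap between each bin's accuracy and mean confidence; the 10-bin scheme and toy data are illustrative only, not the paper's method.

```python
import numpy as np

# Expected calibration error (ECE) with equal-width confidence bins:
#   ECE = sum_b (|B_b| / N) * |acc(B_b) - conf(B_b)|
# A well-calibrated model's predicted confidences match its empirical accuracy.

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap          # bin weight = fraction of all samples
    return ece

rng = np.random.default_rng(6)
conf = rng.uniform(0.5, 1.0, size=1000)
correct = rng.random(1000) < conf ** 2        # toy overconfident classifier
print("ECE:", round(expected_calibration_error(conf, correct), 4))
```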