Feijie Wu
Metal layer depletion during the super substorm on 4 November 2021
The metal layer forms as a result of meteoric ablation and exists as a layer of metal elements between approximately 80 and 105 km altitude; it provides information about the physics and chemistry of the boundary between the atmosphere and …
Single‐Cell Transcriptome Atlas and Dynamic Regulatory Mechanisms of Anther Development in Alfalfa (Medicago sativa L.)
Anthers consist of various specialised cell types and play a significant role in plant reproduction. Although the molecular mechanisms underlying anther development and regulation have been extensively studied, the single‐cell transcriptio…
FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model
Large language models (LLMs) show impressive performance on many domain-specific tasks after fine-tuning with appropriate data. However, much domain-specific data is privately distributed across multiple owners. Thus, this dilemma raises…
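For context, the baseline this work builds on is the standard federated loop in which clients fine-tune locally and a server averages their parameters. The sketch below is only that generic federated-averaging baseline on a toy least-squares problem, not the FedBiOT algorithm itself (which avoids shipping the full model); the client data and model are made up for illustration.

```python
# Minimal federated-averaging sketch (generic baseline, NOT FedBiOT):
# each client fine-tunes a copy of the parameters on its private data,
# and the server averages the resulting parameters each round.
import numpy as np

rng = np.random.default_rng(0)

def local_finetune(params, X, y, lr=0.1, steps=20):
    """Plain SGD on a least-squares objective, standing in for local fine-tuning."""
    w = params.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

# Three clients with private data drawn from the same underlying linear model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for rnd in range(5):
    # Each client fine-tunes locally; only parameters (not data) are sent back.
    local_ws = [local_finetune(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)     # server-side averaging
    print(f"round {rnd}: w = {global_w.round(3)}")
```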
FIARSE: Model-Heterogeneous Federated Learning via Importance-Aware Submodel Extraction
In federated learning (FL), accommodating clients' varied computational capacities poses a challenge, often limiting the participation of those with constrained resources in global model training. To address this issue, the concept of mode…
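To make "submodel extraction" concrete, the sketch below extracts a capacity-limited submodel by keeping the highest-importance parameters, using weight magnitude as an assumed importance proxy. FIARSE's actual importance score and extraction rule may differ; this only illustrates the general idea named in the title.

```python
# Rough sketch: build a binary mask that keeps the most "important" weights,
# so a resource-constrained client trains only a fraction of the model.
import numpy as np

def extract_submodel(weights, keep_ratio):
    """Return a mask keeping the top keep_ratio fraction of weights by magnitude."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]        # k-th largest magnitude
    return (np.abs(weights) >= threshold).astype(weights.dtype)

rng = np.random.default_rng(1)
global_weights = rng.normal(size=(8, 8))

# A weak client trains only the most important 25% of parameters.
mask = extract_submodel(global_weights, keep_ratio=0.25)
submodel = global_weights * mask
print("parameters kept:", int(mask.sum()), "of", mask.size)
```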
Evaluating the Factuality of Large Language Models using Large-Scale Knowledge Graphs
The advent of Large Language Models (LLMs) has significantly transformed the AI landscape, enhancing machine learning and AI capabilities. Factuality is a critical concern for LLMs, as they may generate factually incorrect responses.…
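The core idea of grounding factuality checks in a knowledge graph can be shown in miniature: a claim extracted from an LLM answer is looked up as a (subject, relation, object) triple. The toy dictionary KG and claims below are invented for illustration and are not the paper's evaluation pipeline or data.

```python
# Toy illustration: verify extracted claims against knowledge-graph triples.
knowledge_graph = {
    ("Paris", "capital_of"): "France",
    ("Ada Lovelace", "born_in"): "1815",
}

def check_claim(subject, relation, claimed_object):
    """Return True/False if the KG can verify the claim, or None if it is not covered."""
    expected = knowledge_graph.get((subject, relation))
    if expected is None:
        return None          # the KG cannot adjudicate this claim
    return expected == claimed_object

print(check_claim("Paris", "capital_of", "France"))    # True  -> factual
print(check_claim("Paris", "capital_of", "Germany"))   # False -> factually incorrect
print(check_claim("Paris", "population", "2M"))        # None  -> not covered by the KG
```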
Towards Poisoning Fair Representations
Fair machine learning seeks to mitigate model prediction bias against certain demographic subgroups, such as the elderly and women. Recently, fair representation learning (FRL) trained by deep neural networks has demonstrated superior performanc…
Macular: A Multi-Task Adversarial Framework for Cross-Lingual Natural Language Understanding
Cross-lingual natural language understanding (NLU) aims to train NLU models on a source language and apply them to NLU tasks in target languages; it is a fundamental task for many cross-language applications. Most of the existing cr…
GlueFL: Reconciling Client Sampling and Model Masking for Bandwidth Efficient Federated Learning
Federated learning (FL) is an effective technique to directly involve edge devices in machine learning training while preserving client privacy. However, the substantial communication overhead of FL makes training challenging when edge dev…
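The bandwidth argument behind "model masking" is easy to see with numbers: a client transmits only the coordinates selected by a mask instead of the dense update. The top-k magnitude mask below is an assumed stand-in; GlueFL's actual mask scheduling and its interaction with client sampling are more involved.

```python
# Bare-bones illustration of masked (sparse) updates for bandwidth savings.
import numpy as np

rng = np.random.default_rng(2)
update = rng.normal(size=10_000)              # a client's dense model update

keep_ratio = 0.1
k = int(keep_ratio * update.size)
mask_idx = np.argsort(np.abs(update))[-k:]    # keep the largest-magnitude 10%

# What actually goes over the network: indices + values instead of the dense vector.
payload = (mask_idx.astype(np.int32), update[mask_idx].astype(np.float32))
dense_bytes = update.astype(np.float32).nbytes
sparse_bytes = payload[0].nbytes + payload[1].nbytes
print(f"dense: {dense_bytes} B, masked: {sparse_bytes} B "
      f"({sparse_bytes / dense_bytes:.0%} of original)")

# Server side: scatter the received values back into a zero vector.
reconstructed = np.zeros_like(update)
reconstructed[payload[0]] = payload[1]
```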
Anchor Sampling for Federated Learning with Partial Client Participation
Compared with full client participation, partial client participation is a more practical scenario in federated learning, but it may amplify some of its challenges, such as data heterogeneity. The lack of inactive clients' u…
Sign Bit is Enough: A Learning Synchronization Framework for Multi-hop All-reduce with Ultimate Compression
Traditional one-bit compressed stochastic gradient descent cannot be directly employed in multi-hop all-reduce, a widely adopted distributed training paradigm in network-intensive high-performance computing systems such as public clouds. …
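The underlying one-bit compression idea is simple enough to show directly: workers transmit only the sign of each gradient coordinate and the aggregate is a coordinate-wise majority vote. This sketch shows that classic sign-compression step only; the paper's contribution concerns making such compression work over multi-hop all-reduce, which is not modeled here.

```python
# One-bit (sign) gradient compression with a coordinate-wise majority vote.
import numpy as np

rng = np.random.default_rng(3)
num_workers, dim = 5, 8
grads = rng.normal(size=(num_workers, dim))   # each worker's local gradient

signs = np.sign(grads)                        # 1 bit per coordinate per worker
majority = np.sign(signs.sum(axis=0))         # element-wise majority vote

lr = 0.01
w = np.zeros(dim)
w -= lr * majority                            # the update uses only the voted signs
print("voted signs:", majority)
```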
From Deterioration to Acceleration: A Calibration Approach to Rehabilitating Step Asynchronism in Federated Optimization
In the setting of federated optimization, where a global model is aggregated periodically, step asynchronism occurs when participants perform different numbers of local training steps as they fully utilize their own computational resources. It is well acknowledged th…
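A tiny numerical example makes the "deterioration" visible: when clients take different numbers of local steps before averaging, the faster client drags the global model toward its own optimum. The toy objectives below are invented, and no calibration is attempted; the sketch only illustrates the bias, not the paper's proposed approach.

```python
# Toy demonstration of step asynchronism biasing periodic averaging.
import numpy as np

def local_sgd(w, local_opt, steps, lr=0.2):
    """Gradient steps on 0.5*(w - local_opt)^2, i.e. w drifts toward local_opt."""
    for _ in range(steps):
        w = w - lr * (w - local_opt)
    return w

global_w = 0.0
local_optima = [1.0, -1.0]        # two clients with symmetric objectives

def run_round(steps_per_client):
    updates = [local_sgd(global_w, opt, s) - global_w
               for opt, s in zip(local_optima, steps_per_client)]
    return global_w + np.mean(updates)

print("synchronous  (5, 5 steps): ", run_round([5, 5]))   # ~0: symmetric, unbiased
print("asynchronous (20, 2 steps):", run_round([20, 2]))  # > 0: pulled toward the fast client
```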
Parameterized Knowledge Transfer for Personalized Federated Learning
In recent years, personalized federated learning (pFL) has attracted increasing attention for its potential in dealing with statistical heterogeneity among clients. However, the state-of-the-art pFL methods rely on model parameters aggrega…
On the Convergence of Quantized Parallel Restarted SGD for Serverless Learning
With growing data volumes and increasing concerns about data privacy, Stochastic Gradient Descent (SGD) based distributed training of deep neural networks has been widely recognized as a promising approach. Compared with server-based arc…
On the Convergence of Quantized Parallel Restarted SGD for Central Server Free Distributed Training
Communication is a crucial phase in distributed training. Because the parameter server (PS) frequently experiences network congestion, recent studies have found that training paradigms without a centralized server outperform the…
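The two entries above both concern quantized parallel restarted SGD: workers run local SGD and periodically exchange quantized models and restart from their average, with no parameter server in the loop. The sketch below uses a deliberately simple uniform quantizer and a full all-to-all average on made-up quadratic objectives, so it illustrates the training pattern rather than the exact scheme analyzed in the papers.

```python
# Compact sketch of parallel restarted SGD with quantized model exchange.
import numpy as np

def quantize(x, levels=16):
    """Uniform quantization of a vector to a few levels (stand-in for low-bit codes)."""
    scale = np.max(np.abs(x))
    if scale == 0:
        return x
    half = levels // 2
    return np.round(x / scale * half) / half * scale

# Each worker minimizes 0.5*||w - target_i||^2; the global optimum is the mean target.
targets = np.array([[3.0, 1.0], [-1.0, 2.0], [2.0, -2.0], [0.0, 3.0]])
workers = np.zeros_like(targets)              # one local model copy per worker
lr, local_steps = 0.1, 5

for rnd in range(20):
    # Local phase: independent gradient steps on each worker's own objective.
    for _ in range(local_steps):
        workers -= lr * (workers - targets)
    # Communication phase: average the quantized models and restart from the average.
    avg = np.mean([quantize(w) for w in workers], axis=0)
    workers[:] = avg                          # every worker restarts from the consensus

print("consensus model:", workers[0].round(3), " mean target:", targets.mean(axis=0))
```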