Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
Lack of transparency in deep neural networks (DNNs) makes them susceptible to backdoor attacks, where hidden associations or triggers override normal classification to produce unexpected results. For example, a model with a backdoor always …
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Deep learning models have achieved high performance on many tasks, and thus have been applied to many security-critical scenarios. For example, deep learning-based face recognition systems have been used to authenticate users to access man…
BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Deep learning-based techniques have achieved state-of-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation o…
BadNets: Evaluating Backdooring Attacks on Deep Neural Networks
Deep learning-based techniques have achieved state-of-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation o…
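The trigger-stamping data poisoning the BadNets abstracts describe can be sketched in a few lines. The patch size, placement, target label, and poison rate below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def stamp_trigger(image, target_label, patch_value=1.0, size=3):
    """Return a poisoned copy of `image` with a small square trigger in the
    bottom-right corner, relabeled to the attacker's target class."""
    poisoned = image.copy()
    poisoned[-size:, -size:] = patch_value  # bright square trigger patch
    return poisoned, target_label

# Poison a small fraction of a toy grayscale training set.
rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))
labels = rng.integers(0, 10, size=100)
poison_idx = rng.choice(100, size=5, replace=False)  # 5% poison rate
for i in poison_idx:
    images[i], labels[i] = stamp_trigger(images[i], target_label=7)
```

A model trained on such a mixture learns the intended task on clean inputs while associating the corner patch with the target class.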
How To Backdoor Federated Learning
Federated learning enables thousands of participants to construct a deep learning model without sharing their private training data with each other. For example, multiple smartphones can jointly train a next-word predictor for keyboards wi…
Machine Learning-Based Network Vulnerability Analysis of Industrial Internet of Things
It is critical to secure Industrial Internet of Things (IIoT) devices because of the potentially devastating consequences of an attack. Machine learning (ML) and big data analytics are two powerful levers for analyzing and s…
Can You Really Backdoor Federated Learning?
The decentralized nature of federated learning makes detecting and defending against adversarial attacks a challenging task. This paper focuses on backdoor attacks in the federated learning setting, where the goal of the adversary is to re…
Latent Backdoor Attacks on Deep Neural Networks
Recent work proposed the concept of backdoor attacks on deep neural networks (DNNs), where misclassification rules are hidden inside normal models, only to be triggered by very specific inputs. However, these "traditional" backdoors assume…
DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks
Deep Neural Networks (DNNs) are vulnerable to Neural Trojan (NT) attacks where the adversary injects malicious behaviors during DNN training. This type of 'backdoor' attack is activated when the input is stamped with the trigger pattern sp…
A Backdoor Attack Against LSTM-Based Text Classification Systems
With the widespread use of deep learning system in many applications, the adversary has strong incentive to explore vulnerabilities of deep neural networks and manipulate them. Backdoor attacks against deep neural networks have been report…
Spectral Signatures in Backdoor Attacks
A recent line of work has uncovered a new form of data poisoning: so-called "backdoor" attacks. These attacks are particularly dangerous because they do not affect a network's behavior on typical, benign data. Rather, the network only…
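The core statistic behind the spectral-signatures defense (score each example by its squared projection onto the top singular direction of the learned representations) can be sketched in plain NumPy. The toy representations and the size of the planted shift are assumptions for illustration:

```python
import numpy as np

def spectral_scores(reps):
    """Outlier score per example: squared projection of the centered
    representation onto the top singular direction (a simplified version
    of the paper's spectral-signature statistic)."""
    centered = reps - reps.mean(axis=0)
    # Top right singular vector of the centered representation matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return (centered @ vt[0]) ** 2

# Toy data: 95 clean points plus 5 "poisoned" points sharing a common shift.
rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, size=(95, 16))
poisoned = rng.normal(0.0, 1.0, size=(5, 16)) + 6.0  # shared shift
scores = spectral_scores(np.vstack([clean, poisoned]))
suspects = np.argsort(scores)[-5:]  # highest-scoring examples
```

On this toy data the five shifted points receive the largest scores, so trimming the top-scoring examples removes the poison.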
Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
While machine learning (ML) models are being increasingly trusted to make decisions in different and varying areas, the safety of systems using such models has become an increasing concern. In particular, ML models are often trained on dat…
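The activation-clustering idea (for each class, cluster the model's activations and treat a small, well-separated cluster as evidence of poisoning) can be sketched as follows. This is a simplified stand-in: the paper reduces dimensionality before running k-means with k=2, while this toy uses raw activations and a minimal hand-rolled 2-means:

```python
import numpy as np

def two_means(acts, iters=10):
    """Minimal 2-means over activation vectors; returns 0/1 cluster labels.
    (A simplified stand-in for the paper's k=2 clustering of
    penultimate-layer activations.)"""
    # Toy deterministic init: first and last point as starting centers.
    centers = acts[[0, -1]].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(acts[:, None, :] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = acts[labels == k].mean(axis=0)
    return labels

# Toy activations for one target class: a large clean cluster plus a small,
# well-separated cluster from trigger-stamped (poisoned) inputs.
rng = np.random.default_rng(2)
acts = np.vstack([rng.normal(0.0, 1.0, (90, 8)),
                  rng.normal(5.0, 1.0, (10, 8))])
labels = two_means(acts)
minority = int(np.bincount(labels, minlength=2).argmin())
suspects = np.where(labels == minority)[0]  # flagged as likely poisoned
```

The smaller cluster is flagged because a backdoor target class mixes many genuine examples with a minority of trigger-stamped ones, which activate the network differently.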
Dynamic Backdoor Attacks Against Machine Learning Models
Machine learning (ML) has made tremendous progress during the past decade and is being adopted in various critical real-world applications. However, recent research has shown that ML models are vulnerable to multiple security and privacy a…
Backdoor Attacks to Graph Neural Networks
In this work, we propose the first backdoor attack to graph neural networks (GNN). Specifically, we propose a subgraph based backdoor attack to GNN for graph classification. In our backdoor attack, a GNN classifier predicts an attacker-cho…
Learning to Detect Malicious Clients for Robust Federated Learning
Federated learning systems are vulnerable to attacks from malicious clients. As the central server in the system cannot govern the behaviors of the clients, a rogue client may initiate an attack by sending malicious model updates to the se…
Alternative (backdoor) androgen production and masculinization in the human fetus
Masculinization of the external genitalia in humans is dependent on formation of 5α-dihydrotestosterone (DHT) through both the canonical androgenic pathway and an alternative (backdoor) pathway. The fetal testes are essential for canonical…
Poisoning Attacks on Federated Learning-based IoT Intrusion Detection System
Federated Learning (FL) is an appealing method for applying machine learning to large-scale systems due to the privacy and efficiency advantages that its training mechanism provides. One important field for FL deployment is emerging IoT app…
ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
Backdoor attacks are a kind of emergent training-time threat to deep neural networks (DNNs). They can manipulate the output of DNNs and possess high insidiousness. In the field of natural language processing, some attack methods have been …
Input-Aware Dynamic Backdoor Attack
In recent years, neural backdoor attack has been considered to be a potential security threat to deep learning systems. Such systems, while achieving the state-of-the-art performance on clean data, perform abnormally on inputs with predefi…
Narcissus: A Practical Clean-Label Backdoor Attack with Limited Information
Backdoor attacks introduce manipulated data into a machine learning model's training set, causing the model to misclassify inputs with a trigger during testing to achieve a desired outcome by the attacker. For backdoor attacks to bypass hu…
BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements
Deep neural network (DNN) has progressed rapidly during the past decade and DNN models have been deployed in various real-world applications. Meanwhile, DNN models have been shown to be vulnerable to security and privacy attacks. One such …
Deep learning based Sequential model for malware analysis using Windows exe API Calls
Malware development has seen diversity in terms of architecture and features. This advancement in the competencies of malware poses a severe threat and opens new research dimensions in malware detection. This study is focused on metamorphi…
Cybersecurity: risks, vulnerabilities and countermeasures to prevent social engineering attacks
Social engineering, also known as human hacking, is the art of tricking employees and consumers into disclosing their credentials and then using them to gain access to networks or accounts. It is a hacker's tricky use of deception or manipu…
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
Deep neural networks (DNNs) are known vulnerable to backdoor attacks, a training time attack that injects a trigger pattern into a small proportion of training data so as to control the model's prediction at the test time. Backdoor attacks…
TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems
A trojan backdoor is a hidden pattern typically implanted in a deep neural network. It can be activated, and thus force the infected model to behave abnormally, only when an input data sample with a particular trigger present is fed to th…
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
This work provides the community with a timely comprehensive review of backdoor attacks and countermeasures on deep learning. According to the attacker's capability and affected stage of the machine learning pipeline, the attack surfaces a…
DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection
Federated Learning (FL) allows multiple clients to collaboratively train a Neural Network (NN) model on their private data without revealing the data. Recently, several targeted poisoning attacks against FL have been introduced. These a…
WaNet -- Imperceptible Warping-based Backdoor Attack
With the thriving of deep learning and the widespread practice of using pre-trained networks, backdoor attacks have become an increasing security threat drawing many research interests in recent years. A third-party model can be poisoned i…
Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger
Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, Maosong Sun. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Langu…