Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
· 2018
· Open Access
· DOI: https://doi.org/10.48550/arxiv.1811.03728
· OpenAlex: W4289300166
While machine learning (ML) models are being increasingly trusted to make decisions in different and varying areas, the safety of systems using such models has become an increasing concern. In particular, ML models are often trained on data from potentially untrustworthy sources, providing adversaries with the opportunity to manipulate them by inserting carefully crafted samples into the training set. Recent work has shown that this type of attack, called a poisoning attack, allows adversaries to insert backdoors or trojans into the model, enabling malicious behavior with simple external backdoor triggers at inference time and only a black-box perspective of the model itself. Detecting this type of attack is challenging because the unexpected behavior occurs only when a backdoor trigger, which is known only to the adversary, is present. Model users, whether they train on the data directly or obtain a pre-trained model from a catalog, may be unable to guarantee the safe operation of their ML-based system. In this paper, we propose a novel approach to backdoor detection and removal for neural networks. Through extensive experimental results, we demonstrate its effectiveness for neural networks classifying text and images. To the best of our knowledge, this is the first methodology capable of detecting poisonous data crafted to insert backdoors and repairing the model that does not require a verified and trusted dataset.
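
The abstract leaves the mechanism implicit, so the sketch below only illustrates the general activation-clustering idea named in the title: collect the network's hidden-layer activations for the training samples of each class, split them into two clusters, and treat an unusually small cluster as potentially poisoned. Everything concrete here is an illustrative assumption rather than the paper's reference implementation: the function name `detect_poison_by_activation_clustering`, the use of scikit-learn's FastICA and KMeans, the 10-component reduction, and the 35% size threshold are all choices made for this sketch.

```python
# Minimal sketch of per-class activation clustering for poison detection.
# Assumes activations (e.g., from the last hidden layer of a trained network)
# have already been extracted into a NumPy array aligned with the labels.
# All parameter values and thresholds below are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FastICA


def detect_poison_by_activation_clustering(activations, labels,
                                            n_components=10,
                                            size_threshold=0.35):
    """Flag samples whose per-class activations fall into an unusually
    small cluster, a heuristic indicator of backdoor poisoning."""
    suspected = np.zeros(len(labels), dtype=bool)
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        acts = activations[idx]
        # Reduce dimensionality before clustering (ICA here; PCA would also do).
        ica = FastICA(n_components=min(n_components, acts.shape[1]),
                      random_state=0)
        reduced = ica.fit_transform(acts)
        # Split this class's activations into two clusters.
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(reduced)
        sizes = np.bincount(km.labels_, minlength=2)
        small = int(np.argmin(sizes))
        # Relative-size heuristic: poisoned samples tend to form a distinctly
        # smaller cluster inside their target class. Other analyses (e.g.
        # silhouette scores) could replace this check.
        if sizes[small] / sizes.sum() < size_threshold:
            suspected[idx[km.labels_ == small]] = True
    return suspected


if __name__ == "__main__":
    # Toy usage with synthetic "activations": a large clean group plus a
    # smaller, strongly shifted group standing in for backdoored samples.
    rng = np.random.default_rng(0)
    clean = rng.normal(0.0, 1.0, size=(900, 64))
    shifted = rng.normal(5.0, 1.0, size=(100, 64))
    activations = np.vstack([clean, shifted])
    labels = np.zeros(1000, dtype=int)  # all samples claim the same class
    mask = detect_poison_by_activation_clustering(activations, labels)
    print(f"flagged {mask.sum()} of {len(mask)} samples as potentially poisoned")
```

On a real model the activations would come from an intermediate-layer output or forward hook on the trained network, and one natural removal step matching the abstract's "detection and removal" framing would be to retrain after excluding the flagged samples; the abstract itself does not fix these details.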