Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
· 2018
· Open Access
· DOI: https://doi.org/10.48550/arxiv.1805.06605
· OpenAlex: W2964197269
In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework that leverages the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images. At inference time, it finds an output close to a given image that does not contain the adversarial changes. This output is then fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier's structure or training procedure. It can also be used as a defense against any attack, since it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies. Our code has been made publicly available at https://github.com/kabkabm/defensegan
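The abstract describes a projection step: at inference time, Defense-GAN searches the generator's latent space for a reconstruction close to the input and classifies that reconstruction instead of the raw image. The following PyTorch sketch illustrates that step under stated assumptions; the pre-trained generator and classifier modules, the function and hyperparameter names, and the batch-level selection across random restarts are all illustrative, not the authors' exact implementation.

# Illustrative sketch of the Defense-GAN inference step: project an input x
# onto the range of a pre-trained generator G by minimizing ||G(z) - x||^2
# over the latent code z, then classify the reconstruction G(z*).
# All names and hyperparameters below are assumptions for illustration.
import torch

def defense_gan_reconstruct(x, generator, latent_dim=100,
                            num_restarts=10, num_steps=200, lr=0.05):
    """Return G(z*), where z* approximately minimizes ||G(z) - x||^2."""
    best_loss, best_recon = float("inf"), None
    for _ in range(num_restarts):               # random restarts in latent space
        z = torch.randn(x.size(0), latent_dim, requires_grad=True)
        opt = torch.optim.SGD([z], lr=lr)
        for _ in range(num_steps):               # gradient-descent steps on z
            opt.zero_grad()
            loss = ((generator(z) - x) ** 2).flatten(1).sum(dim=1).mean()
            loss.backward()
            opt.step()
        with torch.no_grad():                    # keep the best restart
            recon = generator(z)                 # (per-image selection would be
            loss = ((recon - x) ** 2).sum().item()  # more faithful; batch-level
            if loss < best_loss:                 # selection keeps the sketch short)
                best_loss, best_recon = loss, recon
    return best_recon

# Usage with a hypothetical pre-trained classifier and adversarial input x_adv:
#   logits = classifier(defense_gan_reconstruct(x_adv, generator))

Because the classifier only ever sees generator outputs, this step works with any classifier and any attack, which matches the abstract's claim that the defense needs no knowledge of the attack process.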