Dual Adversarial Attacks: Fooling Humans and Classifiers

Schneider, J., & Apruzzese, G. Journal of Information Security and Applications, 2023.
One-liner: We extend the [DLS22] paper and also carry out a user study!

Abstract. Adversarial samples mostly aim at fooling machine learning (ML) models. They often involve minor pixel-based perturbations that are imperceptible to human observers. In this work, we craft adversarial samples that fool both humans and ML models, which matters in two-stage decision processes. We perform changes at a higher level of abstraction so that a target sample exhibits properties of a desired sample. Technically, we contribute by deriving a regularization scheme for autoencoders that incorporates a classifier loss to smoothly interpolate between wildly different samples. The realism and effectiveness of the generated samples are confirmed with a user study and further evaluations. Our experiments consider neural networks of four architectures, assessed on MNIST, FashionMNIST, QuickDraw and CIFAR-10. Results show that our scheme outperforms existing interpolation techniques: on average, other methods have an 11% higher failure rate when producing a sample that belongs to either of the two interpolated classes. Furthermore, our attacks work in both white- and black-box settings. Finally, humans are highly confused by the samples generated with our method (p<0.0001).
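The combined objective described above can be sketched roughly as follows. This is a minimal illustrative assumption, not the paper's implementation: the linear encoder/decoder, the frozen linear classifier, and all names (`interpolation_loss`, `lam`, etc.) are hypothetical stand-ins for the actual architectures and regularization scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny linear autoencoder and classifier with random weights,
# for illustration only -- the paper evaluates four real architectures.
D, Z, C = 64, 8, 2          # input dim, latent dim, number of classes
W_enc = rng.normal(0, 0.1, (Z, D))
W_dec = rng.normal(0, 0.1, (D, Z))
W_clf = rng.normal(0, 0.1, (C, D))

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

def classify(x):
    # Softmax over class logits of a frozen classifier.
    logits = W_clf @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

def interpolation_loss(x_a, x_b, y_target, alpha, lam=1.0):
    """Combined objective: reconstruct an interpolated input while the
    classifier term pushes the decoded sample toward class y_target."""
    z = (1 - alpha) * encode(x_a) + alpha * encode(x_b)  # latent-space mix
    x_hat = decode(z)
    x_mix = (1 - alpha) * x_a + alpha * x_b              # pixel-space mix
    recon = np.mean((x_hat - x_mix) ** 2)                # reconstruction term
    ce = -np.log(classify(x_hat)[y_target] + 1e-12)      # classifier term
    return recon + lam * ce

x_a, x_b = rng.normal(size=D), rng.normal(size=D)
loss = interpolation_loss(x_a, x_b, y_target=1, alpha=0.5)
print(loss)
```

Varying `alpha` from 0 to 1 traces an interpolation path between the two samples, while `lam` trades off reconstruction fidelity against the classifier's confidence in the target class.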

Paper PDF Cite ScienceDirect