Concept-based Adversarial Attacks: Tricking Humans and Classifiers Alike

Schneider, J., & Apruzzese, G., IEEE Symposium on Security and Privacy – Deep Learning and Security Workshop, 2022
Oneliner: What's the point of minimal perturbations if we want to fool humans?

Abstract. We propose to generate adversarial samples by modifying activations of upper layers encoding semantically meaningful concepts. The original sample is shifted towards a target sample, yielding an adversarial sample, by using the modified activations to reconstruct the original sample. A human might (and possibly should) notice differences between the original and the adversarial sample. Depending on the attacker-provided constraints, an adversarial sample can exhibit subtle differences or appear like a “forged” sample from another class. Our approach and goal are in stark contrast to common attacks involving perturbations of single pixels that are not recognizable by humans. Our approach is relevant in, e.g., multi-stage processing of inputs, where both humans and machines are involved in decision-making because invisible perturbations will not fool a human. Our evaluation focuses on deep neural networks. We also show the transferability of our adversarial examples among networks.
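The mechanism the abstract describes, as an added toy example (not the authors' code or models): an encoder maps an image to a small "concept" layer, the original sample's activations are interpolated toward a target sample's activations, the decoder reconstructs an image from the shifted activations, and the attacker stops at the smallest shift that flips the classifier. Everything here is constructed: the concept layer is two hand-picked unit-norm bar patterns over 8x8 images (an assumption standing in for a trained upper layer), the decoder is the pseudo-inverse of the encoder, the two classifiers read the concept layer with hand-set weights, and the sample and target images are drawn by hand. The second classifier exists only to check the transferability claim.

```python
"""Toy concept-space adversarial attack (added example, not the paper's code).

Assumptions: a hand-built linear concept layer over 8x8 images, a pseudo-inverse
decoder, and hand-set linear classifiers. All images and weights are constructed.
"""
import numpy as np

H = W = 8


def concept_basis():
    """Two orthonormal concept vectors: a short vertical bar and a horizontal bar."""
    vert = np.zeros((H, W))
    vert[0:4, 2] = 1.0        # vertical bar: rows 0-3, column 2
    horiz = np.zeros((H, W))
    horiz[6, :] = 1.0         # horizontal bar: row 6, all columns
    B = np.stack([vert.ravel(), horiz.ravel()])          # 2 x 64 encoder
    return B / np.linalg.norm(B, axis=1, keepdims=True)


B = concept_basis()
D = np.linalg.pinv(B)         # 64 x 2 decoder: reconstructs from concept activations

# Constructed inputs: the original (vertical bar) and the target (horizontal bar).
X_VERT = np.zeros((H, W)); X_VERT[0:4, 2] = 1.0
T_HORIZ = np.zeros((H, W)); T_HORIZ[6, :] = 1.0

# Two classifiers on the concept layer (hand-set weights); logit > 0 means class 1.
W1 = np.array([-1.0, 1.0])
W2 = np.array([-1.0, 2.0])


def encode(x):
    return B @ x.ravel()


def decode(a):
    return (D @ a).reshape(H, W)


def classify(x, w=W1):
    return int(w @ encode(x) > 0)


def concept_attack(x, target, w=W1, steps=20):
    """Shift x's concept activations toward target's, decode, and return the
    smallest interpolation weight that flips the label, with the decoded image.
    Returns (None, None) if no step on the grid flips the label."""
    a_x, a_t = encode(x), encode(target)
    label_x = classify(x, w)
    for k in range(1, steps + 1):
        lam = k / steps
        adv = decode((1 - lam) * a_x + lam * a_t)
        if classify(adv, w) != label_x:
            return lam, adv
    return None, None


def perturbation_stats(x, adv, visible=0.1):
    """Size of the change: L2, L-inf, and how many pixels moved by more than
    `visible` (a constructed threshold for 'a human could notice this')."""
    d = adv - x
    return {"l2": float(np.linalg.norm(d)),
            "linf": float(np.abs(d).max()),
            "visible_pixels": int((np.abs(d) > visible).sum())}


def run_demo():
    """Attack classifier W1, then check whether the same sample fools W2."""
    lam, adv = concept_attack(X_VERT, T_HORIZ, W1)
    return {"label_x": classify(X_VERT, W1),
            "lam": lam,
            "label_adv_w1": classify(adv, W1),
            "label_adv_w2": classify(adv, W2),   # transferability check
            "stats": perturbation_stats(X_VERT, adv)}


if __name__ == "__main__":
    print(run_demo())
```

Because the concept vectors are orthonormal and both images lie in their span, the decoder here reduces to blending the two bars, so the change is concentrated on the 12 bar pixels and each moves by 0.45: a human would see a faint horizontal bar appear, which is the intended contrast with attacks that hide the change below perception. A decoder attached to a real upper layer is not linear, so there the reconstruction also drops whatever the concept layer cannot represent; the interpolation-and-stop loop is unchanged.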
