Attacking Logo-based Phishing Website Detectors with Adversarial Perturbations

Lee, J., Xin, Z., Ng, M. P. S., Sabharwal, K., Apruzzese, G., Divakaran, D. M. European Symposium on Research in Computer Security (ESORICS), 2023
One-liner: A novel attack against state-of-the-art DL methods for logo identification, validated via two user studies.

Abstract. Recent times have witnessed the rise of anti-phishing schemes powered by deep learning (DL). In particular, logo-based phishing detectors rely on DL models from Computer Vision to identify logos of well-known brands on webpages, to detect malicious webpages that imitate a given brand. For instance, Siamese networks have demonstrated notable performance for these tasks, enabling the corresponding anti-phishing solutions to detect even “zero-day” phishing webpages. In this work, we take the next step of studying the robustness of logo-based phishing detectors against adversarial ML attacks. We propose a novel attack exploiting generative adversarial perturbations to craft “adversarial logos” that evade phishing detectors. We evaluate our attacks through: (i) experiments on datasets containing real logos, to evaluate the robustness of state-of-the-art phishing detectors; and (ii) user studies to gauge whether our adversarial logos can deceive human eyes. The results show that our proposed attack is capable of crafting perturbed logos subtle enough to evade various DL models—achieving an evasion rate of up to 95%. Moreover, users are not able to spot significant differences between generated adversarial logos and original ones.
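The paper's attack trains a generator to produce adversarial perturbations; as a rough illustration of the underlying evasion objective only (not the authors' implementation), the PyTorch sketch below uses a simpler gradient-based (PGD-style) loop to push a logo's embedding away from the brand's reference embedding under a stand-in Siamese-style encoder. Every name and parameter here (`SiameseEncoder`, `evade`, `epsilon`, the toy architecture) is a hypothetical placeholder, not from the paper.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for a Siamese-style logo encoder: in logo-based
# phishing detectors, a candidate logo is matched to a brand by comparing
# embeddings against reference logos (architecture here is illustrative).
class SiameseEncoder(torch.nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(16, 32, 3, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
            torch.nn.Linear(32, dim),
        )

    def forward(self, x):
        # L2-normalized embeddings, so cosine similarity drives the match.
        return F.normalize(self.net(x), dim=-1)

def evade(encoder, logo, reference, epsilon=8 / 255, steps=40, alpha=2 / 255):
    """PGD-style sketch of the evasion objective: minimize the cosine
    similarity between the perturbed logo's embedding and the brand's
    reference embedding, while projecting the perturbation into a small
    L-inf ball so the change stays visually subtle."""
    ref_emb = encoder(reference).detach()
    adv = logo.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        sim = F.cosine_similarity(encoder(adv), ref_emb, dim=-1).mean()
        grad, = torch.autograd.grad(sim, adv)
        with torch.no_grad():
            adv = adv - alpha * grad.sign()                      # step away from the brand embedding
            adv = logo + (adv - logo).clamp(-epsilon, epsilon)   # project into the L-inf budget
            adv = adv.clamp(0, 1)                                # keep a valid image
    return adv.detach()

encoder = SiameseEncoder().eval()
logo = torch.rand(1, 3, 64, 64)          # stand-in for a real brand logo
adv_logo = evade(encoder, logo, reference=logo)
print(F.cosine_similarity(encoder(adv_logo), encoder(logo), dim=-1))
```

In the paper's setting a trained generator amortizes this kind of optimization across logos; the small perturbation budget is what keeps the result visually subtle, consistent with the user-study finding that participants could not spot significant differences between adversarial and original logos.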

Paper PDF · Cite · SpringerLink · Artifact (Code) · Talk