SpacePhish: The Evasion-space of Adversarial Attacks against Phishing Website Detectors using Machine Learning
Conference: Annual Computer Security Applications Conference (ACSAC)
Austin, TX, USA
Oneliner: A joint effort with UniPD, casting light on some overlooked aspects of adversarial ML in the context of phishing website detection.
Giving (and making) this presentation was really fun. I decided to start with a thought-provoking question (“do you know of any adversarial ML attack that has an effectiveness of only 3%?”), and I did not expect to receive any comment on it during my talk. Surprisingly, one of the attendees interrupted me, asking “what is the point of describing such an attack?” – which was the entire point behind our paper!
Indeed, the most important takeaway of this paper is that it is possible to craft adversarial ML attacks that, despite their intrinsically low effectiveness (around 3%), are extremely cheap (only 7 lines of code) and still dangerous, given the sheer number of phishing webpages created every day. On these premises, we argue that real phishers are more inclined to opt for such “cheap” strategies over the more “expensive” ones previously proposed in the literature.
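To give a feel for what a “cheap” attack can look like, here is a purely illustrative sketch (not taken from the paper; the keyword list, the homoglyph map, and the whole `perturb` function are my own hypothetical example): an attacker swaps a few ASCII characters in sensitive keywords for visually identical Unicode look-alikes, so a naive keyword-based feature no longer fires while the page renders essentially unchanged for the victim.

```python
# Hypothetical sketch of a "cheap" evasion applied in the HTML space.
# Cyrillic characters that render like their Latin counterparts:
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}

def perturb(html: str, keywords=("login", "password", "verify")) -> str:
    """Replace suspicious keywords with homoglyph variants so that
    simple substring-matching features miss them."""
    for kw in keywords:
        evasive = "".join(HOMOGLYPHS.get(c, c) for c in kw)
        html = html.replace(kw, evasive)
    return html

page = "<form>Enter your password to login</form>"
print(perturb(page))
```

The point is not this specific trick, but the cost asymmetry: a perturbation of this size requires no knowledge of the target model, yet against some feature sets it suffices to flip a detector's verdict.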
In a sense, this paper underscores the importance of context (phishing, in our case) when assessing the “real” risk of adversarial ML attacks. I must commend the other main author, Ying Yuan, for the insane effort she put into the experiments (which allowed us to obtain a “Reusable” artifact!)