Real Attackers Don't Compute Gradients: Bridging the Gap Between Adversarial ML Research and Practice
Conference: First IEEE Conference on Secure and Trustworthy Machine Learning
Raleigh, NC, USA
Oneliner: Besides the content of the paper, the talk has a meta-message.
I do not know where to begin in describing how satisfying it was to work on this paper, and to give the corresponding talk.
Of one thing, however, I am certain, and I tried my best to convey it during the presentation: the paper does not have "only" 6 authors. Indeed, this paper was only possible because of the Dagstuhl Seminar on "Security of Machine Learning", held in July 2022, in which most of the authors participated (I met most of them there for the first time!).
We wrote a 26-page-long (position) paper in just one month, but doing so was both pleasant and easy: the majority of our takeaways derived from the passionate discussions that we all had during the Dagstuhl Seminar. Once again, I am extremely grateful for having participated in such an event (and, for this, I will be forever thankful to the organizers).
Slides · Poster · Presentation (Video) · Venue · Paper
This work was apparently well-received by our community (which makes me all the more happy), to the point that several follow-up talks/webinars were given to further explore some of the themes tackled by the paper. Such events include:
- A webinar involving four of the paper’s authors, organized by Robust Intelligence (the video is available)
- A talk I gave at a research seminar at the SPRING Lab at EPFL