Hardening Random Forest Cyber Detectors against Adversarial Attacks

Apruzzese, G., Andreolini, M., Colajanni, M., & Marchetti, M., IEEE Transactions on Emerging Topics in Computational Intelligence, 2020 (Journal)
Oneliner: Applying Defensive Distillation to Random Forest!

Abstract. Machine learning algorithms are effective in several applications, but they are less successful when applied to intrusion detection in cyber security. Because they are highly sensitive to their training data, cyber detectors based on machine learning are vulnerable to targeted adversarial attacks that perturb the initial samples. Existing defenses either assume unrealistic scenarios, yield underwhelming results in non-adversarial settings, or can be applied only to machine learning algorithms that perform poorly for cyber security. We present an original methodology for countering adversarial perturbations that target intrusion detection systems based on random forests. As a practical application, we integrate the proposed defense into a cyber detector that analyzes network traffic. Experimental results on millions of labelled network flows show that the new detector has a twofold value: it outperforms state-of-the-art detectors subject to adversarial attacks, and it exhibits robust results in both adversarial and non-adversarial scenarios.

Paper PDF Cite IEEE Xplore Code (TBD) Dataset
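The threat model described in the abstract can be illustrated with a minimal sketch (not the paper's code): a random forest is trained on a few flow-level features, then evaluated on malicious flows whose attacker-controllable features (duration, exchanged bytes, packets) are slightly inflated to mimic benign traffic. The feature set, data distributions, and perturbation magnitudes below are synthetic and purely illustrative.

```python
# Minimal, illustrative sketch (not the authors' implementation): a random
# forest flow classifier and a simple adversarial perturbation of features
# an attacker could plausibly control. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic flow features: [duration_s, exchanged_bytes, packets, dst_port].
n = 5000
benign = np.column_stack([
    rng.exponential(30, n), rng.exponential(2e4, n),
    rng.poisson(40, n), rng.integers(1, 65535, n)])
malicious = np.column_stack([
    rng.exponential(2, n), rng.exponential(500, n),
    rng.poisson(4, n), rng.integers(1, 1024, n)])
X = np.vstack([benign, malicious])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Baseline detection rate on clean malicious test flows.
mal = X_te[y_te == 1]
print("detection rate (clean):    ", clf.predict(mal).mean())

# Adversarial perturbation: inflate duration, bytes, and packets of the
# malicious flows so they look more like benign traffic, without touching
# features the attacker cannot control.
mal_adv = mal.copy()
mal_adv[:, 0] += rng.uniform(10, 60, len(mal))     # longer duration
mal_adv[:, 1] += rng.uniform(1e3, 2e4, len(mal))   # more exchanged bytes
mal_adv[:, 2] += rng.integers(10, 50, len(mal))    # more packets
print("detection rate (perturbed):", clf.predict(mal_adv).mean())
```

On this toy data the detection rate typically drops after perturbation, which is the gap the paper's hardening methodology aims to close; the actual evaluation in the paper uses millions of labelled real network flows rather than synthetic samples.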