Abstract. Classifiers based on machine learning are vulnerable to adversarial attacks, in which malicious samples are crafted so that they are misclassified. While this phenomenon has been extensively studied in the image processing domain, comprehensive analyses are scarce in the cybersecurity field. This is a critical problem because cyber-detectors increasingly integrate machine learning methods, making them suitable targets for skilled attackers who leverage adversarial samples to evade detection. In this paper, we propose a thorough analysis of realistic adversarial attacks against network intrusion detection systems that identify botnet traffic through machine learning classifiers. Our large campaign of experiments involves the most recent public datasets, representing multiple realistic network scenarios. Moreover, we evaluate the impact of these attacks on state-of-the-art detectors relying on different machine learning algorithms, providing a clear overview of this problem. The results outline the fragility of these methods. Our study represents a stepping stone for devising suitable countermeasures to the menace of adversarial attacks against cyber-detectors.