Abstract:
Machine learning (ML) and deep learning (DL) have been widely adopted across many domains and have demonstrated strong performance in applications such as fraud detection and speech recognition. Network intrusion detection systems (NIDS) are one domain in which ML and DL have proven effective: following their success, models trained with new algorithms and datasets have been deployed and actively used in decision-making. Although these models achieve high performance metrics, recent studies in the computer vision domain have shown that ML and DL algorithms are not robust or secure against adversarial inputs. These findings raise a new concern about applying ML and DL in security-critical domains such as network intrusion detection. As a case in point, an adversarial network traffic flow can cause a NIDS to classify attacks as benign. In this paper, we experimentally evaluate the performance of adversarial attacks against NIDS built with deep neural networks. Based on our findings, we discuss the application of adversarial examples and the robustness of deep learning-based NIDS. Our results show that adversarial training improves model performance but introduces additional complexity due to the retraining process, whereas using fewer adversarial features or Long Short-Term Memory (LSTM) models can increase model robustness without requiring retraining.