ADA Library Digital Repository

Protecting Machine Learning models against Adversarial Attacks


dc.contributor.author Shahbandayeva, Lala
dc.date.accessioned 2024-12-19T23:31:36Z
dc.date.available 2024-12-19T23:31:36Z
dc.date.issued 2023-04
dc.identifier.uri http://hdl.handle.net/20.500.12181/927
dc.description.abstract Machine Learning and Deep Learning have been widely used across domains and have demonstrated strong performance in many applications, such as fraud detection and speech recognition. One domain in which Machine Learning and Deep Learning have proven effective is network intrusion detection systems. Given this success, ML and DL models are routinely trained with new algorithms and datasets, deployed, and actively used in decision-making. However, despite their high performance metrics, recent studies have shown that machine learning and deep learning algorithms are not robust or secure against adversarial inputs in the computer vision domain. These findings raise a new concern about applying machine learning and deep learning in security-related domains such as network intrusion detection systems. As a case in point, an adversarial network traffic flow can cause a network intrusion detection system to classify attacks as benign. In this paper, we demonstrate experimentally the performance of adversarial attacks against network intrusion detection systems built with deep neural networks. Based on our findings, we discuss the application of adversarial examples and the robustness of deep learning-based network intrusion detection systems. The results show that adversarial training improves model performance but introduces extra complexity due to the retraining process. Using fewer adversarial features or Long Short-Term Memory (LSTM) models can improve model robustness without requiring retraining. en_US
dc.language.iso en en_US
dc.publisher ADA University en_US
dc.rights Attribution-NonCommercial-NoDerivs 3.0 United States *
dc.rights.uri http://creativecommons.org/licenses/by-nc-nd/3.0/us/ *
dc.subject Intrusion detection systems (Computer security) en_US
dc.subject Machine learning -- Security applications en_US
dc.subject Artificial intelligence -- Adversarial methods en_US
dc.title Protecting Machine Learning models against Adversarial Attacks en_US
dc.type Thesis en_US
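The adversarial attack described in the abstract — perturbing a traffic flow so a classifier labels an attack as benign — can be sketched with a gradient-sign (FGSM-style) perturbation. The following is a minimal illustrative example against a toy linear classifier; the weights, features, and epsilon are assumptions for demonstration only and do not come from the thesis's models or data.

```python
import numpy as np

# Illustrative FGSM-style adversarial example against a toy linear
# classifier standing in for a deep-learning NIDS. All values here
# (weights, features, epsilon) are hypothetical.

w = np.array([1.5, -2.0, 0.5])  # assumed model weights
b = 0.1                         # assumed bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability that the flow is an attack (label 1)
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    # For binary cross-entropy and a linear model, the gradient of the
    # loss with respect to the input x is (p - y) * w.
    p = predict(x)
    grad_x = (p - y) * w
    # Step in the sign of the gradient to increase the loss, bounded by eps
    return x + eps * np.sign(grad_x)

x = np.array([2.0, -1.0, 1.0])  # flow correctly scored as "attack" (y = 1)
y = 1.0
x_adv = fgsm(x, y, eps=2.0)

print(predict(x))      # high attack probability on the clean flow
print(predict(x_adv))  # perturbed flow is pushed toward "benign"
```

The same idea scales to deep models by taking the gradient of the loss with respect to the input via backpropagation; the abstract's observation is that defenses such as adversarial training counter this at the cost of retraining.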



