Evaluating Adversarial Learning on Different Types of Deep Learning-based Intrusion Detection Systems using min-max optimization

Abstract
  • In this research, we investigate the effectiveness of different adversarial attacks against, and the robustness of, deep learning-based intrusion detection systems built on different neural networks, e.g., artificial neural networks, convolutional neural networks, and recurrent neural networks. We use the min-max approach to formulate the problem of training a robust intrusion detection system against adversarial samples on the UNSW-NB15 and NSL-KDD datasets. We structure an optimization framework by applying the inner maximization to generate persuasive adversarial samples that maximize the loss; on the other hand, we minimize the loss on the incorporated adversarial samples during training. Through experiments on multiple deep neural network algorithms and two benchmark datasets, we demonstrate that defense using adversarial training based on the min-max approach increases the robustness of the network under the assumptions of our threat model and five state-of-the-art adversarial attacks.
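The min-max formulation the abstract describes trains on the objective min over model parameters of the expected max over bounded perturbations of the loss. As a hypothetical minimal sketch (not the thesis's actual architecture, datasets, or attack set), the idea can be shown with a toy logistic-regression "network" on synthetic two-class data, using a one-step sign-gradient (FGSM-style) inner maximization; the epsilon, learning rate, and data are illustrative only:

```python
import numpy as np

# Illustrative stand-in data, not UNSW-NB15 or NSL-KDD:
# class 0 clustered near -1, class 1 near +1, in 4 features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, (100, 4)),
               rng.normal(+1.0, 0.5, (100, 4))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(4)   # toy model parameters (a single logistic unit)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, b, eps):
    """Inner maximization: one sign-gradient step that increases the loss.

    For logistic loss, d(loss)/d(x) = (p - y) * w per sample."""
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)
    return X + eps * np.sign(grad_x)

eps, lr = 0.3, 0.1
for _ in range(300):
    # Max step: craft adversarial versions of the training batch.
    X_adv = fgsm(X, y, w, b, eps)
    # Min step: ordinary gradient descent on the adversarial batch.
    p = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# Accuracy on clean inputs and on adversarial inputs crafted
# against the final model.
acc_clean = np.mean((sigmoid(X @ w + b) > 0.5) == y)
acc_adv = np.mean((sigmoid(fgsm(X, y, w, b, eps) @ w + b) > 0.5) == y)
print(acc_clean, acc_adv)
```

The thesis evaluates deeper architectures (ANN, CNN, RNN) and stronger multi-step attacks; the sketch only illustrates the alternating max/min structure of adversarial training.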

Rights Notes
  • Copyright © 2020 the author(s). Theses may be used for non-commercial research, educational, or related academic purposes only. Such uses include personal study, research, scholarship, and teaching. Theses may only be shared by linking to Carleton University Institutional Repository and no part may be used without proper attribution to the author. No part may be used for commercial purposes directly or indirectly via a for-profit platform; no adaptation or derivative works are permitted without consent from the copyright owner.

Date Created
  • 2020
