A study of data poisoning attacks against network intrusion detection systems

  • Van Quan Nguyen Faculty of Information Technology, Le Quy Don Technical University
  • Van Cuong Nguyen Faculty of Information Technology, Le Quy Don Technical University
  • Tuan Hao Hoang Faculty of Information Technology, Le Quy Don Technical University

Abstract

Nowadays, deep learning has become a powerful and efficient framework that can be applied in a wide range of areas. In particular, advances in modern deep learning approaches have proven effective in building next-generation smart intrusion detection systems (IDSs). However, deep learning-based systems remain vulnerable to adversarial examples, which can destroy the robustness of the models. Poisoning attacks are a family of adversarial attacks against machine learning-based models in which an adversary injects a small proportion of malicious samples into the training dataset to degrade the performance of the victim's models. The robustness of deep learning-based IDSs has therefore become an important concern. In this work, we investigate poisoning attacks against deep learning-based network intrusion detection systems. We describe the general attack strategy and perform experiments on multiple datasets, including CTU13-08, CTU13-09, CTU13-10, and CTU13-13. Experimental results show that even a small number of injected samples drastically reduces the performance of deep learning-based IDSs.
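The general idea described above, an attacker injecting a few mislabelled training samples to degrade a victim model, can be illustrated with a deliberately simple sketch. This is not the paper's method or datasets: the nearest-centroid classifier, the toy two-cluster data, and all function names here are hypothetical stand-ins chosen so the effect of the injected points is easy to verify by hand.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit a toy nearest-centroid classifier: one mean vector per class."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(model, X):
    """Assign each sample to the class whose centroid is closest."""
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Clean training data: two well-separated clusters (hypothetical toy data).
X_clean = np.array([[0.0, 0.0]] * 5 + [[4.0, 0.0]] * 5)
y_clean = np.array([0] * 5 + [1] * 5)

# Test points drawn near the true cluster centres.
X_test = np.array([[0.2, 0.0], [-0.1, 0.0], [4.2, 0.0], [4.5, 0.0]])
y_test = np.array([0, 0, 1, 1])

clean_model = nearest_centroid_fit(X_clean, y_clean)
acc_clean = (nearest_centroid_predict(clean_model, X_test) == y_test).mean()

# Poisoning step: inject a handful of points placed far beyond class 1 but
# labelled class 0, dragging the class-0 centroid into class-1 territory.
X_poison = np.array([[8.0, 0.0]] * 6)
y_poison = np.array([0] * 6)
X_dirty = np.vstack([X_clean, X_poison])
y_dirty = np.concatenate([y_clean, y_poison])

dirty_model = nearest_centroid_fit(X_dirty, y_dirty)
acc_dirty = (nearest_centroid_predict(dirty_model, X_test) == y_test).mean()

print(f"clean accuracy:    {acc_clean:.2f}")
print(f"poisoned accuracy: {acc_dirty:.2f}")
```

With only six injected points against ten clean ones, the poisoned class-0 centroid moves from (0, 0) to roughly (4.36, 0), so the classifier's test accuracy collapses from 1.0. Real attacks against deep learning-based IDSs are far subtler, but the mechanism, shifting what the model learns by contaminating its training set, is the same one the abstract describes.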

Published
2022-06-27
Section
Articles