A Review of the Impact of Adversarial Attacks on Intrusion Detection Systems Based on Machine Learning and Deep Learning

Authors

  • Yanlong Li

DOI:

https://doi.org/10.56028/aetr.14.1.1601.2025

Keywords:

Intrusion Detection Systems; Machine Learning; Deep Learning; Adversarial Attacks

Abstract

As cybersecurity threats grow increasingly sophisticated, intrusion detection systems (IDS) have become pivotal in safeguarding networks and hosts. Machine learning (ML) and deep learning (DL) techniques have markedly enhanced the detection capabilities of host-based IDS (HIDS), particularly against unknown and zero-day attacks. However, the opaque nature of these models renders them susceptible to adversarial attacks. This paper systematically reviews adversarial attacks targeting ML/DL-based HIDS, their impact on system performance (e.g., accuracy and efficiency), and available defense measures. The findings reveal that evasion, poisoning, and exploratory attacks pose significant threats, leading to reduced detection accuracy, increased false positive rates, and compromised model integrity. While defense strategies such as adversarial training, feature squeezing, and ensemble methods show promise, their practical applicability remains to be validated. This study offers a comprehensive perspective on HIDS vulnerabilities to adversarial attacks and proposes future research directions, such as developing dedicated datasets and conducting real-world validation, to bolster HIDS robustness.

Published

2025-07-28