Deceiving Post-Hoc Explainable AI (XAI) Methods in Network Intrusion Detection
Author
Abstract
Artificial Intelligence used in future networks is vulnerable to biases, misclassifications, and security threats, which invites constant scrutiny of its accountability. Explainable AI (XAI) methods bridge this gap by identifying unaccounted-for biases in black-box AI/ML models. However, scaffolding attacks can hide a model's internal biases from XAI methods, jeopardizing auditing and monitoring processes, service provisions, security systems, regulators, auditors, and end-users in future networking paradigms, including Intent-Based Networking (IBN). For the first time, we formalize and demonstrate a framework showing how an attacker would adopt scaffolding to deceive security auditors of Network Intrusion Detection Systems (NIDS). Furthermore, we propose a detection method that auditors can use to detect the attack efficiently. We rigorously test the attack and detection methods on the NSL-KDD dataset and then simulate the attack on 5G network data. Our simulations show that the attack adoption method succeeds and that the detection method identifies an affected model with extremely high confidence.
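The scaffolding attack referenced in the abstract follows the idea known from the XAI literature (Slack et al., 2020): an adversary wraps a biased classifier so that real traffic is answered by the biased model, while the off-manifold perturbations generated by post-hoc explainers such as LIME and SHAP are answered by an innocuous surrogate. Below is a minimal, self-contained sketch of that general idea; the toy data, model names, and random-forest out-of-distribution detector are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy "traffic": feature 0 stands in for a sensitive attribute that the
# biased NIDS model secretly keys on (labels: 0 = benign, 1 = malicious).
X_real = rng.normal(size=(1000, 5))
y_biased = (X_real[:, 0] > 0).astype(int)                 # the hidden bias
biased_model = RandomForestClassifier(random_state=0).fit(X_real, y_biased)

# Innocuous-looking surrogate shown to the explainer.
y_innocuous = (X_real[:, 1] + X_real[:, 2] > 0).astype(int)
surrogate_model = RandomForestClassifier(random_state=0).fit(X_real, y_innocuous)

# LIME/SHAP perturbations tend to fall off the data manifold; train a
# detector to separate real samples (label 1) from perturbed ones (label 0).
X_pert = X_real + rng.normal(scale=3.0, size=X_real.shape)
ood_detector = RandomForestClassifier(random_state=0).fit(
    np.vstack([X_real, X_pert]),
    np.hstack([np.ones(len(X_real)), np.zeros(len(X_pert))]),
)

def scaffolded_predict(X):
    """Answer in-distribution queries with the biased model and
    explainer-style (off-manifold) queries with the surrogate."""
    X = np.asarray(X)
    on_manifold = ood_detector.predict(X) == 1
    out = np.empty(len(X), dtype=int)
    if on_manifold.any():
        out[on_manifold] = biased_model.predict(X[on_manifold])
    if (~on_manifold).any():
        out[~on_manifold] = surrogate_model.predict(X[~on_manifold])
    return out

# Real traffic sees the biased decisions; an explainer probing the wrapper
# with heavy perturbations sees only the decoy's behavior.
print(scaffolded_predict(X_real[:5]))
print(scaffolded_predict(X_real[:5] + rng.normal(scale=3.0, size=(5, 5))))
```

The routing step is what a post-hoc explainer cannot see from queries alone, which is why the abstract frames detection as a separate problem for auditors.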
Year of Publication
2024
Date Published
January
URL
https://ieeexplore.ieee.org/document/10454633
DOI
10.1109/CCNC51664.2024.10454633