Comprehensive Analysis Over Centralized and Federated Learning-Based Anomaly Detection in Networks with Explainable AI (XAI)
Author
Abstract
Many machine learning (ML) and artificial intelligence (AI) techniques are adopted in communication networks to perform optimization, security management, and decision-making tasks. Instead of conventional black-box models, the trend is toward explainable ML models that provide transparency and accountability. Moreover, Federated Learning (FL) models are becoming more popular than typical Centralized Learning (CL) models due to the distributed nature of networks and to security and privacy concerns. It is therefore timely to investigate how explainability can be achieved with Explainable AI (XAI) across these different ML paradigms. This paper comprehensively analyzes the use of XAI in CL- and FL-based anomaly detection in networks. We use a deep neural network as the black-box model with two datasets, UNSW-NB15 and NSL-KDD, and SHapley Additive exPlanations (SHAP) as the XAI method. We demonstrate that the FL explanations differ from the CL explanations as the per-client anomaly percentage varies.
Year of Publication
2023
Date Published
May
URL
https://ieeexplore.ieee.org/document/10278845
DOI
10.1109/ICC45041.2023.10278845
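For readers unfamiliar with the SHAP-over-black-box workflow described in the abstract, the sketch below shows how a deep neural network anomaly detector might be explained with shap.KernelExplainer. It is a minimal illustration on synthetic data: the layer sizes, the 42-feature input width, and the training data are placeholder assumptions, not the paper's actual configuration or its UNSW-NB15 / NSL-KDD preprocessing.

```python
# Minimal sketch of explaining a black-box DNN anomaly detector with SHAP.
# All shapes, layer sizes, and data here are illustrative placeholders.
import numpy as np
import shap
import tensorflow as tf

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's DNN: a binary anomaly classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(42,)),                     # 42 features is an assumption
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(anomaly)
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Synthetic data in place of preprocessed UNSW-NB15 / NSL-KDD records.
X_train = rng.random((500, 42), dtype=np.float32)
y_train = rng.integers(0, 2, 500)
model.fit(X_train, y_train, epochs=3, verbose=0)

# KernelExplainer treats the model purely as a function of its inputs,
# matching the black-box setting; a background sample anchors expectations.
background = shap.sample(X_train, 50)
explainer = shap.KernelExplainer(lambda x: model.predict(x, verbose=0), background)

# Per-feature attributions for a few inputs; in a CL-versus-FL comparison,
# such values would be computed for each learning setup and contrasted.
shap_values = explainer.shap_values(X_train[:5])
print(np.asarray(shap_values).shape)
```

In the FL setting the paper studies, the same kind of attribution would presumably be computed for the federated model, which is what makes the CL-versus-FL explanation comparison in the abstract possible.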