Enhancing Machine Learning Model Interpretability in Intrusion Detection Systems through SHAP Explanations and LLM-Generated Descriptions
Author
Abstract

Intrusion Detection Systems (IDS) are critical for detecting and mitigating cyber threats, yet the opacity of the machine learning models used within these systems makes their decisions difficult to understand. This paper proposes a novel approach to this problem that integrates SHAP (SHapley Additive exPlanations) values with Large Language Models (LLMs). Using the CICIDS2017 dataset, the approach demonstrates how this combination generates human-understandable explanations for detected anomalies, enhancing transparency and trust in IDS. The LLM articulates the most significant features identified by the SHAP values, offering coherent descriptions of the predictors that most influenced the model's outcome.
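The pipeline the abstract describes can be sketched in a few steps: rank the features of a flagged flow by the magnitude of their SHAP contributions, then hand that ranking to an LLM as a natural-language prompt. The snippet below is a minimal illustration of that idea, assuming a tree-based detector and the `shap` library; the helper names, the prompt wording, and the choice of RandomForest are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of the SHAP-to-LLM explanation pipeline (illustrative,
# not the paper's implementation).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

def top_shap_features(model, x_row, feature_names, k=5):
    """Return the k features that pushed this prediction hardest, with signs.

    x_row must be a 2-D array of shape (1, n_features).
    """
    explainer = shap.TreeExplainer(model)
    sv = explainer.shap_values(x_row)
    if isinstance(sv, list):        # older shap: one array per class
        sv = sv[1]                  # keep contributions toward the attack class
    sv = np.asarray(sv)
    if sv.ndim == 3:                # newer shap: (samples, features, classes)
        sv = sv[:, :, 1]
    vals = sv[0]                    # per-feature contributions for the one row
    order = np.argsort(np.abs(vals))[::-1][:k]
    return [(feature_names[i], float(vals[i])) for i in order]

def build_prompt(label, ranked_features):
    """Format the ranked SHAP contributions as a question for any LLM."""
    bullets = "\n".join(f"- {name}: SHAP contribution {v:+.4f}"
                        for name, v in ranked_features)
    return (f"An intrusion detector flagged a network flow as '{label}'.\n"
            f"The most influential features were:\n{bullets}\n"
            "Explain in plain language why these feature values support "
            "this verdict.")

# Example wiring (X_train, y_train, x_flow, and names would come from a
# CICIDS2017 preprocessing step not shown here):
#   model = RandomForestClassifier().fit(X_train, y_train)
#   prompt = build_prompt("DDoS", top_shap_features(model, x_flow, names))
# The prompt is then sent to any chat-completion LLM endpoint.
```

Keeping only the top-k contributions keeps the prompt short and focused on the signal the LLM is asked to explain; feeding all SHAP values for a high-dimensional flow record would dilute the explanation.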

Year of Publication
2024
Date Published
April
URL
https://ieeexplore.ieee.org/document/10541168
DOI
10.1109/PAIS62114.2024.10541168