Towards XAI in the SOC – a user centric study of explainable alerts with SHAP and LIME
Abstract
Many studies of the adoption of machine learning (ML) in Security Operations Centres (SOCs) have pointed to a lack of transparency and explanation – and thus trust – as a barrier to ML adoption, and have suggested eXplainable Artificial Intelligence (XAI) as a possible solution. However, there is a lack of studies addressing the degree to which XAI actually helps SOC analysts. Focusing on two XAI techniques, SHAP and LIME, we have interviewed several SOC analysts to understand how XAI can be used and adapted to explain ML-generated alerts. The results show that XAI can provide valuable insights for the analyst by highlighting features and information deemed important for a given alert. As far as we are aware, we are the first to conduct such a user study of XAI usage in a SOC, and this short paper presents our initial findings.
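The paper reports interview findings rather than code, but a minimal sketch can illustrate the kind of per-alert feature attribution SHAP and LIME produce for an ML-generated alert. The feature names, synthetic data, and RandomForest alert classifier below are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: explaining one ML-generated "alert" with SHAP and LIME.
# Feature names, data, and model are hypothetical stand-ins for a SOC pipeline.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["bytes_out", "failed_logins", "dst_port_entropy", "session_count"]
X = rng.normal(size=(500, 4))
y = (X[:, 1] + 0.5 * X[:, 2] > 1).astype(int)  # toy "alert" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
alert = X[0]  # one flagged event the analyst wants explained

# SHAP: additive per-feature contributions to this alert's score.
sv = shap.TreeExplainer(model).shap_values(alert.reshape(1, -1))
# The return shape differs across shap versions; pick the "alert" class either way.
contrib = sv[1][0] if isinstance(sv, list) else sv[0, :, 1]
for name, val in zip(feature_names, contrib):
    print(f"SHAP {name}: {val:+.3f}")

# LIME: local surrogate model fitted around the same alert.
lime_exp = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["benign", "alert"],
    mode="classification",
).explain_instance(alert, model.predict_proba, num_features=4)
print(lime_exp.as_list())  # (feature condition, weight) pairs for the analyst
```

Both outputs rank the input features by their influence on the individual alert, which is the form of explanation the interviewed analysts evaluated.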
Year of Publication
2022
Date Published
December
URL
https://ieeexplore.ieee.org/document/10020248
DOI
10.1109/BigData55660.2022.10020248