"Explainability in Cybersecurity Data Science"

Cybersecurity is data-rich, making it an ideal setting for Machine Learning (ML), but many challenges impede the deployment of ML in cybersecurity systems and organizations. According to researchers from Carnegie Mellon University's Software Engineering Institute (SEI), one significant challenge is a lack of explainability in the human-machine relationship. Explainability in cybersecurity data science runs in two directions: model-to-human and human-to-model. The researchers have provided an overview of ML explainability, illustrated model-to-human explainability with decision trees, an inherently interpretable ML model form, and illustrated human-to-model explainability with the feature engineering step of a cybersecurity ML pipeline. They have also recommended the research needed to bring cybersecurity ML to this level of two-way explainability, which would encourage the adoption of ML-based systems in cybersecurity operations. This article continues to discuss explainability in cybersecurity data science.
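To make the two directions concrete, here is a minimal sketch, not taken from the SEI report: the feature names, training data, and labels below are invented for illustration. It trains a small scikit-learn decision tree on hypothetical per-connection features. The engineered feature failed_logins encodes analyst knowledge into the model (human-to-model), and export_text renders the learned tree as if/else rules an analyst can audit (model-to-human).

```python
# Illustrative sketch only: feature names and data are hypothetical,
# not drawn from the SEI report.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-connection features engineered by an analyst
# (human-to-model): [duration_sec, bytes_sent, failed_logins]
X = [
    [0.2,    400, 0],   # benign
    [1.5,   1200, 0],   # benign
    [0.1,  90000, 4],   # malicious
    [0.3,  85000, 6],   # malicious
]
y = ["benign", "benign", "malicious", "malicious"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the learned tree as nested if/else rules that a
# human analyst can read and audit (model-to-human).
print(export_text(
    clf,
    feature_names=["duration_sec", "bytes_sent", "failed_logins"],
))
```

Decision trees suit this illustration because their decision paths map directly to threshold rules an analyst can verify, unlike opaque model forms such as deep neural networks.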

Carnegie Mellon University's Software Engineering Institute reports "Explainability in Cybersecurity Data Science"

Submitted by grigby1