LISA: Enhance the explainability of medical images unifying current XAI techniques

Author:

Abstract:
This work proposes a unified approach to increase the explainability of predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method, named LISA, incorporates multiple techniques, namely Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients, Anchors, and Shapley Additive Explanations (SHAP), a Shapley-values-based approach, to provide explanations for the predictions produced by black-box models. This unified method increases confidence in the black-box model's decisions, allowing it to be employed in critical applications under the supervision of human specialists. In this work, a chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of the individual XAI techniques and of the unified method (LISA) in explaining model predictions. To derive predictions, an ImageNet-based Inception V2 model is used as the transfer learning backbone.
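The abstract describes applying several post-hoc XAI techniques to a transfer-learned CNN classifier. The sketch below is not the authors' implementation; it is a minimal illustration of two of the listed techniques, Integrated Gradients (implemented directly with TensorFlow) and LIME (via the `lime` package), applied to a hypothetical fine-tuned CXR model. The model file name, input size, class indexing, and placeholder image are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): Integrated Gradients and LIME on a
# trained Keras chest X-ray classifier. Paths, sizes, and preprocessing are assumptions.
import numpy as np
import tensorflow as tf
from lime import lime_image

model = tf.keras.models.load_model("cxr_covid_classifier.h5")  # hypothetical fine-tuned model
IMG_SIZE = (224, 224)                                           # assumed input resolution

def predict_fn(batch):
    """Class probabilities for a batch of HxWx3 images (assumed already preprocessed)."""
    return model.predict(batch.astype("float32"), verbose=0)

def integrated_gradients(image, target_class, steps=50):
    """Approximate Integrated Gradients against an all-black baseline."""
    baseline = tf.zeros_like(image)
    alphas = tf.linspace(0.0, 1.0, steps + 1)[:, None, None, None]
    interpolated = baseline + alphas * (image - baseline)        # straight-line path
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        probs = model(interpolated)[:, target_class]             # assumes softmax outputs
    grads = tape.gradient(probs, interpolated)
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)  # trapezoidal rule
    return ((image - baseline) * avg_grads).numpy()

# Example usage on a single image `x`; a random array stands in for a real, preprocessed CXR.
x = np.random.rand(*IMG_SIZE, 3).astype("float32")
target = int(np.argmax(predict_fn(x[None])[0]))

ig_attribution = integrated_gradients(tf.convert_to_tensor(x), target)

lime_explainer = lime_image.LimeImageExplainer()
lime_explanation = lime_explainer.explain_instance(
    x, predict_fn, top_labels=2, hide_color=0, num_samples=1000)
_, lime_mask = lime_explanation.get_image_and_mask(
    lime_explanation.top_labels[0], positive_only=True, num_features=5)
```

The same classifier function could also be passed to Anchors and SHAP explainers to reproduce the full set of techniques the abstract lists; the two shown here are only meant to convey the overall pattern of wrapping one trained model with multiple explainers.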
Year of Publication: 2022
Date Published: April
URL: https://ieeexplore.ieee.org/document/9824840
DOI: 10.1109/I2CT54291.2022.9824840