ILLUMINATING THE BLACK BOX: A DEEP DIVE INTO SHAP FOR EXPLAINABLE AI
DOI: https://doi.org/10.52152/j909ry72

Keywords: Explainable AI, SHAP, interpretability, model transparency, black box models, feature attribution

Abstract
The rapid development of artificial intelligence (AI) has produced complex models that are commonly viewed as black boxes, which limits transparency and interpretability. Grounded in cooperative game theory, SHAP (SHapley Additive exPlanations) has proven to be an effective response to this problem, attributing feature contributions consistently and in an interpretable way. This review elaborates on the theoretical basis, applications, and difficulties of SHAP in explainable artificial intelligence (XAI). The literature analyzed spans healthcare, natural language processing, finance, and customer behavior analytics. The review highlights SHAP's strengths in generating locally and globally consistent explanations, as well as its challenges with computational complexity and high-dimensional data. Scalability, integration of domain-specific knowledge, and user-centered evaluation metrics are identified as research gaps. By synthesizing recent research, this review summarizes the role of SHAP in promoting trustworthy AI.
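To make the attribution idea concrete, the following is a minimal sketch of the workflow the abstract describes, using the open-source Python `shap` package. The random-forest model and synthetic data are illustrative assumptions, not taken from the review; the key point is the additivity property (per-sample attributions plus a base value recover the prediction) and how the same values support both local and global explanations.

```python
# Minimal SHAP sketch; model and data are illustrative, not from the review.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # four synthetic features
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values for tree ensembles in polynomial
# time, avoiding the exponential cost of the naive game-theoretic formula.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (200, 4)

# Local explanation: attributions plus the base value sum to the
# prediction for each sample (the additivity property).
print(model.predict(X[:1])[0])
print(explainer.expected_value + shap_values[0].sum())

# Global explanation: mean |SHAP| per feature ranks overall importance.
print(np.abs(shap_values).mean(axis=0))
```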
License
Copyright (c) 2025 Lex localis - Journal of Local Self-Government

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


