ILLUMINATING THE BLACK BOX: A DEEP DIVE INTO SHAP FOR EXPLAINABLE AI
DOI: https://doi.org/10.52152/j909ry72
Keywords: Explainable AI, SHAP, interpretability, model transparency, black box models, feature attribution
Abstract
The rapid development of artificial intelligence (AI) has produced complex models that are commonly viewed as black boxes, which restricts transparency and interpretability. Grounded in cooperative game theory, SHAP (SHapley Additive exPlanations) has proven to be an effective response to this problem, attributing predictions to individual features consistently and in an interpretable way. This review elaborates on the theoretical basis, applications, and challenges of SHAP in explainable artificial intelligence (XAI). Our analysis covers literature in healthcare, natural language processing, finance, and customer behavior analytics. The review highlights SHAP's strengths in generating locally and globally consistent explanations, as well as how it handles computational complexity and high-dimensional data. Scalability, integration of domain-specific knowledge, and user-centered evaluation metrics are identified as research gaps. By synthesizing recent research, this review presents a summary of the role of SHAP in promoting trustworthy AI.
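To make the feature-attribution idea concrete, the sketch below shows how local and global SHAP explanations are typically obtained with the shap library. It is a minimal, illustrative example only: the dataset, model, and parameter choices are assumptions for demonstration and are not taken from the reviewed studies.

import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Assumed example data and model (not from the reviewed literature).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Local explanation: per-feature contributions to a single prediction;
# together with the expected value they sum to the model's raw output.
first = dict(zip(X.columns, shap_values[0]))
print(sorted(first.items(), key=lambda kv: abs(kv[1]), reverse=True)[:5])

# Global explanation: mean absolute SHAP value per feature across samples.
importance = np.abs(shap_values).mean(axis=0)
print(sorted(zip(X.columns, importance), key=lambda kv: kv[1], reverse=True)[:5])

The same attributions can be passed to the library's plotting utilities (for example, summary plots) to inspect feature importance visually.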
License
Copyright (c) 2025 Lex localis - Journal of Local Self-Government

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


