ILLUMINATING THE BLACK BOX: A DEEP DIVE INTO SHAP FOR EXPLAINABLE AI

Authors

  • Piyush Zaverbhai Patel
  • Dr. Vipul Vekariya
  • Dr. Kruti Sutaria

DOI:

https://doi.org/10.52152/j909ry72

Keywords:

Explainable AI, SHAP, interpretability, model transparency, black box models, feature attribution

Abstract

The rapid development of artificial intelligence (AI) has produced complex models that are commonly viewed as black boxes, restricting transparency and interpretability. Grounded in cooperative game theory, SHAP (SHapley Additive exPlanations) has proven to be an effective solution to this problem, attributing feature contributions consistently and in an interpretable way. This review elaborates on the theoretical basis, applications, and challenges of SHAP in explainable artificial intelligence (XAI). We analyze literature spanning healthcare, natural language processing, finance, and customer behavior analytics. The review highlights SHAP's strengths in generating locally and globally consistent explanations, while noting the challenges posed by computational complexity and high-dimensional data. Scalability, integration of domain-specific knowledge, and user-centered evaluation metrics are identified as research gaps. By synthesizing recent research, this review summarizes the role of SHAP in promoting trustworthy AI.
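The abstract notes SHAP's grounding in cooperative game theory: each feature's attribution is its Shapley value, a weighted average of its marginal contribution over all coalitions of the other features. A minimal sketch of the exact computation by coalition enumeration (the toy model, inputs, and baseline below are illustrative, not from the paper):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x, enumerating all
    feature coalitions; absent features take their baseline values."""
    n = len(x)

    def value(coalition):
        # Build an input with coalition features from x, the rest from baseline.
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical toy model with an interaction term: f(z) = 2*z0 + 3*z1 + z0*z1
f = lambda z: 2 * z[0] + 3 * z[1] + z[0] * z[1]
x, base = [1.0, 2.0], [0.0, 0.0]
phi = shapley_values(f, x, base)
# Efficiency (local accuracy) property: attributions sum to f(x) - f(baseline)
assert abs(sum(phi) - (f(x) - f(base))) < 1e-9
```

This brute-force enumeration is exponential in the number of features, which is precisely the computational-complexity challenge the review discusses; practical SHAP implementations approximate or exploit model structure (e.g., TreeSHAP) instead.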

Published

2025-10-03

Section

Article

How to Cite

ILLUMINATING THE BLACK BOX: A DEEP DIVE INTO SHAP FOR EXPLAINABLE AI. (2025). Lex Localis - Journal of Local Self-Government, 23(S6), 6493-6500. https://doi.org/10.52152/j909ry72