15 July 2025
Using LIME and Tree SHAP to explain decision trees: A case study in Explainable AI
Author(s): G. Roch Libia Rani
Authors Affiliations:
Assistant Professor, Department of Computer Applications, De Paul College, Mysuru, India
DOI: 10.2015/IJIRMF/202507012     |     Paper ID: IJIRMF202507012
Abstract: The increasing complexity of machine learning models, particularly in high-stakes domains such as healthcare, finance, and autonomous systems, has highlighted the need for transparency and interpretability. Understanding the reasons behind AI predictions therefore becomes critical. Explainable AI (XAI) methods aim to bridge the gap between model performance and human understanding, enhancing trust, accountability, and decision-making. This study focuses on how these methods make black-box models more transparent and facilitate human-centered decision-making. In particular, it explores the application of two XAI methods—Local Interpretable Model-Agnostic Explanations (LIME) and Tree SHAP—to a decision tree classifier trained on a heart disease dataset. Although decision trees are generally interpretable, LIME and Tree SHAP offer deeper instance-level and global insights into model behavior. Our analysis reveals that key clinical features such as the number of major vessels (ca), thalassemia status (thal), and chest pain type (cp) consistently influence model predictions. These findings suggest that explainability is not only crucial for fostering trust in AI systems but also enhances the quality and transparency of decisions, especially when there is human oversight. The paper also outlines the challenges and future directions for integrating XAI into decision-making processes, highlighting the importance of designing systems that are both accurate and interpretable.
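To make the abstract's workflow concrete, the sketch below shows what Tree SHAP computes for a decision tree prediction: per-feature Shapley contributions that sum to the gap between the model's output for one patient and the average output over the data. Tree SHAP obtains these values efficiently via the tree structure; for illustration only, this minimal sketch computes them by brute force over feature coalitions. The data is synthetic and the feature layout (ca, thal, cp, age) merely mirrors the heart disease dataset named in the paper; it is not the authors' actual data or code.

```python
# Brute-force Shapley values for one decision tree prediction.
# Tree SHAP computes the same quantities exactly in polynomial time;
# this naive exponential-time version only illustrates the definition.
# Data is synthetic; feature names are hypothetical stand-ins.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
feature_names = ["ca", "thal", "cp", "age"]
X = rng.integers(0, 4, size=(200, 4)).astype(float)
# Synthetic label loosely driven by ca and cp.
y = ((X[:, 0] + X[:, 2]) > 3).astype(int)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
predict = lambda Z: model.predict_proba(Z)[:, 1]

def value(S, x, background):
    """E[f(x_S, X_rest)]: fix the features in coalition S to x's values,
    average the remaining features over the background sample."""
    Z = background.copy()
    Z[:, list(S)] = x[list(S)]
    return predict(Z).mean()

def shapley(x, background, n_features):
    """Shapley value of each feature for prediction f(x)."""
    phi = np.zeros(n_features)
    for i in range(n_features):
        rest = sorted(set(range(n_features)) - {i})
        for k in range(len(rest) + 1):
            for S in combinations(rest, k):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n_features - k - 1) / factorial(n_features)
                phi[i] += w * (value(S + (i,), x, background) - value(S, x, background))
    return phi

x = X[0]
phi = shapley(x, X, n_features=4)

# Efficiency property: base value + contributions == f(x).
base = predict(X).mean()
assert np.isclose(base + phi.sum(), predict(x[None])[0])
for name, p in zip(feature_names, phi):
    print(f"{name}: {p:+.3f}")
```

The efficiency check at the end is the property that makes SHAP attributions additive: each feature's value is its weighted average marginal contribution across all coalitions, so the contributions decompose the individual prediction exactly.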
Keywords: explainable AI, decision-making systems, interpretability, transparency, LIME, Tree SHAP.
G. Roch Libia Rani (2025). Using LIME and Tree SHAP to explain decision trees: A case study in Explainable AI. International Journal for Innovative Research in Multidisciplinary Field, ISSN(O): 2455-0620, Vol-11, Issue-7, pp. 71-76. Available at: https://www.ijirmf.com/