Explainable AI: Interpreting Deep Learning Models for Decision Support

Authors

  • Tanzeem Ahmad, Senior Support Engineer, SAP America, Newtown Square, USA
  • Pranadeep Katari, Senior AWS Network Security Engineer, Vitech Systems Group, Massachusetts, USA
  • Ashok Kumar Pamidi Venkata, Senior Solution Specialist, Deloitte, Georgia, USA
  • Chetan Sasidhar Ravi, MuleSoft Developer, Zurich American Insurance, Illinois, USA
  • Mahammad Shaik, Lead Software Applications Development, Charles Schwab, USA

Keywords:

Explainable AI, XAI, SHAP values

Abstract

Explainable AI (XAI) matters because deep learning models increasingly drive decisions across industry while remaining largely opaque. This work emphasizes interpretability in AI-driven decision support systems by improving the transparency of deep learning models. Model-specific interpretability methods, together with model-agnostic techniques such as LIME and SHAP, are used to characterize the decisions of complex AI systems.
SHAP values, rooted in cooperative game theory, rank the features behind a prediction by estimating each feature's marginal contribution to the model's decision. LIME instead approximates a black-box model locally, fitting an interpretable surrogate model whose predictions match those of the original model around the instance being explained. By inspecting model behavior through these explanations, users can validate their expectations and identify the model's limitations.
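
For context, the SHAP attribution of feature i for a model f on an instance x follows the standard Shapley-value formulation from cooperative game theory (a standard result, not reproduced from this article), where F is the full feature set and f_S denotes the model evaluated on the feature subset S:

\phi_i(f, x) = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]

A minimal Python sketch of how such explanations might be generated in practice is given below. The dataset, model, and explainer settings are illustrative assumptions (using the open-source shap and lime packages), not the article's own experimental setup.

# Minimal sketch, assuming a scikit-learn classifier and the shap/lime packages;
# dataset and parameters are illustrative, not taken from the article.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: approximate Shapley values attribute each prediction to its input features.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X[:50])

# LIME: fit an interpretable local surrogate around a single instance.
lime_explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top locally weighted features for this prediction

SHAP yields additive per-prediction attributions with consistency properties, while LIME's surrogate is valid only in a neighbourhood of the explained instance; in decision-support workflows the two are often used together as a cross-check.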

Published

31-01-2024

How to Cite

[1]
Tanzeem Ahmad, Pranadeep Katari, Ashok Kumar Pamidi Venkata, Chetan Sasidhar Ravi, and Mahammad Shaik, “Explainable AI: Interpreting Deep Learning Models for Decision Support”, Adv. in Deep Learning Techniques, vol. 4, no. 1, pp. 80–107, Jan. 2024, Accessed: Apr. 29, 2025. [Online]. Available: https://tsbpublisher.org/adlt/article/view/90