Explaining probabilistic Artificial Intelligence (AI) models by discretizing Deep Neural Networks
Abstract
Artificial Intelligence (AI) models can learn from data and make decisions without any human intervention. However, deploying such models is challenging and risky because we do not know how decisions are made internally. In particular, high-risk decisions such as medical diagnosis or automated navigation demand explainability and verification of the decision-making process in AI algorithms. This paper aims to explain AI models by discretizing the black-box process model of deep neural networks using partial differential equations (PDEs). The resulting PDE-based deterministic models would reduce the time and computational cost of the decision-making process and lower uncertainty, making predictions more trustworthy.
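The abstract does not describe an implementation. As a rough illustration of the general idea of treating the layers of a deep network as discretized steps of a differential equation, the following minimal sketch interprets a stack of residual blocks as forward-Euler steps of dh/dt = f(h). The layer function, step size, and network shape are assumptions for illustration only, not the authors' method.

```python
import numpy as np

# Minimal sketch (assumed, not from the paper): a residual block viewed as one
# forward-Euler step of dh/dt = f(h), so a stack of such blocks is a
# discretization of a continuous-depth model.

def f(h, W, b):
    """Layer dynamics f(h) = tanh(W h + b); the choice of tanh and the
    parameter shapes are illustrative assumptions."""
    return np.tanh(W @ h + b)

def forward_euler_net(h0, weights, biases, dt=0.1):
    """Propagate the hidden state through the layers as explicit Euler steps:
    h_{k+1} = h_k + dt * f(h_k)."""
    h = h0
    for W, b in zip(weights, biases):
        h = h + dt * f(h, W, b)
    return h

# Toy usage: a 4-dimensional state pushed through 10 Euler steps.
rng = np.random.default_rng(0)
dim, depth = 4, 10
weights = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(depth)]
biases = [np.zeros(dim) for _ in range(depth)]
h_final = forward_euler_net(rng.standard_normal(dim), weights, biases)
print(h_final)
```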
Citation
Saleem, R., Yuan, B., Kurugollu, F. and Anjum, A. (2020). 'Explaining probabilistic Artificial Intelligence (AI) models by discretizing Deep Neural Networks'. IEEE/ACM 13th International Conference on Utility and Cloud Computing, Leicester, 7-10 December. New York: IEEE, pp. 446-448.
Publisher
IEEE
Journal
2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)
DOI
10.1109/ucc48980.2020.00070
Additional Links
https://ieeexplore.ieee.org/abstract/document/9302808
Type
Meetings and Proceedings
Language
en
ISBN
9780738123943
The following license files are associated with this item:
- Creative Commons
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-ShareAlike 4.0 International