Explaining probabilistic Artificial Intelligence (AI) models by discretizing Deep Neural Networks
Abstract: Artificial Intelligence (AI) models can learn from data and make decisions without human intervention. However, deploying such models is challenging and risky because we do not know how the internal decision-making happens inside them. In particular, high-risk decisions such as medical diagnosis or automated navigation demand explainability and verification of the decision-making process in AI algorithms. This research paper aims to explain Artificial Intelligence (AI) models by discretizing the black-box process model of deep neural networks using partial differential equations (PDEs). The PDE-based deterministic models would reduce the time and computational cost of the decision-making process and lower the uncertainty, making the predictions more trustworthy.
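The abstract does not spell out the discretization scheme, but one common way to connect neural networks to differential equations is to read a residual layer as a forward-Euler step of a continuous dynamical system. The sketch below is illustrative only, under that assumption; the function names, weights, and step size are hypothetical and not taken from the paper.

```python
import numpy as np

def dynamics(x, W):
    # Hypothetical continuous-time dynamics dx/dt = tanh(W x);
    # tanh keeps the state bounded, mimicking a neural activation.
    return np.tanh(W @ x)

def euler_discretize(x0, W, h=0.1, steps=10):
    """Forward-Euler discretization of dx/dt = dynamics(x).

    Each update x_{k+1} = x_k + h * dynamics(x_k) has the same
    algebraic form as a residual (ResNet-style) layer, which is the
    sense in which a deep network can be viewed as a discretized
    deterministic differential-equation model.
    """
    x = x0
    for _ in range(steps):
        x = x + h * dynamics(x, W)
    return x

# Illustrative run with random (assumed, not learned) weights.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.5
x0 = rng.standard_normal(4)
print(euler_discretize(x0, W))
```

Because every step is a deterministic map, the full forward pass can be inspected state-by-state, which is one route to the explainability and verification the abstract argues for.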
Citation: Saleem, R., Yuan, B., Kurugollu, F. and Anjum, A. (2020). 'Explaining probabilistic Artificial Intelligence (AI) models by discretizing Deep Neural Networks'. IEEE/ACM 13th International Conference on Utility and Cloud Computing, Leicester, 7-10 December. New York: IEEE, pp. 446-448.
Journal: 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)
Type: Meetings and Proceedings
The following license files are associated with this item:
- Creative Commons
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-ShareAlike 4.0 International