Blockchain for Explainable and Trustworthy Artificial Intelligence

Document Type

Article

Publication Date

10-17-2019

Subject: LCSH

Artificial intelligence--Data processing, Data mining

Disciplines

Computer Engineering | Computer Sciences | Electrical and Computer Engineering

Abstract

Increasing computational power and the proliferation of big data are empowering Artificial Intelligence (AI) to achieve massive adoption and applicability in many fields. The lack of explanation for the decisions made by today's AI algorithms is a major drawback in critical decision-making systems. For example, deep learning does not offer control or reasoning over its internal processes or outputs. More importantly, current black-box AI implementations are subject to bias and adversarial attacks that may poison the learning or inference processes. Explainable AI (XAI) is an emerging class of AI techniques that provide explanations of their decisions. In this paper, we propose a framework for achieving more trustworthy and explainable AI by leveraging features of blockchain, smart contracts, trusted oracles, and decentralized storage. We specify a framework for complex AI systems in which decision outcomes are reached through decentralized consensus among multiple AI and XAI predictors. The paper discusses how our proposed framework can be utilized in key application areas with practical use cases.
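The abstract's core mechanism, reaching a decision through decentralized consensus among multiple AI and XAI predictors, can be sketched off-chain in a few lines. The sketch below is an illustration only, not the authors' implementation: the Verdict record, the consensus function, and the quorum parameter are hypothetical names, and an on-chain realization would encode the same voting logic in a smart contract with trusted oracles supplying the predictor outputs.

```python
from collections import Counter
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Verdict:
    predictor: str      # identifier of a (hypothetical) AI/XAI predictor
    label: str          # predicted decision outcome
    explanation: str    # human-readable justification attached to the prediction

def consensus(verdicts: List[Verdict], quorum: float = 0.5) -> Tuple[str, List[str]]:
    """Return the majority decision and the explanations supporting it,
    or raise if no label clears the quorum threshold."""
    counts = Counter(v.label for v in verdicts)
    label, votes = counts.most_common(1)[0]
    if votes / len(verdicts) <= quorum:
        raise ValueError("no consensus reached; decision should be escalated")
    supporting = [v.explanation for v in verdicts if v.label == label]
    return label, supporting

# Example: three independent predictors vote on a loan decision.
verdicts = [
    Verdict("model_a", "approve", "income-to-debt ratio below threshold"),
    Verdict("model_b", "approve", "strong repayment history feature weight"),
    Verdict("model_c", "reject",  "recent delinquency flag"),
]
decision, reasons = consensus(verdicts)
print(decision, reasons)
```

Because the majority label arrives together with the explanations of the predictors that voted for it, the aggregate decision remains auditable, which is the trustworthiness property the framework targets.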

Comments

Article is published in the journal Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, volume 10, issue 1, January/February 2020.

DOI

10.1002/widm.1340

Publisher Citation

Nassar, M, Salah, K, ur Rehman, MH, Svetinovic, D. Blockchain for explainable and trustworthy artificial intelligence. WIREs Data Mining Knowl Discov. 2020; 10:e1340. https://doi.org/10.1002/widm.1340
