Explaining Outputs of Algorithms with the Help of Explainable AI

Naveen Joshi 19/04/2020

Explainable artificial intelligence (AI) will help us understand the decision-making process of AI algorithms by bringing transparency and accountability into these systems.

AI is penetrating deeply into our lives and is becoming smarter and more autonomous with each passing day. The decisions made by these systems are relied upon more than ever before, and they can prove critical in life-and-death situations as AI becomes increasingly prevalent in healthcare, autonomous vehicles, and military operations. For example, if an AI system diagnoses an individual with cancer, the doctor must know the process and reasoning the algorithm used to arrive at that diagnosis. The methodologies used in explainable AI are pre-modeling explainability, explainable modeling, and post-modeling explainability. The first two are comparatively small areas of research, and the majority of explainable AI researchers focus on the post-modeling methodology. Let’s dive deep into the post-modeling explainable AI methodology.

Post-Modeling Explainable AI Methodology

The post-modeling explainable AI method is also called the post-hoc method. Its goal is to extract explanations that describe an already developed model. The method is built around four important components, namely, the target, the drivers, the explanation family, and the estimator.

Target

The target of an AI model describes the object of the explanation. Targets can vary in scope, complexity, and type, and the type of target is usually determined by the goals of the end user. An end user typically requires a functional explanation of the model, that is, an account of how the output is produced from the input data. A model developer, on the other hand, may need to understand the responses of the algorithm's individual layers for debugging. Thus, depending on the user, the target can be external (the model's output) or internal (the model's inner workings).
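To make this distinction concrete, here is a minimal Python sketch using PyTorch; the model, input, and layer choice are hypothetical and used only for illustration. The end user's external target is the model's output, while the developer's internal target is a hidden layer's response, captured here with a forward hook.

    import torch
    import torch.nn as nn

    # A small hypothetical classifier, for illustration only.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    x = torch.randn(1, 4)

    # External target (end user): the model's output.
    output = model(x)

    # Internal target (developer): the response of a hidden layer,
    # captured with a forward hook for debugging.
    activations = {}

    def save_activation(module, inputs, outputs):
        activations["hidden"] = outputs.detach()

    hook = model[1].register_forward_hook(save_activation)
    model(x)
    hook.remove()

    print("output to explain:", output)
    print("hidden-layer response:", activations["hidden"])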

Drivers

Factors that have an impact on a target are termed drivers. Input features are the most common drivers for an AI model, but they are not the only choice; any factor that influences the algorithm's behavior can serve as an explanation driver, including training samples, the model architecture, or the choice of optimization procedure. Drivers explain what causes an AI algorithm to behave the way it does.
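As a minimal sketch of a non-feature driver, the snippet below treats individual training samples as drivers: it refits a scikit-learn model with one sample left out and measures how much the predictions shift. The dataset and model here are stand-ins chosen only for illustration.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)

    # Fit on the full training set.
    full_model = LogisticRegression(max_iter=1000).fit(X, y)

    # Refit with one training sample left out; a large shift in the
    # predicted probabilities marks that sample as an influential driver.
    i = 0  # index of the sample whose influence we probe
    mask = np.arange(len(X)) != i
    loo_model = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])

    delta = np.abs(full_model.predict_proba(X) - loo_model.predict_proba(X)).mean()
    print(f"average probability shift from removing sample {i}: {delta:.5f}")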

Explanation Family

The post-modeling explainable AI framework transfers information about how the drivers cause a target, and the explanation family determines the type of information transferred. Importance scores are the most common explanation family used in explainable AI: each score communicates the impact of one explanation driver on the target, and the higher the score, the greater the driver's impact.
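One common way to compute such importance scores is permutation importance, sketched below with scikit-learn; the dataset and model are placeholders for illustration. Shuffling a feature and measuring the drop in held-out accuracy yields a score for that driver.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle one feature at a time and record how much held-out accuracy
    # drops; a bigger drop means the feature has more impact on the target.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    top = result.importances_mean.argsort()[::-1][:5]
    for idx in top:
        print(f"feature {idx}: importance {result.importances_mean[idx]:.4f}")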

Estimator

The computational processes used to produce explanations are termed estimators. One of the most common approaches is based on backpropagation: starting from the layer that produces the target, the estimator propagates importance estimates backward through each preceding layer, repeating the process until the drivers (the input features) are reached. Widely used backpropagation-based methods include DeepLIFT, SmoothGrad, and Integrated Gradients.
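As an illustration of a backpropagation-based estimator, here is a minimal, self-contained sketch of Integrated Gradients in PyTorch; the model and input are hypothetical, and a production system would typically rely on a library such as Captum instead.

    import torch
    import torch.nn as nn

    def integrated_gradients(model, x, baseline, target_class, steps=50):
        """Approximate Integrated Gradients for a single input."""
        # Interpolate between the baseline and the input.
        alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
        path = baseline + alphas * (x - baseline)   # shape: (steps, n_features)
        path.requires_grad_(True)

        # Back-propagate from the target class score at each interpolation step.
        scores = model(path)[:, target_class]
        grads = torch.autograd.grad(scores.sum(), path)[0]

        # Average the gradients along the path and scale by the input delta.
        return (x - baseline) * grads.mean(dim=0)

    # Hypothetical model and input, for illustration only.
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    x = torch.randn(1, 4)
    baseline = torch.zeros(1, 4)

    attributions = integrated_gradients(model, x, baseline, target_class=1)
    print("per-feature attributions:", attributions)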
