Empower Your Algorithms with Explainable Artificial Intelligence (XAI)

Pratik Ravani 30/04/2021

Imagine that you have applied for a visa for an important business initiative.

After a long process of preparation, today is the day. With high hopes, you have just begun a confident conversation with the visa officer when you are interrupted: your passport is handed back to you with a rejection stamp and not a line of explanation.

What would be your state of mind? You want to understand what exactly went wrong and what you should do to better prepare for the next time.

I am sorry to put you through a virtual rejection, but it was necessary to make the point. The process may not be AI-driven, but the point stands: we want the things that are important to us to make sense to us.

The Human Need For Reasoning

If there is one thing that lies at the core of human behaviour in any engagement, it is trust. If the method or process is not trustworthy, the system built on it will not be trusted either.

Whether an outcome is produced manually or algorithmically, a human needs a pattern to make sense of it and deal with it. Seen from a broader perspective, all human emotions and actions are the result of the explanations we give ourselves for outcomes. If a person cannot leave himself without explanations, why would he let machines get away without them, especially for unfavourable outcomes?

In the West, regulations such as the “right to explanation” and the Equal Credit Opportunity Act are already in effect. The former gives individuals the right to understand how decisions that impact their lives are made, and the latter prohibits creditors from discriminating against applicants in credit transactions on the basis of race, colour, religion, sex, age, marital status, and so on.

Understanding XAI and its Importance

There are three key principles of explainable Machine Learning, which together form the acronym FAT.

  1. Fairness: the ability to show that model outcomes are free of any discernible bias.
  2. Accountability: the willingness to explain and own the model’s behaviour and outcomes.
  3. Transparency: clarity about how the model works internally and how its predictions are made.

Some corporations and researchers have rightly expanded this with an ethical element, turning FAT into FATE. Ethical AI brings the much-needed governance angle to Machine Learning-based approaches.

Moreover, you might have noticed that the terms ‘Interpretability’ and ‘Explainability’ are often used interchangeably. To me, interpretability is the degree to which a non-subject-matter expert can understand a model’s inputs and outputs. Explainability goes a step further into transparency: beyond model inference, it calls for human-friendly explanations of the model’s inner workings as well.

Intrinsically Interpretable Models

The most interpretable algorithm, and the simplest for humans to understand, is undoubtedly the decision tree. Rules are extracted by traversing the tree from the root node to the leaf nodes, and important features are identified by their contribution to the reduction in Gini impurity. The Bayesian Rule List (BRL) implementation in Skater is also very promising, though still at an experimental stage.
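
To make this concrete, here is a minimal sketch (assuming scikit-learn and its bundled Iris dataset, both purely illustrative choices) that prints the decision rules of a fitted tree and its Gini-based feature importances:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Human-readable if/else rules, obtained by traversing root to leaves
print(export_text(tree, feature_names=list(iris.feature_names)))

# Each feature's importance, i.e. its total contribution to Gini impurity reduction
for name, importance in zip(iris.feature_names, tree.feature_importances_):
    print(f"{name}: {importance:.3f}")
```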

We also understand that linear models are innately interpretable; they include all the model variants in the family of Generalized Linear Models. KNN, too, has built-in interpretability, but it is very much local, since the model is not fitted to learn global parameters. Even Naïve Bayes explains the probability of each outcome given the presence of each feature.
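
As a quick illustration of that innate interpretability, the sketch below (again assuming scikit-learn; the dataset and model are illustrative) reads the learned weights of a logistic regression, one member of the GLM family, directly as feature effects:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()

# Standardise features so the learned weights are directly comparable
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)
weights = model.named_steps["logisticregression"].coef_[0]

# A positive weight pushes the prediction towards the positive class,
# a negative one pushes it away; the magnitude reflects the strength.
top = sorted(zip(data.feature_names, weights), key=lambda t: -abs(t[1]))[:5]
for name, w in top:
    print(f"{name}: {w:+.2f}")
```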

However easy to understand these models are, they do struggle with large, high-dimensional datasets. Hence the complex models (the black boxes) come to the rescue, although they focus solely on performance and give interpretability the cold shoulder.

The objective therefore is to strike a fine balance between both.

A Few Tools For Complex Models

The scope of machine learning is so wide that for each of its sub-segments there are multiple tools and techniques available to provide explainability. Let me pick just one of the complex parts: understanding image classification results with methods that use a perturbation strategy.

Occlusion sensitivity – It occludes (obstructs) portions of the image with a grey patch. A sensitivity map is then built from the probability of the target class at each position where the patch is placed. It is a rather simple method: all it requires are the image, the model, the label, and the patch size.
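
A rough, model-agnostic sketch of the idea is shown below; predict_fn stands in for whatever classifier you use (for example, a Keras model’s predict method), and the patch size and grey value are illustrative assumptions:

```python
import numpy as np

def occlusion_sensitivity(image, predict_fn, target_class, patch_size=16, grey=0.5):
    """Slide a grey patch over the image and record the target-class probability
    at each patch position; low values mark regions the model relied on most."""
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch_size, w // patch_size))
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            occluded = image.copy()
            occluded[i:i + patch_size, j:j + patch_size] = grey   # occlude this region
            probs = predict_fn(occluded[np.newaxis, ...])[0]      # batch of one image
            heatmap[i // patch_size, j // patch_size] = probs[target_class]
    return heatmap
```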

Local interpretable model-agnostic explanations (LIME) – It trains a local surrogate to explain a particular prediction, using a sparse linear model to identify the important features. The best part of LIME is that the resulting linear coefficients tell us which features push towards or away from the predicted class. LIME works well with both structured (tabular) and unstructured (image and text) data.
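
A hedged sketch with the lime package is shown below, using a scikit-learn random forest on tabular data purely as an illustrative stand-in; any model that exposes predict_proba would work the same way:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

data = load_wine()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction with a sparse local linear surrogate
predicted = int(model.predict(data.data[:1])[0])
exp = explainer.explain_instance(
    data.data[0], model.predict_proba, labels=(predicted,), num_features=5
)
for feature, weight in exp.as_list(label=predicted):
    print(f"{feature}: {weight:+.3f}")  # sign: pushes towards / away from the class
```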

Contrastive Explanation Method (CEM) – This method explains predictions through what is present (analogous to anchors) and what is absent (analogous to counterfactuals). By generating a Pertinent Positive (PP) and a Pertinent Negative (PN), one can understand whose presence or absence makes a prediction what it is: the PP shows what is minimally sufficient to be present to predict the same class, and the PN shows what must remain absent to keep the prediction from flipping to a different class. CEM works best with continuous and ordinal features because it keeps adding to or subtracting from features until it reaches the expected outcome, and it is better suited to image classification than to tabular data.
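
For illustration only, the sketch below outlines how CEM is typically driven through the alibi library (which runs on TensorFlow); the model, data, and hyper-parameters are assumptions, so check the current alibi documentation for the exact arguments:

```python
from alibi.explainers import CEM

def pertinent_explanations(model, x_train, x_instance, shape):
    """Generate pertinent-negative and pertinent-positive explanations for one
    instance; `model` is a trained classifier and `shape` the instance shape
    including the batch dimension, e.g. (1, 28, 28, 1) for MNIST-style images."""
    explanations = {}
    for mode in ("PN", "PP"):
        cem = CEM(model, mode=mode, shape=shape,
                  kappa=0.0, beta=0.1, max_iterations=500,
                  feature_range=(x_train.min(), x_train.max()))
        cem.fit(x_train)                      # statistics used for the "no info" value
        explanations[mode] = cem.explain(x_instance)
    # explanations["PN"].PN / explanations["PP"].PP hold the perturbed instances
    return explanations
```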

There is another important method: SHAP (SHapley Additive exPlanations). Though Shapley values are perturbation-based, SHAP’s Deep Explainer builds on DeepLIFT (Deep Learning Important FeaTures), which works via backpropagation. By the way, Shapley values capture a feature’s average marginal contribution (the change in the model’s prediction) across all possible subsets of features.
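
Below is a minimal sketch with the shap package, using the perturbation-based KernelExplainer on a simple scikit-learn classifier as an illustrative stand-in (for deep networks you would reach for shap’s DeepExplainer instead):

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
model = LogisticRegression(max_iter=5000).fit(data.data, data.target)

def predict_positive(x):
    # Probability of the positive class only, so the explainer sees a single output
    return model.predict_proba(x)[:, 1]

# The background sample represents "feature absent"; KernelExplainer perturbs
# feature subsets against it to estimate each feature's Shapley value.
background = shap.sample(data.data, 100)
explainer = shap.KernelExplainer(predict_positive, background)

shap_values = explainer.shap_values(data.data[:1])   # shape: (1, n_features)
for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name}: {value:+.4f}")
```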

As a Concluding Note

I would suggest you stick to Occam’s razor: the simplest solution is usually the right one. If your model doesn’t follow this principle, it is not only difficult to interpret but also difficult to generalize.

I hope this gives some direction to those exploring ways to inject an XAI element into their ongoing AI projects. Please note that humans are by nature biased towards simplicity. There is absolutely no need to use a sword to cut a little cake!

Lastly, in the light of this context, I would like to amend that famous line attributed to W. Edwards Deming, “In God we trust, all others must bring data”: “In God we trust, all others must bring results we can understand!”


Pratik Ravani

Tech Expert

With nearly 15 years of industry experience, Pratik works as a delivery head for global analytics projects at a Bangalore-based MNC. Involved in various innovative projects and concepts, he applies a range of Machine Learning and Deep Learning algorithms to create and deliver strategic insights. As part of his wide range of assignments, he pieces together new technology trends and shifting business demands to bring about cutting-edge applications. For years he has been blending his analytical prowess and people skills to tap into the unexplored and less-explored business dimensions and convert them into value creators. Passionate about sharing his continual learnings, he is also a corporate trainer and a speaker at events. Pratik holds an MBA in Finance with Information Technology and a bachelor’s degree in Industrial engineering.

   