Explainable AI: Is It Even Possible?

Technology Enablement

November 26, 2018 | Margaret Michaels

The first time I did serious research on the concept of explainable AI, I found it wasn’t so explainable. For example, I found an article by a computer science undergraduate at Cambridge entitled “An Introduction to Explainable AI, and Why We Need It,”1 which began with neural networks and ended with RETAIN (Reversed Time Attention Model) and LIME (Local Interpretable Model-Agnostic Explanations). Even after reading the article three times, I didn’t get it.

Hence the problem with explainable AI. If you don’t have a computer science degree, it might not be comprehensible.

Software company Tableau, in its recent “2019 Business Intelligence Trends”2 whitepaper, attempts to break down explainable AI for lay audiences. It raises the highly pertinent question, “As more organizations rely on artificial intelligence and machine learning models, how can they ensure they are ‘trustworthy’?”

Just as a decision maker would ask follow-up questions of an analyst preparing a recommendation, algorithms and models should have an audit trail and documentation covering how they are designed, the data they use, and how outputs might differ if different inputs (data) were used. Richard Tibbetts, Product Manager for AI at Tableau, put it this way:

“Analytics and AI should assist—but not completely replace—human expertise and understanding.”

Indeed, human expertise and understanding prompt the questions3 that bring more transparency to AI, such as:

  • How was the data that fuels AI systems’ decisions acquired?
  • Was it acquired in a way that is compliant with consumer privacy concerns?
  • How do the algorithms work?
  • Why do they make the recommendations/predictions they do?

IMA’s Perspective

But to ask those questions, you need a baseline understanding of AI, data analytics, and machine learning. IMA® (Institute of Management Accountants) has been laser-focused on helping its members build the digital competencies they need to ask the right questions of technology. For example, IMA’s report “Building a Team to Capitalize on the Promise of Big Data” looks at both the technical and nontechnical skills needed to make the most of data analytics projects.

On the topic of artificial intelligence, IMA draws from the perspectives of practitioners in the field like Rod Koch, CMA, PMP. In his 2017 Strategic Finance article, “Will Artificial Intelligence Eliminate My Job?” Koch notes the challenges of keeping up with fast-moving, constantly evolving technology like artificial intelligence. But he also notes that humans are distinctly equipped to problem-solve and to see the larger context in which AI operates. These distinctly human attributes enable us to spot biases or counterintuitive decision making by machines.

As the technical and ethical questions around technologies like artificial intelligence multiply, professional associations like IMA have a unique role to play in bringing practitioners together to discuss the issues and offer solutions.

Until explainable AI is truly explainable, it will be up to the professionals who have years of accumulated industry knowledge and a commitment to ethical business practices to sort out the answers.


  1. “An Introduction to Explainable AI, and Why We Need It,” Patrick Ferris, August 27, 2018.
  2. “2019 Business Intelligence Trends,” Tableau, November 2018.
  3. “The Problem with ‘Explainable AI,’” June 2018.