Interpretability and explainability can lead to more reliable ML

Interpretability and explainability help make machine learning models more trustworthy and reliable. Author Serg Masís assesses their practical value in this Q&A.

With machine learning on the rise, businesses increasingly rely on models and algorithms to derive insights from data and make predictions.

Serg Masís, data scientist and author of Interpretable Machine Learning with Python from Packt Publishing Ltd., believes that in order to know how and why those algorithms make predictions, they must be both interpretable and explainable.

In this Q&A, Masís discusses these two concepts, interpretability and explainability, and shows they are more than just buzzwords or theory by explaining their value in real-world scenarios.

Editor's note: The following interview was edited for length and clarity.

What near-future trends in machine learning will emerge, and will they adhere to the advice in this book about interpretability and explainability?

Serg Masís: The industry expects a data scientist to have at least three skills: programming, statistics and domain expertise. If each of these skills is a leg of a stool, the thickest or strongest one is the first one: programming. The reason for this is that, right now, you have to be a competent programmer to be able to train machine learning models, placing all the focus on the 'how.' On the other hand, interpretability and explainability are all about the 'why,' and they require ample use of the other legs of the stool.


Fortunately, the latest trend with machine learning is no-code or low-code model training -- meaning most machine learning models will be quickly trained using drag-and-drop interfaces in a few years. This means that data scientists can devote their attention to the interpretation of models, statistical analysis of outcomes and formulation of experiments as machine learning becomes less about getting AI systems off the ground and more about mitigating risks and improving reliability.

Can interpretability and explainability be added to a course curriculum for those interested in machine learning?


Masís: Definitely. It's hard to say which machine learning courses or book titles are the most popular, but titles like 'Deep Learning with PyTorch' or 'Time Series Forecasting with Prophet' are about using tools.

Knowing how to use tools is important, but a tool-centric education often misses the point of what it is to be a practitioner. For instance, knowing how to use a hammer doesn't automatically make you a carpenter. Carpentry is fundamentally about understanding wood. If you don't understand its many properties, you might end up building furniture that gets fungus and decays. And once you build it, you might not know how to test and improve its strength, and it will break.

Likewise, data science is about understanding data and those things we build with data -- such as machine learning models. We can then use that knowledge to make better models. That's why interpretation is a crucial skill and should be taught alongside the tools.

As just one real-world example, can autonomous vehicle developers market their algorithms as both interpretable and explainable in ways that aren't too technical or convoluted for average consumers?

Masís: Yes. There are interpretable machine learning methods that use human-friendly explanations, especially with computer vision problems. For instance, say an autonomous vehicle stopped abruptly, and you asked it why. It could point to a 200-millisecond video recording of a piece of paper rapidly moving toward the car, highlight it with a bounding box and explain that it thought it was a bird with 95% confidence.
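The kind of overlay Masís describes can be sketched in a few lines of Python. This is a minimal illustration rather than code from the book or from any real vehicle: the frame path, box coordinates, label and confidence below are hypothetical placeholders standing in for an actual detector's output.

```python
# Minimal sketch of a visual explanation: draw the detected object and the model's
# confidence onto the frame that triggered the stop. All values are placeholders.
import cv2

frame = cv2.imread("frame_at_stop.jpg")        # frame from the 200 ms clip (hypothetical path)
x1, y1, x2, y2 = 410, 220, 470, 275            # bounding box around the flying object (example values)
label, confidence = "bird", 0.95               # what the model believed it saw, and how sure it was

cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.putText(frame, f"{label}: {confidence:.0%}", (x1, y1 - 8),
            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
cv2.imwrite("explanation_overlay.jpg", frame)
```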

Then, a counterfactual explanation would be an exceptionally intuitive way to underpin why the car stopped. Using a simulated trajectory of the object under both the stopping and not-stopping scenarios, the car can illustrate that, had it not stopped, the object would have smashed into the windshield.
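A toy version of that counterfactual can be expressed as a simple simulation. The sketch below uses made-up speeds and distances, not a real vehicle model: it compares how close the object gets to the windshield when the car brakes versus when it keeps its speed.

```python
# Toy counterfactual: smallest gap between the object and the windshield over a short
# horizon, with and without braking. All speeds and distances are illustrative.
def min_gap(car_speed_mps, brakes, gap_m=30.0, object_speed_mps=5.0,
            decel_mps2=8.0, dt=0.01, horizon_s=2.0):
    """Return the smallest gap reached (negative means the object hits the car)."""
    t = 0.0
    while t < horizon_s and gap_m > 0:
        if brakes:
            car_speed_mps = max(0.0, car_speed_mps - decel_mps2 * dt)
        gap_m -= (car_speed_mps + object_speed_mps) * dt  # both motions close the distance
        t += dt
    return gap_m

factual = min_gap(car_speed_mps=15.0, brakes=True)           # what the car actually did
counterfactual = min_gap(car_speed_mps=15.0, brakes=False)   # "had it not stopped..."
print(f"braking: closest gap {factual:+.1f} m; no braking: closest gap {counterfactual:+.1f} m")
```

In a real system the trajectories would come from the vehicle's own simulator rather than a hand-rolled kinematic loop, but the contrast between the two outcomes is what makes the explanation persuasive to a nontechnical passenger.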

Will less technical audiences be able to use machine learning in any capacity as it becomes more widely known and accessible?

Masís: Absolutely. In fact, one could argue that the nontechnical masses already engage with machine learning as end users. From those engagement optimization models that power social media to those that complete sentences in emails, chances are most people have used machine learning models but haven't trained models directly.

I think that will change in the coming decade. For example, fifteen years ago, websites were something only web developers built; within a few short years, anybody could create one. Also, thanks to mobile phones, everybody is now potentially a photographer, videographer, journalist, etc. I believe the same will happen with AI, thanks to no-code AI.
