
AI transparency: What is it and why do we need it?
Organizations are adopting AI technology without a clear understanding of how it works. Here, we explore the benefits and drawbacks of AI transparency and how to strike a balance.
Modern AI was born when rules-based software programming was no longer able to tackle the problems the computing world wanted to solve. It wasn't possible to code every condition the program had to handle, so computational experts designed machines that imitate how humans think, enabling AI to learn on its own by observing data. This approach, built on neural networks, gave rise to AI technologies like facial recognition programs, cancer detection algorithms and self-driving cars.
But neural networks came with a tradeoff: We couldn't understand how the systems worked. The AI models lacked transparency. This phenomenon is known as black box AI, and it has turned out to be quite problematic.
Tradeoffs of black box AI
AI is typically measured in percentages of accuracy -- that is, how often the system gives the correct answer. The minimum accuracy required varies with the task at hand, but accuracy, even at 99%, cannot be the only measure of an AI system's value. We must also take into account a major shortcoming of AI, especially when applying it in business: As a model's accuracy goes up, its ability to explain why it arrived at a certain answer tends to go down. That leaves companies with an issue they must confront: the model's lack of transparency and, therefore, our limited human capacity to trust its results.
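To make the point concrete, here is a minimal sketch in Python, assuming scikit-learn and one of its bundled image data sets (both are illustrative choices, not examples from this article). The accuracy score summarizes how often the model is right, but it offers no account of why any individual prediction was made.

```python
# Minimal sketch: a single accuracy figure is not the whole story.
# Assumes scikit-learn is installed; dataset and model choices are illustrative.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
preds = model.predict(X_test)

print(f"Accuracy: {accuracy_score(y_test, preds):.3f}")
print(f"Errors:   {(preds != y_test).sum()} of {len(y_test)} test images")
# The score reports *how often* the model is right; it says nothing about
# *which* inputs drove any individual prediction or why the misses happened.
```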
The black box problem was acceptable to some degree in the early days of the technology but lost its merit when algorithmic bias was spotted. For example, AI developed to sort resumes disqualified people from certain jobs based on their race, and AI used in banking disqualified loan applicants based on their gender. The data these models were trained on was not balanced to represent all groups of people sufficiently, and the historical bias embedded in past human decisions was passed on to the models.
AI also showed that a near-perfect model can still make alarming mistakes. A model with 99% accuracy can still be wrong on the remaining 1% of cases -- for instance, classifying a stop sign as a speed limit sign.
While these are some of the most extreme cases of misclassification -- often produced by adversarial inputs purposely designed to fool the model -- they underline the fact that the algorithm has no real understanding of what it is doing. AI follows a pattern to arrive at an answer, and it often does so exceptionally well, beyond human capability. For the same reason, unusual alterations in the pattern make the model vulnerable, and that is also why we need AI transparency: to know how AI reaches a conclusion.
When using AI for critical decisions in particular, understanding the algorithm's reasoning is imperative. An AI model designed to detect cancer, even if it is wrong only 1% of the time, could put a life at risk. In cases like these, AI and humans need to work together, and the task becomes much easier when the model can explain how it reached a certain decision. Transparency in AI makes it a team player.
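What such an explanation can look like depends on the model. The following is a minimal sketch, assuming scikit-learn, an inherently interpretable linear classifier and a public tumor data set as a stand-in for a real clinical system: each feature's contribution to one prediction is simply its scaled value times the learned coefficient.

```python
# Minimal sketch of a per-decision explanation with an interpretable model.
# Assumes scikit-learn; the data set is an illustrative stand-in.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y = data.data, data.target

# A linear model is chosen precisely because its reasoning can be decomposed.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)).fit(X, y)

# Contribution of each feature to the log-odds of one prediction:
# (scaled feature value) * (learned coefficient).
i = 0                                   # explain the first sample
x_scaled = clf[:-1].transform(X[i:i + 1])[0]
contributions = x_scaled * clf[-1].coef_[0]
top = np.argsort(np.abs(contributions))[::-1][:3]

print("Predicted class:", data.target_names[clf.predict(X[i:i + 1])[0]])
for j in top:
    print(f"{data.feature_names[j]:<25} contribution {contributions[j]:+.2f}")
```

A human reviewer can then check whether the features driving the prediction are plausible before acting on it, which is what makes the model a team player.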
Sometimes, transparency is a necessary step from a legal perspective.

"Some of the regulated industries, like banks, have model explainability as a necessary step to get compliance and legal approval before the model can go into production," said Piyanka Jain, an industry thought leader in data analytics.

Other cases involve GDPR or the California Consumer Privacy Act, where AI deals with private information. "One aspect of GDPR is that, when an algorithm using an individual's private data makes a decision, the human has the right to ask for the reasons behind that decision," said Carolina Bessega, chief scientific officer and co-founder of Stradigi AI, an AI software company.
It seems evident that AI transparency has many benefits, so why aren't all algorithms transparent?
Weaknesses of AI transparency
As valuable as an algorithm that can explain how it reached a certain decision may be, that same openness can make it easier to hack.
By understanding the reasoning of AI, hackers will have an easier time tricking the algorithm. "AI transparency isn't encouraged in fraud detection," Jain explained. "We want fewer folks to know how we are catching the fraud -- same for cybersecurity. In general, when we are trying to use AI to catch bad guys, we want fewer folks to know the underlying logic, and AI lends itself well for that."
Another concern with AI transparency is protection of proprietary algorithms, as researchers have demonstrated that entire algorithms can be stolen simply by looking at their explanations.
Lastly, transparent algorithms are harder to design and, at least for the time being, can be applied only to simpler models. If transparency in AI is a must, it may force companies and organizations to use less sophisticated algorithms.
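The tradeoff is easy to see in a small sketch, again assuming scikit-learn and an illustrative data set: a shallow decision tree can print its entire decision logic as readable rules, while a boosted ensemble typically scores somewhat higher but offers no such readout.

```python
# Minimal sketch of the transparency tradeoff: readable rules vs. raw accuracy.
# Assumes scikit-learn; the wine data set is illustrative.
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_wine()
X, y = data.data, data.target

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
boost = GradientBoostingClassifier(random_state=0)

print("Shallow tree accuracy:  %.3f" % cross_val_score(tree, X, y, cv=5).mean())
print("Boosted model accuracy: %.3f" % cross_val_score(boost, X, y, cv=5).mean())

# The shallow tree's full reasoning fits on a screen; the ensemble's does not.
print(export_text(tree.fit(X, y), feature_names=list(data.feature_names)))
```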
How to reach a balance
As with any other computer program, AI needs optimization. To do that, we look at the specific needs of a certain problem and then tune our general model to fit those needs best.
When implementing AI, an organization must pay attention to the following four factors:
- Legal needs. If the work requires explainability from a legal and regulatory perspective, there may be no choice but to provide transparency. To achieve it, an organization may have to resort to simpler but explainable algorithms.
- Severity. If the AI is going to be used in life-critical missions, transparency is a must. It is most likely that such tasks are not dependent on AI alone, so having a reasoning mechanism improves the teamwork with human operators. The same applies if AI affects someone's life, such as algorithms that are used for job applications.
On the other hand, if the AI's task is not critical, an opaque model will suffice. Consider an algorithm that recommends which prospect to contact next out of a database of thousands of leads -- cross-checking the AI's decision would simply not be worth the time.
- Exposure. Depending on who has access to the AI model, an organization may want to protect the algorithm from unwanted access. Explainability can be good even in the cybersecurity space if it helps experts reach a better conclusion. But, if outsiders can gain access to those same explanations and work out how the algorithm operates, it may be better to go with an opaque model.
- Data set. No matter the circumstances, an organization must always strive for a diverse and balanced data set, preferably drawn from as many sources as possible. Eventually, we'll want to rely on AI as much as we can, and AI is only as smart as the data it is trained on. By cleaning the training data, removing noise and balancing the inputs, we can help reduce bias and improve the model's accuracy, as the sketch below illustrates.
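A minimal balance check might look like the following sketch, assuming pandas and scikit-learn; the column names (gender, approved) are hypothetical placeholders for whatever sensitive attributes and outcomes a real training set contains, and upsampling is only one of several possible remedies.

```python
# Minimal sketch of a data-balance check before training.
# Assumes pandas and scikit-learn; the toy data and column names are hypothetical.
import pandas as pd
from sklearn.utils import resample

df = pd.DataFrame({
    "gender":   ["F", "M", "M", "M", "F", "M", "M", "M"],
    "income":   [52, 61, 48, 75, 58, 49, 66, 71],
    "approved": [1, 1, 0, 1, 0, 1, 0, 1],
})

# 1. Inspect how well each group is represented and how outcomes differ.
print(df["gender"].value_counts(normalize=True))
print(df.groupby("gender")["approved"].mean())   # approval rate per group

# 2. One simple remedy: upsample under-represented groups so the model does not
#    learn the skew itself as signal (more careful techniques also exist).
groups = [g for _, g in df.groupby("gender")]
largest = max(len(g) for g in groups)
balanced = pd.concat(
    [resample(g, replace=True, n_samples=largest, random_state=0) for g in groups]
)
print(balanced["gender"].value_counts())
```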