
Pega CTO: Ethical AI for developers demands transparency

Pegasystems CTO Don Schuerman believes the cure for AI's ethical issues lies in broad data inputs, sensitivity to bias and algorithms that make explainable decisions.

AI can help developers boost efficiency and meet deadlines, but it takes further tooling to create ethical AI, which provides transparent and explainable decisions that won't plunge users into murky moral waters.

That's according to Don Schuerman, Pegasystems CTO and vice president of product strategy and marketing, who said Pega has developed its AI with such anti-bias and explainability mechanisms. That way, Pega's AI doesn't suffer from some of the issues plaguing AI-driven development tools, such as sexism and copyright infringement, Schuerman said.

In Schuerman's view, transparency and explainability are vital for avoiding bias in AI decision-making -- especially when it comes to enterprises that must meet regulatory guidelines, such as financial institutions that must adhere to fair lending laws.

In this Q&A, Schuerman talks about the challenges facing AI development, how ethical AI could bring customer service back to the 1970s and what's coming next for Pega.

How does Pega's AI approach differ from other AIs, such as GitHub Copilot, which is running into copyright issues?


Don Schuerman: We're not training universal models that then get generically applied to every client that we have. We're providing a decision engine that our client puts their own data into, and then builds models that are unique to their business.
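
Schuerman doesn't describe the engine's internals, but the pattern he's contrasting -- per-client models rather than one universal model -- can be sketched in a few lines of Python. Everything below (the DecisionEngine class, the toy training data) is a hypothetical illustration, not Pega's implementation:

    # Hypothetical sketch: one shared engine, but each tenant trains on
    # its own data and gets its own model -- nothing is pooled or shared.
    from sklearn.tree import DecisionTreeClassifier

    class DecisionEngine:
        def __init__(self):
            self.models = {}  # one model per client

        def train(self, client_id, X, y):
            model = DecisionTreeClassifier(max_depth=3)
            model.fit(X, y)
            self.models[client_id] = model

        def decide(self, client_id, x):
            # A client's decisions come only from its own model and data.
            return self.models[client_id].predict([x])[0]

    engine = DecisionEngine()
    # Toy features: [credit_score, has_account]; label 1 = make the offer.
    engine.train("bank-a", [[700, 1], [550, 0], [620, 1], [500, 0]], [1, 0, 1, 0])
    print(engine.decide("bank-a", [680, 1]))  # 1 -> offer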

Another term that you hear in the industry a lot right now is explainability, which is important when you're making decisions related to how you engage with a customer, what products you offer, whether you suggest a particular service or how you evaluate the risk of a particular transaction. Being able to explain how you made the decision matters for regulatory reasons and, in some cases, because users learn to trust the models when they can see how the decisions are being made.

We built [what] we call the 'transparency switch,' which allows you to govern the level of transparency and explainability for your models. Say I'm deciding what kind of ad to run for a particular offer -- maybe I don't need as much explainability for selecting that image. But if I'm figuring out whether to offer somebody a loan -- boy, that better be really explainable, why and how I made that decision.
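
The interview doesn't reveal how the transparency switch is implemented; as a rough illustration of the concept, here is a minimal Python sketch in which each decision type carries a minimum transparency level and only model families at or above it are eligible. The levels, model names and thresholds are all hypothetical:

    # Hypothetical transparency levels: higher means more explainable.
    # An opaque deep model might score 1; a scorecard or decision tree, 4-5.
    TRANSPARENCY = {"neural_net": 1, "gradient_boosting": 2,
                    "decision_tree": 4, "scorecard": 5}

    # Per-decision minimums -- the "switch" an administrator would set.
    REQUIRED = {
        "ad_image_selection": 1,  # low stakes: opaque models allowed
        "loan_offer": 4,          # regulated: must be explainable
    }

    def allowed_models(decision_type):
        """Return the model families permitted for a decision type."""
        floor = REQUIRED[decision_type]
        return [m for m, level in TRANSPARENCY.items() if level >= floor]

    print(allowed_models("ad_image_selection"))  # all four families
    print(allowed_models("loan_offer"))  # ['decision_tree', 'scorecard']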

One of the big challenges in the AI world is that AI is trained on data. Data can contain, for better or worse, the biases of our society. So even if your AI predictors -- the pieces of information you're using -- aren't necessarily aligned against protected classes, you can still be making decisions or follow trends that align against something that's protected, like race or gender orientation. So, we've built ethical bias testing into the platform so that our clients have the tools to test and validate their algorithms to make sure that they're not doing that.
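
Schuerman doesn't specify which tests ship in the platform, but a common building block for this kind of validation is a disparate-impact check: compare positive-outcome rates across a protected attribute and flag ratios that fall below the "four-fifths rule" threshold of 0.8. The Python sketch below, using made-up loan decisions, shows the idea:

    import pandas as pd

    def disparate_impact(df, group_col, outcome_col):
        """Ratio of the lowest group's positive-outcome rate to the
        highest's; the four-fifths rule flags ratios below 0.8."""
        rates = df.groupby(group_col)[outcome_col].mean()
        return rates.min() / rates.max()

    # Made-up decisions: 1 = approved, 0 = declined.
    decisions = pd.DataFrame({
        "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
        "approved": [1, 0, 1, 0, 1, 1, 1, 0],
    })

    ratio = disparate_impact(decisions, "gender", "approved")
    print(f"disparate impact ratio: {ratio:.2f}")  # 0.67
    if ratio < 0.8:
        print("possible adverse impact -- review the model's predictors")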

But the people who created the ethical bias testing feature have their own biases. So how do you ensure that this feature itself is not biased?

Schuerman: I think part of that is getting as broad a set of viewpoints into the conversation as possible -- both from our own internal developer community and from our client community, which makes up our advisory community, the folks we talk to in the industry space.

The first step in addressing our biases is being aware of them. That's not going to guarantee that no client ever does something that falls short of bias best practices, but we're making it front and center. We're asking people to think about bias as they decide how to deploy their AI models. We're also asking people to think about customer empathy.

If you give customers too many AI-generated suggestions, aren't they going to switch off?

Schuerman: One of the clients that we work with talks about how his goal was to bring banking back to the 1970s. What he meant was not that everybody would wear bell-bottoms and dance to disco, but that you would use AI to recapture, as much as possible, the personal relationship you would have had with your local banker -- someone you knew as an individual and saw in the local branch every week. You don't have that anymore.

We need to use some of the AI tools to have that knowledge and understanding of the client -- whoever in the company the customer is engaged with. Maybe it's somebody in the contact center. Maybe it's somebody in the branch who just started this week. Maybe it's the client using a self-service channel. We're still delivering that personal 'I understand you, I know you, I'm aware of what your goals and needs are' [message]. What we're trying to do is make this a human experience, but at scale.

What can last month's release of Pega Infinity 8.8 do for developers that they couldn't do before?

Schuerman: This latest release gives developers the ability to apply AI and its decision-making quite broadly across a process to figure out, 'Are there opportunities to drive efficiency? Can I predict that this process is going to miss its service-level [agreement]? Wait, we're going to be missing a deadline.' They have an AI model that predicts that and automatically escalates the process before they miss a deadline and have to either explain that to a customer or, worst case, pay regulatory fines.
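
Pega's model details aren't given here, but the pattern Schuerman describes -- score each open case for the probability of missing its service-level agreement and escalate before the deadline slips -- can be sketched as follows. The scoring function and thresholds are hypothetical stand-ins for a trained model:

    from datetime import datetime, timedelta

    SLA = timedelta(hours=48)
    RISK_THRESHOLD = 0.7

    def predicted_miss_probability(case):
        """Stand-in for a real model: risk grows as the deadline nears
        while required steps remain open."""
        time_left = (case["due_at"] - datetime.utcnow()) / SLA
        return min(1.0, case["open_steps"] / 5 * (1 - time_left))

    def triage(cases):
        for case in cases:
            p = predicted_miss_probability(case)
            if p >= RISK_THRESHOLD:
                # Escalate before a customer apology -- or a regulatory
                # fine -- becomes necessary.
                print(f"escalating case {case['id']} (predicted miss: {p:.0%})")

    triage([{"id": "C-1042",
             "due_at": datetime.utcnow() + timedelta(hours=4),
             "open_steps": 4}])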

What's next for Pega in the ethical AI realm?

Schuerman: We work with a lot of enterprises that are now in a world where they can't deploy one application for payment exceptions, because they need to keep their Swiss clients' data in Switzerland, their U.K. clients' data in the U.K. and their Singapore clients' data in Singapore. They must have distributed versions of that application. We architecturally support that, but what we're also thinking about is connecting those physically distributed applications. How do you connect that back into a holistic experience?
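
The interview doesn't describe how Pega connects those deployments; one generic pattern is a thin routing layer that sends each request to the client's home-region deployment and pulls back only the metadata a unified view needs. The endpoints and field names below are invented for illustration:

    # Hypothetical regional deployments; client data never leaves its region.
    REGIONAL_ENDPOINTS = {
        "CH": "https://payments.example.ch/api",
        "UK": "https://payments.example.uk/api",
        "SG": "https://payments.example.sg/api",
    }

    def endpoint_for(client):
        """Pick the deployment in the client's home region."""
        return REGIONAL_ENDPOINTS[client["residency_region"]]

    def case_status(client, case_id):
        # Only the status needed for the holistic view crosses regions;
        # the underlying records stay in their home deployment.
        url = f"{endpoint_for(client)}/cases/{case_id}/status"
        return {"client": client["id"], "status_url": url}

    print(case_status({"id": "acme-ch", "residency_region": "CH"}, "PX-88"))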

Editor's note: This Q&A has been edited for clarity and conciseness.
