
Data crucial for finance companies using AI

Organizations in credit, banking and insurance must pay attention to data to build explainable models. This Q&A with an AI vendor CEO explores data, governance and explainability.

NEW YORK -- The use of AI spans almost every area of finance. From customer experience applications in banking to fraud detection in credit, financial organizations are leaning heavily on AI tools.

One important aspect of AI in finance is making the technology explainable. Organizations must ensure that their models are unbiased and can be easily explained in light of increasing compliance and regulatory oversight of AI and machine learning technology.

During the Ai4 2022 Finance Summit on March 1, H2O.ai CEO Sri Ambati spoke to attendees about explainable AI, AI governance and data governance.

H2O.ai is an AI cloud vendor that develops an open source machine learning platform for enterprises. Recently, the vendor introduced a new deep learning tool.

In this Q&A with TechTarget at the conference, Ambati dives deeper into some of those topics, zeroing in on the impact of AI and data in finance.

You have said 'every company needs to be an AI company' and 'every company needs to be a FinTech company.' Why is that?

Sri Ambati: Every company needs to be an AI company because AI is transforming all software. The real superpower of being an AI company is to make new data. For example, Amazon sells books, and the new data they created was the reviews of those books. The reviews they had were better than anyone else's. As a result, they could predict the future. Being an AI company, you can create new data that no one else has. Hence, you have the first entry into your market.

Sri Ambati, CEO and cofounder of H2O.ai, giving a presentation at the Ai4 2022 Finance Summit.

Now the second piece is that everyone needs to be a fintech company. Many of my customers who are not in banking are in areas like fast-moving consumer goods and telecommunications. They have data about their consumers, customers and consumer behavior. With 'buy now, pay later' and all these new, digital and even crypto ways of doing banking, people have deconstructed the traditional banks. These retailers and telcos can use a better signal to predict who is creditworthy. These predictions are based on purchase choices that don't necessarily go through the traditional banking system. That's why this disintermediation is happening. Everyone who has data could essentially become a banking company.

What trends are you noticing about the use of AI in finance?

Ambati: The biggest one we're seeing is that AI is no longer just good to have; it's become a must-have. AI has increasingly become a service within our most mature customers.

You can essentially learn from how data is changing, and as the data drifts, you must rebuild the models. Traditionally, you would be promoting models every six or 12 months. But with COVID, the economy has sustained more frequent … events than before. This means that one needs to stress test the models more frequently and update them just as frequently. So stress testing and validating AI models become more important.
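Ambati didn't describe specific tooling for this, but a minimal sketch of the drift check he alludes to might compare each feature's recent distribution against the training baseline and flag the model for stress testing when they diverge. The test choice, threshold and function names below are illustrative assumptions, not H2O.ai's implementation.

```python
# A minimal sketch of data-drift monitoring (illustrative, not H2O.ai's code):
# compare each feature's recent distribution against the training baseline
# and flag the model for stress testing or retraining when drift appears.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(baseline: np.ndarray, recent: np.ndarray,
                 feature_names: list[str], p_threshold: float = 0.01) -> list[str]:
    """Return the names of features whose recent data has drifted."""
    drifted = []
    for i, name in enumerate(feature_names):
        # Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
        # recent feature distribution no longer matches the training data.
        _, p_value = ks_2samp(baseline[:, i], recent[:, i])
        if p_value < p_threshold:
            drifted.append(name)
    return drifted

# Hypothetical usage: revalidate or retrain when any feature has drifted.
# if detect_drift(train_X, last_month_X, feature_names):
#     rerun_stress_tests_and_retrain()
```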


How will explainable AI affect the finance industry in the coming years?

Ambati: Finance is a very regulated space. If the data is biased, then the models come out biased as well. As a result, we need to stress test those models, find their blind spots, and understand and deconstruct a deep learning model or a model that is difficult to interpret. Explainable models -- which are kind of post hoc explanations of models -- could also be fraught with danger, because then you're trying to create a way to avoid classic disparities [involving race or gender].
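To make the idea of a post hoc explanation concrete: one common technique is permutation importance, which probes a trained, hard-to-interpret model from the outside. The sketch below assumes scikit-learn and synthetic data purely for illustration; it is not the method or tooling Ambati describes.

```python
# A sketch of one common post hoc explanation technique (permutation
# importance) on a model that is not directly interpretable on its own.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Post hoc explanation: shuffle each feature and measure how much the
# held-out score degrades. Large drops point at the features the model
# leans on, which are the places to check for proxies for race or gender.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")
```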

One of the biggest dangers we face today in the finance industry is a lot of the folks who have a deep understanding in the domain are about to retire or leave the industry itself.

That means that a lot of the institutional knowledge about good lending practices could be lost.

So we can essentially capture that learning, create a new set of domain experts to replace the ones who are leaving the industry, and build models and features around how they would solve these problems.

On the compliance side of the house, we're seeing regulators who are more open to using machine learning and AI models than they were before. And that's been a positive sign. There's a concerted body of best practices across Discover Card, Wells Fargo, Capital One and Citi that other banks will be able to reuse as well.

One of the things we've been working on is creating model validation methodologies that allow you to repeat benchmark tests or create automatic validation for underwriting models, the models that are used for lending.
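Ambati didn't spell out the methodology, but a repeatable validation check for a lending model could be as simple as a scripted gate that every new model version must pass. The thresholds below (an AUC floor and the "four-fifths" adverse-impact rule of thumb) and the function name are illustrative assumptions, not H2O.ai's criteria.

```python
# A simplified, hypothetical validation gate for an underwriting model:
# re-runnable against every candidate model, failing it if accuracy or
# fairness thresholds are violated. Thresholds here are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

def validate_underwriting_model(y_true, y_score, approved, group):
    """Return pass/fail checks for one candidate lending model."""
    y_true, approved, group = map(np.asarray, (y_true, approved, group))
    auc = roc_auc_score(y_true, y_score)

    # Adverse-impact ratio: approval rate of the protected group divided by
    # the approval rate of the reference group (the "four-fifths" rule).
    rate_protected = approved[group == 1].mean()
    rate_reference = approved[group == 0].mean()
    impact_ratio = rate_protected / rate_reference

    return {
        "auc_ok": auc >= 0.70,
        "adverse_impact_ok": impact_ratio >= 0.80,
        "auc": auc,
        "adverse_impact_ratio": impact_ratio,
    }
```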

How do explainable AI and governance intersect?

Ambati: When the models are wrong, debugging them and figuring out where or how they went wrong is easier with interpretable models. Companies must build a very rich repertoire of understanding the cause, not just the correlation.

With AI, you can find [the bias] down to the actual feature that led to the model going wrong, or the math that led it to go wrong, so you can try and fix it. But of course, the problem with AI is you can apply it very fast across a billion people, and that will lead to very odd outcomes. This is the reason why governance and brakes are important guardrails, so that the models don't fall off a cliff.

How do you keep models in check? One, you could put in a lot of rules, right? But of course, rules are brittle. You keep models in check by using models. Use AI to govern AI.
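One way to read "use AI to govern AI" is a secondary model that watches the primary model's inputs and scores and holds back decisions that look anomalous, rather than relying on brittle hand-written rules. The sketch below is one illustrative pattern under that assumption; the class name and detector choice are hypothetical and not H2O.ai's governance product.

```python
# A sketch of models governing models: train an anomaly detector on the
# primary model's normal inputs and scores, then route anomalous-looking
# decisions to human review instead of letting them go out automatically.
import numpy as np
from sklearn.ensemble import IsolationForest

class ModelGovernor:
    def __init__(self, reference_inputs: np.ndarray, reference_scores: np.ndarray):
        # Learn what "normal" input/score combinations look like.
        reference = np.column_stack([reference_inputs, reference_scores])
        self.detector = IsolationForest(random_state=0).fit(reference)

    def review(self, inputs: np.ndarray, scores: np.ndarray) -> np.ndarray:
        """Return True for decisions that should go to human review."""
        candidates = np.column_stack([inputs, scores])
        # IsolationForest.predict returns -1 for anomalies, 1 for inliers.
        return self.detector.predict(candidates) == -1
```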

Editor's note: This interview has been edited for clarity and conciseness.
