
Businesses confront reality of generative AI in finance

As large language models move from pilot projects to full-scale deployment in finance, the industry is facing a mixture of compliance and technological challenges in 2024.

NEW YORK -- Last year saw mounting enthusiasm and hype around generative AI as companies and individuals began experimenting with emerging tools like ChatGPT and GitHub Copilot.

But in 2024, enterprise AI initiatives are shifting from small-scale pilots to real-world deployments, and organizations are grappling with the challenging realities of adoption.

At the AI in Finance Summit New York 2024 this week, speakers and attendees expressed a mixture of excitement and trepidation about the rapid pace of generative AI development.

John Chan, director of technology, AI/ML, at Raymond James, compared generative AI to the advent of technologies like mobile and cloud. "This is a type of technology wave that we have experienced before," he said in his presentation. "But the only difference this time, I believe, is that it's really much faster."

While those technologies tended to have a distinct set of early adopters, generative AI appears to have much broader appeal across demographics, said Sarah Hoffman, vice president of AI and machine learning research at Fidelity, in a Q&A following her presentation. That widespread transition from experimentation to practical application is now tempering the initial excitement about generative AI in the financial sector.

Generative AI's applications in financial services

Practitioners cited wide-ranging use cases for generative AI in finance and banking: chatbots and virtual assistants, fraud detection and prevention, credit risk assessment, personalized marketing, investment management, and document analysis and processing.

Of these applications, Chan said, text and code generation is currently the most common -- a broad category encompassing everything from email drafts to SQL queries to synthetic data.
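To make that pattern concrete, a text-to-SQL workflow can be as simple as wrapping a user's question with schema context and constraining the output format. The sketch below is illustrative only: llm() is a placeholder standing in for any hosted model API, and the transactions schema is hypothetical.

```python
# Hypothetical text-to-SQL sketch. llm() stands in for any hosted
# model API; the schema and canned response are illustrative only.

SCHEMA = """
CREATE TABLE transactions (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER,
    amount NUMERIC,
    posted_at TIMESTAMP
);
"""

def llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a model endpoint.
    return "SELECT customer_id, SUM(amount) FROM transactions GROUP BY customer_id;"

def generate_sql(question: str) -> str:
    # Ground the model in the schema and constrain the output format.
    prompt = (
        f"Given this schema:\n{SCHEMA}\n"
        f"Write a single SQL query answering: {question}\n"
        "Return only the SQL, with no explanation."
    )
    return llm(prompt)

print(generate_sql("What is each customer's total transaction volume?"))
```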

Text summarization and analysis is also highly popular. Financial services companies often find themselves with massive volumes of data, such as customer transactions, financial reports and regulatory filings. But much of this information is unstructured and isolated in different departments, making it difficult for organizations to use effectively.

"In large companies, it's really, really hard to share knowledge across the company, and it's hard to collaborate," Hoffman said. Internal generative AI tools could help users by summarizing information about company structure, goals and projects in other business units, she said.

A technique called retrieval-augmented generation -- a popular topic throughout the conference and in the AI industry -- could be particularly useful here. RAG links generative models with databases or document repositories, enabling the model to find documents relevant to a user's query and use that data to construct a better-informed response.

While prompt engineering aims to improve how questions are asked, and fine-tuning deepens a large language model's domain knowledge through additional training, RAG focuses on improving accuracy and access to knowledge. It also requires less effort from users than prompt engineering, and it can be a cheaper, more model-agnostic alternative to fine-tuning.
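Conceptually, the retrieval step RAG adds is simple: score stored documents against the query and prepend the best matches to the prompt. The self-contained sketch below illustrates the shape of the pipeline; production systems would use embedding models and a vector database rather than the toy word-overlap scorer used here.

```python
# Minimal RAG sketch: retrieve the documents most relevant to a query,
# then build a prompt that grounds the model in that retrieved text.
# Real systems use embeddings and a vector database; the word-overlap
# scorer below is a deliberately simple stand-in.

DOCUMENTS = [
    "Q3 revenue rose 4% on higher trading volumes.",
    "The compliance team updated its model-risk policy in March.",
    "Retail deposits declined slightly quarter over quarter.",
]

def score(query: str, doc: str) -> int:
    # Toy relevance metric: count words shared by query and document.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {query}"

print(build_prompt("How did revenue change in Q3?"))
```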

The challenges of moving from pilots to production

To date, financial services firms' generative AI initiatives have largely taken the form of internal proof-of-concept or pilot projects.

Among his startup's finance and banking customers, "I would say 95% or even more are still very internal-facing," said Sahil Agarwal, CEO and co-founder of Enkrypt AI, a generative AI security platform vendor, in an interview with TechTarget Editorial. "Only a handful have gone external-facing."

This is partly because the stakes of external projects are higher. While an internal misstep might be frustrating in terms of wasted time and resources, an external failure could be publicly embarrassing and costly -- particularly in a heavily regulated, high-stakes industry like finance.

Multiple presenters and attendees mentioned the February incident in which Air Canada was ordered to compensate a customer who received misleading information from the company's chatbot. Agarwal also referenced New York City's AI-powered chatbot, which an investigation recently found regularly advises businesses to break the law. "That's where you realize it's still not ready for production," he said.

In addition to these reputational and compliance risks, building production-scale generative AI involves practical challenges, such as collecting and cleaning data as well as acquiring necessary technical talent and compute infrastructure. In many cases, Agarwal said, there's intense pressure from executives and leadership to implement generative AI, but those actually responsible for that implementation are struggling.

"Everybody wants to work on an AI project, but for executives, how do you prioritize?" said Brennan Lodge, co-founder and CEO of cybersecurity startup BLodgic, in a presentation. He cited the difficulty of determining which projects are viable, revenue-generating and realistic, considering organizational resources.

Understanding the limitations of generative AI

Experts also stressed the importance of recognizing generative AI's limits. Generative AI has a notable tendency to invent false or misleading responses, known as hallucinations -- still the technology's biggest problem, Chan said. And techniques like RAG, though helpful, aren't a cure-all.

"What people think is, RAG-based systems will not hallucinate," Agarwal said. "They're trying to constrain [the model] with respect to the vector database or the set of documents ... but the technology is such that it can still hallucinate. It can still make up or mix up answers across documents. And that becomes a real challenge for anyone trying to put these things in production."

Even with guardrails in place, generative AI tools can still produce output that is biased against marginalized groups or otherwise harmful -- for example, dangerous or explicit. "Even when this technology doesn't hallucinate, it's getting its information from the internet, and that might not have your values," Hoffman said.
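In code, the last line of defense often looks like an output filter run before a response reaches a user. The pattern list below is purely illustrative -- real deployments pair checks like this with trained safety classifiers, since a static blocklist alone catches very little.

```python
# Sketch of an output guardrail: block or escalate responses matching
# known-bad patterns before they reach a customer. The patterns are
# illustrative; static lists are no substitute for safety classifiers.

import re

BLOCKED_PATTERNS = [
    r"\bguaranteed returns\b",      # impermissible financial promise
    r"\bsocial security number\b",  # potential PII exposure
]

def check_output(text: str) -> tuple[bool, list[str]]:
    hits = [p for p in BLOCKED_PATTERNS if re.search(p, text, re.IGNORECASE)]
    return (len(hits) == 0, hits)

ok, hits = check_output("This fund offers guaranteed returns of 12%.")
print(ok, hits)  # False, with the matching pattern listed
```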

Generative AI can also introduce security and compliance problems.

Lodge pointed out that generative AI systems introduce two intellectual property (IP) concerns. First, using a generative AI tool involves exposing organizational IP to a third party. Second, the systems themselves can generate content that infringes on others' IP -- for example, an AI coding tool like GitHub Copilot reproducing snippets of proprietary code.


This could have expensive consequences down the line as the regulatory landscape around generative AI solidifies. "When I talk to security and compliance officers at these large financial organizations, their main concern -- their only concern, actually -- is, is the auditor or the regulator going to fine me or not?" Agarwal said. "They are taking a very cautious approach. They need to, irrespective of any technology."

As organizations move into real-world deployments, managing these risks will be crucial. For businesses, that means in-depth preparation before embarking on an AI initiative, keeping a human in the loop in any generative AI deployment and carefully choosing the right use cases.
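What a human in the loop looks like varies by deployment, but a minimal sketch -- assuming a confidence score from an upstream check like the groundedness test above -- is a gate that auto-sends only high-confidence drafts and queues the rest for review.

```python
# Human-in-the-loop gate: auto-send only high-confidence drafts; route
# everything else to a review queue for a person to approve or edit.
# The confidence score is assumed to come from an upstream check.

from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # 0.0-1.0, from an upstream verification step

review_queue: list[Draft] = []

def dispatch(draft: Draft, threshold: float = 0.9) -> str:
    if draft.confidence >= threshold:
        return f"SENT: {draft.text}"
    review_queue.append(draft)
    return "QUEUED FOR HUMAN REVIEW"

print(dispatch(Draft("Your statement is ready.", confidence=0.95)))
print(dispatch(Draft("You may qualify for a rate cut.", confidence=0.6)))
```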

"This technology -- we can use it as a tool, but it's far from being superhuman," Chan said. "AI cannot solve every problem."

Lev Craig covers AI and machine learning as the site editor for TechTarget Editorial's Enterprise AI site. Craig graduated from Harvard University with a bachelor's degree in English and has previously written about enterprise IT, software development and cybersecurity.
