For consumers who have started using generative AI tools such as ChatGPT in large numbers, a big part of the allure of the new technology is its usability.
However, many enterprises must use such generative AI tools cautiously to avoid regulatory and compliance problems.
"There are certain administrative functions where it might make sense to have generative AI involved, and then there are certain ones that involve more sensitive decision-making, more discriminatory decision-making ... where you might want to be more conscious," said Regina Sam Penti, partner at the Ropes & Gray law firm, during a streamed panel discussion at the MIT Technology Review's EmTech Digital 2023 conference on May 2.
Highly regulated industries, such as finance and healthcare, have long had to exercise restraint when using AI tools and technologies.
Organizations that use AI technology have also been dealing in recent years with stepped-up regulatory activity by the Federal Trade Commission (FTC), which has started to look for violations of laws such as the Fair Credit Reporting Act and the Equal Credit Opportunity Act through the improper use of AI.
While most existing laws and regulations don't explicitly mention AI bias or other problems, the FTC has targeted some organizations that have used biased or unexplainable algorithms in handling consumer credit, employment, housing and insurance.
Challenge for regulated industries
Considering the tighter regulatory environment, it's not surprising that enterprises in highly regulated industries have been slow to incorporate generative AI tools such as ChatGPT from Microsoft partner OpenAI or Google's Bard into their systems.
However, even healthcare providers and financial firms that deal with massive volumes of sensitive data can't ignore the excitement about and popularity of these large language models (LLMs).
For instance, ChatGPT has outpaced other AI technologies in adoption and has already reached more than 100 million active users. While some enterprises might be tempted to incorporate the technology right away, faster is not always better, given the technology's potential problems.
Organizations in highly regulated industries should consider using the tools in controlled environments so that they're able to mitigate risk better, Penti said.
Still, there are many problems with generative AI that keep specific industries from implementing some form of it right away.
For example, the models tend to be nondeterministic and will likely produce different results for the same question or query, which can create risks in the decision-making process. The models are also updated only sporadically; ChatGPT's training data, for instance, still extends only to 2021.
Moreover, generative AI systems are often wrong and can spit out false information. Many users have tried to manage those problems with prompt engineering -- carefully tuning the questions or data fed into the models -- but attackers are already misusing the same technique. Finally, cost is a factor not only for highly regulated industries, but for all enterprises and organizations.
For example, ChatGPT is available through Microsoft Azure, so organizations considering it might have to integrate with Azure, which can be costly.
For big multinational financial services companies such as JPMorgan Chase & Co., the world's largest bank, the approach is to slowly implement LLMs in areas where the risk is not too high.
"Our approach is very much 'crawl, walk, run,'" said Brian Maher, head of product, firmwide AI and machine learning (ML) platforms at JPMorgan Chase, during the same EmTech panel discussion.
"We're 100% in crawl, and maybe not even crawling yet," he said.
The finance giant looks for applications in which LLMs pose low risk and have minimal impact on customers and the firm. It also weighs whether the data used with an LLM is low risk or is internal, sensitive data, Maher said.
"We have a safe learning environment," he said. "We can't just open this up as a generally available tool because we don't know enough about it."
To deal with the problems of using generative AI, JPMorgan Chase takes a slow and paced approach, Maher said. Meanwhile, the firm closely watches the technology.
"Every usage of this technology in our firm has to be highly registered," Maher said. "It has to be monitored."
This monitoring means keeping a human in the loop who constantly provides feedback on whether the model is working.
Such high-level monitoring is needed because of how generative AI models differ from traditional AI models. The risks of generative AI models stem not from how they're built, but from how they're used, Maher said.
And there's little visibility into how the models work, which could potentially lead to problems for firms such as JPMorgan Chase from regulators such as the FTC.
"The model monitoring that we're required to do regularly, I assume is going to be amped up quite a bit more," Maher said. "There's so many unknowns because it's really hard to explain."
One example of upcoming regulatory change: the U.S. Securities and Exchange Commission (SEC) is reviewing comments on a proposed rule that would prohibit investment advisers from outsourcing certain services and functions without conducting due diligence on and monitoring of the service providers.
"Even though it doesn't necessarily name AI, it's right there in the mix," Penti said. "These are some of the most specific rules that I have seen ... the SEC come up with in terms of requiring asset managers and investment advisers to have specific terms in their contract as to assurances that your providers are going to provide the services that are necessary for you to meet your client demands."
For those in finance, dealing with such rules and regulations means taking a cautious approach.
For JPMorgan Chase, another precaution is aiming to ensure its AI models are explainable, Maher said.
"It is our responsibility at JPMorgan Chase, as a financial provider, to be transparent -- with all of our regulators, all of our stakeholders, all of our shareholders, all of our customers -- around how we are doing this," he said.
The two-day conference was held in person and virtually.
Esther Ajao is a news writer covering artificial intelligence software and systems.