Credit card giants step up AI fraud detection
While Capital One and Mastercard delve into AI and machine learning to detect credit card fraud, IBM joins the ranks of AI vendors with AI bias detection tools.
SAN FRANCISCO -- Youssef Lahrech, a senior vice president at Capital One, said his mother pays for things with a credit or debit card. She has one card, he said, but only uses it as a sort of last resort, to pay for items that can't be bought with cash.
To him, this isn't surprising. Years ago, a fraudster got into one of his mother's accounts and wiped it out. She lost all her savings, and her financial security, in an instant.
It hit his family hard, which is one of the reasons why he said he has concentrated on bringing AI fraud detection tools to Capital One.
"Fraud is a big business," Lahrech said, speaking at The AI Summit. "Thirty billion dollars of fraud losses will happen this year," and millions of Americans are yearly affected by fraud.
Successful fraudsters today take advantage of advanced technologies, using APIs and big data to help steal money, he said. But, as they have become more advanced, so have companies like Capital One.
The financial services giant uses a variety of behind-the-scenes machine learning (ML) and AI fraud detection tools to help identify potential fraud in milliseconds. The company's system is designed "so that the customer will provide us with data by reporting," Lahrech explained.
If a suspected fraudulent transaction is detected, a notification is sent to the customer's phone instantly. The customer can then mark whether the purchase was legitimate; if it wasn't, the card is immediately locked down.
By relying on input from millions of customers around the world, the system learns at scale which types of transactions are more likely to be fraudulent.
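The loop described above -- score a transaction, ping the customer, and fold the answer back in as labeled training data -- can be sketched roughly as follows. The feature names, threshold and scoring rules here are hypothetical illustrations, not Capital One's actual system.

```python
# Hypothetical sketch of a fraud-alert feedback loop: score each
# transaction, ask the customer when the score is high, and keep
# the customer's answer as a labeled example for later retraining.
from dataclasses import dataclass, field

@dataclass
class FraudScorer:
    threshold: float = 0.8
    labeled_examples: list = field(default_factory=list)

    def score(self, txn: dict) -> float:
        # Toy stand-in for a trained ML model: flag unusually large
        # amounts and purchases far from the customer's home country.
        score = 0.0
        if txn["amount"] > 1000:
            score += 0.5
        if txn["country"] != txn["home_country"]:
            score += 0.4
        return min(score, 1.0)

    def handle(self, txn: dict, confirm_with_customer) -> str:
        if self.score(txn) < self.threshold:
            return "approved"
        # Suspicious: push a notification and ask the customer directly.
        legitimate = confirm_with_customer(txn)
        # Either answer becomes a new labeled example for retraining.
        self.labeled_examples.append((txn, legitimate))
        return "approved" if legitimate else "card_locked"

scorer = FraudScorer()
txn = {"amount": 2500, "country": "FR", "home_country": "US"}
print(scorer.handle(txn, confirm_with_customer=lambda t: False))  # card_locked
```

In a real system the `score` method would be a trained model and the confirmed labels would feed a periodic retraining pipeline; the structure of the loop is what matters here.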
Taking on fraud with bots
The system, however, is scalable, and is able to learn in more personalized ways as well, Lahrech said. Recently, the company rolled out Eno, a chatbot-style virtual assistant that can do things like break down a customer's charges or alert a customer to fraud through a conversational user interface.
The AI fraud detection tool uses natural language processing (NLP) to accept a wide range of interactive responses from customers when it reports fraud, which Lahrech said can help Capital One build more context around a potential fraud case. One example is "friendly fraud," in which a consumer tries to defraud a credit card company and the issues can be complex.
With ML technology, those answers from customers can be used to give more personalized responses, and also establish a profile based on pattern recognition.
Such a profile, which could include information on how a customer usually writes or fills out a form, "lets us create a better layer of protection," Lahrech noted, and could ultimately be used to find out if a scammer or a bot is impersonating them.
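A profile of this kind can be sketched as a statistical summary of how a customer normally behaves, against which new sessions are compared. The feature used below (the mean delay between keystrokes) and the tolerance are illustrative assumptions, not Capital One's actual signals.

```python
# Hypothetical sketch of a behavioral profile check: compare the typing
# rhythm of a new session against a customer's stored profile to guess
# whether a bot is impersonating them.
from statistics import mean, stdev

def build_profile(sessions: list) -> dict:
    """Summarize a customer's historical inter-keystroke delays (seconds)."""
    delays = [d for session in sessions for d in session]
    return {"mean": mean(delays), "stdev": stdev(delays)}

def looks_like_owner(profile: dict, session: list, z_max: float = 3.0) -> bool:
    """Flag sessions whose average delay falls far outside the profile."""
    z = abs(mean(session) - profile["mean"]) / profile["stdev"]
    return z <= z_max

history = [[0.18, 0.22, 0.25], [0.20, 0.24, 0.19]]
profile = build_profile(history)

human_session = [0.21, 0.23, 0.19]   # rhythm matches the stored profile
bot_session = [0.01, 0.01, 0.01]     # suspiciously fast and uniform
print(looks_like_owner(profile, human_session))  # True
print(looks_like_owner(profile, bot_session))    # False
```

A production system would combine many such signals (mouse movement, form-fill order, device fingerprints) rather than a single feature, but the compare-against-profile structure is the same.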
These ML and AI fraud detection tools have been successful at Capital One, significantly boosting the company's ability to automatically and accurately flag fraudulent activity, Lahrech said.
"ML is changing the world, one service at a time," he said. "Your banking will go from something you pull out of your pocket when you need to do it to a virtual personal assistant."
New IBM tool helps users spot AI bias
At The AI Summit, IBM introduced a new tool that can help organizations recognize and mitigate biases in an AI-powered system. IBM's new Fairness 360 Kit will enable organizations to scan their AI models for AI bias and make tailored recommendations on how to fix detected biases.
"Models are by definition bias, at least in their initial creation," said Bill Lobig, vice president of engineering for Watson data and AI at IBM, in an interview at the conference. The models are based on sample data, he said, and are created by humans. And humans, by nature, are inherently biased.
With the Fairness 360 Kit, organizations get greater visibility into factors that influence the outcome of their models, Lobig said.
It can be hard for organizations to detect bias in their systems because many such systems operate inside a black box, with their operators understanding them solely through their inputs and outputs.
This technology, Lobig said, can almost be thought of as glasses for such systems. A person might not know that they have inherently blurred eyesight until they slip on a pair of glasses, which instantly and automatically correct the vision.
IBM is still working on the automatic part, which Lobig said is in research. But the Fairness 360 Kit, with input from data scientists, will enable an organization to gain insights into biases, as well as ways to help eliminate them, he said.
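One common metric such bias-scanning tools report is disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. The sketch below illustrates that metric in a few lines; it is not IBM's Fairness 360 Kit, and the groups and outcomes are invented for illustration.

```python
# Illustrative sketch (not IBM's Fairness 360 Kit) of one common bias
# check: disparate impact, the ratio of favorable-outcome rates between
# an unprivileged and a privileged group. Values far below 1.0 suggest
# the model disadvantages the unprivileged group.
def favorable_rate(outcomes: list) -> float:
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged: list, privileged: list) -> float:
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# 1 = model granted the favorable outcome (e.g., loan approved), 0 = denied.
group_a = [1, 0, 0, 0, 1, 0, 0, 0]   # unprivileged group: 25% approved
group_b = [1, 1, 1, 0, 1, 1, 0, 1]   # privileged group:   75% approved

di = disparate_impact(group_a, group_b)
print(f"disparate impact: {di:.2f}")  # 0.33
```

A widely used rule of thumb (the "four-fifths rule") treats a ratio below 0.8 as evidence of adverse impact, so a model producing the numbers above would warrant investigation.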
Recently, Google and Microsoft introduced similar products. Lobig said those appear to be more "on the data science and build and train side, while we're more focused on the application, integration, insight side."
Meanwhile, in a separate talk at The AI Summit, Rahul Deshpande, senior vice president of digital strategy integration at Mastercard, spoke about some of the AI fraud detection tools in place at Mastercard.
Like at Capital One, ML technologies at Mastercard have enabled the company to create more advanced ways to flag fraudulent behavior. The credit card company does this by creating pattern-based profiles for consumers, which Deshpande said can help with identifying a real user versus an advanced bot trying to imitate that user.
Relying on data gleaned from ML and AI fraud detection tools is important, he said, because after so many hacks of so many different companies, and with so much personal data in the hands of criminals, "traditional data sources that we have are probably not good, are compromised."
The AI Summit, held Sept. 19 to 20, drew hundreds of companies that use ML and AI.