
As AI identity management takes shape, are enterprises ready?

Experts at the Identiverse 2018 conference discussed how artificial intelligence and machine learning are poised to reshape the identity and access management market.

BOSTON -- Enterprises may soon find themselves replacing their usernames and passwords with algorithms.

At the Identiverse 2018 conference last month, a chorus of vendors, infosec experts and keynote speakers discussed how machine learning and artificial intelligence are changing the identity and access management (IAM) space. Specifically, IAM professionals promoted the concept of AI identity management, in which vulnerable password systems are replaced by systems that rely on biometrics and behavioral security to authenticate users. The argument goes that humans won't be capable of effectively analyzing the growing number of authentication factors, which can include everything from login times and download activity to mouse movements and keystroke patterns.

Sarah Squire, senior technical architect at Ping Identity, believes that use of machine learning and AI for authentication and identity management will only increase. "There's so much behavioral data that we'll need AI to help look at all of the authentication factors," she told SearchSecurity, adding that such technology is likely more secure than relying solely on traditional password systems.
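To make that concrete, a behavioral authentication layer ultimately has to reduce many weak signals to a single confidence score. The sketch below illustrates that idea in Python; the factor names, weights and cutoff are illustrative assumptions for this article, not Ping's or any other vendor's actual model.

```python
# Illustrative sketch only: combine several behavioral authentication factors
# into one confidence score. Factor names, weights and the threshold are
# assumptions for the example, not any vendor's actual model.

from dataclasses import dataclass

@dataclass
class BehavioralSignals:
    login_hour_typical: float      # 0.0-1.0, how closely the login time matches the user's history
    keystroke_similarity: float    # 0.0-1.0, similarity of typing cadence to the stored profile
    mouse_similarity: float        # 0.0-1.0, similarity of pointer movement patterns
    download_volume_normal: float  # 0.0-1.0, how normal recent download activity looks

def authentication_confidence(signals: BehavioralSignals) -> float:
    """Weighted average of behavioral factors; higher means more likely the genuine user."""
    weights = {
        "login_hour_typical": 0.2,
        "keystroke_similarity": 0.35,
        "mouse_similarity": 0.25,
        "download_volume_normal": 0.2,
    }
    return sum(getattr(signals, name) * w for name, w in weights.items())

if __name__ == "__main__":
    session = BehavioralSignals(0.9, 0.8, 0.7, 0.95)
    score = authentication_confidence(session)
    # A production system would learn these weights from data; 0.6 is an arbitrary cutoff here.
    print(f"confidence={score:.2f}", "allow" if score >= 0.6 else "step-up authentication")
```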

During his Identiverse keynote, Andrew McAfee, principal research scientist at the Massachusetts Institute of Technology, discussed how technology, and AI in particular, is changing the rules of business and replacing executive "gut decisions" with data-intensive predictions and determinations. "As we rewrite the business playbook, we need to keep in mind that machines are now demonstrating excellent judgment over and over and over," he said.

AI identity management in practice

Some vendors have already deployed AI and machine learning for IAM. For example, cybersecurity startup Elastic Beam, which was acquired by Ping last month, uses AI-driven analysis to monitor API activity and potentially block APIs if malicious activity is detected. Bernard Harguindeguy, founder of Elastic Beam and Ping's new senior vice president of intelligence, said AI is uniquely suited for API security because there are simply too many APIs, too many connections and too wide an array of activity for human admins to monitor and keep up with.
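One common way to automate that kind of monitoring is to compare each API's current traffic against its own learned baseline and flag sharp deviations. The following sketch shows that pattern in simplified form; it is a generic statistical example under assumed thresholds, not a description of Elastic Beam's product.

```python
# Hypothetical sketch of rate-based API anomaly detection, in the spirit of the
# AI-driven API monitoring described above. The baseline statistics and the
# z-score threshold are illustrative assumptions.

from collections import defaultdict
from statistics import mean, pstdev

class ApiActivityMonitor:
    def __init__(self, z_threshold: float = 3.0):
        self.history = defaultdict(list)   # api_name -> list of calls-per-minute samples
        self.z_threshold = z_threshold

    def record(self, api_name: str, calls_per_minute: int) -> None:
        self.history[api_name].append(calls_per_minute)

    def is_anomalous(self, api_name: str, calls_per_minute: int) -> bool:
        """Flag traffic that deviates sharply from the API's own baseline."""
        samples = self.history[api_name]
        if len(samples) < 10:              # not enough history to judge
            return False
        mu, sigma = mean(samples), pstdev(samples)
        if sigma == 0:
            return calls_per_minute != mu
        return abs(calls_per_minute - mu) / sigma > self.z_threshold

monitor = ApiActivityMonitor()
for minute in range(30):
    monitor.record("/v1/accounts", 95 + (minute % 11))   # typical traffic hovers around 100/min
print(monitor.is_anomalous("/v1/accounts", 105))   # False: within normal variation
print(monitor.is_anomalous("/v1/accounts", 2000))  # True: likely abuse, candidate for blocking
```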

There are other applications for AI identity management and access control. Andras Cser, vice president and principal analyst for security and risk professionals at Forrester Research, said he sees several ways machine learning and AI are being used in the IAM space. For example, privileged identity management can use algorithms to analyze activity and usage patterns to ensure the individuals using the privileged accounts aren't malicious actors.

"You're looking at things like, how has a system administrator been doing X, Y and Z, and why? If this admin has been using these three things and suddenly he's looking at 15 other things, then why does he need that?" Cser said.

In addition, Cser said machine learning and AI can be used for conditional access and authorization. "Adaptive or risk-based authorization tends to depend on machine learning to a great degree," he said. "For example, we see that you have access to these 10 resources, but you need to be in your office during normal business hours to access them. Or if you've been misusing these resources across these three applications, then it will ratchet back your entitlements at least temporarily and grant you read-only access or require manager approval."
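Stripped to its essentials, that adaptive-authorization logic is a policy decision driven by contextual risk signals. The following sketch captures the pattern Cser describes, with illustrative signal names and rules rather than any specific product's policy engine:

```python
# A hedged sketch of the adaptive-authorization pattern described above:
# full access only from the office during business hours, downgraded to
# read-only or manager approval otherwise. Field names and rules are illustrative.

from datetime import datetime

def decide_access(network: str, when: datetime, recent_misuse: bool) -> str:
    """Return an entitlement level based on contextual risk signals."""
    in_office = network == "corporate-lan"
    business_hours = 9 <= when.hour < 17 and when.weekday() < 5
    if recent_misuse:
        return "read-only pending manager approval"   # ratchet entitlements back temporarily
    if in_office and business_hours:
        return "full access"
    return "read-only"

print(decide_access("corporate-lan", datetime(2018, 7, 10, 11, 0), recent_misuse=False))  # full access
print(decide_access("home-wifi", datetime(2018, 7, 10, 23, 0), recent_misuse=False))      # read-only
```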

Algorithms are being used not just for managing identities but for creating them as well. During his Identiverse keynote, Jonathan Zittrain, George Bemis Professor of International Law at Harvard Law School, discussed how companies are using data to create "derived identities" of consumers and users. "Artificial intelligence is playing a role in this in a way that maybe it wasn't just a few years ago," he said.

Zittrain said he had a "vague sense of unease" about machine learning being used to target individuals via their derived identities and market suggested products to them. We don't know what data is being used, he said, but we know there is a lot of it, and the identities that are created aren't always accurate. Zittrain joked that, while in England a while ago, he looked at the Lego Creator activity book on Amazon, which was offered up as the "perfect partner" to a book called American Jihad. Other times, he said, the technology creates anxieties when people discover the derived identities are too accurate.

"You realize the way these machine learning technologies work is by really being effective at finding correlations where our own instincts would tell us none exist," Zittrain said. "And yet, they can look over every rock to find one."

Potential issues with AI identity management

Experts say allowing AI systems to automatically authenticate or block users, applications and APIs with no human oversight comes with some risk, as algorithms are never 100% accurate. Squire said there could be a trial-and-error period, but added that there are ways to mitigate those errors. For example, she said AI identity management shouldn't treat all applications and systems the same and suggested assigning a risk level to each resource or asset that requires authentication.

"It depends on what the user is doing," Squire said. "If you're doing something that has a low risk score, then you don't need to automatically block access to it. But if something has a high risk score, and the authentication factors don't meet the requirement, then it can automatically block access."

Squire said she doesn't expect AI identity management to remove the need for human infosec professionals. In fact, it may require even more. "Using AI is going to allow us to do our jobs in a smarter way," she said. "We'll still need humans in the loop to tell the AI to shut up and provide context for the authentication data."

Cser said the success of AI-driven identity management and access control will depend on a few critical factors. "The quality and reliability of the algorithms are important," he said. "How is the model governed? There's always a model governance aspect. There should be some kind of mathematically defensible, formalized governance method to ensure you're not creating regression."

Explainability is also important, he said. Vendor technology should have some type of "explanation artifacts" that clarify why access has been granted or rejected, what factors were used, how those factors were weighted and other vital details about the process. If IAM systems or services don't have those artifacts, then they risk becoming black boxes that human infosec professionals can't manage or trust.
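Such an explanation artifact could be as simple as a structured record emitted with every decision. The sketch below shows one possible shape for it; the fields and format are assumptions, not a standard or any vendor's implementation:

```python
# Illustrative sketch of the "explanation artifacts" Cser calls for: alongside
# every allow/deny decision, emit a record of the factors considered and how
# they were weighted. The structure and field names are assumptions.

import json
from datetime import datetime, timezone

def explain_decision(decision: str, factors: dict[str, float], weights: dict[str, float]) -> str:
    """Build a human- and audit-readable explanation of an access decision."""
    artifact = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "factors": factors,    # raw factor scores that fed the model
        "weights": weights,    # how much each factor contributed
        "weighted_score": round(sum(factors[f] * weights[f] for f in factors), 3),
    }
    return json.dumps(artifact, indent=2)

print(explain_decision(
    "deny",
    factors={"keystroke_similarity": 0.4, "login_hour_typical": 0.2, "device_known": 0.0},
    weights={"keystroke_similarity": 0.5, "login_hour_typical": 0.3, "device_known": 0.2},
))
```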

Regardless of potential risks, experts at Identiverse generally agreed that machine learning and AI are proving their effectiveness and expect an increasing amount of work to be delegated to them. "The optimal, smart division of labor between what we do -- minds -- and [what] machines do is shifting very, very quickly," McAfee said during his keynote. "Very often it's shifting in the direction of the machines. That doesn't mean that all of us have nothing left to offer, that's not the case at all. It does mean that we'd better re-examine some of our fundamental assumptions about what we're better at than the machines because of the judgment and the other capabilities that the machines are demonstrating now."
