Just before the holidays, I had a conversation with Idaptive to learn about the next phase of their user behavior analytics strategy. Essentially, they’re taking the machine learning capabilities used to authenticate users, and applying them to authorization decisions.
These capabilities are part of Idaptive’s “Next-Gen Access Cloud” IDaaS platform. To understand how this all works, let’s first step back and look at how machine learning has entered the EUC space.
Machine learning for authentication
Over the last few years, while the entire tech space has been getting excited about machine learning and artificial intelligence, in the EUC space, we’ve seen it most commonly used for authentication.
These products collect all that data around user authentication activity—including things like time of day, location, network, device, and what application a user is accessing—and use it to build a model. Then, when subsequent authentication activity occurs, all of the same data points can be compared to the model. The degree to which the user’s activity matches their model can be used as a signal to inform other actions, e.g., let the user have access right away, send them an MFA challenge, block access, or notify an administrator.
The classic example is travel. If I’m logging on to an app every morning from my office in San Francisco, then the system can be confident that it’s me. But if I travel out of the country and log in from a new location in the middle of the night, then the system can ask for an MFA challenge to make sure it really is me, and not someone who stole my password. And of course, it can get much more nuanced than this, spotting patterns that are less obvious to us humans.
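To make that risk-signal flow concrete, here’s a minimal sketch in Python. This is my own illustration, not Idaptive’s implementation: a real product would use a trained ML model over many more signals, whereas this uses a hand-weighted mismatch score, and all the names, weights, and thresholds are hypothetical.

```python
# Hypothetical risk-based authentication sketch. Weights and thresholds
# are illustrative; a real system would learn these from behavior data.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    hour: int        # local hour of day, 0-23
    country: str
    device_id: str
    network: str

@dataclass
class UserBaseline:
    usual_hours: range   # e.g. range(7, 19) for a typical workday
    usual_country: str
    known_devices: set
    known_networks: set

def risk_score(attempt: LoginAttempt, baseline: UserBaseline) -> float:
    """Return a 0.0-1.0 risk score; each mismatched signal adds weight."""
    score = 0.0
    if attempt.hour not in baseline.usual_hours:
        score += 0.2
    if attempt.country != baseline.usual_country:
        score += 0.4
    if attempt.device_id not in baseline.known_devices:
        score += 0.3
    if attempt.network not in baseline.known_networks:
        score += 0.1
    return score

def decide(score: float) -> str:
    """Map the score to one of the actions described above."""
    if score < 0.3:
        return "allow"
    if score < 0.75:
        return "mfa_challenge"
    return "block_and_notify"
```

With this sketch, the morning login from the office scores 0.0 and is allowed, while the overseas middle-of-the-night login on a hotel network scores high enough to trigger the MFA challenge.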
These processes can use some form of analytics, machine learning, or artificial intelligence. I’m not a data scientist or an AI/ML expert, so I can’t talk about the specifics. But the key is that advances in these fields have been trickling down to all sorts of products in the last few years.
Regardless of the types of algorithms under the hood, all this is interesting to EUC in a few ways. First, there’s been a huge amount of hype around AI/ML, and it’s natural to wonder how new technologies will affect us in EUC. Second, authentication products based on AI/ML help address the huge security issues around lost, stolen, and weak passwords, since they help us spot malicious activity. Third, this offers a better employee experience, since users don’t get asked for MFA as often.
I’m assuming that by now, most of us are familiar with everything I’ve talked about so far. Here at BrianMadden.com, I wrote about how identity and access management products are getting smarter back in 2016, a time when EUC and identity were just starting to get closer. Now, plenty of products use AI/ML for authentication. What’s the next step?
Idaptive and the next step
Now back to Idaptive. One of the first AI/ML for authentication products we covered was Centrify Analytics, back in 2017. Since then, Centrify split into two companies, with the new Idaptive providing identity as a service.
Now, Idaptive is taking machine learning beyond just “is this user who they say they are” and applying it to “is this user accessing resources they’re supposed to.” In other words, they want to apply it to authorization, in addition to authentication.
My first thought was: wait, don’t we already have role-based access control for this? And what about attribute-based access control? But of course, like many IT problems, it’s not that simple. When you have thousands of users of all different types, hundreds of apps, and different roles within those apps, managing all the rules about who can access what gets very complicated. It’s all too easy for users to end up with access to apps and data they probably shouldn’t have, because putting all of the granular permissions in place would take way too much time.
Like authentication, authorization is another area where you can look at typical activity, use it to form a model or a baseline, and then watch out for subsequent activity that looks out of place based on historical behavior. For example, I usually access TechTarget’s content management system and editorial folders in our file sync and share platform. If I were to suddenly start accessing the system that my colleagues use to collect sales leads, that activity would be flagged. This is a straightforward example that role-based controls would hopefully prevent, but as we all know, with tons of apps, users, and roles, covering every single situation manually is next to impossible. AI/ML could catch things that we just wouldn’t be able to spot on our own.
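The baselining idea above can be sketched in a few lines. Again, this is my own hypothetical illustration, not how any vendor actually implements it: a real engine would model access frequency, peer groups, and timing, not just set membership.

```python
# Hypothetical authorization baseline: learn which resources each user
# touches, then flag first-time access to anything outside that set.
from collections import defaultdict

class AccessBaseline:
    def __init__(self):
        # user -> set of resources this user has accessed before
        self._seen = defaultdict(set)

    def record(self, user: str, resource: str) -> None:
        """Add an observed (presumably legitimate) access to the baseline."""
        self._seen[user].add(resource)

    def is_anomalous(self, user: str, resource: str) -> bool:
        """Flag access to any resource the user has never touched before."""
        return resource not in self._seen[user]
```

In my example: after recording my usual CMS and editorial-folder activity, a sudden request for the sales-leads system would come back as anomalous, even if role-based rules never anticipated it.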
This is what Idaptive is building into their products. I spent some time talking to Archit Lohokare, their chief product officer, and reading through their public-facing product documentation.
Idaptive has taken the user behavior analytics engine it used for authentication and is now bringing in data from more sources. They have sensors that can integrate with various data sources, including log files, syslog events, Windows Event Logs, app logs, network logs, and Palo Alto Networks’ Cortex Data Lake. Idaptive can take the data from these sources and map it back to their machine learning model. From there, they can determine the riskiness of user activity, and then take actions in Idaptive’s SSO, MFA, and Device Security Management products. Idaptive also has security workflow orchestration capabilities, so you can do things like surface events via notifications or webhooks.
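The ingest-score-act pipeline described above might look something like this sketch. To be clear, this is a guess at the general shape, not Idaptive’s actual architecture; the event schema, the scoring model, and the notify callback are all illustrative stand-ins.

```python
# Hypothetical sketch of an ingest -> score -> act pipeline.
from typing import Callable, Dict

def normalize(raw_event: Dict) -> Dict:
    """Map events from different sensors (syslog, app logs, network
    logs, etc.) into one schema the risk model can consume."""
    return {
        "user": raw_event.get("user", "unknown"),
        "resource": raw_event.get("resource", ""),
        "source": raw_event.get("source", "syslog"),
    }

def handle_event(raw_event: Dict,
                 model: Callable[[Dict], float],
                 notify: Callable[[Dict], None]) -> str:
    """Score a normalized event and route it to an action. High-risk
    events also fire a notification, mirroring the orchestration step
    (e.g., a POST to a security team's webhook)."""
    event = normalize(raw_event)
    risk = model(event)  # 0.0 (benign) .. 1.0 (high risk)
    if risk >= 0.7:
        notify(event)
        return "step_up_or_block"
    return "allow"
```

The design point this sketch tries to capture is the decoupling: sensors only have to produce events, the model only has to score them, and actions (MFA step-up, blocking, notifications) hang off the score.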
It’s easy to see how this concept could help with spotting unusual access requests. As a reminder, we’re talking about users who have already been authenticated, so it could be helpful for spotting insider threats, handling app access requests, designing user roles, and meeting compliance requirements. Another thing to note is that you could take the data model for one user and use it as the basis for a model for a new user in the same role. Idaptive even talked about industry-wide organizational models, for example, typical models for user roles in healthcare and finance.
Obviously, you would want to roll out these policies in a very conservative way, defaulting to tighter access controls, with admins checking up on alerts. But, even just having something watching out for anomalous activity could be helpful.
As you can see, this is a newer topic for us to be thinking about, and is part of a much broader definition of end user computing. I have all sorts of questions about how this gets deployed, how different types of policies get built, how different companies use it, and how easy it is to get up and running.
There’s also a balancing act. Some systems can follow every single user action once they’re in an app, but they’re proprietary to a single vendor. Others are broader, but require more work to integrate.
Anyway, it’s a good time to take a look at where AI/ML/analytics can provide more help in providing and securing end user access to resources.