
AI Security Risks Force CIOs to Rethink Strategy

In this Q&A, Michael Spisak of Unit 42, Palo Alto Networks, explains the cybersecurity risks and opportunities that enterprises now face with the rapid rise of AI.

From data leakage to global disruptions and instability, cybersecurity is at the forefront of discussions in the enterprise right now. The rapid rise of agentic and other forms of AI has only intensified the spotlight on security issues. 

AI is seen as both a serious security risk and a technology that may revolutionize enterprise cybersecurity. Anthropic's security frontier model Mythos exemplifies this, as it has been released only to a select number of users due to its potential danger. For example, frontier models like Mythos could dramatically lower the barrier and increase the speed for hackers to find and exploit enterprise vulnerabilities.

AI has introduced or accelerated enterprise security risks, but it can also help combat them, according to Michael Spisak, managing director of cybersecurity R&D for Unit 42 at Palo Alto Networks.

Unit 42 is a division of Palo Alto Networks that provides cybersecurity services for the company and its customers. This includes proactive security, like threat assessments; reactive security, like incident response; and managed services, like managed security operations centers.

Informa TechTarget spoke with Spisak about the current opportunities and challenges around enterprise AI security at the MIT Technology Review EmTech AI conference, held in Cambridge, Mass., from April 21-23.

Editor's note: The following transcript was edited for length and clarity.

What has changed in the past few years in cybersecurity with the rise of AI in the enterprise?

Michael Spisak: The past few years have been an interesting time from a cybersecurity perspective because it seems to move at light speed. When we're dealing with today's challenges like AI and quantum, I think back to client-server, cloud and mobile and some of those adoptions, and I compare what's happening today with what happened then. I say, 'Let's not repeat the sins of the past and kick the can down the road.'

Cloud is a great example where organizations [said] 'that's not us, we're not there, we're not going to be there,' 'the cloud's not secure,' 'we don't understand it,' or 'we're on-premises and have the castle-and-moat security structure.' Then, all of a sudden, they were there, but probably didn't realize it because their people were using the cloud internally. It's similar now with AI, because it's too powerful and too valuable to push away. A few years ago, I had companies say, 'We're not going to [use] AI,' and I told them they were going to change their tune. It's far too powerful and has too many benefits. [Companies] that put up the wall and blocked it found their employees pulling out their own devices and circumventing security controls to get to it. So, blocking it was not the strategy.

What are some specific threats around AI that should be top of mind for organizations?

Spisak: One of the first things is the data, because when it comes to AI, data is the center of the universe. The more data you give AI across a variety of contexts, the better it will perform toward whatever objective you set for it. But there's a lot of concern about crown-jewel type data leaking out to unsanctioned AI.

Can this happen without a threat actor going in and getting it?

Spisak: Correct, it's like a non-malicious intent. Ultimately, your employees' intentions are well-meaning. They recognize that velocity is king, and they need time-to-market as they try to serve their customers. They think that they need to get this done, so let's use AI. Very often, they'll find their own unsanctioned tool that's not been vetted for use within their enterprise, and they'll inadvertently leak data to it.

Who should be responsible for securing these tools? The tool vendors or the enterprise?

Spisak: Third-party SaaS AI providers need to be secure, so you should understand their security posture and work with them on security. Then you need to understand your own data security posture. You need to understand what's public, what's restricted, what's confidential and what's top secret. For example, many organizations feel that code is top secret. You need to have your data classification sorted out so you can build a grid of classification and data type examples, then specify what types of AI vendors you're going to allow your data to interact with.
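As a rough illustration of the grid Spisak describes, the Python sketch below maps classification levels to the tiers of AI vendors allowed to touch data at each level. The level names, vendor tiers and policy are hypothetical examples, not recommendations for any specific product.

```python
# Hypothetical data classification grid: each classification level maps to
# the tiers of AI vendors allowed to process data at that level. Level
# names, tiers and the policy itself are illustrative examples only.

ALLOWED_AI_TIERS = {
    "public":       {"consumer", "enterprise_saas", "private_hosted"},
    "restricted":   {"enterprise_saas", "private_hosted"},
    "confidential": {"private_hosted"},
    "top_secret":   set(),  # e.g., source code: no external AI at all
}

def is_permitted(classification: str, vendor_tier: str) -> bool:
    """Return True if data of this classification may go to this vendor tier."""
    return vendor_tier in ALLOWED_AI_TIERS.get(classification, set())

# Confidential data may only go to a privately hosted model.
assert is_permitted("confidential", "private_hosted")
assert not is_permitted("top_secret", "enterprise_saas")
```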

What about what's going on with the war and global instability?

Spisak: That ranks very high on the risk level. It's very worrisome. From a supply chain perspective, a single attack can have a domino effect on hundreds or thousands of organizations and many layers of code in between. We've been saying this for a long time, but everyone needs to take inventory and rigorously understand a software bill of materials -- where you're using open source and where you're using third parties -- so you can identify and contain breaches when they happen.
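To make that inventory concrete, here is a minimal Python sketch that walks a CycloneDX-format SBOM (JSON) and cross-references its components against a list of known-vulnerable versions. The file name and the vulnerable-package set are placeholders, not real data.

```python
# Illustrative sketch: inventory the components of a CycloneDX-format SBOM
# (JSON) and flag any that match a known-vulnerable list. The file name and
# the vulnerable-package set below are placeholders.
import json

def inventory_components(sbom_path: str) -> list[tuple[str, str]]:
    """Return (name, version) pairs for every component listed in the SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return [(c.get("name", "?"), c.get("version", "?"))
            for c in sbom.get("components", [])]

def flag_vulnerable(components, vulnerable):
    """Cross-reference the inventory against known-bad (name, version) pairs."""
    return [c for c in components if c in vulnerable]

# Usage (hypothetical path and version):
# components = inventory_components("sbom.cdx.json")
# hits = flag_vulnerable(components, {("log4j-core", "2.14.1")})
```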

Does AI have some specific characteristics that make it more of a security threat or is it just that the speed of threats is increasing?

Spisak: It's a little bit of both. Things have been accelerating for some time, and we've seen activities that suggest adversaries have been using AI for a long time. AI doesn't necessarily show up in a log file, but because of the acceleration and scale of attacks we've seen unfold over the years, everything suggests AI has been involved with adversaries for a while. Fast forward to a few weeks ago. We now have frontier models -- new models that basically have the capabilities to execute tasks autonomously at the expert level. A cybersecurity frontier model can very quickly identify software vulnerabilities in systems. For example, there's Anthropic's Mythos model, and OpenAI has one accessible through its trusted advisor program, and so on. These capabilities have really changed the game, and that comes down to vulnerability identification and exploitation at scale. Because of that risk, you've got to bring AI to this fight.

How?

Spisak: All CISOs need to start thinking like an adversary, re-evaluate how they're prioritizing what they're finding in their environment, and bring AI to that fight. How do you do that? Use it to triage all the alerts, and use it to literally attack yourself the way an adversary would: find all these vulnerabilities, and then quickly understand how to prioritize them for the world we're living in today.
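To sketch what AI-assisted alert triage might look like in practice, here is a minimal Python example. The llm_complete function is a placeholder for whatever model client an organization uses -- not a real library call -- and the severity buckets are illustrative.

```python
# Minimal triage sketch: ask a language model to bucket each alert by
# severity, so humans review the critical findings first. `llm_complete`
# is a placeholder for an organization's own model client, not a real API.

ALERT_PROMPT = (
    "You are a SOC analyst. Classify this alert as exactly one of: "
    "CRITICAL, HIGH, MEDIUM, LOW, FALSE_POSITIVE.\nAlert: {alert}"
)

def triage(alerts: list[str], llm_complete) -> dict[str, list[str]]:
    """Group alerts by the model's severity verdict."""
    buckets: dict[str, list[str]] = {}
    for alert in alerts:
        verdict = llm_complete(ALERT_PROMPT.format(alert=alert)).strip().upper()
        buckets.setdefault(verdict, []).append(alert)
    return buckets

# Critical and high buckets go to analysts first; false positives are
# spot-checked rather than reviewed one by one.
```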

All CISOs need to start thinking like an adversary, re-evaluate how they're prioritizing things that they're finding in their environment and bring AI to that fight.
Michael Spisak, Managing Director of Cybersecurity R&D, Unit 42 at Palo Alto Networks

Anthropic's Mythos and OpenAI's frontier model have been limited in use because they're considered too dangerous. Are they really that dangerous?

Spisak: We've had early access to Mythos, where we've been exploring it, with lots of researchers assessing and evaluating it and trying to understand how to best use it and apply best practices. When people say, 'Is it really all that [dangerous]?' I'll say that in three weeks, it has done a year's worth of penetration testing. Another example is a critical vulnerability Mythos discovered -- a 27-year-old bug in OpenBSD. OpenBSD was widely considered hardened, battle-tested and secure, but here was a bug that had been sitting there, and Mythos uncovered it and exploited it. So, there's definitely a shift that's happening because of these AI frontier models.

How are the frontier models doing this?

Spisak: There's a lot of "magic" under the hood of how these models work -- stuff that's not visible to us. There's a lot of training, tuning and neural network activity happening in the way these models operate. With Mythos in particular, Anthropic had been focused on coding models for a long time, and as a side effect of writing great code, the model was also able to detect vulnerabilities in that code. The outcome is that, because models move at machine speed, they can find vulnerabilities faster than humans can, which is the shift we're seeing.

Do you have advice for CIOs and CISOs to deal with today's threats?

Spisak: The first is to self-assess. You need to look within and think like an attacker, evaluate yourself with AI, and become ruthless in inventorying your open source, third-party apps and internal applications, and assess their exposure on the outside and inside. The second is posture. Start to rethink how you measure the risk of your vulnerability findings, and then posture or remediate them in a methodical way. Finally, from a security operations perspective, you need to do this with a platform. At Palo Alto Networks, we call it platformization. What happens is, in organizations you'll find 19 different security tools -- one for data, one for identity, one for network and so on -- what people call best-of-breed point solutions. The problem is they leave gaps in between them, so our advice is always to unify your security operations in a platform that can bring all these pieces together; the stuff that's hiding in those cracks will inevitably get you.
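One way to rethink how risk is measured, as Spisak suggests, is to weight a finding's base severity by its real-world exposure. The sketch below is a hypothetical scoring scheme with made-up weights, not a standard formula or a Palo Alto Networks method.

```python
# Hypothetical exposure-weighted scoring: rank findings by base severity
# (e.g., CVSS) adjusted for internet exposure and known exploits. The
# weights are made up for illustration, not a standard formula.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss: float             # 0.0-10.0 base severity score
    internet_facing: bool
    exploit_available: bool

def risk_score(f: Finding) -> float:
    score = f.cvss
    if f.internet_facing:
        score *= 1.5        # external exposure raises priority
    if f.exploit_available:
        score *= 2.0        # a known exploit raises it further
    return score

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Highest-risk findings first."""
    return sorted(findings, key=risk_score, reverse=True)
```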

Jim O'Donnell is a news director for TechTarget, where he covers IT strategy and enterprise ESG.
