
Companies ill-equipped to combat malicious AI

Companies are on the precipice of being attacked by malicious AI but lack the skills and tools needed to put up a basic defense, according to ISACA's Rob Clyde.

What happens when AI gets into the hands of nation-states and cyberhackers? For Rob Clyde, vice chairman of the board of directors at the IT governance organization ISACA, the rapid and recent progress made by artificial intelligence hardware and software is tempered by questions like this one.

In an interview, Clyde shared his thoughts on the recently published "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," a paper by 26 academic, industry and policy experts that raises the alarm on malicious AI. Although he believes the paper does a good job of framing the issues related to malicious AI, he worries that companies lack the skills and tools needed to deal with what he considers an imminent threat.

In part one of this two-part interview, Clyde discusses the importance of AI audits and why self-training algorithms, such as AlphaGo Zero, the successor to Google's AlphaGo and now the reigning Go champion, offer proof that companies need to face the dangers of AI -- right now.

Editor's note: This interview has been edited for clarity.

What did you take from this malicious AI report?


Rob Clyde: There are a couple of things that jumped out at me and bothered me for a while. By the way, I'm a big fan of AI. In the long run, I am an optimist, and while we may go through some pain in the short term, eventually, we will figure things out.

With that as a backdrop, there are some issues that bothered me. For example, at ISACA, we have many who have been trained to do IT audits. In other words, to apply a certain level of assurance that our systems are doing what we think they should be doing and are not easily hackable or vulnerable.

I don't think many people know how to audit an AI [application]. If you ask the question, 'How do we know the AI was trained properly?' I don't think we have the tools and techniques to determine that, and it is a serious issue that we have to look at.

Secondly, I don't think people are aware of how fast AI is progressing. It is now in the exponential part of the curve -- its abilities are growing exponentially, not linearly. When I was in school, the idea of writing a program [that] could beat a grandmaster at Go was a pipe dream. It was the ultimate goal of AI, the holy grail. AlphaGo achieved that in 2016, after being trained on enormous numbers of games.

Here's the amazing part: The next year, a new version called AlphaGo Zero learned entirely by playing itself, beat AlphaGo after just three days of training, then went on to beat more grandmasters and is now the reigning champion.

You said we don't have the tools or techniques to perform AI audits. What, exactly, is missing?

Clyde: First of all, skills. Most people who are in the business of doing audits are not that familiar with how AI works and the nature of AI. So, for example, take machine learning in an antifraud system that learns what fraud is and what it isn't -- how do you know that it has not been maliciously trained?

And then, tools. Many times, those doing an audit make use of tools the company already has in place. If the company has the usual suite of systems, one of those tools is likely to be some kind of policy compliance tool that includes vulnerability scanning, and one of the key parts of the report will be that [the policy compliance tool] is running regularly. Well, what's the name of the tool that checks to see whether somebody maliciously trained an AI? We don't have that.

How will that change?


Clyde: My gut is, if you think about it, while the future of hacking is likely to be [driven by] AI, I also think the future of cybersecurity and IT audit is likely to be AI. So, the answer is probably also the problem. It's using AI as a way to help ensure that AI is OK.

And if that scares people, I want them to just think for a second. The answer to dealing with vulnerable software is more software that checks for vulnerable software. So, it is not unusual to use the technology itself -- in this case, AI -- as a way to help solve the problem.

And, in many cases, the automated tools outperform humans.

Clyde: Far better. That is something most people are not aware of.

Of course, you also have to contend with AI's other side -- such as adversarial examples that can alter an image in ways invisible to the human eye so that an AI fails to see something that's really there.

Clyde: Yes. It's one thing when AI tells us things. It gets scarier when AI is doing something like driving a car and it gets it wrong. That's one aspect of being maliciously trained, and I also think there are concerns about systems that are basically gaming against themselves. This is a common technique in AI.

I mentioned AlphaGo Zero, which beat everything else and never saw a human play. It just knew the rules and learned by playing itself. So, imagine, for example, that in the future we had a traffic system that was able to teach itself, and its goal was to produce the best possible traffic flow for a city. As it monitored traffic, it noticed that when people made a mistake -- ran a red light and caused a crash -- traffic changed because, guess what, there were fewer cars on the road. So it comes to the conclusion that causing lots of accidents is the solution to traffic: I will do that tomorrow.

The idea is irrational.

Clyde: A human being would never do that, but an AI might. And this is where the researchers get into the right area. At the end of their report, they mentioned that, just like we've had to use best practices in coding, we can do a similar thing with AI if we build ethics into it. It almost goes back to Isaac Asimov's Three Laws of Robotics. If you imagine that the traffic system's No. 1 rule is don't harm humans, now it's got some ethics built in, and we might feel a little more comfortable with AI being able to self-learn.
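To make the failure mode Clyde describes concrete, here is a minimal, hypothetical sketch of the traffic system's reward function. It is not from the interview or the report; the function names and numbers are illustrative assumptions. It shows how a naively specified objective can score an accident as an improvement, while an overriding "don't harm humans" penalty, in the spirit of the Asimov-style rule Clyde mentions, rules that out.

```python
# Hypothetical illustration of reward misspecification in a self-learning
# traffic system. All names and values are assumptions for this sketch.

def naive_reward(cars_on_road: int, accidents: int) -> float:
    """Rewards lighter traffic only. Because accidents remove cars from the
    road, they can raise this score -- the failure mode described above."""
    return -cars_on_road

def constrained_reward(cars_on_road: int, accidents: int) -> float:
    """Adds an overriding penalty for harming humans, so an accident can
    never look like an improvement to the learner."""
    return -cars_on_road - 1_000_000 * accidents

# A crash that thins out traffic scores *better* under the naive reward
# and far worse under the constrained one.
print(naive_reward(cars_on_road=100, accidents=0))       # -100
print(naive_reward(cars_on_road=80, accidents=1))        # -80  (looks "better")
print(constrained_reward(cars_on_road=80, accidents=1))  # -1000080 (correctly worse)
```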

Editor's note: In part two of this Q&A, Clyde analyzes the report's advice on what companies can do to thwart AI-driven cyberattacks, including embedding security into hardware and delaying publication of AI breakthroughs.
