I recently talked about the use of AI in the enterprise with Rob Clyde, vice chairman of the board of directors at ISACA, an organization focused on IT governance. The conversation centered on a new report about the threats AI poses to society and the possible remedies it floats, which range from building security into AI chips to delaying publication of new AI technology until it can be vetted for security vulnerabilities.
The 26 authors wanted the 101-page report to act as a conversation starter and to motivate policymakers, academics and corporations to think not just about the benefits of AI but also about the drawbacks that come with its use.
Clyde did just that, providing SearchCIO with a cogent analysis of the report as well as a number of his own suggestions for protecting ourselves against malicious AI, from beefing up training in AI skills to figuring out how to audit AI applications. Not everything Clyde and I talked about made it into the published articles, which you can find here and here. Below is a snapshot of his views on the validity of the report.
On what the report is missing, Rob Clyde said:
“This issue of self-training — where an AI learns from itself. I don’t know if you know this, but [AlphaGo Zero] has turned the master Go players’ games upside down. The [masters] have learned new ways to play Go [from Google’s program] after a thousand years or more of human play. For the first time ever, we have radically new ways to play and win the game of Go. And it came from AlphaGo Zero playing itself.”
On whether the report left out any important voices, Rob Clyde said:
“This is a good group. It’s an impressive group. I always look at something like this and I don’t ask who was left out but were enough people included that you start to say there’s a good chance this had some balance to it. And I do feel like that.”
On whether the report’s focus on available technologies or technologies that are plausible in the next five years was too narrow, Rob Clyde said:
“It is the right timeframe because we are in the exponential part of the curve. So I’m all for that. … This is one of the greatest times for AI. And unlike many others of the so-called golden periods in computer science, I’m thrilled that at this point — we’re at the beginning of that exponential curve of very rapid growth — that we’re having conversations like this to ensure that we’re at least putting some thought and questions around the use of AI, allowing the community to rally around it and have a better chance of protecting ourselves as we go into the future.”