
Enterprises not focused enough on AI governance, security

AI adoption has reached a plateau, with many organizations using similar AI and machine learning tools. But enterprises have been slow to put effective AI ethics and cybersecurity practices in place.

The AI industry must enter its next stage of adoption to avoid the dreaded AI winter.

An AI winter is a recurring fallow period for the technology, during which innovation and R&D slow down or stall.

In its annual "AI Adoption in the Enterprise" report, released March 30, tech education publisher O'Reilly Media found that enterprise adoption of AI tools and technologies hasn't changed much over the last few years. This is the fifth year O'Reilly has published the report.

Industries such as IT and financial services are still the most likely to adopt AI tools and technologies, while the government and education sectors are still largely evaluating AI, according to O'Reilly. Among AI tools, organizations continue to favor open source machine learning frameworks such as TensorFlow and PyTorch, along with managed services such as AWS SageMaker.

But because enterprises' use of AI as part of their standard hardware and software stack has not grown dramatically, some observers believe the AI industry has, in a sense, stalled.

In this Q&A, Mike Loukides, one of the authors of this year's report and vice president of content strategy at O'Reilly Media, said enterprises' attitudes toward AI governance, ethics and security need to change to push AI adoption to its next stage.

What does the lack of significant progress in enterprises' adoption of AI indicate?

Mike Loukides: There is a sense that we're at a crossroads. AI is in a somewhat dangerous state: Five years from now, are we going to find out that we built a lot of AI systems that really aren't what we want?


A couple of things that disturbed me were that practical interest in ethics was just the same as it was a year before and, relatively speaking, not that high on the list of people's concerns.

Even more concerning, after everything we saw last year with security and safety, we're just dead on the same as we were last year. Whatever we should have learned about security after a really bad year with ransomware and all sorts of other attacks, the AI community doesn't appear to be learning it. So that, I think, is a big problem.

Is there a disconnect between what we're hearing about AI governance and what enterprises are actually doing?

Loukides: It's really important for people to understand responsible AI, responsible computing in general. I don't know if the message is getting across there.

What might spur enterprises to take AI governance and ethics more seriously?

Loukides: I think what will cause a change is that we're increasingly seeing regulation like GDPR and the California Consumer Privacy Act. Those regulations will force change.

The problem with regulation is that it's often badly thought out. One of the goals of GDPR was to make websites less intrusive by limiting the use of cookies. Now, all that happens is that whenever you go to a website, you get an extra popup to click to tell the site it can use cookies.

Regulation is important, but, particularly with technology, it's often done poorly.

What are some of the security problems with AI for enterprises?

Loukides: I think the biggest security problems around AI are going to come up when you start seeing ... more data poisoning attacks. For example, if a company creates a chatbot to do customer service, you are almost certainly going to have some class of people come along who think it's fun to see if they can get it to become racist or misogynistic.

We don't have good ways of getting a handle on that. One problem in the industry is that people are not terribly used to thinking about what can go wrong with this.
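
The attack pattern Loukides describes is straightforward to demonstrate. The sketch below is a minimal illustration, not anything from the report or the interview: it assumes scikit-learn is available, and every text, label and parameter in it is a toy placeholder. It shows how a moderation model that keeps learning from unvetted user feedback can be flipped by a coordinated batch of deliberately mislabeled submissions.

```python
# Minimal data-poisoning sketch (illustrative only; assumes scikit-learn).
# A toy abuse classifier retrains continuously on user feedback, and a
# coordinated group submits deliberately mislabeled examples to flip it.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**12)
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial, honestly labeled training data: 1 = abusive, 0 = acceptable.
clean_texts = ["you are worthless", "thanks for the help",
               "I hate you", "great service today"]
clean_labels = [1, 0, 1, 0]
model.partial_fit(vectorizer.transform(clean_texts), clean_labels,
                  classes=[0, 1])

# Poisoning phase: attackers flood the feedback channel, repeatedly
# submitting an abusive phrase tagged with the wrong label.
poison_texts = ["you are worthless"] * 50
poison_labels = [0] * 50  # deliberately wrong labels
model.partial_fit(vectorizer.transform(poison_texts), poison_labels)

# The model now misclassifies abusive input it once caught.
print(model.predict(vectorizer.transform(["you are worthless"])))  # likely [0]
```

The usual mitigations are to vet, rate-limit or weight feedback before it reaches retraining, and to keep a held-out evaluation set that flags this kind of drift before a poisoned model ships.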

A point the ethics community makes is that as long as development teams are primarily white and male, you're not going to get people who are sensitive to the issues that people actually face in the real world.

There's a big issue of cultural sensitivity. That tends not to happen if the development is done by ... the old white boys' network.

Editor's note: This interview has been edited for clarity and conciseness.
