
Evaluate the risks and benefits of AI in cybersecurity

Incorporating AI in cybersecurity can bolster organizations' defenses, but it's essential to consider risks such as cost, strain on resources and model bias before implementation.

With AI dominating many technology conversations, some organizations are pondering AI investments for cybersecurity to rapidly identify security risks and threats.

But AI has a range of downsides, including bias in decision-making, a lack of explainability and transparency in its recommendations, a shortage of AI skills, and the resource-intensive nature of the technology. These limitations mean that investing heavily in AI for cybersecurity -- especially generative AI such as large language models (LLMs) -- comes with real tradeoffs.

"If you define AI as true machine intelligence, resembling human intelligence, we're not close to that point," said Thomas P. Vartanian, executive director at the nonprofit Financial Technology and Cybersecurity Center. "So, from that perspective, AI is largely not in use."

However, use of LLMs trained on vast amounts of data is growing rapidly, Vartanian said. To determine what role AI should play in their cybersecurity strategy, IT leaders should weigh the potential benefits and drawbacks.

Benefits of AI in cybersecurity

AI can be a valuable tool for strengthening an organization's cyberdefense posture. AI's potential benefits in cybersecurity include the following:

  • Detecting, analyzing and responding to security threats faster than traditional security tools.
  • Understanding an organization's networks and systems.
  • Analyzing large amounts of data to detect unusual activity.
  • Suggesting options to address discovered vulnerabilities.

Perhaps the top benefit of AI in cybersecurity is that it levels the playing field against attackers. Hackers and other bad actors typically have the most cutting-edge tools at their disposal, and organizations need comparable capabilities to mount an effective defense and keep pace with ever-changing threats.

"Adversaries have been using artificial intelligence tools for some time," said Larry Clinton, president of the cybersecurity trade association Internet Security Alliance. "If you're not doing that, then you are really at great risk of being subjected to sophisticated attacks -- more sophisticated than you may even imagine."

AI-driven security can help an organization move toward a more proactive, forward-looking risk posture, he said. AI tools can quickly and efficiently evaluate potential threats and recommend response options. Machine learning algorithms can also adjust their own behavior over time, enabling better vulnerability management, more secure authentication and stronger defenses against malicious bots.

"AI may be very helpful in determining how an adversary might attack you and which attempt is most likely to hit you," Clinton said. "So, the AI can certainly help with that kind of data analysis and forecasting."

An underappreciated benefit of AI is its ability to assess vulnerabilities in hybrid and remote work environments, Vartanian said. Organizations' networks have expanded substantially as more people work from home, which also creates new security vulnerabilities. AI can help organizations keep up with the security demands of a remote workforce, he said.

Drawbacks of AI in cybersecurity

Among the top drawbacks of investing in AI for cybersecurity is the expense of AI adoption efforts. With LLMs, for example, acquiring applications, integrating them into an organization's IT systems, and then monitoring and maintaining them will be a very expensive process, Vartanian said.

Another significant drawback is that, at least for the foreseeable future, AI will be resource intensive. In addition to the underlying infrastructure, AI security models require extensive and diverse training data, as well as personnel who understand how to operate and maintain those models and software programs.

"This is something to be watched carefully by CIOs, because we have a lack of enough people who really understand the technology," Vartanian said.

Without sufficient data, AI systems can produce incorrect monitoring results and false positives. And these risks can have real consequences for organizations.

"Companies are being damaged by AI systems that are formally trained on bad data," Vartanian said. "That can lead to bias in the output, and bias in the output often leads to lawsuits and reputational harm. But most fundamentally, it can just lead to the wrong answer."

Because of these concerns, CIOs must consider AI in light of their organization's cybersecurity strategy, the anticipated costs and the potential rewards before implementation.

Current uses of AI in cybersecurity

Many organizations are currently in the experimental stage when it comes to AI, exploring its potential use cases for cybersecurity.

One sector that has benefited from investments in AI is banking and financial services, where one promising use case is detecting money laundering, Vartanian said. It's impossible for a human to manually monitor millions of transactions for money laundering every day while simultaneously considering all applicable rules.

But with AI models, organizations can identify previously undetectable patterns and monitor for suspicious behavior at scale. "[Machines] track trends and suspicious activity in ways that human beings cannot begin to do," Vartanian said.
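As a simplified illustration of monitoring at this scale, the sketch below flags transactions for review using two toy heuristics. The column names, dollar thresholds and "structuring" rule are assumptions made for this example; a production anti-money-laundering model would learn such patterns from data rather than rely on fixed rules.

```python
# Illustrative sketch only: flagging transactions for AML review.
# Thresholds, column names and rules are assumptions for the example,
# not any institution's actual monitoring logic.
import pandas as pd

transactions = pd.DataFrame({
    "account": ["A", "A", "A", "B", "C", "C"],
    "amount":  [9500, 9400, 9800, 120, 4000, 52000],
    "hour":    [23, 23, 2, 14, 10, 3],
})

# Rule 1: repeated deposits just under a $10,000 reporting threshold ("structuring")
near_threshold = transactions["amount"].between(9000, 9999)
structuring_accounts = (
    transactions[near_threshold].groupby("account").size().loc[lambda s: s >= 3].index
)

# Rule 2: unusually large transfers made overnight
large_overnight = (transactions["amount"] > 50000) & transactions["hour"].isin([0, 1, 2, 3, 4])

transactions["flag"] = transactions["account"].isin(structuring_accounts) | large_overnight
print(transactions[transactions["flag"]])
```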

Overall, organizations' investments in AI for cybersecurity are largely driven by two opposing impulses, Clinton said. The first is the fear of losing out, which pushes organizations to invest in new technologies so as not to be left behind by their competitors -- regardless of whether the organization is ready to adopt that technology. The second is caution about the potential unanticipated consequences of implementing AI.

"Organizations are being pretty cautious about using AI in their cybersecurity because the unknowns are so enormous," Clinton said. "And the people who do cybersecurity tend to be a fairly cautious lot."

Moving forward, Clinton expects to see maturation with respect to the uses of AI tools in cybersecurity, especially LLMs. "ChatGPT, for example -- which is everywhere now -- is not very good for making decisions, but it's really good for generating options," he said.

As AI platforms evolve, their decision-making capabilities will need to improve. For now, organizations can start by weighing AI's potential benefits for cybersecurity efforts against its overall business impact.

"Don't jump the gun," Clinton said. "Blend this into your business plan. Do it carefully. Do it thoroughly. It's not a magic bullet."
