Generative AI in SecOps and how to prepare

Generative AI assistants could be game changers in the SOC -- but not if SecOps teams haven't prepared for them. Here's how to get ready.

It is easy to imagine how broad-function generative AI could give overworked and understaffed security operations teams vital assistance in managing fast-changing cybersecurity environments.

After all, systems such as ChatGPT can already write passable code. Within a few years, they should be ready to update security policies, script ad hoc system integrations and otherwise support SecOps staff in their work. They might even take the place of security staff members an organization can't find or afford, assuming the software proves less expensive than equivalent staffing.

To be sure, some thorny problems remain -- for example, the fact that some AI systems make up information, including fake citations from real or invented sources. No one wants a security assistant that lies about the presence or absence of a threat or about whether it has addressed a confirmed threat.

Setting aside the question of when ChatGPT and similar generative AI tools will truly be ready for widespread cybersecurity use, other questions warrant consideration in anticipation of that day. They include the following.

Is your staff ready to use generative AI in SecOps?

In an ideal world, generative AI tools would be so effective and user-friendly that SecOps staff would require no special training to fold them into existing security operations. Unsurprisingly, however, no cybersecurity professional should expect these systems to reach such heights in the short term.

No one can yet know what new expertise cybersecurity practitioners will need to use generative AI for SecOps purposes. At a minimum, though, staff should be well practiced in the following:

  • Crafting clear and unambiguous requests. The quality of an AI assistant's output depends significantly on the quality of the human operator's input. Effective prompt engineering maximizes the odds that the software returns helpful responses (see the sketch after this list).
  • Reviewing generative AI output carefully and critically. Don't believe everything you read -- even if it comes from enterprise-grade generative AI. SecOps practitioners need the critical thinking skills to look for errors and inconsistencies in AI output and to apply the output appropriately.
  • Accounting for the limitations of AI-powered assistants. SecOps teams need to consider when and how complex, multistep and multibranch processes might exceed generative AI tools' reasoning capabilities -- with potentially disastrous results.
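
To make that first point concrete, here is a minimal Python sketch of how a team might assemble a scoped, unambiguous triage request rather than a vague one-liner. The alert ID, hostname, indicators and the missing send step are all hypothetical placeholders, not any particular vendor's API.

# A minimal sketch of structured prompt building for a SecOps request.
# All example values are hypothetical; wire the resulting string into
# whatever interface your AI tooling actually exposes.

def build_triage_prompt(alert_id: str, host: str, indicators: list[str]) -> str:
    """Assemble a scoped, unambiguous request with explicit output rules."""
    return (
        "You are assisting a SOC analyst. Perform these steps only:\n"
        f"1. Summarize alert {alert_id} observed on host {host}.\n"
        "2. List which of these indicators match known malware behavior: "
        + ", ".join(indicators) + "\n"
        "3. Recommend next actions, but do NOT execute any of them.\n"
        "Respond in plain text with numbered answers. "
        "If you are unsure about an item, say 'unknown' rather than guessing."
    )

if __name__ == "__main__":
    prompt = build_triage_prompt(
        alert_id="ALRT-1042",  # hypothetical example values
        host="ws-finance-17",
        indicators=["persistence via scheduled task", "outbound DNS tunneling"],
    )
    print(prompt)  # a real integration would send this text to the assistant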

The last of those points deserves special attention. The AI assistants of the near future likely won't have the sophistication to recognize how context should shape their responses. Imagine a SecOps team tells its AI assistant, "Immediately shut down any PC that might have malware X." But, in following this directive, the AI shuts down the CEO's PC during a pivotal presentation to the board.

In anticipation of such a scenario, the security team could instead create the following context-aware prompt: "Always ask for approval before shutting down executive-level PCs. Immediately shut down any other PCs that might have malware X."
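
The prompt isn't the only place to encode that rule. As a rough illustration, the sketch below applies the same context check as a guardrail around any AI-suggested shutdown. The inventory entries and the shutdown_host() and request_approval() helpers are hypothetical stand-ins for real endpoint management and ticketing integrations.

# A minimal sketch of enforcing the "ask before touching executive PCs" rule
# in code, independent of how the prompt is worded. All hostnames, tiers and
# helper functions are hypothetical placeholders.

ASSET_INVENTORY = {
    "ceo-laptop-01": {"owner": "CEO", "tier": "executive"},
    "ws-finance-17": {"owner": "analyst", "tier": "standard"},
}

def shutdown_host(hostname: str) -> None:
    print(f"[action] shutting down {hostname}")  # stand-in for an endpoint management call

def request_approval(hostname: str, reason: str) -> None:
    print(f"[hold] approval requested for {hostname}: {reason}")  # stand-in for a ticket or page

def handle_ai_shutdown_request(hostname: str, reason: str) -> None:
    """Shut down standard-tier hosts automatically; hold everything else for a human."""
    tier = ASSET_INVENTORY.get(hostname, {}).get("tier", "unknown")
    if tier == "standard":
        shutdown_host(hostname)
    else:
        request_approval(hostname, reason)  # executive or unknown hosts get a human in the loop

handle_ai_shutdown_request("ws-finance-17", "possible malware X")
handle_ai_shutdown_request("ceo-laptop-01", "possible malware X")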

In theory, generative AI could eventually mature to the point that it can make complex, context-aware decisions as well and as reliably as humans. Initially, however, the Sorcerer's Apprentice problem -- cases in which an assistant, operating without proper oversight, wreaks havoc by casting spells it lacks the wisdom to fully understand and control -- will almost certainly persist. This problem has vexed every previous generation of automation, and there is no reason to think generative AI will be exempt.

Can your environment support generative AI in SecOps?

Early generative AI tools will likely do best in SecOps environments that are orderly and predictable. If, for example, operations staff members have created an environment where tools can readily distinguish between an executive's PC and an entry-level employee's PC -- perhaps based on a system-of-record database -- generative AI will be able to act more intelligently.
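
As a rough illustration of what such a system of record can provide, the sketch below builds a throwaway SQLite table and the lookup a tool could consult before acting on a host. The schema, table name and rows are hypothetical; in practice, this query would go to a CMDB or asset management system.

# A minimal sketch of a system-of-record lookup, using an in-memory SQLite
# table as a stand-in for a real CMDB. All names and rows are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE devices (hostname TEXT PRIMARY KEY, owner_role TEXT, tier TEXT)"
)
conn.executemany(
    "INSERT INTO devices VALUES (?, ?, ?)",
    [
        ("ceo-laptop-01", "CEO", "executive"),
        ("ws-finance-17", "financial analyst", "standard"),
    ],
)

def device_tier(hostname: str) -> str:
    """Return the criticality tier to check before acting on a host."""
    row = conn.execute(
        "SELECT tier FROM devices WHERE hostname = ?", (hostname,)
    ).fetchone()
    return row[0] if row else "unknown"  # unknown hosts deserve caution, not action

print(device_tier("ceo-laptop-01"))  # executive
print(device_tier("ws-finance-17"))  # standard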

The less orderly, consistent and well documented the environment, however, the more SecOps teams have to lean on the AI to infer correct behavior, which is a potentially hit-or-miss proposition. Again, while AI assistants' ability to reason underlies their promise in operations, it is unlikely to be fully up to snuff in the early days.

Happily, generative AI could help make the environment more orderly and manageable by applying inhuman levels of consistency, patience, persistence and focus to housekeeping tasks. AI assistants may ultimately prove better able than previous generations of automation to do the following:

  • Knit together and maintain an accurate picture of the environment, using network logs, SNMP data, stale configuration databases and directories, and other systems (see the sketch after this list).
  • Ask intelligent questions to obtain information they lack about the environment.
  • Suggest ways to make the environment more manageable.
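
The first two items could start out as simply as the sketch below, which merges hypothetical records from network logs, monitoring data and a configuration database into a single inventory, then lists the questions left open by missing fields. Every hostname and attribute here is an invented example.

# A minimal sketch of reconciling asset data from several sources and flagging
# the gaps a tool (or its human operators) should ask about. The source
# dictionaries stand in for parsed logs, monitoring exports and a config database.

network_logs = {"ws-finance-17": {"last_seen": "2024-05-02", "ip": "10.1.4.17"}}
monitoring = {"ws-finance-17": {"os": "Windows 11"}, "ceo-laptop-01": {"os": "macOS 14"}}
config_db = {"ceo-laptop-01": {"owner": "CEO", "tier": "executive"}}

def merge_inventory(*sources: dict) -> dict:
    """Combine all sources into one record per host; later sources fill in gaps."""
    merged: dict[str, dict] = {}
    for source in sources:
        for host, attrs in source.items():
            merged.setdefault(host, {}).update(attrs)
    return merged

def open_questions(inventory: dict, required=("owner", "tier", "os")) -> list[str]:
    """Turn missing fields into the kind of questions an assistant could ask."""
    questions = []
    for host, attrs in inventory.items():
        for field in required:
            if field not in attrs:
                questions.append(f"Who can confirm the {field} for {host}?")
    return questions

inventory = merge_inventory(network_logs, monitoring, config_db)
print(inventory)
for question in open_questions(inventory):
    print(question)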

In this way, generative AI may help get the SecOps house in order so that it can, in turn, manage that house more effectively.

Next Steps

How to manage generative AI security risks in the enterprise

Evaluate the risks and benefits of AI in cybersecurity
