Coralogix's CEO highlights the differences between monitoring AI and monitoring conventional software, as illustrated by his company's acquisition and product expansion this year.
AI monitoring represents a new discipline in IT operations, or so believes one observability CEO, whose company recently made an acquisition to help it tackle the technology's unique challenges.
In December 2024, security and observability vendor Coralogix bought AI monitoring startup Aporia. In March, Coralogix launched its AI Center based on that intellectual property. AI Center includes a service catalog that tracks AI usage within an organization, guardrails for AI security, and metrics for response quality and cost.
The tool represents a sharp departure from the company's previous world of application security and performance management, said Ariel Assaraf, CEO at Coralogix, during an interview on the IT Ops Query podcast.
"People tend to look at AI as just another service, and they'd say, 'Well, you write code to generate it, so I assume you'd monitor it like code,' which is completely false," Assaraf said. "There's no working and not working in AI -- there's a gradient of options ... and damage to your company, your business or your operations can be done without any error or metric going off."
This is especially true for established enterprises, he said.
"If you're a small company ... you see a big opportunity with AI," Assaraf said. "If you're a big company ... AI is the worst thing that has ever happened. ... A dramatic tectonic change like AI is something that now I need to figure out, 'How do I handle it?' It is also an opportunity, of course, but it's beyond that as a risk."
The key to effective AI monitoring and governance is to first map out what AI tools exist within an organization, Assaraf said. It's an approach known as AI security posture management, similar to cloud security posture management -- one taken by Coralogix and competitors including Google's Wiz, Microsoft and Palo Alto Networks.
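To make that mapping step concrete, here is a minimal sketch of how an AI service catalog might be assembled from network egress logs. The hostname list, log format and function names are illustrative assumptions, not a description of Coralogix's implementation.

```python
# Hypothetical sketch: build a rough AI service catalog by scanning egress
# logs for hostnames of well-known model APIs. The hostname list and the
# (service, destination) log format are illustrative assumptions.
from collections import defaultdict

KNOWN_AI_HOSTS = {
    "api.openai.com": "OpenAI",
    "api.anthropic.com": "Anthropic",
    "generativelanguage.googleapis.com": "Google Gemini",
}

def build_ai_catalog(egress_records):
    """egress_records: iterable of (source_service, destination_host) tuples."""
    catalog = defaultdict(set)
    for service, host in egress_records:
        provider = KNOWN_AI_HOSTS.get(host)
        if provider:
            catalog[provider].add(service)
    return catalog

records = [
    ("checkout-bot", "api.openai.com"),
    ("support-chat", "api.anthropic.com"),
    ("support-chat", "api.openai.com"),
]
for provider, services in build_ai_catalog(records).items():
    print(f"{provider}: used by {sorted(services)}")
```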
Coralogix AI Center first discovers and lists the AI models in use within an organization, and then uses specialized models of its own behind the scenes to monitor their responses and apply guardrails. These guardrails span a wide range of AI concerns, such as stopping sensitive data leaks, preventing hallucinations and toxic responses, and making sure AI tools don't refer a customer to a competitor.
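As a rough illustration of what a response-side guardrail check involves, the hypothetical sketch below flags sensitive data and competitor mentions with simple pattern matching. AI Center itself uses specialized models for this; the regex patterns, competitor list and function name here are assumptions made for brevity.

```python
# Hypothetical sketch of response-side guardrails: flag sensitive data and
# competitor mentions before a reply reaches the user. Real products use
# specialized models rather than regexes and keyword lists.
import re

EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
COMPETITORS = {"acme corp", "globex"}

def check_guardrails(response: str) -> list[str]:
    hits = []
    if EMAIL_RE.search(response) or SSN_RE.search(response):
        hits.append("sensitive_data_leak")
    if any(name in response.lower() for name in COMPETITORS):
        hits.append("competitor_mention")
    return hits

print(check_guardrails("Try Globex instead; email me at jo@example.com"))
# ['sensitive_data_leak', 'competitor_mention']
```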
"Once you do that, you'll start getting stats on how many hits you've had [against] one of these guardrails and ... go all the way to replaying that particular interaction ... so I can maybe interact with that user and proactively resolve the issue," Assaraf said.
However, while it's important to guide AI and govern it well, its real value lies in its nondeterminism, so it's equally important not to install so many guardrails that it's fenced in, he said.
"If you try to overly scope it, you end up with just expensive and more complex software," Assaraf said.
Beth Pariseau, a senior news writer for Informa TechTarget, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.