How DXC Technology uses agentic AI in the SOC
CIOs must explore new security approaches as cyber threats grow more complex. Learn how DXC Technology deployed agentic AI in the SOC to enhance threat detection.
As cybercriminals increasingly use AI, CIOs face growing pressure to adapt their IT security strategies.
Traditional security operations centers (SOCs) can struggle to keep up with AI-powered attacks. However, agentic AI -- an AI system capable of autonomous action and decision-making -- can help organizations fight fire with fire. It offers CIOs and CISOs a way to defend against cybercriminals who operate at machine speed.
At DXC Technology -- a global IT services company -- agentic AI has reshaped how the SOC operates and how security aligns with broader IT strategy. Under the direction of CISO Mike Baker, the company implemented agentic AI to automate tier-one security monitoring, accelerate threat detection and triage, and free human analysts to take on more complex roles. The company's implementation highlights how CIOs and CISOs can separate AI hype from reality, manage organizational change and tie agentic AI to measurable ROI.
CIOs should pay close attention to how DXC Technology applied agentic AI in a live SOC environment, as it illustrates how the technology can improve security outcomes and reshape the SOC's cost and talent model. In the following interview, Baker explains the operational bottlenecks that led to adoption and the benefits and challenges that followed.
Editor's note: The following transcript was edited for length and clarity.
What problems in the SOC were you trying to solve before you brought in agentic AI?
Mike Baker: People are looking at AI as an opportunity to unlock value. The challenge is that there's a lot of hype in the market right now. So, there's a real obligation within the business -- whether you're in cyber, IT or elsewhere -- to stay connected and understand AI hype versus reality. When I think about real returns, number one is ROI -- and not just monetary ROI, but also efficacy and efficiency. Are we more accurate in what we're doing? Are we doing it more quickly?
From a threat standpoint, when you look at how our adversary is going to use AI, you must start to think about how, as a company, you're going to use AI to defend against those attacks.
We saw the Anthropic report in November, which documented what was arguably the first AI-orchestrated adversarial attack. They said the requests were coming in at rates that would be physically impossible for human operators. You can only counter that with physically impossible rates of defense, which is generally driven by agentic AI.
When I looked at the processes associated with a cyber program, the most obvious was disruption of the traditional SOC. I went out to the market 10 months ago and did a full review -- startups, venture capitalists, resellers and major platforms -- saying 'Where are you really and what capabilities can you truly offer today?' Luckily, I felt the ecosystem of providers and expertise was really geared towards that SOC disruption. For me, that was the obvious first step to get real ROI.
What ROI have you seen since implementing agentic AI in the SOC?
Baker: It hasn't been easy. Implementing AI systems is disruptive. You must lead your teams and think about processes in a new way.
But as of today, DXC, as it relates to our own SOC, has implemented agentic capabilities across 100% of our security monitoring tools, use cases and tier one SOC analyst workflows. From an ROI perspective, that means we can upskill and redeploy those tier one analysts to jobs that offer greater value per hour. Those people can become a human in the loop, trained in AI and this new way of thinking about event detection. We can upskill them into cyber threat intelligence analysts, cyber threat hunters and incident responders.
Additionally, we've seen a 68-70% improvement in the time it takes to acknowledge a ticket, and we are 77-80% quicker at triaging it. And it's just going to get better over time. If I had to predict, as we continue to tune and work with the vendor, those numbers will probably exceed 90% in the next quarter.
How did team members initially react to hearing about this project? Were they afraid of AI replacing them?
Baker: There's a natural hesitation. We're human, and when you're introducing major change into a system that's been relatively unchanged for the last 20 years, there's going to be apprehension. But that was my job as a leader -- to shine a spotlight on what we're doing, why we're doing it and what the opportunity is for our employees. If you're able to grab that by the horns and explain to your team the why behind the activity, it really assuages that apprehension.
To introduce these new tools and get to the art of the possible, there's almost a mentorship that you must have, regardless of fear of job displacement. How do we as leaders lead our cyber teams down the path of thinking about a problem differently and fully embracing that moving forward? There's always going to be some healthy tension in these projects, and it takes a little bit of time to get through that.
What challenges did you face in implementing agentic AI?
Baker: I mentioned the people side of it as it relates to leading through change and making sure the teams are thinking about processes differently.
The other thing is that these are all net-new capabilities. We've teamed with an innovative startup called 7AI. You almost have to grow with your partner when you're doing things like this. This isn't something that's been around for 20 years that you can just take off the shelf and have it work perfectly.
Setting up processes, communication, teamwork and growing with your partner over time is key. That's what we've done. Our partner has been innovating in real time on the technology side to make our lives easier.
What made you confident that this technology was mature enough to use in a live security environment?
Baker: We tested it quite extensively. We had to keep humans in the loop and do continuous validation of the agent's conclusions on a given ticket. What we didn't want to do is slow down to the point where we would wait for obscure rules to fire x number of times and then validate that almost in a waterfall method. Instead, we deployed half of our tier one team to sit alongside the AI, almost as a human copilot, continuously validating as things churned out.
Using that method, we were able to determine the false-positive rate. With our partner, we put in place an almost continuous feedback system -- whenever the AI reached the wrong conclusion, one or two humans would look at it and say, 'Yeah, this is something that's not benign. We really need to look at it.' We fed that back in real time and had our vendor partner implement the necessary changes to prevent it from happening again.
We've seen our false-positive rate plummet as we've done that. We will likely always have some form of human-in-the-loop validation. However, again, when you think about ROI, you're going from x number of people in the tier one SOC, redeploying most of them to different jobs and different opportunities, and keeping a few AI-native analysts there who can offer a little bit of ongoing assurance. My confidence in the product certainly grew over time with the human-based validation loop we've been in for the last couple of months.
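For readers who want a concrete picture of the pattern Baker describes -- an agent triages every alert, human reviewers spot-check its verdicts, and disagreements are logged as feedback for the vendor -- here is a minimal sketch. It is illustrative only: the names (ValidationLoop, Alert, feedback_queue) and the simulated alert feed are assumptions for this example, not 7AI's actual API or DXC's implementation.

```python
from dataclasses import dataclass
from typing import Optional
import random  # stands in for a real alert feed in this sketch

@dataclass
class Alert:
    alert_id: str
    agent_verdict: str                 # "benign" or "malicious", per the agent
    human_verdict: Optional[str] = None

class ValidationLoop:
    """Human-in-the-loop validation of agentic triage conclusions."""

    def __init__(self) -> None:
        self.feedback_queue: list[Alert] = []  # disagreements sent to the vendor
        self.reviewed = 0
        self.disagreements = 0

    def review(self, alert: Alert, human_verdict: str) -> None:
        """A tier one analyst validates the agent's conclusion on a ticket."""
        alert.human_verdict = human_verdict
        self.reviewed += 1
        if human_verdict != alert.agent_verdict:
            # e.g., the agent dismissed an alert the analyst deems malicious
            self.disagreements += 1
            self.feedback_queue.append(alert)

    def disagreement_rate(self) -> float:
        """Running share of agent conclusions that reviewers overturned."""
        return self.disagreements / self.reviewed if self.reviewed else 0.0

# Usage: simulate a stream of triaged alerts being spot-checked by analysts.
loop = ValidationLoop()
for i in range(1000):
    agent_says = random.choice(["benign"] * 9 + ["malicious"])
    # In production the verdict comes from an analyst console; here we flip
    # a small fraction of verdicts to model occasional agent mistakes.
    if random.random() < 0.03:
        truth = "malicious" if agent_says == "benign" else "benign"
    else:
        truth = agent_says
    loop.review(Alert(alert_id=f"A{i}", agent_verdict=agent_says), truth)

print(f"Reviewed: {loop.reviewed}, overturned: {loop.disagreement_rate():.1%}")
print(f"Feedback items queued for the vendor: {len(loop.feedback_queue)}")
```

The design choice worth noting is the one Baker highlights: validation runs continuously alongside the agent rather than as a gated, waterfall-style sign-off, so the disagreement rate can be tracked and fed back to the vendor in near real time.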
What advice would you give other CISOs considering agentic AI?
Baker: I have two pieces of advice. The first is to just get started. That may sound a little generic, but I do think that there's a general fear. Here's an example: 'Hey, Mike, are you worried about the false-positive rates that would come out of an agentic SOC?' Of course, I'm worried about false-positive rates the same way I'm worried about them in a human-based SOC. They both have false-positive rates.
People are used to deterministic products -- like a security orchestration, automation and response product -- that you code a certain way, and they behave that way every single time, 100% of the time. That's not going to be the case with AI, and that's kind of scary. But if you just get started, start putting these capabilities in place and start disrupting your thinking around these processes, you will learn so much over time.
My second piece of advice is, again, the adversary is using these tools to attack you at physically impossible request rates. So, my quick follow-up to 'get started' is, if you're not going to get started now, then when? If it's a year or two from now, that may be too late as it relates to how far the adversary has advanced in their techniques to use this stuff.
So, just get started. Move slowly and cautiously. It is our job as modern CISOs to enable the business. Getting to 'Yes' in terms of innovative capabilities is a really good trait. If you look back at past revolutions -- for instance, cloud in 2012, 2013 and 2014 -- CISOs may have been the party of 'No.' Right now, we shouldn't be getting in the way of technological progress. We should be the champions of that technological progress. To do that, we need to get started.
Tim Murphy is site editor for Informa TechTarget's IT Strategy group.