
AI security: Top experts weigh in on the why and how

Real Talk on AI Security: What You Should Be Doing Now

AI is rapidly transforming business operations, bringing with it unprecedented security challenges that traditional cybersecurity approaches are not equipped to handle. But securing the unique vulnerabilities AI introduces is essential.

In this episode of Security Balancing Act, host and Protect AI CISO Diana Kelley leads an insightful discussion on AI security with Jennifer Raiford, executive vice president and CISO at Globe-Sec Advisory, and David Linthicum, founder and lead researcher for Linthicum Research.

The three-way conversation covers practical steps organizations should take, such as monitoring systems for anomalies, defending against specialized attacks like data poisoning, and safeguarding privacy in the AI age. These and other measures discussed can help organizations' AI projects succeed.

Watch the full episode now to hear from AI security experts and get actionable advice for your organization.

Editor's note: An editor used AI tools to aid in the generation of this news brief. Our expert editors always review and edit content before publishing.

Brenda Horrigan is executive managing editor with Informa TechTarget Editorial's Programs and Execution group.

Transcript - AI security: Top experts weigh in on the why and how

Diana Kelley: Hello, and welcome to the 65th episode of the Security Balancing Act. This is a BrightTALK original series, where every month we explore how businesses can realize the transformative power of technology and the cloud securely, responsibly and ethically. I'm your host, Diana Kelley, CISO of Protect AI, and our topic today is real talk on AI security: what you should be doing now.

To walk us through that and talk us through that today, I am joined by two experts in the field, Jennifer Raiford, who is the EVP and chief information security officer at Global Security Advisory. Welcome, Jennifer.

Jennifer Raiford: Thank you for having me.

Kelley: And I apologize. I said, 'Global Security,' and it's like my brain has gone. It's Globe-Sec Advisory, so my apologies. Globe-Sec Advisory, and also with us today, David Linthicum, who is the founder and lead researcher at Linthicum Research.

David Linthicum: How you doing? It's great to be here. Thanks for the invite.

Kelley: Thanks for being here, David, and thanks, everybody in the audience, for being here too. If you are watching us live, you can send questions to us, give us feedback. We would love to hear what you want to know. You've got two experts here to walk us through this problem space. So, anything that you want to know, that you want to ask us, any feedback, please go ahead and submit your questions through the player at any time. And we'll be taking those questions throughout today's discussion.

All right, so let's get started. We'd like to start here with a little bit of a thesis, a level set. Jennifer, if you would kick us off: There's a lot of confusion about what AI is, how it's being used, how to secure it and what makes it vulnerable. I was wondering if you could help us really understand how AI is being used and why the security risks may be different for AI than they have been for traditional technology.

Raiford: Sure. I think the confusion around AI often comes from the rapid evolution and the diverse applications, which include generative AI, predictive analytics, autonomous systems and NLP.

AI systems are unique because they learn from data, adapt their behavior and can even make decisions, which introduces dynamic vulnerabilities not seen in traditional software. These systems are susceptible to attacks like data and model poisoning, adversarial inputs and the exploitation of their decision-making logic.

Kelley: Yeah, very true. Okay, thanks. David. What are your thoughts on this?

Linthicum: Yeah … AI is a pattern of systems that's been around since the fifties, so it's old stuff. Actually, when I was 18 years old, I was a Lisp programmer and built AI systems -- and actually agentic AI systems, by the way -- so I've got a head start on everybody who's running around saying, 'I do agentic AI.'

But at the end of the day, it's the ability to provide systems that have dynamic behavior. In other words, we don't just program them with static actions they're able to carry out; we're able to give them a behavior, a personality, something they're able to carry on. And we're able to train them through massive amounts of data, and they're able to look through the patterns of the data and process those patterns much better than we can as human beings. That's its power, and that's why we leverage it for things like supply chain integration or recommendation engines. The security challenge is the fact that we have some new attack vectors we have to consider, certainly when you get into the generative and agentic systems.

In other words, it has been around for a long period of time, but it wasn't used at the scale it's being used at now, and so suddenly the risk is coming in at two layers. Number one, people are weaponizing it -- the bad actors, in terms of attacking systems using AI-based attack engines. And the other thing, too, is they're exploiting some of the vulnerabilities of AI itself.

And so, you know, it was brought up earlier: the ability to poison systems -- your ability to, in essence, introduce adversarial behavior; your ability to pollute the training models and the training data for the systems. And these are things that most rank-and-file security people don't understand right now.

So, in other words, they know how to do security, they know how to lock stuff down. They know identity access management, they know encryption. But when you deal with AI and you're having to deal with application-level security at the AI level, you kind of have no clue. And this is a whole new frontier that's in front of us now, where I think that the technology's gotten way ahead of the best practices and the skill sets that are out there.

Raiford: Yeah, I want to add one thing, too. I totally love everything that you just said, and I was just thinking that we need a shift in our mindset on security fundamentals, right? Traditional measures like firewalls and static code testing have to evolve to include robust data integrity checks, transparency through explainable AI and continuous lifecycle monitoring.

But security has to be baked into every phase of AI development, from concept to deployment. I just wanted to add that part.

Linthicum: No, you're absolutely right. And the thing is -- I was about to say just what Jennifer said -- at the end of the day, this is something where security has to be systemic to everything, which is something I've been yelling about in the architecture space for a long period of time.

So, we have to think differently in terms of how we deploy security. No longer is security an afterthought we bake in at the last stage of the deployment model; it has to be baked into the architecture, the development of the models, the development of the training data, the development of the inference engines. All these sorts of things should be known and built around the use of security systems.

And that's not being done today. And so that's where the vulnerabilities are occurring because we're forgetting lots of stuff. We're opening up many different holes and attack vectors within these systems. And I think they're going to be breached.

I think 2025-2026 is when AI systems get deployed -- first-generation stuff for many of these organizations, at least the generative and agentic-based systems -- and they're going to get nailed, just because they're way exposed. The breaches are about to happen, and we're going to get a lot of lessons in 2025-2026; basically what Jennifer just mentioned.

Raiford: I say the same thing because one of my soapboxes is, 'Go ahead and really pull back the layers and get really intentional about your data because AI is going to expose that.' So, I totally agree with you.

Kelley: Yes. Okay, great. Yeah, a lot to unpack here. So, first of all, if anybody is sort of challenging David on the 1950s, one of the most famous pieces of AI software originally was developed in the 1960s by Joseph Weizenbaum at MIT. It was called Eliza. So, David's absolutely right. It's been around a long time, and the first conference on AI was in the 1950s. David, do you know what state it was held in?

Linthicum: Whatever state Dartmouth University's in, that's where it was. I know the university; I don't know the state. Dartmouth's in what…? Massachusetts, I think, or did I get that wrong?

Kelley: It's in New Hampshire; I'm in New Hampshire, so very proud, very proud. Yes, the first AI conference was in the fifties, and you're exactly right, it was held at Dartmouth, which is in New Hampshire.

The other thing, about building security into AI: To underscore that, there's a movement called MLSecOps, which is similar to DevSecOps but looks at how we build security into the AI and machine learning lifecycle.

And if you're saying, 'But we're talking about AI and you're saying machine learning' -- AI is the superset. Machine learning is a subset of AI, and generative AI is a subset of machine learning. So, machine learning drives AI, and when people say machine learning security operations, or MLSecOps, that actually covers machine learning and generative AI. There's some training up on LinkedIn Learning, for example, but that's really the movement of trying to do exactly what Jennifer and David are talking about, which is building security in throughout the entire process of AI development. So, thank you both for that.
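Editor's note: As one small illustration of the kind of automated check an MLSecOps pipeline might include, the hypothetical Python sketch below scans a serialized (pickled) model artifact for opcodes that can execute code when the file is loaded, one of the model supply chain risks the panel alludes to. The file name and fail-the-build behavior are illustrative assumptions, not a vetted tool.

```python
# Minimal sketch: flag pickle opcodes that can run code when a model file is loaded.
# Illustrative only; production MLSecOps pipelines use dedicated model scanners.
import pickletools
import sys

# Opcodes that let a pickle import and call arbitrary objects on load.
RISKY_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_model_artifact(path: str) -> list[str]:
    """Return a list of risky opcodes found in a pickled model file."""
    with open(path, "rb") as f:
        data = f.read()
    return [
        f"{opcode.name} at byte offset {pos}"
        for opcode, arg, pos in pickletools.genops(data)
        if opcode.name in RISKY_OPCODES
    ]

if __name__ == "__main__":
    # Usage (hypothetical artifact name): python scan_model.py model.pkl
    findings = scan_model_artifact(sys.argv[1])
    if findings:
        print("Potentially unsafe pickle constructs found:")
        print("\n".join(findings))
        sys.exit(1)  # fail the CI step so the artifact gets reviewed before deployment
    print("No risky opcodes detected.")
```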

This is a great start. Both of you had mentioned a couple of attacks. You know, we heard things like adversarial prompt injection, data poisoning. Jennifer, I was wondering if you could just give us a couple of specific attacks that are different, you know, just do a slightly deeper dive. I know you've mentioned it at a high level but, you know, when we think about something like data poisoning that you had brought up, how does that happen and what happens if we do poison the data of our AI?

Raiford: So, basically, data poisoning is injecting malicious data during training to corrupt the model's behavior, right? What happens is you can't trust the information if that occurs. Then you start talking about some of the other risks that come into play: data leakage, where you are inadvertently exposing sensitive information, and supply chain compromise, where training data or libraries can introduce back doors, right? So, it's not having those measures of control around the data -- and these risks are becoming common and able to be exploited. I think manipulation is another one, where biases in decision-making processes are exploited for fraudulent purposes.

And each of these risks exploits the unique characteristics of AI systems and demands tailored countermeasures.
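Editor's note: To make the poisoning risk concrete, here is a deliberately tiny, self-contained sketch using synthetic data and a nearest-neighbor 'model.' Injecting mislabeled records into the training set degrades the model without any error or alert, which is exactly why the data integrity checks and monitoring discussed here matter. The data, the poison volume and the toy classifier are all illustrative assumptions.

```python
# Toy illustration of training-data poisoning: mislabeled records slipped into a
# training set quietly corrupt a simple 1-nearest-neighbor model.
import random

random.seed(42)

def sample(center, label, n):
    """n one-dimensional points clustered around `center`, tagged with `label`."""
    return [(random.gauss(center, 1.0), label) for _ in range(n)]

def predict(train, x):
    """1-nearest-neighbor: return the label of the closest training point."""
    return min(train, key=lambda point: abs(point[0] - x))[1]

def accuracy(train, test):
    return sum(predict(train, x) == y for x, y in test) / len(test)

# Legitimate data: class 0 clusters near 0.0, class 1 clusters near 4.0.
train = sample(0.0, 0, 200) + sample(4.0, 1, 200)
test = sample(0.0, 0, 200) + sample(4.0, 1, 200)

# Attacker slips mislabeled records into the pipeline: points that look like
# class 1 but carry a class 0 label.
poison = sample(4.0, 0, 200)

print(f"accuracy with clean training data: {accuracy(train, test):.1%}")
print(f"accuracy after poisoned ingestion: {accuracy(train + poison, test):.1%}")
```

Nothing in the training or inference code fails; the damage only shows up in behavior, which is why poisoning is hard to catch after the fact.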

Linthicum: Yeah, Jennifer, you're absolutely right. The only thing I would add, on the data poisoning side -- you hit the supply chain thing, right? In other words, your ability to manipulate data that's going to have a downstream effect on the training information, to get certain diabolical stuff into the system that's going to allow you to exploit it. So, instead of doing malicious prompt injection -- you know, accessing the prompt directly -- we're getting in through an attack vector that people normally don't think about. In other words, the ability to put information in that sets some sort of bias in the system, that sets up some sort of malicious behavior.

Getting something out of that system that we didn't anticipate would be in there. For example, years back, in dealing with an AI training system, we found out that they were getting data poisoning, but it was by accident. In other words, they had some data errors that were occurring, the errors were getting into the system, they didn't realize it, and it was causing massive mistakes to be made, including overpaying by $300 million. How do you like that as a mistake? It was able to be corrected, but it was a good simulation of what these attacks look like.

In this case, it was a big blunder that they made. But what we're seeing out there is that we have to be protective of these systems in every way, shape and form. The information that's ingested into these systems -- this training data -- has to be checked, has to be secured and has to be treated in the same way that we treat normal data in the enterprise.

And there's the ability to manage the knowledge model that's trained from that information -- the ability to, basically, remove some of the malicious information at that level. So, it blows everybody's mind, the amount of work that has to go into protecting these systems.

And the threat actors are exploiting this in every way, shape and form. And, you know, Jennifer hit the nail on the head: the ability to look at the supply chain, the inference processing, the training information; securing at the inference layer, at the prompt layer, at the API layer. Your ability to look at all these different areas to make sure we're operating this thing in a way where it's going to be useful -- in other words, not putting so many limitations on it that people won't use it -- but in a way that's going to return the most value back to the business for the least amount of risk.

And I think that's where the challenge is right now: They don't see a good way forward. And I guess maybe that's the conversation about security trade-offs -- where do you place your time and effort in how you're going to secure these systems?
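Editor's note: David's point that training data 'has to be checked, has to be secured' can start with something as unglamorous as integrity verification. The sketch below hashes each file feeding a training run and compares it against a previously approved manifest, so silently altered data fails the pipeline before training starts. The manifest name and file layout are hypothetical.

```python
# Minimal sketch of a training-data integrity gate: compare SHA-256 hashes of the
# files feeding a training run against a previously approved manifest.
import hashlib
import json
from pathlib import Path

# Hypothetical manifest format: {"data/train.csv": "<sha256 hex>", ...}
MANIFEST = Path("training_data_manifest.json")

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(manifest_path: Path = MANIFEST) -> list[str]:
    """Return a list of problems; an empty list means the data matches the manifest."""
    expected = json.loads(manifest_path.read_text())
    problems = []
    for name, approved_hash in expected.items():
        path = Path(name)
        if not path.exists():
            problems.append(f"missing file: {name}")
        elif sha256_of(path) != approved_hash:
            problems.append(f"hash mismatch (changed since approval): {name}")
    return problems

if __name__ == "__main__":
    issues = verify_training_data()
    if issues:
        raise SystemExit("Training blocked:\n" + "\n".join(issues))
    print("Training data matches the approved manifest; proceeding.")
```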

Kelley: Yeah. Yeah. Great points. It's a lot though. It can feel really overwhelming to people, to try and first understand how AI works and then how to secure it. Jennifer, do you have any go-to resources that you depend on to help you think about AI risk and AI governance?

Raiford: So, the NIST AI Risk Management Framework is definitely one. And then any sort of AI-specific publications, you know, from Gartner or others -- there are a lot of tools out there. Then, from a risk management standpoint, there's the time you can spend putting the governance structures for AI security into place, balancing access and transparency, which we kind of just talked about, and putting in the processes to secure those things. But one of the things I think is worth each organization diving into -- or, in this case, maybe catching up on, based on what we've chosen to do, or chosen not to do or to block, over the last couple of years -- is taking a real stance on getting prepared and being able to mitigate AI risk. Those are the tools I would use to have those conversations, to start building those pieces out and to have a strong governance and technical AI risk management program.

Kelley: Yeah. Yeah. Thank you. And yeah, for anybody that hasn't looked at the reference that Jennifer started with, the NIST AI Risk Management Framework, it is a really fantastic resource to start with. David, what's your go-to?

Linthicum: Well, the only thing I would add would be the Cloud Security Alliance.

I like their stuff and they're pretty up to date. So, you know, I follow them a bunch, and certainly the studies they do are pretty handy, especially for someone who's blogging and has a YouTube channel -- something to report on. The other thing would be RSA -- not necessarily going to the RSA [RSAC] conference in [late April 2025], to which I will be going, but the ability to look at the papers and some of the artifacts that come out of the conference. I just find that very useful, versus some of the Ph.D. dissertations that are out there, which have a tendency to be way too deep. These things are kind of at a consumer level. I saw some excellent stuff come out of that conference last year -- not just in the presentations, but in the documents that were published and even some of the presentations that were recorded. Those are the two areas where I find the most benefit.

Because the big problem now is, by the time some of these big security organizations get around to publishing these frameworks and getting them in place, it's kind of too late. The technology has already changed over the past three years, and you're four or five years behind and not necessarily helping.

I think I appreciate them doing that, but right now things are moving too fast. Everybody's running and gunning, and you kind of have to look at the organizations that are able to deal with this stuff in real time. The thought leaders, the influencers and some of the conferences out there, and following some of the key vendors that have artifacts in this area and some of the IP [intellectual property] in this area as well.

Kelley: Yeah, thank you. And what a great call out on the RSA conference. A lot of times, people think, 'Well, if I can't go there, I can't get any of the value.' But you're exactly right. So many of the talks are available.

Linthicum: You get more value out of an RSA conference just watching it on the internet, and the same thing with all the other cloud conferences as well. I found that during the pandemic -- I was able to, you know, get to all these conferences and attend them, and it was perfectly fine. And the information I got was even better because I wasn't distracted.

Kelley: Yeah, really great -- a great reminder. Thank you. Jennifer, what are companies sort of blind to about AI? What aren't they doing that you really wish they would start doing -- where's the blind spot? Like, one thing we talked about, and maybe this is where you want to go with this, is that some companies initially just said, 'No, don't use AI.'

Raiford: Yeah.

Kelley: So, what are you thinking on, like, where are companies kind of being blind about AI and AI risk?

Raiford: I think when you start talking about some of the significant blind spots, there's the lack of visibility into the AI training data, the inadequate monitoring for adversarial behavior, the shadow AI -- because people were using it anyway, even though you said not to. And then there's insufficient model explainability; without understanding what drives the AI decisions, organizations risk vulnerabilities like output manipulation and biased models.

I think another blind spot, as we've already touched on, was the supply chain feeding the AI systems, which can introduce compromised data or code.

So, I think the big one is kind of what we talked about: If you were blocking it or saying they couldn't use it, and you didn't embrace it and create a strategy from the top down, I would spend some time making sure you are on top of all the various uses that are out there and get that feeding into a governance piece.

So that as changes happen -- updates, vulnerabilities, whatever that is -- that they're being addressed. But to me, those are some of the key blind spots.

Kelley: So, David, Jennifer mentioned getting a handle on how companies are using it, and AI is an interesting reality because there are so many different ways to adopt it, build it, deploy it. We're seeing little pockets pop up, like what marketing's using; they're creating their own chatbot. Could you just give us kind of an overview, a lay of the land, on how AI has been or is being adopted?

Linthicum: Zero to 60 miles an hour, where people are about to run into a tree, I think, is probably the best way to describe it. In other words, back up a couple of years, when ChatGPT first launched, and everybody was like, 'Let's leave it to the lawyers to do our AI strategy.' And of course it was, 'No, no, no, don't use any AI stuff. We're worried about the copyright issues and things like that.' And then suddenly they realize there are huge amounts of productivity gains, and we're able to use AI as an innovative differentiator to take the company to the next level.

In other words, really kind of define something like a supply chain that's able to operate with almost-perfect information and make almost-perfect decisions. That's very powerful in today's marketplace. And so, they're moving in that direction.

And so, they're taking on these very large projects and trying to push AI to making them happen, but they're not necessarily ready to make that happen.

So, my take on this is you have to do some intermediate steps. You need to put together a strategy. You need to put together a plan -- most people don't have that -- for where you want to take this technology. And then you need to look at your data assets: Are they prepared to basically function as training data for AI? It's not about the LLMs [large language models] out there; it's not about ChatGPT. It's about you leveraging small language models and agentic-based deployments using your own proprietary data that's going to be of value to you. Is that stuff ready to go? It's the ability to get your data ready. Instead, these moonshots are what I'm seeing out there now.

So, everybody went from 'We're never going to do anything -- we're scared of it' into some of these moonshots that are occurring at some of these organizations. And in many cases, the rank-and-file IT folks who have been there forever deem themselves AI experts. So, they're the ones taking it to the next level without necessarily getting the expertise I think they need to make it happen -- definitely not dealing with the security and definitely not dealing with the strategies. McKinsey's reporting 80% of AI projects out there fail to deliver ROI. And that's because people are making these missteps: using AI without an overall strategy and without understanding the quality of data that needs to exist to power their AI systems. That's a good way to depict the market. I'm not seeing these intermediate plays, where people take baby steps into these larger AI systems. They went from doing nothing to doing way too much without the expertise, the skill sets and the quality of data needed to drive this, and they're failing all over the place.

That's what's going on right now.

Kelley: That's a lot to unpack. Yeah. Thank you for that. And I mean, really important information there. Jennifer, are you seeing this, too? Are you seeing that companies are in the hype cycle, but they're quickly in the trough of despair? Anything that you can share about what you've seen and also any guidance on how companies can take advantage -- but hopefully not, you know, put the business at risk -- by spending a lot of money on something that's due to fail?

Oh, Jennifer, I think that you've muted yourself, perhaps.

Raiford: Sorry. I am seeing something very similar with clients that have leaned in, and now they have either realized they moved too fast, or they are literally asking the question, 'How do I do this right?' I love those clients, because that's where you get a chance to do exactly what David just talked about, which is, you know, come up with that strategy and really build it out from there, and then make sure all the components are where they need to be and that you've got a good program. As to whether I've noticed it: There are a lot of organizations that have just kind of run with it -- wild, wild west. We kind of talked about that a little bit earlier, Diana, where I feel like we should learn from history, right? We've talked about what we saw with the internet and social media, and one of my soapboxes is I would like to see us put security first -- integrate it now, answer and solve those problems now -- and not let the issues come up and then try to solve them later.

Let's not think that AI is not a security conversation because it is.

Kelley: So, David, I know some people will say, 'But AI's going to secure itself.' Also, as we know, the battle to build security in has never been an easy one. Any advice on how we can do that for AI? How are we going to be more successful with AI than we have been with all the other bullet trains that left the station and we in security had to go running after?

Linthicum: AI is not going to secure itself, just like cloud computing is not going to secure itself, and it's a much more complex security perimeter to deal with. We're dealing with many more moving parts that we need to understand -- how these things are going to be managed and consumed -- and we're dealing with the data sets and things like that. So, it's really a matter of understanding the complexity of what this stuff is going to look like.

They see the output of it. They use an LLM, they get excited about what LLMs are, and then they'll come to people like me and say, 'We want to build the LLM for banking.' And I say, 'Okay, that's $125 million. Every time you build that model, it uses enough power to run a small city for a couple of years. Okay? Do you want to get into that business? You can't afford it, so we're going to use AI for tactical purposes. What are those purposes going to be?'

And the ability to find a strategy and get them aligned on a common set of ways they want to consume the information and use it in the business cases -- and finding those business cases -- is where the chasm is right now.

In other words, we're looking to get to a point where this stuff is going to bring lots of value back to the business, and in some places it is going to be the business. We're at point A; we're looking to get to point B, where AI is additive and adds all this additional value. What does that look like, and what are the top 10 business cases that are going to make that happen?

And by the way, if I know that, I can figure out how to secure it, I can figure out how to operate it, I can figure out what your data state needs to be -- all these sorts of things. It's really just kind of getting people to that first step, and we have a tendency not to want to do that. Even when the cloud computing stuff fired up 15 years ago, everybody said, 'We're cloud-only,' and they moved, you know, 10,000 applications, lift-and-shift, into the cloud. Well, guess what? In 2025, they're pushing them back to where they found them, because a lot of those mistakes remain. And there's no reset button for a lot of this stuff.

And the amount of money and resources that need to be put in place are huge. It costs five to 10 times as much to build an application on a generative AI or agentic AI platform as it does with traditional technology. So, with that kind of bet in place, we need to take a more strategic direction in terms of how it's going to fit, and that's where the businesses are falling down.

The majority of them are not looking strategically at how this technology is going to be employed. They're coming to people like me and other SMEs [subject matter experts] and asking, 'What's the best AI?' And we're like, 'Well, you need to answer lots of other stuff before we figure that out and work out what security and governance and frameworks and all that kind of stuff needs to go into it.'

And it's the heavy lifting that I think people are pushing back on. In other words, they realize that it's going to take a lot of skill, time and energy and money to get to a point where they're ready to start implementing their first strategic AI system that has any chance of bringing any value back to the business.

And that's what's missing right now. And I don't think the technology industry is helping, because what they're doing is AI-washing everything as quickly as they can. They're agentic-washing, they're generative AI-washing, and so they're causing confusion out in the space. People think they haven't gotten an invitation to a party that everybody else is going to. And so, that's what's happening with some of the generative AI initiatives these days. We're seeing, again, huge amounts of failures and, therefore, we're not seeing huge amounts of value coming back for the investment in this technology.

And this is not just minor stuff, like migrating an application into a cloud and having something be 50% less efficient; this is spending 20 times the amount of money. This is the kind of stuff that's going to send your business to the auction block, because you're unable to figure it out and your competitors are, and they're going to eat your lunch.

So, we're at that kind of stakes right now, and just people don't understand it yet.

Kelley: Yeah. Yeah. Jennifer?

Raiford: Yeah, I just want to add that I love everything you just said, David. I'm thinking that a way to approach that would be all the ways you can incorporate AI back into the enterprise risk management portfolio: looking at the risk from the financial and operational impact of an AI breach, the privacy-preserving techniques, and making sure that you've got incident response built in for AI breaches and that it's now a part of your testing plan and your crisis management. I think it's baking it in at every layer. I think that's the key; that's the answer. It's not something that should be treated separately. So, I just wanted to add that.

Kelley: Yeah, that is a really good point. And David brought up a lot of great points about the cost of actually launching AI. And Jennifer, you're talking about incident response. What's the cost that we need to think about from a security perspective? I'm thinking training -- do we need new, AI-specific tools? What about training people on incident response for AI specifically? Do we need a CAISO, a chief AI security officer? What is that going to look like, Jennifer? And is that going to add a lot of expense to AI deployments?

Raiford: I think it depends on the organization because in some cases, the security group can maybe fold in. In some cases, maybe it will require additional resources, and probably in a lot of cases, depending on the uses that they have, they will require a separate team.

But to my point, at the end of the day, even what they're doing should be going back into the overall enterprise portfolio and looking at it from that holistic perspective. And we talked about risk registers before, so even then, you know, you have those, and you bake that in.

Kelley: Yeah. So, David, there's the belief -- the hope, the fantasy -- that AI is going to be magically secure, fix itself, deploy itself. We're very clear it's not. How do we have that conversation with our C-suite and with our board?

Linthicum: Yeah. I think it's just being blunt and serious about it, like I do. I guess that's why I don't get invited to a lot of parties.

In other words, the AI security stuff and the operational stuff is going to cost you double what it does for the traditional stuff. That's the reality of it. I've done the mathematics and I've done the modeling, and you can't get away from that: the training, the tooling, the process, the SecOps stuff, the MLSecOps you're going to have to stand up, the ability to inject this into your DevOps toolchains, things like that. It's hugely game-changing, but it's hugely expensive. So, you have to be able to walk them through the costs, and they're pushing back on that, which I can understand, because they don't have the budget to do it.

And that's the big thing right now. In other words, we have a security budget, say, $10 million a year for a particular company. I need you to increase that to $20 million a year if we're going to do this stuff in AI and have an actually secure system. And they're just like, 'No way, that's not going to happen.' So, you can do things in an organic way over time -- rob Peter to pay Paul. I'm getting these kinds of things: 'Well, what if we, you know, take a little bit more risk in doing this?' And we accept that that's going to be the case and that the cost of risk is there.

And those aren't necessarily smart decisions people are making. So, I think it's: 'Here's what it costs. Here's what you're going to have to pay for,' whether you're developing it, securing it, things like that. It's extremely expensive. We can all see that, based on the amount of power it takes to build some of these larger models and the money you have to throw in there.

So, you've got to be able to play in the big leagues, or you're going to find yourself out because you're going to make huge mistakes that are going to kill your business. And so, what are you willing to do? You can certainly wait for things to get cheap, which they will. And cloud security got cheap over the last 10 years.

Cheaper, not completely cheap, and the technology's better. And certainly, AI is going to go the same path, but you're going to have to wait 10 years for that to happen as well. So, if you're going to be an early adopter, which is awesome, you're going to have to pay the piper, and that's what the going rate is to get to an infrastructure that's going to have the security, the compliance, the operating excellence capabilities to bring you into the next level.

Kelley: Yeah. Thank you. Jennifer?

Raiford: Yeah, so I just wanted to add onto that -- brilliantly stated -- that if there is any sort of AI-related breach, you'd have to then factor in the reputational damage, regulatory fines, operational downtime. And so that could be significant and different from what traditional breaches look like.

So, I add that to the cost of training, getting people on board, having resources that are able to do all of that. So yeah, the answer is yes, there is a lot of additional costs coming in that are all tied to AI.

Kelley: Yeah. And an excellent point about, you know, the incident response. For anybody that's a fan of the CISA tabletop exercise [TTX] playbooks, they recently did release one that includes AI and an AI attack. So, I strongly recommend, if you're trying to do your next tabletop and you want to bring in AI, you can get a lot of great info or inspiration from that. Um, Jennifer, can I just buy insurance? And that would take all the risk away?

Raiford: No, no. I wish it would, but no. And here's the thing: If you think about AI, yes, AI can do a lot of different things, but it doesn't change that we still have to do the fundamentals the same way. That's what I mean by everything that we know today is still what's required to, you know, protect the company.

That doesn't change with AI, right? So, you still have to have your program, you still have to have your measures in place. You still have to be able to monitor and mitigate those risks. So, insurance is not the way out, and it doesn't erase what's still required for us to properly protect the company and our assets and our data.

Kelley: Yeah. Do you agree, David? Can I just buy insurance and everything's fine? I would.

Linthicum: I would actually like you to buy insurance, because that would get to the cost of risk. Insurance companies are all about risk, and they're going to go, 'Okay, you want insurance? It's going to be $12 million a year,' when you paid a million dollars a year before, because they're looking at your systems and they just see a breach ready to happen.

And so … they're smart individuals. You would see the cost of risk, and you would see the risk being removed; as you start putting more security systems in place, your insurance would go down. So that would be a good metric for me to sell them on the fact that they need a better security posture than they have today. And the insurance companies do not play.

They will charge you whatever they think they're going to charge or else they're going to go out of business. And everything is risk with those guys. And they know how to assess an IT infrastructure, and they know when there's a breach risk that's there. And they will insure it, but they're going to insure it for many more millions of dollars than you think you should be paying.

And that's the reality of it. And I love insurance companies because of that … I always tell people, [if] you want to figure out how much your risk is going to cost, ask an insurance company to come in and work up a quote for you. They're going to quote you exactly what your risk is.

Raiford: I love that, actually. And on the cybersecurity side, you know, insurance companies are actually requiring certain things to even insure a company. So, when you look at it from that standpoint, maybe they might lean toward that with AI, and to your point, that would then require companies, in order to get the insurance, to do the things I'm saying -- to make sure you've got those measures in place. So, I agree with that from that standpoint. And, you know, maybe that's the incentive: In order to get the insurance, do you have an AI program? Do you have certain things in place? Do you have that governance mechanism? Are you doing A, B, C and D? Do you have this baked in here? Can you show me where that is in your plan?

Linthicum: Yeah, absolutely.

Kelley: Yeah.

Raiford: Yeah.

Kelley: So, David, let's think a little bit about privacy. There are a lot of organizations that are pretty excited about internal copilot-type solutions, where maybe they're being RAG-tuned; they have access to everything that's on the hard drive, and now people within the organization can ask this chatbot to answer questions. But there are some privacy concerns that come with that; some researchers have called this the confused pilot problem. Could you talk a little bit about how that might impact privacy and why it's such a challenge in AI?

Linthicum: Yeah, I built recommendation engines over the years using different AI systems, even before generative AI was there, and the amazing thing you find out when building those systems is how much these chatbots and knowledge models can backfill information that's going to be very close to PII and therefore regulated. One of the things I found when I worked on some health systems: We had anonymized information that went into a knowledge model, and we were looking for outcome-based data, based on treatments and their outcomes, things like that. It was great for research, but the system was set up in such a way that it was able to go off and find who that anonymized data was associated with. So it was able to put in the PII [personally identifiable information] directly, which shouldn't have been in there. Suddenly, I had a database -- in this case, a knowledge base -- that was violating privacy rules, and we found out via the chatbots that some researcher had asked it to do that.

So, in other words, someone came to a prompt and said, 'Hey, can you fill in all the information for every patient that you can find on the open internet that's closely aligned to this stuff, so we can get demographic data and other data we weren't allowed to have because it's PII?' And it carried that out.

And so, it's extremely dangerous in terms of the capabilities of these systems. One bank had a system where a poisoned prompt got through their chat engine, and it produced private information -- in other words, account data from other bank customers, which was, yeah, fun. You could go, 'Hey, what does my neighbor have?' and the chatbot would produce the bank account. And that was because some of the open stuff that was there wasn't necessarily protected. So, we have this huge disassociation in understanding what these prompts are and the power they're able to carry.

You've got to remember, prompts are programming interfaces in themselves. They can change the behavior of a knowledge model if they're allowed to do so. That's why we put limitations and governance and security controls in there. Most people who implement some of these chatbots out there don't have that.

They produce private information all the time. They violate the law all the time. One airline chatbot erroneously told somebody about bereavement fares, and the airline was sued and had to pay a bunch of money based on that. That wasn't actually private data, but it was very similar in terms of the breach.

So, it's getting people smart in how you're using this technology and also the knowledge models that they're tying to these things. They're not just cordoned-off knowledge models that people are using; they're the ones that are used by the bank. So, they're using the same AI knowledge processing, inference processing that they use for the business transactions, and guess what? If you have access to a prompt, you can exploit that and get at the information that that particular model has access to. So, lots of fun stuff there. I hear about these things, by the way, you don't hear about 99% of them. I do.

But a lot of these huge mistakes, these blunders being made with AI, are not making the papers.

They hit the papers when they get to the courts, because it's out in the open there, but there are a lot of these things going on where PII is being exposed, private data is being exposed, and things are being exploited that violate privacy each and every day, because we're not putting the security restrictions on top of these systems. And, you know, it's fun to hear about in comical ways, but it wouldn't be fun to hear about if it was my private data.

Kelley: Yeah, and it's such a great point that there's a lot of data leakage going on that does not make the papers. It's not out there, but when you talk to companies, it's happening; it's just not as widely publicized. Jennifer, traditional data leak prevention has been around for a long time. Could we apply it to the outputs from our LLMs or our bots, or are there challenges specific to AI that make that even more difficult than it's been to do DLP [data loss prevention] in traditional environments?

Raiford: When you're talking about preventing AI systems from leaking sensitive information, I think output filtering mechanisms, context-aware sanitization and regular auditing of AI inference processes can minimize inadvertent data leaks. So, again, I think it's better for us to start doing these things now versus saying, 'Oh, we should do that down the road.' The same measures we're already doing, we need to apply to AI. Those are some of the things I think we can do.
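Editor's note: As a rough sketch of the output filtering Jennifer mentions, the example below runs a model's response through simple pattern-based redaction before it reaches the user. The patterns and redaction policy are illustrative assumptions; production DLP for model output typically layers context-aware classifiers on top of rules like these.

```python
# Minimal sketch of output filtering for model responses: redact obvious sensitive
# patterns before text is returned to the user. Patterns are illustrative only.
import re

REDACTION_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED SSN]"),            # US SSN-style
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED CARD/ACCOUNT]"),  # long digit runs
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED EMAIL]"),
]

def sanitize_model_output(text: str) -> tuple[str, int]:
    """Apply redaction rules; return the cleaned text and how many items were removed."""
    hits = 0
    for pattern, replacement in REDACTION_RULES:
        text, n = pattern.subn(replacement, text)
        hits += n
    return text, hits

if __name__ == "__main__":
    raw = "Sure! The card on file is 4111 1111 1111 1111 and the email is jane@example.com."
    clean, removed = sanitize_model_output(raw)
    print(clean)                                  # redacted response shown to the user
    print(f"{removed} sensitive items redacted")  # worth logging for the auditing Jennifer describes
```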

Kelley: And David, you brought up inference and being able to reconstitute data out of information. There's also the issue that a portion of training data -- by some estimates as much as 30% -- gets memorized by models, and even when a model isn't supposed to return that information, sometimes it does. So, given that GDPR includes the right to be forgotten: Can we truly forget in AI?

Linthicum: Not if you've been learned by the AI; unless the system is eliminated and completely erased, you're always going to be part of the system.

And also, you've got to remember: Even if your information isn't used as training data for the AI system -- as I mentioned with the case where the model was able to go out and populate PII it didn't have -- it's smart enough to go back and fill in the blanks if asked to do so. People will push the limits of these prompts. They'll say, 'Okay, based on this account information where no person is tied to it -- they've been forgotten, so to speak -- figure out who this is.' And many times it can figure out who it is, so you won't be forgotten.

And guess what? That's not necessarily just the banks. It can be a data broker. It can be a bunch of bad actors, or a bunch of gray actors who aren't necessarily authorized to see the information. So, the right to be forgotten is a great concept. I just don't think people understand the capability the technology now has, with AI, to undo that.

Because I don't necessarily need your information; I just need to have a sense of you, and I can backfill your information in there, so you won't be forgotten.

Kelley: Yeah. Yeah. Jennifer, any additional thoughts on privacy regulations, especially GDPR and how we might need to adjust in the age of AI?

Raiford: I think I've kind of touched on it in terms of how we need to shift and incorporate it into everything that we're doing -- going back to the data and how we are managing it.

Kelley: … One of the issues that you hear about with AI is, quote, unquote, hallucinations -- i.e., the AI has returned something that is not entirely accurate -- and I'm wondering what your thoughts are on why that is. The word sounds funky: 'It's just a hallucination,' you know? What are the real risks and impacts from inaccurate information? And as we move into more agentic -- meaning autonomous -- AI, with AI controlling other AI and then taking action, how could a hallucination or inaccurate responses be amplified?

Raiford: I think there are a lot of different ways that could be magnified. But one of the things to consider is whether we're monitoring -- making sure we're not just relying on AI and that we actually have some sort of process baked in where you can validate that the information you have is accurate. No different than if we were to sit down and write something: It still needs to have that integrity.

So, that's where I feel like people will always need to be integrated into the process. I will say that part of it is that the technology is growing and learning, and I think it gets better over time as it's corrected. So, I think that's part of it.

Some of it could be intentional, where bad information has been injected and the model's corrupted. So, you could have some of that. But think of the risk of reporting bad information, or the risk of giving a doctor bad information on a patient -- there are a million different ways that could become catastrophic, right? Including things that could cause us to go to war, if you think of it from a national standpoint.

So that's why the integrity of the data is so important, and that's why security needs to be baked in at every layer, because you can't just let it run and not have those data integrity checks and all of that put into it. So, at a high level, yeah, that's where I'm coming from. Yep.

Kelley: Yeah. Thank you. David, any thoughts on hallucinations and inaccurate responses, and how agentic AI may make this even worse?

Linthicum: Yeah, they're incredibly rare with LLMs right now because they have the adversarial checks and the RAG systems.

So, you're normally getting good information out of those, even though they're training themselves on the whole of the internet. No matter what LLM you use, they're always within plus or minus 12% of each other -- I don't know why we have 500 of them, but we do; they all kind of operate the same way. Most of these hallucinations are going to come out of small language models in tactical deployments built by enterprises, and the enterprises aren't going to know they're there, because they haven't put the checks in place to catch some of the errors that come out of these systems. Unlike what you do with the big LLMs, they haven't stopped to do the adversarial checks, where we're looking at the response, comparing the response, querying the response and double-checking the response. Those things are typically not there, so you will have account information that's going to be wrong.
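Editor's note: The 'double-checking the response' David describes can be as simple as a second, independent model pass that audits the first answer against the source records it was supposed to be grounded in, before anything reaches a user or a downstream agent. The sketch below is a hypothetical pattern: call_model stands in for whatever LLM client an organization actually uses, and the verification prompt wording is an assumption.

```python
# Hypothetical sketch of a response self-check: a second model call audits the first
# answer against the source records before it is returned.
# `call_model` is a stand-in for a real LLM client, not a specific library API.

def call_model(prompt: str) -> str:
    """Placeholder for an actual model call (hosted LLM, local SLM, etc.)."""
    raise NotImplementedError("wire this up to your model client")

def answer_with_check(question: str, source_records: str) -> str:
    draft = call_model(
        f"Answer using ONLY these records:\n{source_records}\n\nQuestion: {question}"
    )
    verdict = call_model(
        "You are a verifier. Reply CONSISTENT or INCONSISTENT.\n"
        f"Records:\n{source_records}\n\nAnswer to check:\n{draft}"
    )
    if "INCONSISTENT" in verdict.upper():
        # Fail closed: route to a human or a retry path instead of returning a guess.
        return "This response could not be verified against the source records."
    return draft
```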

Because we do not have good AI engineers building these systems who are able to put some of those checks in place. We do at DeepSeek and we do at OpenAI, but we don't at the corner bank, and those are the people who are going to be running into these issues now. So, it's going to be a big problem as we get further into AI. It's probably an indicator of why we're not seeing the ROI from some of these AI systems and why we're not finding the business cases. I think people are deploying them and saying, 'Okay, this is wrong. Okay, that's wrong too. I thought this was a smart system.' And then they're pulling back and figuring out how to redo it. That's because we don't have the engineering talent. We don't have the AI architects in place -- people who are able to address this -- because most of them work for the big research companies and the big LLM manufacturers, the IBMs and the HPs and things like that.

So, it's going to be a huge issue. It's probably going to be as bad as some of the breaches we see out there, because some of this stuff is not necessarily going to be detected. People are going to get a million-dollar deposit in their account -- 'Ooh, hey, I thought I had $10,000' -- and, you know, withdraw it and head to Mexico, things like that. We're going to hear some of that.

Kelley: Yeah.

Linthicum: Because we just don't have enough protection at these lower, tactical levels to keep these smaller systems from making these huge, sweeping mistakes.

Kelley: Yeah. Thank you. All right. Well, thank you. This has been just a fantastic conversation. I'm really grateful to both of you; there's a lot of fantastic info that you've both shared. But as we're wrapping up, I was wondering, Jennifer, if you could start us off: If the audience can take away one thing, what do you think is the most important thing that anyone -- any company -- can do in the next month to improve their AI risk and resilience without stopping innovation?

Raiford: I would say within the next 30 days, implement robust monitoring of AI input-output activity using anomaly detection tools tailored to your AI systems. I think this is a quick measure that enables real-time detection of unusual behavior, helping to prevent adversarial inputs and output manipulation.
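Editor's note: For readers wondering what monitoring AI input-output activity might look like at its simplest, here is an illustrative sketch: log every prompt/response pair and flag statistical outliers (unusually long prompts) plus basic sensitive-content hits in responses. The thresholds and patterns are assumptions; real deployments would feed these events into a SIEM and use far richer detectors.

```python
# Illustrative sketch of runtime AI input/output monitoring: log each interaction and
# flag simple anomalies. Thresholds and patterns are assumptions, not recommendations.
import re
import statistics
import time

SUSPICIOUS_OUTPUT = re.compile(r"\b\d{3}-\d{2}-\d{4}\b|api[_-]?key", re.IGNORECASE)

class AIMonitor:
    def __init__(self, z_threshold: float = 3.0):
        self.prompt_lengths: list[int] = []
        self.z_threshold = z_threshold
        self.events: list[dict] = []  # in practice, ship these to a SIEM

    def record(self, prompt: str, response: str) -> list[str]:
        alerts = []
        length = len(prompt)
        if len(self.prompt_lengths) >= 30:  # need a baseline before z-scoring
            mean = statistics.mean(self.prompt_lengths)
            stdev = statistics.pstdev(self.prompt_lengths) or 1.0
            if (length - mean) / stdev > self.z_threshold:
                alerts.append("prompt length is a statistical outlier")
        if SUSPICIOUS_OUTPUT.search(response):
            alerts.append("response matched a sensitive-content pattern")
        self.prompt_lengths.append(length)
        self.events.append({"ts": time.time(), "prompt_len": length, "alerts": alerts})
        return alerts

if __name__ == "__main__":
    monitor = AIMonitor()
    for _ in range(40):
        monitor.record("What are today's branch hours?", "We open at 9 a.m.")
    print(monitor.record("x" * 5000, "The SSN on file is 123-45-6789"))
```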

Kelley: Ah, that runtime monitoring. Yeah. Okay. David?

Linthicum: Training, training, training. You do not have the skills in-house to make AI work in the ways I think you want it to work. So, use outside consultants and mentors, and get some smarter people into the organization to help transfer knowledge so you can start building and deploying these systems in secure, safe ways. I think that's the big thing out there, and I think that's where people are running into the problems: They're taking whoever's in-house as a staff member, pivoting them to AI and kind of hoping for the best.

And I think at the end of the day, this stuff is so complex and so far-reaching that you're going to need to get some outside help. I'm not saying hire the big six consulting firms, anything else like that, but it's: Get some people in there that can actually move you in the right directions, if you truly are moving in the right directions.

And the other thing would be: Get real about what this stuff costs. Don't try to do this on the cheap -- that's the big thing right now. Realize that you're going to spend double the amount of money to build these systems, and sometimes a lot more than that. You have to allocate the cost and understand that you're going to have to buy once, cry once.

Kelley: Yeah. And a great point, too, that with the education and bringing the experts in, they're the ones who can also help you really understand what the true costs are. So, a double win. All right. Thank you -- really, I appreciate that this has been a FUD-free real talk, no magical sprinkles, about AI and what you can do. I am so grateful, Jennifer, for your being here and sharing all of your knowledge. The same to you, David; I really appreciate your sharing this with the audience and helping us break it down in an easy-to-understand way. And, of course, thank you to everybody who attended this, the 65th episode of the Security Balancing Act.

We are so grateful that you were here, and if you click through the attachments button in your player, you can register for the next episode in our series, which is on 'Continuous compliance: Is this hype or hero?' We'd also love to hear your feedback about today's episode and suggestions for topics you're most interested in us covering in the future.

So again, David, Jennifer, thank you so much, and thank you, everybody, for joining us on the Security Balancing Act. We'll see you at the next one. Bye-bye.
