
putilov_denis - stock.adobe.com
Experts weigh in on securing AI effectively
Using AI comes with security risks. Learn what the top attack vectors and privacy threats are, then discover how to mitigate them through proper strategy, monitoring and more.
AI and business ops have collided, and the impact has raised unprecedented security challenges. Securing AI systems is an urgent priority now across industries; traditional cybersecurity approaches simply aren't sufficient.
On a recent episode of The Security Balancing Act, host Diana Kelley sat down with two AI security experts -- Jennifer Raiford, executive vice president and CISO for Globe-Sec Advisory and David Linthicum, founder and lead researcher at Linthicum Research -- to talk about the unique security risks of AI systems and how organizations can use AI securely.
Traditional security can't secure AI systems
Traditional methods for securing systems simply do not work with AI, which brings with it unique vulnerabilities. But Raiford, Kelley, and Linthicum suggest methods for overcoming them, including adoption of machine learning security operations (MLSecOps), which integrates security throughout the AI development lifecycle. Specifically, they advise employing an MLSecOps framework that integrates security checkpoints at each development phase. In addition, create a dedicated AI security team -- one trained to understand AI-specific security problems and how to avoid or at least mitigate them.
"No longer is security an afterthought," Linthicum stated. "It has to be baked into the architecture development of the models, development of the training, data development of the inference engines."
Where is AI most insecure?
In this session, Raiford and Linthicum discussed the ways AI systems can create unique insecurities. Data poisoning is a key one. This, said Raiford, is when actors inject "malicious data during the training to corrupt the model behavior," making the outputs from the model untrustworthy.
The experts in this session promoted the solution of implementing rigorous data integrity checks for all AI training data sets, such as provenance tracking and integrity verification. They also proposed developing, and regularly testing, controls that work against AI-specific attacks like prompt injection and model manipulation.
While not unique to AI, privacy issues were another concern the three security experts discussed in depth. "If you have access to a prompt," for example, said Linthicum, "you can exploit that and get the [personally identifiable] information that that particular model has access to." AI-oriented privacy impact assessments are essential. On that note, the panel suggested implementing stronger data minimization practices and other privacy techniques when sensitive data is part of AI model training.
How to do AI right
A rush to get AI projects launched before thinking through the security ramification was another focal point of this discussion. This is the key reason, Raiford and Linthicum agreed, why AI projects fail. Linthicum noted the widely cited statistic, from a McKinsey report, that 80% of AI projects implemented fail to show the expected ROI. Linthicum blamed lack of strategic planning and use of quality data. Raiford agreed, noting she's seen clients who " have leaned in, and now they have either realized they moved too fast, or they are literally asking the question, 'How do I do this right?'"
The discussion moved on to what organizations wanting to exploit the benefits of AI should do to make sure their projects are secure. A clear AI strategy is a first step, but it must include an AI governance framework that considers how risks will be managed. Security monitoring controls need to be implemented too.
To learn more details -- about the dangers of AI and the best route to managing them -- watch the full episode of The Security Balancing Act. Or read the transcript here.
Editor's note: An editor used AI tools to aid in the generation of this article. Our expert editors always review and edit content before publishing.
Brenda Horrigan is executive managing editor for Informa TechTarget's Editorial Programs and Execution team.