
News brief: Safeguards emerge to address security for AI
Check out the latest security news from the Informa TechTarget team.
Enterprise adoption of AI and machine learning tools is growing by the second. CISOs, security teams and federal agencies worldwide must move quickly to secure AI tools and determine how best to keep AI models and business-critical data safe.
Agentic AI has become a major security pain point, too often handing out the keys to the kingdom, as evidenced by a zero-click exploit demonstrated at Black Hat USA 2025 that requires only a user's email address to take over an AI agent.
Meanwhile, application developers are adopting vibe coding -- using AI tools to assist with code generation -- to speed up development, yet they don't always fully understand its effects on security. According to Veracode's "2025 GenAI Code Security Report," AI-generated code introduced security vulnerabilities in 45% of tested tasks.
This week's featured articles focus on identifying methodologies to improve security for AI tools and better protect data through responsible AI at the federal and enterprise levels.
NIST seeks public input on how to secure AI systems
NIST outlined plans to develop security control overlays for AI systems based on its Special Publication 800-53: Security and Privacy Controls for Information Systems and Organizations. The federal agency created a Slack channel for community feedback on the development process.
The initiative aims to help organizations implement AI while maintaining data integrity and confidentiality across five use cases:
- Adapting and using generative AI -- assistant/large language model (LLM).
- Using and fine-tuning predictive AI.
- Using AI agent systems -- single agent.
- Using AI agent systems -- multiagent.
- Security controls for AI developers.
The guidance addresses growing concerns about AI security vulnerabilities. For example, researchers at Black Hat USA 2025 this month demonstrated how malicious hackers can weaponize AI agents and use LLMs to launch cyberattacks autonomously.
Business execs eye responsible AI to reduce risks, drive growth
A report from IT consulting firm Infosys found that companies are turning to responsible AI use to mitigate risks and encourage business growth.
In a survey of 1,500 senior executives, 95% said they experienced at least one "problematic incident" related to enterprise AI use, with average reported losses of $800,000 due to these incidents over a two-year span.
Still, more than three-quarters of respondents said AI will result in positive business outcomes, though executives acknowledged underinvesting in responsible AI by about 30%.
While definitions of responsible AI differ across organizations, the practices generally include incorporating fairness, transparency, accountability, privacy and security into AI governance efforts.
Read the full story by Lindsey Wilkinson on Cybersecurity Dive.
AI-assisted coding: Balancing innovation with security
Vibe coding is in vogue right now for both legitimate and malicious development. Industry experts, such as Danny Allan, CTO at application security vendor Snyk, have confirmed widespread adoption of AI coding tools across development teams. "I have not talked to a customer that's not using AI coding tools," he said.
Organizations that permit AI-assisted code generation must consider how to do so securely. Experts shared the following key steps to mitigate vibe coding security risks (a sketch of one such guardrail follows the list):
- Keep humans involved to verify that generated code is secure. AI isn't ready to take over coding independently.
- Implement security from inception using specialized tools. Being able to code faster isn't useful if the code generated has vulnerabilities.
- Account for AI's unpredictability by training models on secure code generation and using guardrails to keep AI-assisted code from creating weaknesses.
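To make the guardrail point concrete, here is a minimal sketch of a pre-merge check that scans AI-generated code with Bandit, an open source Python security linter. The workflow is an illustration, not a method prescribed by the experts quoted above: it assumes Bandit is installed (pip install bandit), that AI-assisted changes sit under a directory passed on the command line, and that medium- and high-severity findings should block a merge -- a threshold each team would set for itself.

```python
"""Illustrative pre-merge guardrail: scan AI-generated code with Bandit.

Assumptions (not from the article): Bandit is installed, and the
AI-assisted changes live under the directory given as the first argument.
"""
import json
import subprocess
import sys

# Severities that should block the merge -- an illustrative choice.
BLOCKING = {"MEDIUM", "HIGH"}


def scan(path: str) -> list[dict]:
    # Bandit exits nonzero when it finds issues, so don't treat that as a
    # crash; parse the JSON report it writes to stdout instead.
    proc = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(proc.stdout or "{}")
    return report.get("results", [])


def main() -> int:
    path = sys.argv[1] if len(sys.argv) > 1 else "."
    findings = [r for r in scan(path) if r["issue_severity"] in BLOCKING]
    for finding in findings:
        print(
            f'{finding["filename"]}:{finding["line_number"]}: '
            f'[{finding["issue_severity"]}] {finding["issue_text"]}'
        )
    # A nonzero exit fails the CI check; a human still reviews the diff,
    # keeping a person in the loop per the first step above.
    return 1 if findings else 0


if __name__ == "__main__":
    sys.exit(main())
```

Run in CI as, for example, `python check_ai_code.py src/`; the point is that generated code passes through an automated security gate and a human review before it ships, rather than going straight from prompt to production.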
Read the full story by Alexander Culafi on Dark Reading.
Editor's note: An editor used AI tools to aid in the generation of this news brief. Our expert editors always review and edit content before publishing.
Kyle Johnson is technology editor for Informa TechTarget's SearchSecurity site.