
News brief: Security worries and warnings as AI use expands

Check out the latest security news from TechTarget SearchSecurity's sister sites, Cybersecurity Dive and Dark Reading.

"We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems -- because the risks are real," warned Dr. Seán Ó hÉigeartaigh, executive director of Cambridge University's Centre for the Study of Existential Risk and co-author of the report, "Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation."

This week's featured news is accordingly both encouraging and disquieting: AI experts urged caution, and policymakers took steps to set up guardrails against the myriad risks of unchecked adoption of this powerful technology.

While White House representatives sought more information on how major tech firms are using AI for cybersecurity, international thought leaders called attention to the hazards agentic AI systems pose to national defense and critical infrastructure. The concerns are warranted: a Zoho study found that 90% of surveyed organizations believe AI will strengthen cybersecurity, yet 80% report that their tech stacks cannot handle modern threats. That gap is fertile ground for the safeguards NIST and industry partners are exploring as they work to develop standardized testing methods for AI models.

The latest news suggests that after years of hype about the great promise of AI, followed by widespread adoption, more prudent voices are being heard as the pitfalls of impulsive AI use come to light.

Governments issue AI agent safety warning

A document released by CISA, the NSA, the Australian Signals Directorate and international partners from the U.K., Canada and New Zealand urged "careful adoption" of agentic AI systems, addressing growing cybersecurity risks as key infrastructure and defense sectors increasingly deploy AI agents for mission-critical operations. Concerns noted include expanded attack surfaces, privilege creep, behavioral misalignment and obscured event records. The guidance strongly recommends organizations avoid granting AI agents broad or unrestricted access to sensitive data or critical systems.

Read the full article by Eric Geller on Cybersecurity Dive.

White House queries tech giants on AI cybersecurity

The White House Office of the National Cyber Director has reached out to major tech companies with questions covering AI, cybersecurity, information sharing and federal collaboration opportunities. The outreach reflects the administration's focus on strengthening cybersecurity partnerships as AI adoption accelerates across critical sectors, and it seeks industry expertise to shape effective government support mechanisms. While the correspondence emphasized proactive engagement with frontier AI labs to address the challenges of scaling AI technology safely, some companies have been hesitant to share their sensitive information.

Read the full article by Eric Geller on Cybersecurity Dive.

AI security confidence outpaces readiness, study finds

Businesses are rushing to adopt AI for cybersecurity but remain vulnerable due to critical gaps in zero-trust implementation and identity controls, according to Zoho's "State of Workforce Password Security Report 2026."

The global survey reveals a stark mismatch between confidence and capability. While 90% of organizations believe AI will enhance security measures, only 8% are currently equipped to deploy AI-powered security tools. The report highlighted several barriers slowing AI adoption, including legacy systems, migration complexity concerns and budget limitations.

Read the full article by Eric Geller on Cybersecurity Dive.

U.S. government to pre-screen AI models from tech giants

To assess cybersecurity threats, NIST's Center for AI Standards and Innovation will evaluate frontier AI models from Google, Microsoft and xAI before public release, a proactive effort by the U.S. government to address security risks from advanced AI systems. The partnerships enable information exchange, voluntary improvements and cross-agency testing, including in classified environments.

The evaluations represent a policy shift for the Trump administration, which previously eliminated AI security reviews but reconsidered after Anthropic deemed one of its Claude models too dangerous to release because of its vulnerability-finding capabilities. Questions remain about CAISI's testing standards and threat assessment criteria.

Read the full article by Eric Geller on Cybersecurity Dive.

Editor's note: An editor used AI tools to aid in the generation of this news brief. Our expert editors always review and edit content before publishing.

Richard Livingston is an editor with Informa TechTarget’s SearchSecurity site, covering cybersecurity news, trends and analysis.
