
From security to trust: How AI is transforming the CISO's job

Modern security officers must manage AI risks, safeguard enterprise data and ensure AI systems operate securely, expanding their role beyond traditional cybersecurity.

AI is rapidly redefining what it means to lead cybersecurity in the enterprise. Historically, CISOs focused on implementing security measures such as firewalls, access controls, audits and incident response, with the primary goal of safeguarding an organization's digital assets against internal and external threats.

As AI becomes embedded into core business systems, it's expanding the scope of what constitutes a security incident. Failures in AI models, whether through manipulation, data leakage, misuse or unexpected behavior, can expose sensitive information, disrupt operations and erode customer trust just as quickly as traditional cyberattacks. CISOs are increasingly accountable for understanding where AI is used, what data it touches and how it's governed.

"All of a sudden, CISOs are responsible for securing AI across the enterprise, which is a massive undertaking because of all the places AI can exist," said Alex Lanstein, CTO of StrikeReady, an AI-powered security command center. "You need to understand where AI is being used, what data is getting shared, where that data lives and who is using unapproved tools."

This shift in the scope of responsibility is something many security leaders are experiencing firsthand. "I'm increasingly responsible for AI-related information security risks and handling sensitive data," said Aaron Weismann, CISO at Main Line Health, a nonprofit health system.

Industry data underscores how widespread this change is. A HackerOne report surveying more than 400 CISOs across 13 industries found that 84% manage AI security and a third test their AI offensively. For security leaders, this can also mean moving beyond traditional IT oversight to actively shaping how AI is deployed and monitored in the organization.

"The CISO role has expanded from securing infrastructure, products and data to also governing and assuring AI use across the company," said Pritesh Parekh, vice president and CISO at PagerDuty, a SaaS-based digital operations management platform. "I now partner with product and machine learning teams to ensure model integrity, guard against data poisoning, adversarial inputs and drift, while making sure AI-driven outcomes meet our security, privacy, safety and compliance standards."

For CISOs, this marks a broader shift in how digital trust is established: It now depends not only on secure infrastructure but also on the reliability and resilience of AI systems embedded across the business.

How AI is rewriting the CISO's job

In the past, the CISO's mandate centered on protecting data, securing infrastructure and ensuring compliance. In that context, digital trust meant strong security controls. If systems were secure and compliant, organizations could assume a baseline level of trust.

AI fundamentally alters that equation. Enterprises rely on AI systems to process sensitive data, generate outputs used in decision-making, and connect to a growing network of third-party tools and models. When those systems behave unpredictably, rely on opaque vendors or expose sensitive data in new ways, traditional security assurances are no longer sufficient. Security remains the CISO's core mission, but the scope of what must be secured has expanded to include AI-enabled systems and their downstream risks.

AI hasn't replaced traditional cybersecurity responsibilities, but it has significantly broadened them, said Alan Osborne, executive vice president and group CISO at Paysafe, a global online payments company.


"AI has woven itself into day-to-day work across the organization," Osborne said. "My focus is less on the performance of individual tools and much more on how these capabilities are used, governed and controlled across the environment." Understanding how data flows into and out of AI systems and ensuring sensitive information isn't inadvertently exposed has become a growing priority as employees experiment with new tools, he added.

For many CISOs, this scope expansion manifests in daily risk assessments. "The risk my team and I own is the risk to information," Main Line Health's Weismann said. "A few questions I constantly ask: Are we feeding AI with the correct information? Are we inadvertently exfiltrating data? Is the AI providing the responses we expect or hallucinating? Are we concerned about model poisoning for internal applications or external SaaS leveraging AI?"

These questions highlight how security oversight is evolving. Rather than focusing on defending networks and endpoints, CISOs are increasingly being pulled into decisions about how models are integrated, how training and inference data is protected, how AI vendors are vetted and how outputs are monitored for potential misuse or data leakage.

AI is also challenging longstanding security assumptions. Traditional IT systems are largely deterministic: Given the same inputs, they produce the same outputs. AI systems are probabilistic and constantly evolving, often relying on external models, APIs and data pipelines. Their behavior can shift over time, making them harder to monitor and secure using legacy controls. This dynamic forces security teams to rethink how they assess vulnerabilities, monitor activity and manage third-party risk.

"The CISOs will increasingly become responsible for AI system assurance end-to-end," said PagerDuty's Parekh, noting that this shift requires building new capabilities and teams with expertise spanning cybersecurity, AI and machine learning (ML). The focus is moving beyond defending against known threats toward building resilience against emerging AI-driven risks, including adaptive defenses and recovery strategies for when systems behave unpredictably, he added.

As a result, CISOs are taking a broader role in evaluating AI systems through a security lens -- identifying where they could be manipulated, where sensitive data could be exposed and where reliance on external tools could introduce systemic risk. All this requires close coordination with risk, compliance and AI leaders to ensure AI adoption scales without introducing new vulnerabilities.

The broader scope of the CISO role

Many security leaders now work closely with data science teams, compliance leaders and product owners as AI tools spread across the enterprise. Shared objectives drive this collaboration, but the players view the issues through different risk lenses. Security teams focus on ensuring AI is deployed safely, with appropriate controls and visibility, especially as employees experiment with unsanctioned or shadow AI tools.

"Everyone has responsibility for appropriate data use," Weismann said. "Where I'm primarily concerned with data exfiltration and poisoning, legal and compliance focus on regulatory compliance and permissions, data science on modeling accuracy, and product on adoption and efficiency." The challenge is defining responsible use in a way that works across teams while meeting strict data management requirements, he added.

Securing AI involves more than setting policies; it requires a deep understanding of how the technology works. CISOs and their teams need a working knowledge of how AI and ML models are built, trained and deployed to assess potential vulnerabilities. They also need to know about data-handling risks and third-party dependencies. They must be able to translate these risks for their boards of directors and guide mitigation efforts. While other teams handle ethics and the effect on customers, CISOs focus on keeping AI systems secure, safeguarding sensitive data and addressing potential vulnerabilities.

Understanding AI also helps security teams anticipate how attackers might try to exploit these systems. AI is already enabling sophisticated threats, such as AI-enhanced phishing campaigns, convincing deepfakes used for fraud and impersonation, automated vulnerability discovery and AI-optimized ransomware attacks.

Paysafe's Osborne said one of his biggest concerns is how AI amplifies identity-based threats. "AI is increasingly enabling highly targeted, campaign-of-one, social engineering attacks as well as synthetic identity fraud and deepfake attacks that could bypass [know-your-customer] controls or impersonate executives," he explained.

These evolving threats highlight why security teams must understand not only identity-based attacks but also the broader ways AI can be weaponized.

"Security teams also need to understand how threat actors are using AI," Weismann said. "That includes evaluating AI-generated malware and recognizing how large language models can be used to identify potential vulnerabilities within a corporate perimeter." 

The modern CISO's responsibilities typically include:

  • Evaluating AI vendors and tools for security controls, data-handling practices and potential vulnerabilities.
  • Managing risks from shadow AI and unsanctioned use of external tools that could expose company or customer data.
  • Setting guardrails for internal AI use, including policies on data sharing, access control and acceptable use.
  • Enabling fast adoption for low-risk AI scenarios while applying rigorous evaluation to high-risk systems.
  • Ensuring alignment with emerging AI frameworks and regulations where they intersect with security and data protection, such as the NIST AI Risk Management Framework, ISO/IEC 42001 and the EU AI Act.
  • Developing incident-response playbooks for AI-related failures, misuse or data exposure.
  • Managing AI supply chain and third-party risk by assessing the security practices of vendors, APIs and cloud providers to prevent data exposure or vulnerabilities.
  • Securing AI agents and autonomous systems when they're integrated into enterprise environments.
  • Continuously monitoring AI systems in production for drift, misuse or signs of compromise.
  • Participating in cross-functional AI governance or risk councils to provide the security perspective.
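One of the guardrail responsibilities above, controlling what data employees share with external AI tools, can be sketched as a simple pre-send policy check. This is a minimal illustration, not a production DLP system; the pattern names and the `check_prompt` helper are hypothetical.

```python
import re

# Illustrative patterns only; a real data-sharing policy would cover
# far more categories (PHI, customer records, source code, etc.).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt
    before it is sent to an external AI tool."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def guard(prompt: str) -> str:
    """Raise if the prompt violates policy; otherwise pass it through."""
    hits = check_prompt(prompt)
    if hits:
        raise PermissionError(f"Prompt blocked, matched: {hits}")
    return prompt
```

In practice this kind of filter would sit at a proxy or browser-extension layer, since, as noted earlier in the article, much AI traffic flows through browsers where legacy DLP tooling has limited visibility.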

These responsibilities reflect how deeply AI security is embedded in the CISO's day-to-day work. What was once primarily an infrastructure-focused role now spans vendor evaluation, policy design, technical validation and executive risk translation, placing CISOs at the core of enterprise AI adoption.

Challenges CISOs face in this new role

As CISOs take on greater responsibility for AI security, they face new complexities because traditional IT tools, metrics and governance models don't always apply to AI-driven systems.

The following are key challenges CISOs deal with in this evolving role:

  • Lack of visibility into AI systems. Many AI tools, especially those built on third-party or foundation models, operate as black boxes. CISOs are expected to understand how these systems interact with enterprise data and infrastructure, what access they require and where they could introduce vulnerabilities, even when vendors provide limited transparency.
  • Data vulnerability and exposure risks. AI systems depend on large volumes of data and can inadvertently expose sensitive information through prompts, outputs, integrations and logging. Ensuring that proprietary or regulated data isn't leaked, improperly retained or used to train external models has become a central concern for security teams. "Many AI systems run through browsers or legitimate services like Google, which means traditional data-loss prevention tools don't always detect when sensitive information is being shared," StrikeReady's Lanstein said. This creates new blind spots CISOs must identify and address to protect enterprise data, he added.
  • Unclear accountability structures. AI blurs lines of responsibility across security, IT, product, legal and business teams. When an AI system fails, is misused or exposes data, it's not always clear who owns the response. CISOs are often involved in defining escalation paths and responsibilities when incidents have security implications.
  • Skills and knowledge gaps. Security teams often lack deep expertise in AI systems, and data science and product teams might not fully understand security requirements. CISOs must bridge this divide -- building AI literacy within security teams while embedding security principles into AI development workflows. CISOs should focus on integrating security early and ensuring systems operate safely and reliably, while leaving technical implementation details to data science and engineering teams, said Peter Hawes, vice president of the security advisory team at LevelBlue, a managed security service provider.
  • Difficulty in measuring AI-related risk. Unlike traditional controls, the risks introduced by AI systems are harder to quantify. CISOs are still determining how to track some issues, such as misuse, drift and unexpected outputs, in ways that align with existing security metrics. As a result, CISOs often focus on what can be measured today, such as data exposure, access controls and incident response readiness, while developing new ways to anticipate and mitigate risks that aren't fully visible.
  • Resource and tooling constraints. Securing AI deployments often requires new processes, technologies and expertise, from model monitoring tools to updated vendor risk assessments. Yet many CISOs are expected to address these challenges without additional budget or headcount, even as expectations from boards and regulators grow.
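The measurement challenge above can be made concrete with a minimal drift check: track a production metric from an AI system (here, a hypothetical refusal rate) against a baseline window and alert when it deviates. The `DriftMonitor` class and the z-score threshold are illustrative assumptions, not a standard method from any particular tool.

```python
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Tracks a rolling metric from an AI system (e.g., refusal rate or
    average output length) and flags when it drifts from a baseline."""

    def __init__(self, baseline: list[float], window: int = 50, z_threshold: float = 3.0):
        self.mu = mean(baseline)          # expected value from a known-good period
        self.sigma = stdev(baseline)      # expected variation
        self.recent: deque[float] = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record a new observation; return True if the rolling mean
        has drifted beyond the z-score threshold."""
        self.recent.append(value)
        z = abs(mean(self.recent) - self.mu) / (self.sigma or 1e-9)
        return z > self.z_threshold
```

Even a crude signal like this gives security teams something auditable to report alongside traditional metrics while more sophisticated AI observability tooling matures.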

These pressures are reshaping how security leaders operate. The modern CISO is no longer just defending infrastructure. They're also navigating uncertainty and reducing data risk in AI-enabled environments where visibility, ownership and measurement remain in flux.

The CISO's future in an AI-driven enterprise

As AI becomes more deeply embedded in enterprise systems, the CISO's role will continue to evolve. Over the next three to five years, security leaders are likely to spend more time on the assurance and protection of AI-enabled systems. They'll focus not just on infrastructure security but on how AI-driven processes behave, access data and influence business decisions.

However, that evolution isn't a departure from the CISO's core mission. Osborne sees the role becoming more interdisciplinary rather than fundamentally redefined. "The evolution is less about replacing traditional cybersecurity responsibilities and more about expanding the scope to reflect the growing influence of AI," he said.

One emerging area is agentic AI systems. These autonomous agents often gain access to data and systems automatically, creating visibility and control challenges that traditional security models weren't designed to handle.

"In many organizations, these AI agents are already operating with little awareness from security teams, creating a new class of unmanaged risk," Hawes said. As a result, CISOs must look more closely at how AI makes decisions, what level of access it has and how that access is governed responsibly. The role is shifting from securing technology itself to securing the behavior of intelligent systems acting on behalf of humans, Hawes added.
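Governing what an autonomous agent may touch can start with an explicit allow-list enforced at the tool-call boundary, with every decision logged for audit. The `AgentPolicy` class and the tool names below are hypothetical, a sketch of the least-privilege idea rather than any vendor's implementation.

```python
class AgentPolicy:
    """Least-privilege gate for an AI agent's tool calls: every action
    must be explicitly granted, and every decision is logged for audit."""

    def __init__(self, agent_id: str, allowed_tools: set[str]):
        self.agent_id = agent_id
        self.allowed_tools = allowed_tools
        # (agent, tool, allowed) tuples for later security review
        self.audit_log: list[tuple[str, str, bool]] = []

    def authorize(self, tool: str) -> bool:
        """Check a requested tool call against the allow-list and record it."""
        allowed = tool in self.allowed_tools
        self.audit_log.append((self.agent_id, tool, allowed))
        return allowed

# Example: an agent that may read and comment on tickets, nothing more.
policy = AgentPolicy("ticket-triage-agent", {"read_ticket", "add_comment"})
```

The design choice, deny by default and log everything, mirrors how CISOs already govern service accounts, which is why agent access control is a natural extension of the existing security mandate.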

In practice, this might mean the CISO has more involvement in monitoring the security and risk posture of AI systems in production, looking for anomalies that could signal misuse, compromise or data exposure. It might also mean CISOs ensure organizations build security controls into AI deployments from the outset. Some companies are already exploring dedicated roles such as AI risk officers and model auditors, often working in partnership with security teams or within broader risk functions. How responsibilities are divided will likely vary by organization, particularly as chief AI officer roles become more common.

Expectations from boards and regulators are shifting as well. Leadership increasingly wants clarity into how AI-related risks intersect with cybersecurity and data protection, signaling stronger requirements for auditability and oversight of AI systems. Security leaders are frequently called upon to support those efforts wherever AI risk touches data integrity, system resilience or incident response.

Parekh summarized the evolving expectation: "Digital trust has become a clear board-level priority," he said. "Boards now expect CISOs to serve as strategic advisors on AI risk, and the conversation is shifting from 'Are we secure?' to 'Can we trust our AI systems to make decisions on our behalf?'"

In this environment, responsibility for AI risk and governance remains distributed across the enterprise, but the CISO's mandate is clear: safeguard systems and data while ensuring AI is deployed safely and responsibly. Organizations that define roles early and integrate security into AI strategy from the outset will navigate this transformation most effectively.

Kinza Yasar is a technical writer for Informa TechTarget's AI and Emerging Tech group and has a background in computer networking.
