Identity and data security themes at Black Hat 2025

Read about the identity and data security happenings at Black Hat 2025, including advancements that enable AI adoption and products that help prepare for a post-quantum world.

If you were one of the 20,000 attendees at Black Hat 2025 in the 103-degree heat of Las Vegas, I hope you've recovered. For those of you who could not attend or want to get a perspective on the identity security and data security aspects of Black Hat, let's dive in.

Security for agentic AI

The unquestionable winner of the Black Hat 2025 buzzword bingo is [drumroll please]: agentic AI.

While nearly every vendor used AI to improve existing security tooling and operations, many vendors also placed an emphasis on security for AI.

Enterprises are currently embracing agentic AI, but much of the activity is within a vendor walled garden. Enterprises are adopting Salesforce Agentforce agents or Microsoft Security Copilot agents to streamline their work within these platforms. That's a great first step down the agentic AI path that can provide an immediate impact.

A bigger opportunity for agentic AI lies in connecting AI agents to core enterprise systems and data stores to streamline operations and reduce costs, or to create new revenue streams. It's still early days as enterprises work with Model Context Protocol (MCP) and Google's Agent2Agent (A2A) protocol and wrestle with security challenges such as fine-grained authorization, customer data privacy and data loss prevention. Okta has made big strides in identity standards for agentic AI with its Cross App Access extension to OAuth, which helps manage the growing complexity of autonomous AI agents.
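To make the fine-grained authorization challenge concrete, here is a minimal sketch of one way an enterprise might mint a short-lived, narrowly scoped token for an AI agent using standard OAuth 2.0 token exchange (RFC 8693). It is an illustration only: the identity provider endpoint, client credentials and scope names are hypothetical placeholders, and it does not implement Okta's Cross App Access specification.

```python
# Minimal sketch: requesting a narrowly scoped token for an AI agent via
# OAuth 2.0 token exchange (RFC 8693). The endpoint, client credentials and
# scope names below are hypothetical placeholders, not any vendor's API.
import requests

TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token"  # hypothetical IdP

def exchange_for_agent_token(user_access_token: str) -> str:
    """Trade a user's token for a short-lived, down-scoped agent token."""
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": user_access_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
            # Restrict the agent to read-only access on one system.
            "scope": "crm.records.read",
            "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        },
        auth=("agent-client-id", "agent-client-secret"),  # hypothetical client
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

The point of the pattern is that the agent never inherits the user's full privileges; it receives only the scope that policy allows, for a limited time.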

Black Hat saw a slew of agentic AI innovations. Token Security and Oasis Security are providing visibility and remediation for nonhuman identities, and Descope announced an identity control plane with policy-based governance, auditing and identity management for AI agents. As enterprises move beyond the vendor walled garden to touch core enterprise applications and sensitive data, such products will be essential for gaining visibility and for deploying AI agents in a secure, well-managed way that avoids breaches and fraud and maintains compliance.

Identity verification, deepfakes

In April 2025, Mandiant reported that North Korean threat actors were impersonating U.S. IT workers. Subsequent news coverage and judicial actions have focused CISO attention on workforce identity verification (IDV). The two prominent use cases for IDV are fraudulent job candidates -- a new candidate unknown to the organization -- and unauthenticated contact with the service desk, where social engineering or deepfakes are used to obtain credentials.

What is interesting about these use cases is that they land in different parts of the organization. A fraudulent job candidate is an unprovisioned user who is not yet present in HR systems, while an unauthenticated service desk contact is an internal IT security issue involving a user who already exists in HR and identity and access management systems. Products that address the fraudulent candidate use case typically need to integrate with applicant tracking systems, and an HR team drives or participates in the product decision. The service desk use case involves existing identity security systems, with the CISO as the key decision-maker.

Deepfake detection in meetings is another emerging problem. Adversaries use deepfake audio or video within meetings on platforms such as Cisco Webex, Google Meet, Microsoft Teams or Zoom to target employees. IDV products analyze the audio or video stream and alert participants to potential impostors. The use case made headlines when a finance worker paid out $25 million to fraudsters after a deepfake CFO requested a wire transfer. Solutions to this problem are still proving themselves out as vendors work through issues such as scalability and avoiding disruption to meetings.

Some vendors solving the IDV problem include 1Kosmos, iProov, Nametag, Persona Identities and Ping Identity. Vendors solving the deepfake problem include Beyond Identity, GetReal Security and Reality Defender.

Data security for AI: DSPM, DLP and data security governance

AI in general and agentic AI in particular create increased security risks. You don't have to look far to go from the hypothetical to actual incidents -- just consider Asana's MCP AI feature that exposed customer data and McDonald's AI hiring bot that exposed applicant data.

Data security vendors recognize the need to secure the various layers of generative AI (GenAI). Organizations need to do the following:

  • Use the right data to inform the GenAI infrastructure -- data security posture management (DSPM).
  • Make sure data does not leak out of the enterprise -- data loss prevention (DLP); a minimal sketch of this kind of check follows this list.
  • Safeguard data against internal leaks -- insider risk management.
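As a rough illustration of the DLP layer above, the sketch below scans an outbound GenAI prompt for sensitive patterns before it leaves the enterprise. The regex patterns and the blocking decision are simplified assumptions; production DLP relies on far richer classification, including the small language model approaches discussed later.

```python
# Minimal sketch of a DLP-style check on outbound GenAI prompts. The regex
# patterns are simplified assumptions; production DLP uses far richer
# classification than simple pattern matching.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    findings = scan_prompt("Summarize the account for SSN 123-45-6789")
    if findings:
        print(f"Blocked: prompt contains {', '.join(findings)}")  # -> ssn
```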

DLP research from Enterprise Strategy Group, now part of Omdia, found that GenAI applications and cloud storage and file sharing were the top two data loss vectors over the past 12 months.

Data security vendors made many announcements at Black Hat. For example, Cyera is building on its DSPM heritage and expanding its data security platform to include AI security, which involves providing an inventory of all AI assets, as well as monitoring and responding to AI data risks in real time. Concentric AI is taking a different approach, extending beyond its DSPM roots to provide data security governance. There was plenty of Black Hat activity from other DSPM products and vendors, including Bedrock Security, BigID, Microsoft, Rubrik, Netskope, Securiti, Sentra, Privacera and Zscaler.

Enterprise lines of business are embracing AI to gain a competitive advantage but struggle to avoid inadvertent data leakage. Some have halted GenAI initiatives until they can adequately secure sensitive data. Security teams want to say yes to business AI initiatives but have struggled to apply existing DLP products to secure sensitive data in AI apps.

Some new players, such as Harmonic Security, have zeroed in on the DLP-for-GenAI problem with innovative approaches, including the use of small language models to reduce latency and increase precision, which cuts down on false positives. Startups such as Mind are tackling the alert noise problem by applying GenAI to change the game around DLP for unstructured data. Enterprise Strategy Group research found that 62% of enterprises intend to deploy a new DLP tool for a new use case, and these are the sorts of products they have in mind.

Certificate lifecycle management and post-quantum computing

Progress in certificate lifecycle management (CLM) and preparation for post-quantum cryptography (PQC) is often overlooked but is beginning to gain attention.

Enterprises need to prepare their encryption for quantum computers capable of weakening or breaking the conventional asymmetric cryptography in use today. The first step down this path is inventorying cryptographic assets and improving crypto-agility.

Crypto-agility refers to the ability to rapidly adapt cryptographic algorithms and practices without significantly disrupting the overall compute infrastructure. It enables organizations to switch between algorithms and protocols, update cryptographic components, implement new security standards and prepare for PQC challenges. CLM products are a key building block of crypto-agility.
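To illustrate the idea, here is a minimal sketch of what crypto-agility can look like in code: signature algorithms sit behind a small, configuration-driven registry so that a deployment can switch algorithms -- eventually to a NIST PQC scheme such as ML-DSA -- without touching application logic. The registry layout and names are illustrative assumptions, and it uses the Python 'cryptography' package for the classical algorithms.

```python
# Minimal sketch of crypto-agility: signature algorithms sit behind a small
# registry keyed by a config value, so a deployment can swap algorithms
# without changing application code. Keys are generated inline only to keep
# the sketch self-contained; the registry layout is an illustrative assumption.
from cryptography.hazmat.primitives.asymmetric import ed25519, ec
from cryptography.hazmat.primitives import hashes

def _sign_ed25519(message: bytes) -> bytes:
    key = ed25519.Ed25519PrivateKey.generate()
    return key.sign(message)

def _sign_ecdsa_p256(message: bytes) -> bytes:
    key = ec.generate_private_key(ec.SECP256R1())
    return key.sign(message, ec.ECDSA(hashes.SHA256()))

# Swapping algorithms is a config change, not a code change.
SIGNERS = {
    "ed25519": _sign_ed25519,
    "ecdsa-p256": _sign_ecdsa_p256,
    # "ml-dsa-65": _sign_ml_dsa,  # future PQC entry once a library is chosen
}

def sign(message: bytes, algorithm: str = "ed25519") -> bytes:
    return SIGNERS[algorithm](message)

if __name__ == "__main__":
    sig = sign(b"renewal request", algorithm="ecdsa-p256")
    print(f"{len(sig)}-byte signature produced")
```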

A near-term catalyst for improving CLM lies in the upcoming reduction in certificate validity periods. The current TLS certificate lifespan is 398 days, but that will be reduced to 47 days by 2029. Enterprises need to get their certificate use in order; the manual spreadsheet approach won't be viable to manage the volume of certificates and the volume of change required for 47-day validity periods.
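For a sense of what replaces the spreadsheet, here is a minimal sketch of an automated expiry check that pulls a live TLS certificate and flags anything approaching renewal. The hostnames and the 15-day renewal buffer are placeholder assumptions; real CLM tooling also tracks internal CAs, code-signing certificates and machine identities.

```python
# Minimal sketch of an automated certificate expiry check -- the kind of task
# a manual spreadsheet cannot keep up with at 47-day lifetimes. Hostnames and
# the renewal buffer are placeholder assumptions.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(hostname: str, port: int = 443) -> int:
    """Fetch a server's TLS certificate and return days until it expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    for host in ["www.example.com"]:  # placeholder inventory
        remaining = days_until_expiry(host)
        if remaining < 15:  # renewal buffer is an assumption
            print(f"RENEW SOON: {host} expires in {remaining} days")
        else:
            print(f"OK: {host} expires in {remaining} days")
```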

CLM players, including AppViewX, DigiCert and Sectigo, are working to streamline certificate operations. Keyfactor is taking a more holistic approach that addresses the CLM challenge while covering the broader cryptographic ecosystem needed for PQC.

These are exciting times in the identity security and data security space as enterprises embrace AI agents and prepare for a post-quantum world. If you are a new technology player with an innovative approach, I'd like to hear about it. You can reach me on LinkedIn.

Todd Thiemann is a principal analyst covering identity access management and data security for Enterprise Strategy Group, now part of Omdia. He has more than 20 years of experience in cybersecurity marketing and strategy.

Enterprise Strategy Group is part of Omdia. Its analysts have business relationships with technology vendors.
