AI, analytics push data-in-use protection up priority list
As sensitive data moves into AI pipelines, organizations must evaluate how to protect it during processing and what safeguards IT platforms provide for data in use.
AI and cloud analytics applications are exposing a critical security gap for enterprises. While data is typically secured at rest and in transit, it often remains unprotected when being processed -- the time it is most actively used.
This gap has pushed data-in-use protection higher on the agenda for data leaders. Within the broader landscape of privacy-enhancing technologies, confidential computing has emerged as the primary way to address this processing-stage risk. It uses hardware-isolated trusted execution environments (TEEs) to keep data encrypted during computation, enabling teams to expand AI workloads without overhauling data pipelines or weakening security.
Adoption trends suggest confidential computing is moving from a specialized control to a baseline expectation for AI and cloud analytics deployments. In a 2024 report, for example, Grand View Research projected the global market for confidential computing would grow from an estimated $5.46 billion in 2023 to $153.8 billion by 2030, reflecting its increased role as a foundational component of data security.
How data-in-use protection fits into existing pipelines
Standard security leaves data vulnerable in system memory and CPUs. Data-in-use protection addresses this exposure problem by keeping information encrypted while workloads execute.
At the hardware layer, a TEE is a secure area that runs code and processes data independently from the rest of the system. It isolates data and processing operations to prevent unauthorized access. Even cloud administrators, host OSes and hypervisors do not have access to the data in a TEE.
Because confidential computing operates at the infrastructure layer, AI training and analytics jobs can often run in a TEE with minimal architectural changes. Memory encryption inside a TEE is also transparent to applications, minimizing operational disruption while extending protection throughout the compute stage.
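In practice, data owners typically gate access on remote attestation: the TEE produces a signed report containing a measurement of the workload, and sensitive data is released only if that measurement matches an approved value. Real reports -- from AMD SEV-SNP or Intel TDX, for example -- are signed hardware structures verified against vendor certificate chains. The minimal Python sketch below illustrates only the measurement-comparison step, using a hypothetical JSON report format:

```python
import hashlib
import hmac
import json

# Hypothetical approved measurement: the hash of the workload image the
# data owner has vetted. In a real deployment this would come from a
# reproducible build pipeline, not an inline constant.
EXPECTED_MEASUREMENT = hashlib.sha384(b"approved-workload-image-v1").hexdigest()

def verify_attestation(report_json: str,
                       expected: str = EXPECTED_MEASUREMENT) -> bool:
    """Return True only if the report's workload measurement matches the
    approved value. Uses a constant-time comparison to avoid leaking
    information through timing."""
    report = json.loads(report_json)
    measurement = report.get("measurement", "")
    return hmac.compare_digest(measurement, expected)

# A report for the approved workload passes; any other measurement fails.
good = json.dumps({"measurement": EXPECTED_MEASUREMENT})
bad = json.dumps({"measurement": hashlib.sha384(b"tampered-image").hexdigest()})
```

In a production flow, a key management or attestation service would perform this check and release decryption keys to the workload only on success.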
Compliance pressure moves into the processing layer
Rapidly evolving regulations are reshaping where organizations invest to secure AI and analytics workloads. A 2025 Stanford report found that AI-related regulations issued by U.S. federal agencies more than doubled from 25 in 2023 to 59 in 2024. Similarly, the number of AI-related laws passed at the state level increased from 49 to 131.
Gartner predicts that by 2029, confidential computing will be used to secure more than 75% of processing operations running in shared infrastructure, such as public cloud services.
As sensitive data moves into AI pipelines, the pressure to document security grows. Processing-stage exposure is difficult to control and even harder to record without hardware-backed controls. Audit teams and data governance functions that once focused only on storage encryption now require attestation that processing workloads run in protected environments.
Several regulatory frameworks and standards now address data-in-use protection:
- EU AI Act. This new regulation requires documented data governance controls, including evidence of protection during all AI lifecycle stages.
- GDPR. Enforcement of the EU's data privacy regulation is expanding to include data in use, not just in storage or transit.
- PCI DSS v4.0.1. Requirements mandate that sensitive authentication data not persist in volatile memory, such as RAM, or in memory dumps.
- Digital Operational Resilience Act. DORA mandates data-in-use protection for major EU financial institutions, including controls on data handling within cloud and third-party processing environments.
- NIST Cybersecurity Framework 2.0. Commonly known as CSF 2.0, it includes data-in-use protection within zero-trust security designs.
What data leaders gain from data-in-use protection
For data leaders, the value of confidential computing aligns with governance, legal and audit functions.
- Secure access to sensitive data. Healthcare records, financial transaction data, personally identifiable information and other regulated data often aren't used in AI and analytics initiatives due to processing risks. Confidential computing enables access to sensitive data sets without violating governance and security rules.
- Reduced legal exposure. Confidential computing provides verifiable proof that sensitive workloads are processed in hardware-isolated environments, which is especially valuable for documenting regulatory compliance in third-party clouds.
- Increased audit efficiency. The records of secure processing automatically produced by TEEs also reduce manual auditing work and improve verification of how sensitive data is used.
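As one hedged illustration of how such records might feed audit workflows -- all field names here are hypothetical, not any vendor's API -- the sketch below appends attestation results to a hash-chained log, so auditors can detect after-the-fact tampering by re-walking the chain:

```python
import hashlib
import json
import time

def append_audit_entry(log: list, workload: str, measurement: str) -> dict:
    """Append a hash-chained entry recording that a workload ran with a
    given TEE measurement. Each entry commits to the previous entry's
    hash, so modifying any earlier record breaks the chain."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {
        "workload": workload,
        "measurement": measurement,
        "timestamp": int(time.time()),
        "prev_hash": prev_hash,
    }
    # Hash the canonicalized entry body, then store the digest alongside it.
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body
```

An auditor can then replay the log, recomputing each entry's hash and checking it against the next entry's `prev_hash`, instead of manually reconciling processing records.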
Confidential computing use cases
The clearest proof of concept for confidential computing comes from highly regulated industries, where organizations face strict requirements around data handling, auditability and cross-boundary data sharing.
- Healthcare. Hospitals and clinical research networks use confidential computing to support federated AI model training across institutions without exposing patient data in shared systems or central databases.
- Financial services. Banks and insurers use TEEs for fraud detection and risk modeling to reduce exposure when processing sensitive transaction data regulated by banking privacy rules.
- Public sector. Agencies and partner organizations apply confidential computing to joint analytics projects without sharing raw data across organizational boundaries.
- Telecom and IoT. Providers can use confidential computing to analyze customer and device data closer to the edge while limiting exposure during processing.
Across these industries, common use cases include secure AI training, multi-party analytics, data sovereignty controls, and cloud backup and recovery workflows where restore operations can expose sensitive data.
How to evaluate data-in-use protection options
Approaches to data-in-use protection vary across cloud providers and the broader vendor ecosystem that includes data platforms, security and key management tools, and systems integrators. Before committing to a platform, data leaders should focus on proof points, regulatory alignment and how well it integrates with existing controls.
Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He has pulled Token Ring, configured NetWare and been known to compile his own Linux kernel. He consults with industry and media organizations on technology issues.