Data in use, also referred to as data in process, encompasses digital information that is actively being accessed, processed, updated, or analyzed by applications, users, systems, or devices.[1][2] This state is distinct from data at rest, which remains stored on physical media without active interaction, and data in transit, which involves information moving across networks.[3] During this phase, the data resides primarily in volatile memory components such as RAM, caches, or CPU registers, enabling real-time operations but exposing it to immediate risk if not properly safeguarded.[4]

Securing data in use is a critical component of comprehensive data protection strategies in cybersecurity, as it represents a highly vulnerable lifecycle stage prone to threats such as memory scraping, privilege escalation attacks, and side-channel exploits that can extract sensitive information during processing.[5] Unlike static storage, where full-disk encryption suffices, data in use requires dynamic defenses because decryption is often necessary for usability, creating brief windows of exposure.[6] Key protection methods include strict access controls via role-based permissions and multi-factor authentication to limit who can interact with active data, alongside monitoring tools for real-time visibility into processing activities.[5]

Advanced techniques further enhance security for data in use, particularly in environments handling sensitive computations such as financial modeling or healthcare analytics. Homomorphic encryption, for instance, permits mathematical operations directly on encrypted data without prior decryption, preserving confidentiality throughout processing. Complementary approaches include confidential computing, which leverages hardware-based trusted execution environments (TEEs) to isolate data during use, and runtime application self-protection (RASP) to detect and mitigate exploits at the application level.[7][8] These measures address the growing demands of cloud and AI-driven systems, where volumes of data in use have surged, necessitating scalable and performant safeguards to prevent breaches that could compromise organizational integrity.[9]
Definitions and Scope
Core Concept
Data in use refers to sensitive information that is actively being processed within a computer system, typically loaded into volatile memory such as random access memory (RAM) to enable computation. This state occurs when data is accessed, modified, or analyzed by applications, such as temporary variables holding user inputs in a running program or decrypted payloads during algorithmic operations. Unlike encrypted forms, data in this phase is often in plaintext to facilitate efficient processing, making it susceptible to unauthorized access by privileged software or hardware components.[10]

To distinguish it from other phases in the data lifecycle: data at rest resides on persistent storage media like hard drives or solid-state drives, where it can be safeguarded through static encryption without impacting usability, while data in transit moves between systems over networks and is protected by dynamic protocols to prevent interception. Data in use, by contrast, demands active safeguards during runtime, as it cannot remain fully encrypted without hindering performance. A conceptual flowchart illustrates this progression: data starts at rest on storage, is loaded into memory for processing (entering the in-use state), undergoes computation, and then either returns to rest or enters transit for transfer.

The term "data in use" emerged in the early 2000s amid growing concerns over data security in shared computing environments, particularly with the rise of cloud services and the limitations of existing protections for processing activities. The concept was formalized in NIST guidelines such as Special Publication 800-144 (2011), which identified protecting data in use as an emerging cryptographic challenge that, at the time, relied primarily on trust mechanisms due to the scarcity of practical solutions.[11] Subsequent developments in confidential computing have built on this foundation to address these vulnerabilities. The term is sometimes used interchangeably with "data in process."
Alternative Interpretations
In legal and regulatory contexts, particularly under the European Union's General Data Protection Regulation (GDPR), "data in use" aligns with the broader concept of processing personal data, defined as any operation performed on such data, including consultation and use, often in the context of accessing it for decision-making purposes.[12] This interpretation emphasizes the active handling of personal data by controllers or processors, where "use" explicitly forms part of the processing activities that trigger GDPR obligations, such as ensuring lawful basis and data subject rights.[12] For instance, EU case law from the Court of Justice of the European Union (CJEU), including rulings post-2018, has clarified that automated decision-making involving personal data processing—such as profiling for evaluative purposes—must respect transparency and contestability requirements under Articles 15 and 22 of the GDPR, as seen in cases addressing the balance between data processing for decisions and individual rights.[13]
Significance in Computing
Role in Data Lifecycle
Data in use occupies a critical position within the data lifecycle, which encompasses stages such as acquisition or creation, storage at rest, processing or usage, transmission in transit, and eventual disposal or archiving. During the processing phase, data transitions from static storage into active memory for manipulation, enabling tasks like computation, analysis, and real-time user interactions that are fundamental to operational workflows in computing systems. This stage is distinct from data at rest, where information resides idly on disks or tapes, and data in transit, involving movement across networks, highlighting data in use as the dynamic core of data utilization.[14]

In software development and database management, data in use manifests prominently through in-memory systems, such as Redis, an open-source in-memory data structure store that loads datasets into RAM to support high-speed queries and caching without persistent disk reliance during operations. For instance, Redis facilitates rapid data retrieval for applications requiring low-latency responses, like session management or real-time analytics, where data is actively processed in memory to meet performance demands. In cloud computing environments, the prevalence of such in-memory processing is evident, with approximately 70% of enterprises prioritizing real-time processing capabilities that depend on in-memory data handling to maintain agility and responsiveness.[15][16]

The efficiency benefits of data in use stem primarily from the superior speed of memory access compared to secondary storage, with RAM latencies typically in the range of nanoseconds versus milliseconds for disk I/O operations, representing an improvement of several orders of magnitude. This disparity allows for accelerated computation and reduced bottlenecks in data-intensive applications, such as machine learning training or financial transaction processing, where loading data into memory minimizes wait times and enhances overall system throughput. However, the active nature of data in use during processing introduces specific privacy risks that necessitate targeted protective strategies.[17]
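A minimal sketch of the Redis session-management pattern described above, assuming a Redis server reachable on localhost:6379 and the redis-py client; the key name and expiry value are illustrative:

```python
# Minimal sketch: session state held "in use" by an in-memory store (Redis).
# Assumes a Redis server on localhost:6379 and the redis-py client library.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Write session state into RAM-backed storage; ex=900 expires it after 15 minutes,
# so this working copy never needs to be written to disk for the use case.
r.set("session:42:user", "alice", ex=900)

# Low-latency read path: the value is served directly from memory.
user = r.get("session:42:user")
print(user)  # -> "alice"
```

Because the working copy is served from memory and expires automatically, it spends effectively its whole lifetime in the in-use state rather than at rest.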
Security and Privacy Implications
Data in use is particularly vulnerable to unauthorized access in shared memory environments, such as multi-tenant cloud infrastructures, where multiple virtual machines or processes share the same physical hardware. This vulnerability arises because data loaded into RAM for processing lacks the persistent protections afforded to data at rest, enabling side-channel attacks that exploit hardware weaknesses to leak sensitive information across isolation boundaries. A prominent example is the Rowhammer attack, first demonstrated in 2014, which induces bit flips in DRAM by repeatedly accessing adjacent rows, potentially corrupting or extracting data from neighboring memory regions without direct access.[18] In multi-tenant settings, such attacks can breach hypervisor-enforced isolation, allowing one tenant's malicious workload to interfere with another's memory and exfiltrate confidential data, as shown in cross-VM Rowhammer exploits that escalate privileges and enable data leakage.[19]

Privacy concerns are amplified when data in use involves encrypted computations, as adversaries can mount inference attacks to deduce plaintext information from observable side effects. For instance, timing analysis exploits variations in execution time during memory operations on encrypted data, revealing patterns about the underlying content even when cryptographic protections are in place; this is especially relevant in secure enclaves or homomorphic encryption schemes where computation occurs on ciphertext.[20] Such attacks threaten user privacy by enabling reconstruction of sensitive details, like personal health records, without decrypting the data outright. Regulatory frameworks address these risks by mandating safeguards for electronic protected health information (ePHI) during processing; under the HIPAA Security Rule, covered entities must implement technical safeguards, including access controls and integrity measures, to protect ePHI that is created, received, maintained, or transmitted, ensuring that data in use remains secure against unauthorized access or alteration.[21]

The broader impacts of breaches targeting data in use are severe, often leading to widespread system compromise and significant economic fallout. In the 2020 SolarWinds supply chain attack, attributed to state-sponsored actors, the SUNBURST backdoor was delivered via compromised software updates while the memory-only TEARDROP dropper executed in RAM to evade disk-based detection, allowing persistent access to victims' networks; the incident affected up to 18,000 organizations, including U.S. government agencies, highlighting how in-use data exploitation can enable long-term espionage and data exfiltration.[22] Similar memory-scraping techniques have featured in high-profile breaches, such as the 2013 Target incident, where malware captured unencrypted card data during processing, compromising 40 million payment details and underscoring the scalability of such threats in real-time transactional environments.[23] These examples illustrate the critical need for robust defenses, as failures in protecting data in use can result not only in immediate data loss but also in cascading effects on trust, compliance, and operational integrity.
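To make the timing-analysis concern above concrete, the following sketch contrasts a naive secret comparison, whose running time leaks how many leading bytes of an attacker's guess are correct, with a constant-time check; the token value and function names are hypothetical:

```python
# Illustrative sketch of a timing side channel: a naive comparison of a secret
# value leaks information through early exit, while a constant-time comparison
# examines every byte regardless of mismatches.
import hmac

SECRET_TOKEN = b"s3cr3t-token-value"  # hypothetical in-memory secret

def naive_check(candidate: bytes) -> bool:
    # Returns as soon as a byte differs, so response time correlates with how
    # many leading bytes of the candidate match the secret.
    if len(candidate) != len(SECRET_TOKEN):
        return False
    for a, b in zip(candidate, SECRET_TOKEN):
        if a != b:
            return False
    return True

def constant_time_check(candidate: bytes) -> bool:
    # hmac.compare_digest removes the timing signal exploited by such attacks.
    return hmac.compare_digest(candidate, SECRET_TOKEN)
```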
Protection Techniques
Memory Encryption Approaches
Memory encryption approaches aim to protect data residing in volatile memory, such as DRAM, from unauthorized access during computation by applying cryptographic primitives transparently at the hardware level.[24] Full memory encryption techniques encrypt the entire physical address space, ensuring comprehensive coverage against physical attacks like cold boot or bus snooping. A prominent example is Intel's Total Memory Encryption (TME), introduced with the 3rd Generation Intel Xeon Scalable processors in 2020, which employs AES-XTS with 128-bit or 256-bit keys to encrypt all data transiting to and from external memory.[24] The implementation integrates an on-die AES-XTS encryption engine directly in the CPU's memory controller data path, performing real-time encryption and decryption without software intervention, while ephemeral keys are generated via a hardware random number generator and remain inaccessible to the operating system.[24]

In contrast, partial encryption methods target only sensitive regions of memory to minimize overhead, enabling application-specific protection for critical data structures or pages. For instance, AMD's Secure Memory Encryption (SME), available since the Zen architecture in 2017, supports page-granular encryption by allowing the OS to mark individual 4KB pages for AES-128 encryption using a single system-wide ephemeral key generated by the AMD Secure Processor.[25] This selective approach integrates with the memory management unit (MMU) to trigger encryption only for designated pages during writes to DRAM, providing flexibility for protecting specific application data without encrypting the full address space.[25] For dynamic data that changes frequently, counter-mode operations are commonly employed to ensure unique ciphertexts for identical plaintexts across updates; the Galois/Counter Mode (GCM) of AES, for example, combines counter-based encryption with authentication using a 128-bit block cipher and a 128-bit initialization vector derived from address and counter values, avoiding key derivation while preventing replay attacks.[26]

These approaches introduce performance trade-offs, primarily in memory access latency due to cryptographic computations. Recent studies indicate overheads of approximately 9% for counterless memory encryption on irregular workloads, with counter-mode implementations incurring additional costs due to memory access patterns, as evaluated on commodity hardware in 2024.[27] Key management for these encryptions relies on secure hardware storage to prevent exposure during use, as detailed in dedicated mechanisms.[24]
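The hardware engines described above are not directly programmable, but the counter-mode idea can be modeled in software. The sketch below encrypts a 64-byte "cache line" with AES-GCM using a nonce built from a physical address and a per-line write counter; the nonce layout and line size are illustrative assumptions, not any vendor's actual format:

```python
# Software model of counter-mode memory encryption with authentication (AES-GCM).
# Real engines (Intel TME, AMD SME) perform this in the memory controller; the
# address/counter nonce construction here is only an illustration of the principle.
import os
import struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # ephemeral key, modeled as never leaving "hardware"
aead = AESGCM(key)

def encrypt_line(address: int, write_counter: int, plaintext: bytes) -> bytes:
    # 96-bit nonce from the physical address and a per-line write counter, so identical
    # plaintexts written to the same address at different times yield distinct ciphertexts.
    nonce = struct.pack(">IQ", write_counter & 0xFFFFFFFF, address)
    return aead.encrypt(nonce, plaintext, None)

def decrypt_line(address: int, write_counter: int, ciphertext: bytes) -> bytes:
    nonce = struct.pack(">IQ", write_counter & 0xFFFFFFFF, address)
    return aead.decrypt(nonce, ciphertext, None)

line = os.urandom(64)                        # one 64-byte cache line
ct = encrypt_line(0x7F001000, 1, line)
assert decrypt_line(0x7F001000, 1, ct) == line
```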
Key Storage Mechanisms
CPU-based key storage leverages processor internals, such as registers and fuses, to manage cryptographic keys for protecting data during computation. In AMD's Secure Encrypted Virtualization (SEV), the AMD Secure Processor (AMD-SP) generates a unique VM Encryption Key (VEK) at VM launch, which is injected into the memory controller's encryption engine via firmware commands from the hypervisor. This key resides in volatile processor registers, ensuring it is erased upon power loss or reset for enhanced security, while the hardware isolation of the AMD-SP provides tamper resistance against software attacks. For SEV with Encrypted State (SEV-ES), introduced in 2019 with the second-generation EPYC processors, key management extends to encrypting CPU registers during VM exits, preventing hypervisor access to in-use key material or transient data.[28] Additionally, CPU-specific root secrets stored in one-time-programmable (OTP) fuses derive higher-level keys like the Chip Endorsement Key (CEK), offering non-volatile but irreversible protection against physical tampering attempts.[29]

Trusted Platform Modules (TPMs) offer a dedicated hardware solution for secure key derivation and storage, integrating with systems to support data-in-use protection. Per the Trusted Computing Group (TCG) TPM 2.0 specifications released in 2014, each TPM maintains an Endorsement Primary Seed (EPS)—a unique, factory-injected value in non-volatile memory—from which the Endorsement Key (EK) is hierarchically derived using a deterministic key generation algorithm. The EK acts as a root of trust credential, enabling the derivation of ephemeral or session-specific keys for memory encryption without ever exposing the EPS outside the TPM's tamper-resistant boundary.[30][31] This integration allows platforms to attest key legitimacy via EK certificates, ensuring derived keys remain secure for in-use scenarios like secure boot or encrypted computation.

Managing key rotation presents notable challenges when protecting data in use, as updating keys requires re-encrypting active memory contents without interrupting processing, which can incur significant latency. In AWS Nitro Enclaves, this is addressed through ephemeral key design, where cryptographic keys are generated solely within the enclave's isolated CPU and memory at runtime, remaining unexposed to the parent EC2 instance or external systems throughout their lifecycle.[32] Rotation occurs by terminating the enclave and launching a new one with freshly derived keys, leveraging the lack of persistent storage to avoid key persistence risks while minimizing exposure during transitions.[33]
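The hierarchical pattern described above, in which a non-exportable root seed deterministically yields per-purpose keys, can be sketched with a generic key-derivation function; the labels and lengths below are illustrative and this is not the TCG-specified TPM derivation:

```python
# Sketch of hierarchical key derivation: a root seed that never leaves the device
# (analogous to a TPM Endorsement Primary Seed) deterministically derives per-purpose
# keys. Labels, lengths, and the use of HKDF are illustrative assumptions.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

root_seed = os.urandom(32)  # stands in for a factory-injected, non-exportable seed

def derive_key(purpose: bytes, length: int = 32) -> bytes:
    # A fresh HKDF instance per derivation; the info field binds the key to its role.
    return HKDF(
        algorithm=hashes.SHA256(),
        length=length,
        salt=None,
        info=purpose,
    ).derive(root_seed)

memory_encryption_key = derive_key(b"memory-encryption/session-001")
attestation_key = derive_key(b"attestation/endorsement")
# Rotating a session key means re-deriving under a new label; root_seed is never exposed.
```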
Hardware-Based Isolation
Hardware-based isolation refers to mechanisms embedded in processor architectures that create physically protected execution environments, known as trusted execution environments (TEEs), to safeguard data during processing against unauthorized access by the operating system, hypervisors, or other software components. These techniques leverage CPU-level features to enforce memory isolation and attestation, ensuring that sensitive computations occur in a tamper-resistant space even on compromised hosts. By partitioning system resources into secure and non-secure domains, hardware isolation prevents side-channel attacks and privilege escalations that could expose data in use.

Intel Software Guard Extensions (SGX), introduced in 2015 with the Skylake processor generation, exemplifies this approach through enclaves—isolated regions of memory where code and data execute under strict hardware controls. SGX employs the Enclave Page Cache (EPC), a protected portion of physical memory (up to 128 MB on initial Skylake implementations), to store enclave contents, with all data automatically encrypted by the Memory Encryption Engine (MEE) to thwart physical attacks. Enclaves support remote attestation, allowing external verifiers to confirm the integrity of the code running inside via cryptographic measurements signed by the processor's attestation key, thus enabling secure data processing in untrusted cloud environments. As of 2025, confidential computing has seen expanded adoption, including GPU-based TEEs for AI workloads and integrations like Apple's Private Cloud Compute, enhancing hardware isolation for sensitive processing.[34][35][36]

ARM TrustZone, first implemented in ARMv6 architecture around 2004, achieves isolation by dividing the system into a secure world and a normal world, with transitions managed through the Secure Monitor Call (SMC) instruction that switches processor modes without exposing secure state. This hardware-enforced partitioning protects peripherals, memory, and interrupts in the secure world from normal-world access, while a secure monitor at exception level 3 (EL3) oversees context switches to maintain isolation. TrustZone incorporates remote attestation protocols, often via standards like the GlobalPlatform TEE Client API, to prove the secure world's configuration to remote parties using device-unique keys fused into the hardware.[37][38]

In blockchain applications, hardware enclaves like those in SGX have been adopted to enable confidential smart contract execution, preserving input privacy and computation integrity on public ledgers. For instance, the ShadowEth framework integrates SGX enclaves with Ethereum to process private transactions off-chain while committing only non-sensitive results on-chain, demonstrating reduced gas costs and enhanced privacy in decentralized finance scenarios as evaluated in prototypes from the late 2010s onward. More recent integrations, such as the 2023 deployment explorations in projects like Phala Network, leverage SGX for verifiable confidential computing in Ethereum-compatible chains, supporting use cases like private auctions and secure multi-party computations without revealing underlying data.[39][40]
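Remote attestation in SGX and TrustZone follows the same basic pattern: hash the code loaded into the isolated environment, sign that measurement with a hardware-held key, and let a remote verifier compare it against a known-good value before releasing secrets. The sketch below models that flow with Ed25519 signatures; the report format and key handling are simplified assumptions, not the actual SGX quote or GlobalPlatform structures:

```python
# Conceptual sketch of remote attestation: a hardware-held key signs a measurement
# (hash) of the loaded code, and a verifier checks both the signature and the
# expected measurement. Keys, report format, and code bytes are illustrative.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Device side: the attestation key is generated and kept inside the root of trust.
device_key = Ed25519PrivateKey.generate()
device_pubkey = device_key.public_key()

def attest(enclave_code: bytes):
    measurement = hashlib.sha256(enclave_code).digest()
    signature = device_key.sign(measurement)
    return measurement, signature

# Verifier side: compare against a known-good measurement before releasing secrets.
def verify(measurement: bytes, signature: bytes, expected_code: bytes) -> bool:
    if measurement != hashlib.sha256(expected_code).digest():
        return False
    try:
        device_pubkey.verify(signature, measurement)
        return True
    except Exception:
        return False

code = b"...enclave binary..."
m, sig = attest(code)
assert verify(m, sig, code)
```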
Cryptographic Protocols
Cryptographic protocols play a crucial role in protecting data in use by enabling secure computation and verification without exposing sensitive information in plaintext form, particularly in distributed environments where multiple parties or systems interact. These protocols leverage advanced cryptographic primitives to ensure that data remains confidential even during processing, addressing vulnerabilities inherent in memory-resident operations. By design, they facilitate operations on encrypted data, minimizing the risk of unauthorized access by adversaries who might compromise the hosting environment.

Secure multi-party computation (SMPC) protocols exemplify this approach, allowing multiple participants to jointly compute a function over their private inputs while keeping those inputs concealed from each other and external observers. The SPDZ protocol, introduced in 2011, achieves security against active adversaries corrupting up to n-1 out of n parties by combining somewhat homomorphic encryption with information-theoretic MACs for input verification and multiplication protocols. In SPDZ, an offline pre-processing phase generates "Beaver triples" for efficient multiplications, enabling online computation on encrypted shares without decryption; this ensures that data in use during the protocol execution remains protected through additive secret sharing and zero-knowledge proofs against malicious behavior. SPDZ has been foundational for practical SMPC implementations, supporting applications like privacy-preserving machine learning where data from multiple sources is processed collaboratively without revealing individual contributions.[41]

Homomorphic encryption schemes extend these capabilities by permitting direct arithmetic operations on ciphertexts, producing an encrypted result that matches the operation on the underlying plaintexts. The CKKS scheme, proposed in 2017, specializes in approximate computations over real or complex numbers, making it suitable for data in use scenarios involving floating-point data, such as signal processing or statistical analysis. In CKKS, plaintexts are encoded as polynomials, encrypted under the Ring Learning With Errors (RLWE) assumption, and operations like addition and multiplication are performed homomorphically; the scheme manages approximation errors through a scaling factor and rescaling steps to control noise growth, allowing leveled computations without bootstrapping in many practical cases. This enables servers to process encrypted data in memory without decryption, preserving confidentiality during use, though with a precision trade-off for efficiency. CKKS has gained prominence for its balance of security and usability in cloud-based analytics.

Protocols like TLS 1.3 integrate cryptographic protections for data in use within secure communication sessions, emphasizing forward secrecy to limit exposure of session data even if long-term keys are later compromised. Defined in RFC 8446, TLS 1.3 mandates ephemeral Diffie-Hellman key exchange for all sessions, generating unique session keys that are discarded post-use, thereby ensuring that compromise of long-term keys cannot retroactively decrypt past communications. This forward secrecy mechanism protects in-use data by isolating session keys from persistent storage risks, without requiring plaintext exposure beyond the immediate processing context, and supports zero round-trip time resumption for efficiency in repeated sessions.
In distributed systems, TLS 1.3 thus safeguards data flows during active use, complementing higher-level protocols for end-to-end security.[42]
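As a concrete illustration of arithmetic on ciphertexts, the sketch below uses a toy additively homomorphic scheme (Paillier) rather than CKKS: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so the party performing the computation never sees the inputs. The primes are far too small for real use and a vetted library would be required in practice:

```python
# Toy Paillier cryptosystem: additively homomorphic, so multiplying ciphertexts
# corresponds to adding plaintexts. Tiny parameters for readability only.
import math
import secrets

p, q = 1789, 1997                        # toy primes; real deployments use ~2048-bit moduli
n = p * q
n_sq = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                     # simplification valid because g = n + 1

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    L = (pow(c, lam, n_sq) - 1) // n     # L(x) = (x - 1) / n
    return (L * mu) % n

a, b = 1200, 345
c_sum = (encrypt(a) * encrypt(b)) % n_sq   # computation happens on ciphertexts only
assert decrypt(c_sum) == a + b             # the processing party never saw 1200 or 345
```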
Challenges and Limitations
Performance Overhead
Protecting data in use through memory encryption introduces computational overhead primarily from cryptographic operations performed on every memory access, increasing CPU cycles and latency. Recent measurements indicate that counterless memory encryption schemes, which avoid counter management but rely on slower ciphers like AES, result in an average slowdown of 9% for irregular workloads on modern processors, with up to 13% under AES-256 due to decryption costs on last-level cache misses.[43] Optimized schemes like Counter-light reduce this to about 2% overhead by streamlining counter access, achieving 98% of the performance of unprotected memory in high-bandwidth scenarios.[43]

Enclave-based isolation, such as Intel SGX, adds overhead from context switching between trusted and untrusted execution modes, typically costing 10,000 to 18,000 cycles per transition. In memory-heavy workloads, this translates to 1-3% overhead for encryption tasks when using shadow paging, though context-switch-intensive applications like HTTP servers can see up to 13.8% latency increases at low request rates.[44] These costs scale with transition frequency, emphasizing the need to minimize enclave calls in performance-critical applications.

Mitigation strategies leverage hardware acceleration such as Intel's AES-NI instructions, introduced in 2010 with the Westmere microarchitecture, which provide up to an order-of-magnitude speedup for parallelizable encryption modes like CTR and GCM compared to software implementations.[45] In high-performance computing environments, such optimizations enable practical deployment without excessive penalties.

Case studies highlight these trade-offs in database systems; for instance, PostgreSQL with Transparent Data Encryption (TDE) extensions for securing data files shows approximately 44-50% slowdown in read-write query throughput for small-scale workloads due to encryption/decryption during I/O and processing, though read-only queries experience minimal impact (less than 5%) when data resides in memory.[46] Similar systems report 8-12% overall overhead on query execution, primarily from increased CPU time for cryptographic primitives.[47]
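A rough way to gauge these cryptographic costs on a particular machine is to compare bulk AES-GCM throughput (OpenSSL-backed, so AES-NI is used where the CPU supports it) against a plain memory copy; the buffer size and single-shot timing below are arbitrary illustrative choices, not a reproduction of the cited benchmarks:

```python
# Sketch for estimating the cryptographic cost of protecting in-memory data:
# measure bulk AES-GCM throughput and compare it with an unencrypted memory copy.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BUF = os.urandom(64 * 1024 * 1024)           # 64 MiB working set
key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = os.urandom(12)

t0 = time.perf_counter()
ct = aead.encrypt(nonce, BUF, None)          # authenticated encryption of the buffer
t1 = time.perf_counter()
copied = bytes(BUF)                          # baseline: plain memory copy
t2 = time.perf_counter()

enc_gbps = len(BUF) / (t1 - t0) / 1e9
copy_gbps = len(BUF) / (t2 - t1) / 1e9
print(f"AES-GCM: {enc_gbps:.2f} GB/s, memcpy: {copy_gbps:.2f} GB/s, "
      f"slowdown vs. memcpy ~{copy_gbps / enc_gbps:.1f}x")
```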
Implementation Barriers
Deploying data in use protections, such as Intel Software Guard Extensions (SGX) and AMD Secure Encrypted Virtualization (SEV), frequently faces compatibility challenges with legacy systems. SGX, for example, cannot be enabled in software-controlled mode on systems running in legacy BIOS mode, requiring a switch to Unified Extensible Firmware Interface (UEFI) for proper operation.[48] Additionally, many legacy CPUs that support SGX hardware capabilities necessitate BIOS or firmware updates to activate the feature, as the default configuration often disables it for compatibility reasons.[49] Porting legacy applications to run within SGX enclaves further complicates deployment, demanding significant code refactoring to isolate sensitive operations while maintaining compatibility with existing software stacks.[50]

Vendor lock-in exacerbates these technical barriers due to the proprietary designs of hardware-based protections. Since Intel introduced SGX in 2015 and AMD launched SEV in 2017, organizations adopting one vendor's technology must contend with incompatible instruction sets and attestation mechanisms, hindering seamless migration to alternatives.[51] This fragmentation has led to redevelopment efforts in multi-vendor environments, as seen in transitions from SGX to hybrid Intel Trust Domain Extensions (TDX) and AMD SEV setups to mitigate dependency on a single provider.[52]

Economic factors also impede widespread implementation, including hardware premiums and the need for specialized developer expertise. Servers equipped with Total Memory Encryption (TME) capabilities, Intel's complementary feature for full-memory protection, incur additional costs compared to standard configurations, as reflected in cloud infrastructure analyses. Moreover, developers require targeted training on confidential computing frameworks, involving SDKs, enclave management, and secure coding practices, which Intel and Microsoft provide through dedicated documentation and courses to bridge the skills gap.[53][54]

Regulatory obstacles add another layer of complexity, particularly export controls on cryptographic hardware under the Wassenaar Arrangement, which was updated in December 2022 to refine dual-use goods lists including information security technologies.[55] These controls, implemented in national regulations like the U.S. Export Administration Regulations, mandate licenses for exporting items in Category 5 Part 2 (covering encryption hardware), potentially delaying global deployments and increasing compliance overhead for multinational organizations.[56]
Emerging Threats
Side-channel attacks exploiting speculative execution in modern processors represent a persistent and evolving threat to data in use, enabling unauthorized leakage of sensitive information from memory without direct access. The Spectre attacks, disclosed in 2018, manipulate branch prediction and indirect branch predictors to speculatively execute instructions that access protected memory, followed by cache-based side-channel observations to infer the leaked data.[57] Similarly, the Meltdown attack, also disclosed in 2018, bypasses memory isolation by leveraging out-of-order execution to read kernel memory from user space, transmitting the data through timing side-channels like Flush+Reload.[58] These vulnerabilities affect a wide range of processors, including Intel, AMD, and ARM architectures, and have inspired numerous variants that continue to emerge, demonstrating the ongoing risks to in-use data in shared environments such as cloud computing and virtual machines.

Recent variants of these speculative execution attacks further highlight the adaptability of such threats to contemporary hardware. For instance, Zenbleed, disclosed in 2023 and affecting AMD Zen 2 processors, arises from a microarchitectural flaw in which a vector register fails to clear properly under specific conditions, potentially retaining and leaking data from other processes or threads via side-channels.[59] This vulnerability, addressed via microcode updates starting in 2023 with further revisions in 2024, underscores how even post-mitigation hardware generations remain susceptible to similar exploitation techniques, compromising the confidentiality of in-use data in multi-tenant systems.[59]

Quantum computing poses another emerging threat to the encryption protecting data in use, particularly symmetric schemes like AES that rely on exhaustive key search resistance. Grover's algorithm enables a quadratic speedup for unstructured search problems, reducing the effective security level of AES-128 from 128 bits to approximately 64 bits by requiring only about 2^64 operations to recover the key, compared to 2^128 classically.[60] This degradation necessitates larger key sizes—such as AES-256 for 128-bit post-quantum security—to maintain protection against future quantum adversaries, though implementations must also address the computational feasibility of large-scale quantum oracles for AES. To counter these threats, the U.S. National Institute of Standards and Technology (NIST) released the first three post-quantum cryptography standards on August 13, 2024, specifying algorithms like ML-KEM (derived from CRYSTALS-Kyber) and ML-DSA (derived from CRYSTALS-Dilithium) that are resistant to quantum attacks.[61][60]

AI-driven attacks exacerbate risks to data in use within machine learning pipelines, where in-use training data can be reconstructed through model inversion techniques. In federated learning, gradient inversion attacks recover private training samples from shared model updates, as demonstrated in 2023 studies showing high-fidelity reconstruction of images and text even under defensive perturbations like differential privacy.[62] These attacks, which adaptively optimize auxiliary data to invert gradients, reveal vulnerabilities in distributed systems where in-use data remains exposed during model training, potentially leading to privacy breaches in applications like healthcare and finance.[62]
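The key-size arithmetic behind the Grover's-algorithm concern above is simple to sketch: a quadratic search speedup roughly halves the exponent of a brute-force key search, which the snippet below tabulates for common AES key sizes (a back-of-the-envelope model that ignores the large constant factors of real quantum hardware):

```python
# Back-of-the-envelope arithmetic for Grover's quadratic speedup against symmetric keys.
def post_quantum_security_bits(classical_key_bits: int) -> int:
    # Classical brute force: ~2**k operations; Grover: ~2**(k/2) quantum oracle queries.
    return classical_key_bits // 2

for cipher, bits in [("AES-128", 128), ("AES-192", 192), ("AES-256", 256)]:
    print(f"{cipher}: ~2**{bits} classical -> ~2**{post_quantum_security_bits(bits)} quantum queries")
# AES-128: ~2**128 classical -> ~2**64 quantum queries
# AES-256: ~2**256 classical -> ~2**128 quantum queries
```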
Future Developments
Advanced Hardware Solutions
Intel's Trust Domain Extensions (TDX), introduced in 2022 with the 4th Generation Intel Xeon Scalable Processors, represent a significant advancement in virtualized confidential computing by enabling hardware-isolated trust domains (TDs) that protect sensitive data and workloads from privileged software, including the hypervisor. TDX leverages extensions to Intel's Virtualization Technology (VT) and Multi-Key Total Memory Encryption (MKTME) to create secure virtual machines, where each TD operates with its own ephemeral AES-128 key for memory encryption at cache-line granularity, ensuring confidentiality during processing. This multi-key approach allows multiple TDs to share the same physical host while maintaining isolated encryption domains, reducing the trusted computing base and mitigating risks from malicious or compromised hypervisors. By integrating total memory encryption with hardware attestation, TDX facilitates secure deployment of data in use across multi-tenant cloud environments without performance degradation exceeding 5-10% in typical workloads.

RISC-V's enhanced Physical Memory Protection (ePMP) extension, ratified in 2023, extends the base PMP mechanism to support fine-grained memory isolation in supervisor mode, enabling robust trusted execution environments (TEEs) suitable for confidential computing on open-source hardware platforms. ePMP augments the existing protection regions (up to 16 in the base mechanism) with additional machine-mode security configuration controls, enforcing isolation even in virtualized setups and allowing secure enclaves to process data without exposure to the operating system or hypervisor.[63] This hardware feature supports low-level security primitives like secure boot and sandboxing, making RISC-V processors viable for edge and embedded applications where data in use must remain protected against physical attacks or software vulnerabilities. Implementations in commercial RISC-V cores, such as those from Ventana Micro Systems, demonstrate ePMP's role in hardening hypervisors for multi-tenant isolation with minimal overhead.

Neuromorphic chips, inspired by biological neural structures, offer low-overhead secure processing through event-driven spiking neural networks (SNNs) that inherently preserve privacy by processing data in a non-differentiable, stochastic manner, reducing risks like membership inference attacks compared to traditional artificial neural networks. Studies show SNNs achieve lower attack success rates (e.g., AUC of 0.59 on CIFAR-10) while maintaining utility under differential privacy mechanisms, with accuracy drops as low as 12.87% on Fashion-MNIST datasets versus 19.55% for conventional models.[64] This architecture enables efficient, in-memory computation for data in use, particularly in resource-constrained devices, by minimizing data leakage during inference without relying on heavy cryptographic overhead.

In 2025, NVIDIA extended confidential computing capabilities by incorporating enclave-based protections for GPU memory and execution flows, facilitating secure processing of data in use for AI workloads. Additionally, Microsoft rolled out the Azure Integrated HSM security chip across all Azure servers in August 2025, enhancing hardware-rooted key management and isolation in trusted execution environments.[65]
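On Linux systems, a first practical step is often simply checking whether the host or guest advertises any of these hardware features; the sketch below scans /proc/cpuinfo for a few commonly reported flags, whose exact names vary by CPU vendor and kernel version and are assumptions here:

```python
# Rough sketch for detecting advertised hardware memory-protection features on Linux.
# The flag names below (e.g. "sme", "sev", "sev_es", "tdx_guest", "sgx") differ across
# CPU vendors and kernel versions; treat them as illustrative, not exhaustive.
FEATURES_OF_INTEREST = {"sme", "sev", "sev_es", "sev_snp", "tdx_guest", "sgx"}

def memory_protection_flags(path: str = "/proc/cpuinfo") -> set:
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return FEATURES_OF_INTEREST & set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    print("Detected:", memory_protection_flags() or "none")
```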
Research Directions
Recent advancements in fully homomorphic encryption (FHE) have centered on optimizing bootstrapping processes to mitigate the high computational overhead associated with operations on encrypted data. A key contribution is the development of improved circuit synthesis techniques for FHEW/TFHE schemes, which enable multi-value bootstrapping and efficient arithmetic circuit optimizations, resulting in up to 4.2× faster execution times compared to prior state-of-the-art methods. These optimizations reduce the noise accumulation during homomorphic evaluations, allowing for deeper computations without frequent refreshes. Similarly, the DaCapo compiler introduces automatic bootstrapping management by analyzing live-out ciphertexts to minimize operation counts and latency, achieving an average 1.21× speedup across deep learning workloads like ResNet and AlexNet models.[66][67]

Zero-knowledge proofs, particularly zk-SNARKs, are being explored for verifying data in use without exposing sensitive information, enhancing privacy in dynamic environments. In blockchain contexts, zk-SNARKs facilitate Ethereum layer-2 scaling by compressing off-chain computations into succinct proofs that confirm validity on the main chain, significantly reducing gas costs and improving throughput to thousands of transactions per second. Seminal work in 2023 has analyzed the security implications of these zk-rollups, highlighting their role in preserving Ethereum's decentralization while enabling verifiable in-use data processing for decentralized applications. These proofs are increasingly applied beyond blockchain to general in-use verification protocols, such as secure multi-party computation where parties attest to data manipulations without revealing inputs.[68]