New Attacks Against Secure Enclaves

Encryption can protect data at rest and data in transit, but it does nothing for data in use. For that, we have secure enclaves. I’ve written about this before:

Almost all cloud services have to perform some computation on our data. Even the simplest storage provider has code to copy bytes from an internal storage system and deliver them to the user. End-to-end encryption is sufficient in such a narrow context. But often we want our cloud providers to be able to perform computation on our raw data: search, analysis, AI model training or fine-tuning, and more. Without expensive, esoteric techniques, such as secure multiparty computation protocols or homomorphic encryption techniques that can perform calculations on encrypted data, cloud servers require access to the unencrypted data to do anything useful.

Fortunately, the last few years have seen the advent of general-purpose, hardware-enabled secure computation. This is powered by special functionality on processors known as trusted execution environments (TEEs) or secure enclaves. TEEs decouple who runs the chip (a cloud provider, such as Microsoft Azure) from who secures the chip (a processor vendor, such as Intel) and from who controls the data being used in the computation (the customer or user). A TEE can keep the cloud provider from seeing what is being computed. The results of a computation are sent via a secure tunnel out of the enclave or encrypted and stored. A TEE can also generate a signed attestation that it actually ran the code that the customer wanted to run.
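The attestation flow described above can be sketched conceptually. This is a toy model, not any vendor’s API: real TEEs (SGX, TDX, SEV-SNP) use asymmetric signatures chained to a vendor root key, and the HMAC with a shared key below is purely a stand-in for illustration. The idea is that the enclave measures (hashes) the code it actually loaded and signs that measurement, and the customer checks both the signature and that the measurement matches the code they wanted run.

```python
import hashlib
import hmac

# Hypothetical stand-in for the vendor's signing key. Real attestation
# uses an asymmetric key burned into / derived inside the processor.
VENDOR_KEY = b"vendor-root-key"

def measure(code: bytes) -> bytes:
    """The enclave hashes the code it actually loaded."""
    return hashlib.sha256(code).digest()

def attest(code: bytes) -> tuple[bytes, bytes]:
    """Enclave side: return (measurement, signature over the measurement)."""
    m = measure(code)
    sig = hmac.new(VENDOR_KEY, m, hashlib.sha256).digest()
    return m, sig

def verify(expected_code: bytes, measurement: bytes, sig: bytes) -> bool:
    """Customer side: check the signature, then check that the measured
    code is the code the customer asked the enclave to run."""
    good_sig = hmac.compare_digest(
        hmac.new(VENDOR_KEY, measurement, hashlib.sha256).digest(), sig)
    return good_sig and measurement == measure(expected_code)

code = b"def handle(request): ..."
m, sig = attest(code)
print(verify(code, m, sig))          # True: the enclave ran our code
print(verify(b"tampered", m, sig))   # False: measurement mismatch
```

The point of the sketch is the trust split: the cloud provider relays the attestation but cannot forge it, because only the processor holds the signing key.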

Secure enclaves are critical in our modern cloud-based computing architectures. And, of course, they have vulnerabilities:

The most recent attack, released Tuesday, is known as TEE.fail. It defeats the latest TEE protections from all three chipmakers. The low-cost, low-complexity attack works by placing a small piece of hardware between a single physical memory chip and the motherboard slot it plugs into. It also requires the attacker to compromise the operating system kernel. Once this three-minute attack is completed, Confidential Compute, SEV-SNP, and TDX/SGX can no longer be trusted. Unlike the Battering RAM and Wiretap attacks from last month—which worked only against CPUs using DDR4 memory—TEE.fail works against DDR5, allowing it to work against the latest TEEs.

Yes, these attacks require physical access. But that’s exactly the threat model secure enclaves are supposed to defend against.

Posted on November 10, 2025 at 7:04 AM

Comments

jelo 117 November 10, 2025 8:55 AM

“Those early versions encrypted no more than 256MB of RAM, a small enough space to use the much stronger probabilistic form of encryption. The TEEs built into server chips, by contrast, must often encrypt terabytes of RAM. Probabilistic encryption doesn’t scale to that size without serious performance penalties. Finding a solution that accommodates this overhead won’t be easy.

“One mitigation over the short term is to ensure that each 128-bit block of ciphertext has sufficient entropy. Adding random plaintext to the blocks prevents ciphertext repetition. The researchers say the entropy can be added by building a custom memory layout that inserts a 64-bit counter with a random initial value to each 64-bit block before encrypting it.”
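The repetition problem that quoted mitigation addresses can be demonstrated with a toy model. The “cipher” below is a keyed hash standing in for a real deterministic memory cipher, purely to show the effect: identical plaintext blocks produce identical ciphertext blocks, unless per-block randomness (here, a counter with a random starting value, as in the quote) is mixed in before encryption.

```python
import hashlib
import os

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a deterministic memory cipher: the same key and the
    # same plaintext block always yield the same ciphertext block.
    return hashlib.sha256(key + block).digest()[:16]

key = b"memory-encryption-key"
blocks = [b"A" * 8, b"A" * 8, b"B" * 8]  # two identical plaintext blocks

# Deterministic encryption: repeats in plaintext show up as repeats in
# ciphertext, which is what dictionary/replay-style attacks exploit.
ct_plain = [toy_block_encrypt(key, b) for b in blocks]
print(ct_plain[0] == ct_plain[1])  # True: the repetition leaks

# The quoted mitigation: pair each 64-bit plaintext block with a 64-bit
# counter starting at a random value, so no two blocks -- even identical
# ones -- ever feed the cipher the same input.
start = int.from_bytes(os.urandom(8), "big")
ct_ctr = [toy_block_encrypt(key, ((start + i) % 2**64).to_bytes(8, "big") + b)
          for i, b in enumerate(blocks)]
print(ct_ctr[0] == ct_ctr[1])  # False: the added entropy breaks repetition
```

The cost the quote alludes to is visible here too: the counter doubles the amount of data that must be encrypted per 64-bit block.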


Rob Stubbs November 10, 2025 9:03 AM

The main (and significant) benefits of TEEs include (i) protecting sensitive data from malicious code running on the same system, and (ii) attesting the validity of the code and operational environment.

While TEEs afford some level of physical security (e.g. encrypting data in RAM), it is difficult for the CPU to protect an entire system against all physical threats. This would really need anti-tamper mechanisms as deployed in HSMs.

Similarly, TEEs can’t protect against electromagnetic (EM) side-channel threats – you still need to use best-practice coding techniques for cryptography.

The bottom line is that it’s important to understand the threats you’re trying to protect against and the mitigations offered by different security technologies. There is no single silver bullet; defence in depth is needed. While not perfect, TEEs are a valuable tool in the security toolbox.

Clive Robinson November 10, 2025 5:07 PM

@ AlexT, ALL,

Every so often I comment on these issues on this blog; you can find my limited comments on this one back on the Squid page, where it was raised.

But one fundamental mistake nearly everyone makes is from the observation,

“… if you don’t have physical security all bets are off.”

It is a problem because “physical security” is not a natural phenomenon as such. Simple objects have no real intrinsic security, and neither do components of systems.

This gives rise to a couple of things to note,

1, All security has to be built from insecure components.
2, All components have vulnerabilities around them as do the systems they are part of.

Thus security can be seen as two things,

1, Built from imperfect “methods”.
2, So “probabilistic” in nature.

Worse, human dogma often gets in the way as a form of cognitive bias.

Thus we end up almost always with hierarchical systems made of components that form a tree structure. These are generally bad for security as they become more vulnerable with both scale and complexity.

Which is why both physical and informational security –of which physical is a subset– are generally managed by three things,

1, Segregation.
2, Mandated gap crossing.
3, Instrumentation of gap traffic.
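A minimal sketch of those three principles (all names here are my own invention, not any real product): two segregated domains that cannot touch each other’s state, a single mandated gateway as the only way across the gap, and a log instrumenting everything that tries to cross.

```python
from collections import deque

class Gateway:
    """Sole crossing point between two segregated domains. Enforces a
    policy on every message (mandated gap crossing) and records all
    traffic, passed or dropped (instrumentation)."""
    def __init__(self, allowed_prefix: str):
        self.allowed_prefix = allowed_prefix
        self.queue = deque()   # the only channel between the domains
        self.log = []          # instrumentation of gap traffic

    def send(self, msg: str) -> bool:
        ok = msg.startswith(self.allowed_prefix)  # the mandated check
        self.log.append((msg, "pass" if ok else "drop"))
        if ok:
            self.queue.append(msg)
        return ok

    def receive(self):
        return self.queue.popleft() if self.queue else None

gw = Gateway(allowed_prefix="REPORT:")
gw.send("REPORT:temp=21C")   # conforming traffic crosses the gap
gw.send("EXEC:rm -rf /")     # policy violation: dropped, but still logged
print(gw.receive())          # REPORT:temp=21C
print(gw.receive())          # None -- the violation never crossed
```

The instrumentation matters as much as the blocking: the dropped message still appears in the log, which is what lets you detect that something tried to cross.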

Most modern computers are not designed to support these in any real sense, including their “Secure Enclaves”. So they sort of get “bolted on” in software, quite unreliably.

It’s one of the reasons HSMs can be so eyewateringly expensive (yet still be unreliable at best).

Ages and ages ago I pointed out here and in other places that,

“Security is a Quality Process, and should be in place before a project is even contemplated”.

Yet here we are in ICTsec, still like Victorian artisans “bolting bits on” over the cracks, rather than designing issues out as modern engineers, architects, etc. tend to do.

John November 11, 2025 7:55 AM

“The low-cost, low-complexity attack works by placing a small piece of hardware between a single physical memory chip and the motherboard slot it plugs into.”

If the blackhat has sufficient access to be able to mess with the components on the motherboard, then it’s game over, no?

And surely the cloud provider does have that access. Someone does, for sure. Someone manages the hardware.

lurker November 11, 2025 1:49 PM

@John

Convenience beats security, again.
The convenience, that is, of being able to upgrade or replace faulty RAM. But don’t count on soldered-in memory either; somebody will already have a device that clips onto the chip pins for a similar result.

Physical access is part of the trust chain.

Not really anonymous November 11, 2025 3:15 PM

Soldered memory isn’t really a solution, since there isn’t a reasonable way to attest that soldered memory is being used while using an enclave.

ResearcherZero November 13, 2025 3:06 AM

@Clive Robinson

Google is pitching “secure compute” using TEE for its new useless mobile AI boondoggle. The likelihood that AI functions can be offloaded securely to cloud compute is low. The risk that such functions will be exploited to run malicious instructions or abuse side channels is high. The uses it is being pitched for are not worth the added complexity it involves.

Google’s Magic Cue thingamajig will randomly provide suggestions that are entirely useless and unwanted like Clippy, but a special type of hell based on the user’s personal data.

Microsoft is trialling this kind of approach with its new iterations of Windows: annoying people with prompts about the many extra crappy services and options Microsoft can provide, while hiding the options to disable the added distractions throughout the system. There are already lengthy guides appearing online detailing how all of this crap can be disabled.

Clive Robinson November 13, 2025 4:53 AM

@ ResearcherZero,

“The uses it is being pitched for are not worth the added complexity it involves.”

You could usefully have added

“cost and inconvenience”

Just after complexity 😉

As you know I’ve been predicting this nonsense for quite a while now, and since doing so I see more and more people finding their own way to reach the same conclusion…

Thus it raises the question,

“As Sam Altman is not an emperor, why do we have to see him parade around with his ass hanging out?”

That guy is a fraudster, and so are most of the other seniors pumping it up knowing it’s going to dump. As the AI Hype Bubble around current AI LLM and ML systems gets more and more desperately shilled, it is starting to show that they cannot pump hard enough to keep it up any longer. Thus the question of,

“Burst, Deflate, or Both?”

Arises…

I’m hoping it’s deflate for two reasons,

1, The economy can’t take a burst.
2, Whilst “General purpose LLM and ML” is a complete bust, it genuinely does have niche applications.

That is, like all the previous AI paradigms going back a half century or so, it has uses beyond being a predictive text generator.

But to get the benefits, the training data fed into the ML has to be of “narrow depth”, not a street-sweepings-wide “breadth” of basically garbage. As such it also needs to be “clean” and ordered in a way that will form a more balanced “tree”. And the LLM has to be limited by a hard functional rule set.

Think of it like a “Galton Pin board” that builds the “normal distribution curve”. But with the position of each pin slightly moved to give a different distribution.
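The analogy can be made concrete in a few lines; this is nothing more than a toy simulation. With fair pins, each ball goes left or right with probability 1/2 and the landing positions build the familiar binomial approximation to the normal curve; nudging the pins (biasing the probability) shifts the whole distribution, as the comment suggests.

```python
import random
from collections import Counter

def galton(n_balls: int, n_rows: int, p_right: float, seed: int = 0) -> Counter:
    """Drop n_balls through n_rows of pins; at each pin a ball bounces
    right with probability p_right. Returns counts per landing slot."""
    rng = random.Random(seed)
    return Counter(sum(rng.random() < p_right for _ in range(n_rows))
                   for _ in range(n_balls))

fair = galton(10_000, 12, p_right=0.5)    # approximates the normal curve
biased = galton(10_000, 12, p_right=0.7)  # same pins, nudged: shifted curve

mean = lambda c: sum(k * v for k, v in c.items()) / sum(c.values())
print(round(mean(fair), 1))    # close to 6.0 (= 12 * 0.5)
print(round(mean(biased), 1))  # close to 8.4 (= 12 * 0.7)
```

Moving the pins does not change the mechanism, only the distribution it produces — which is the point of the analogy to a trained model.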

In effect it’s a “matched filter” on an N-dimensional spectrum; in highly constrained domains like bio-medical research they can be used to “model effects” fairly precisely, even though still sufficiently stochastic. In essence that is what they did with AlphaFold, which produced more candidates than we can currently test.

The trick is an old one used in mathematics and physics. You do experiments and your data forms surfaces, but you in effect have no knowledge as to why they are the shape they are. You then do the equivalent of a “curve fit”, which gives one or more formulations. You then take the formulations, see if you can find the parts that match both the curve and existing knowledge, and test again to see if they hold true. So the practical produces data, the data produces a theoretical hypothesis, and you then use that hypothesis to run new practical experiments to confirm or reject the formulation.
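That loop can be illustrated with the simplest kind of “curve fit”. This is a pure-stdlib least-squares sketch on synthetic data — the quadratic model, the noise level, and the test points are all invented for illustration: experiment one produces points, the fit produces a candidate formulation, and fresh points test whether it holds.

```python
# Fit y = a*x^2 + b*x + c to noisy "experimental" data by least squares
# (normal equations solved with a tiny Gaussian elimination), then test
# the fitted formulation against fresh data points.
import random

def fit_quadratic(xs, ys):
    # Build the normal equations A^T A w = A^T y for basis [x^2, x, 1].
    rows = [[x * x, x, 1.0] for x in xs]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    m = [ata[i] + [aty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    w = [0.0] * 3
    for i in (2, 1, 0):  # back substitution
        w[i] = (m[i][3] - sum(m[i][j] * w[j] for j in range(i + 1, 3))) / m[i][i]
    return w  # [a, b, c]

rng = random.Random(1)
true = lambda x: 2.0 * x * x - 3.0 * x + 1.0        # "nature"
xs = [i / 10 for i in range(-20, 21)]
ys = [true(x) + rng.gauss(0, 0.05) for x in xs]      # experiment 1
a, b, c = fit_quadratic(xs, ys)                      # candidate formulation

# Experiment 2: does the formulation predict fresh points it never saw?
fresh = [2.5, -2.5, 3.0]
errs = [abs((a * x * x + b * x + c) - true(x)) for x in fresh]
print(all(e < 0.5 for e in errs))  # True: the formulation holds
```

The fit recovers the underlying coefficients closely, and the second “experiment” on unseen points is what separates a formulation worth keeping from an overfit curve.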

It’s effectively how we got much of the early Quantum Physics nearly a century ago.

It’s a tried and tested method, and something that Modified ML and LLMs can do fairly reliably and more productively than most humans can.

But it’s very niche and very vertical so the potential for significant “easy money” is really not there.
