Apple’s New Memory Integrity Enforcement

Apple has introduced a new hardware/software security feature in the iPhone 17: “Memory Integrity Enforcement,” targeting the memory safety vulnerabilities that spyware products like Pegasus tend to use to get unauthorized system access. From Wired:

In recent years, a movement has been steadily growing across the global tech industry to address a ubiquitous and insidious type of bugs known as memory-safety vulnerabilities. A computer’s memory is a shared resource among all programs, and memory safety issues crop up when software can pull data that should be off limits from a computer’s memory or manipulate data in memory that shouldn’t be accessible to the program. When developers—even experienced and security-conscious developers—write software in ubiquitous, historic programming languages, like C and C++, it’s easy to make mistakes that lead to memory safety vulnerabilities. That’s why proactive tools like special programming languages have been proliferating with the goal of making it structurally impossible for software to contain these vulnerabilities, rather than attempting to avoid introducing them or catch all of them.

[…]

With memory-unsafe programming languages underlying so much of the world’s collective code base, Apple’s Security Engineering and Architecture team felt that putting memory safety mechanisms at the heart of Apple’s chips could be a deus ex machina for a seemingly intractable problem. The group built on a specification known as Memory Tagging Extension (MTE) released in 2019 by the chipmaker Arm. The idea was to essentially password protect every memory allocation in hardware so that future requests to access that region of memory are only granted by the system if the request includes the right secret.

Arm developed MTE as a tool to help developers find and fix memory corruption bugs. If the system receives a memory access request without passing the secret check, the app will crash and the system will log the sequence of events for developers to review. Apple’s engineers wondered whether MTE could run all the time rather than just being used as a debugging tool, and the group worked with Arm to release a version of the specification for this purpose in 2022 called Enhanced Memory Tagging Extension.

To make all of this a constant, real-time defense against exploitation of memory safety vulnerabilities, Apple spent years architecting the protection deeply within its chips so the feature could be on all the time for users without sacrificing overall processor and memory performance. In other words, you can see how generating and attaching secrets to every memory allocation and then demanding that programs manage and produce these secrets for every memory request could dent performance. But Apple says that it has been able to thread the needle.
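
To make the mechanism concrete: each allocation gets a small random tag, the tag travels in otherwise unused top bits of the pointer, and hardware compares the two on every access. Here is a rough software sketch of that idea in C, purely illustrative and not Apple’s implementation; it assumes 64-bit pointers and a 4-bit tag, as MTE uses:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy model of memory tagging (illustrative only, not Apple's EMTE):
 * a 4-bit tag is assigned per allocation and also carried in the
 * pointer's unused top bits; every access compares the two.
 * Assumes 64-bit pointers. Real MTE does this in hardware, per
 * 16-byte granule, and raises a fault on mismatch. */

#define TAG_SHIFT 56

typedef struct {
    void   *base;  /* untagged address of the allocation */
    uint8_t tag;   /* 4-bit tag assigned at allocation time */
} allocation;

/* Allocate memory, pick a random 4-bit tag, and fold the tag into the
 * returned pointer's top bits. */
static void *tagged_alloc(allocation *rec, size_t size) {
    rec->base = malloc(size);
    rec->tag  = (uint8_t)(rand() & 0xF);
    return (void *)((uintptr_t)rec->base | ((uintptr_t)rec->tag << TAG_SHIFT));
}

/* Compare the tag carried in a pointer with the allocation's tag.
 * Hardware would fault on a mismatch; here we just return the result. */
static int tag_matches(const allocation *rec, const void *ptr) {
    uint8_t ptr_tag = (uint8_t)(((uintptr_t)ptr >> TAG_SHIFT) & 0xF);
    return ptr_tag == rec->tag;
}

int main(void) {
    allocation rec;
    void *p = tagged_alloc(&rec, 64);

    printf("valid pointer passes:  %d\n", tag_matches(&rec, p));

    /* A stale or forged pointer carrying the wrong tag fails the check,
     * which is how use-after-free and out-of-bounds bugs get caught. */
    void *forged = (void *)((uintptr_t)rec.base |
                            ((uintptr_t)((rec.tag + 1) & 0xF) << TAG_SHIFT));
    printf("forged pointer passes: %d\n", tag_matches(&rec, forged));

    free(rec.base);
    return 0;
}
```

The real hardware scheme differs in many details—tags live in separate tag storage, checks happen in the load/store path, and the allocator controls tag assignment—but the compare-on-every-access principle is the same.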

Posted on September 23, 2025 at 7:07 AM • 22 Comments

Comments

sle September 23, 2025 7:50 AM

Such progress.
In 2010, even basic Execution Space Protection wasn’t implemented on iOS.
Now they lead in terms of memory protection.

ATN September 23, 2025 10:51 AM

As if it were simple to isolate tasks inside an operating system…
In general, those tasks are on the same computer because they need to talk to each other: the web browser needs to talk to the printer, the video game needs the graphics card and therefore support for graphics libraries/compilers that every other task also uses, maybe at the exact same time.
Isolation can be a little simpler between virtual machines on the same computer, but the general case is not a solved problem.

KC September 23, 2025 10:54 AM

A little OT but hopefully not terribly.

An excerpt from a linked June 2025 report on MSL (memory safe languages):

Prossimo, a project of the [ISRG] and [OpenSSF], states that it plans to transition the Internet’s critical infrastructure to memory safe code and develop memory safe essential software.

I’m reading that Prossimo helped support adding a memory safe language to the Linux kernel. (And it is contributing to many other projects; still researching.) That seems very important considering the distribution. Happy to see MSLs being implemented so widely. It’s hard to think that mercenary spyware developers or others will fold up shop, but it’s good to make the investments where there are resources or critical needs.

Wayne September 23, 2025 11:18 AM

And guaranteed Pegasus et al were first in line to buy new iPhone 17s to figure out ways to corrupt them. It will be interesting to see how this fight goes.

Over the weekend I was reading that the C++ standards people rejected a proposal for Safe C++ in favor of Profiles to improve memory safety. Unfortunately, the article didn’t explain exactly what the difference was. I just know that of all the programming languages I’ve studied over the years, C/C++ were my least favorite.

wiredog September 23, 2025 11:58 AM

This is a hardware solution. Memory safe hardware! At some point the kernel has to be memory unsafe so handling this at a lower level than the kernel helps. Although you do have to worry about the “secret” leaking. Still, it’s another level of difficulty for the bad guys to have to bypass.

mark September 23, 2025 12:31 PM

All this is well and good… until someone gets physical control of the phone. Then LN2 (IIRC) to freeze the memory, copy all of it, and then break any encryption on the cloned copies.

Mynacol September 23, 2025 12:38 PM

Now they lead in terms of memory protection.

Wonder if anyone in the Android space will respond.

Hardware-side, the Google Pixel 8 through 10 have already had the (non-extended) ARM Memory Tagging Extension for over two years. Android also has support for it, and applications can explicitly opt into it. I believe parts of the system have also been running with MTE enabled for some time.

GrapheneOS adapted their hardened memory allocator and their flavor of Android shortly after the release of the Pixel 8. It has since been enabled by default for the kernel, system components, all apps without native code (mostly pure Java/Kotlin apps), and of course apps that explicitly opt into it. It also offers a toggle to enable it by default for all other apps as well.
That way, they quickly noticed a memory issue in a Bluetooth component.
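
For native code, the process-level opt-in on Linux/Android goes through a prctl() call; here is a minimal sketch of what that looks like. It only takes effect on MTE-capable hardware and kernels, and it is of course not how Apple exposes EMTE:

```c
#include <stdio.h>
#include <sys/prctl.h>

/* Fallback definitions in case the libc headers predate the MTE
 * additions; the values match the Linux UAPI (linux/prctl.h). */
#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL 55
#endif
#ifndef PR_TAGGED_ADDR_ENABLE
#define PR_TAGGED_ADDR_ENABLE (1UL << 0)
#endif
#ifndef PR_MTE_TCF_SYNC
#define PR_MTE_TCF_SYNC (1UL << 1)
#endif
#ifndef PR_MTE_TAG_SHIFT
#define PR_MTE_TAG_SHIFT 3
#endif

int main(void) {
    /* Ask the kernel for tagged addressing with synchronous tag-check
     * faults, permitting all 16 tag values for the allocator to use.
     * This succeeds only on MTE-capable hardware and kernels. */
    unsigned long ctrl = PR_TAGGED_ADDR_ENABLE
                       | PR_MTE_TCF_SYNC
                       | (0xFFFFUL << PR_MTE_TAG_SHIFT);

    if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0) != 0) {
        perror("prctl(PR_SET_TAGGED_ADDR_CTRL)");
        return 1;
    }
    puts("MTE synchronous tag checking enabled for this process");
    return 0;
}
```

Synchronous checking (PR_MTE_TCF_SYNC) reports the fault at the offending instruction; the kernel also offers an asynchronous mode that defers reporting in exchange for speed.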

Anonymous Coward September 23, 2025 12:46 PM

“password protect every memory allocation in hardware”

Does that mean in a few years we’ll be talking about getting memory off the buggy random character generator, and instead using passwords longer than 8 characters with at least 1 uppercase, 1 special character, 1 number, no duplicates, and more than 56 bits of entropy?

Clive Robinson September 23, 2025 1:29 PM

@ ALL,

There is a fly in the ointment to this high five clapping party.

As Apple says in,

https://security.apple.com/blog/memory-integrity-enforcement/

“Consider that MTE can be configured to report memory corruption either synchronously or asynchronously. In the latter mode, memory corruption doesn’t immediately raise an exception, leaving a race window open for attackers.”

Consider the above statement carefully.

Rewritten it says that,

“Asynchronous writes to memory cells leave race conditions open that are vulnerabilities.”

That is, the question arises,

“Can asynchronous memory cell changes be prevented?”

With all memory cell technology currently the answer to this is “unavoidably true”. Because the changes in the cell are not caught/flagged externally when the memory cell change happens.

Put simply, the change is hidden behind the multiplexer on the memory cell output that hides/selects the cell onto the appropriate memory bus pin.

We know that RowHammer style attacks work against memory cells that are not selected.

This issue was known about back in the 1970s and earlier, when looking at Error Correction and Detection for memory that would be used in high radiation environments.

Now a second question arises.

“If an asynchronous memory cell write is a “race condition vulnerability” for one memory cell, can it be a vulnerability for two or more memory cells in the same race condition time frame?

To which the answer is obviously “Yes”.

So consider the “tagging mechanism”: at the end of the day it is going to be one or more additional memory cells.

So another question arises,

“If you can change a memory cell in a variable location asynchronously, can you in the same time period of the race condition change one or more memory cells in the tag location, so that it changes in a way that the variable location change is not flagged at the end of the race condition time period?”

The answer is “Yes”.

The only remaining question is,

“How difficult would it be for an attacker to do this?”

And the answer ranges across a very wide span dependent upon many things.

The problem is it remains “possible”.

If people are honest about it they will confirm that we knew, back before our host @Bruce was born, that memory could be changed asynchronously and thus undetectably. We also later knew that in DRAM, adjacent rows of memory cells could affect each other when densely packed, causing asynchronous changes.

Assumptions were made about the efficacy of “Error Correcting Codes” (ECC) and the difficulty of changing adjacent row bits. The result was that it was assumed to be a “non-issue”… Then RowHammer happened, and suddenly it was a “proven exploitable issue”.
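
For those who have not seen it, the published user-space RowHammer demonstrations boil down to an access pattern like the following (shown for x86, where it was first demonstrated; whether any bits actually flip depends entirely on the DRAM fitted):

```c
#include <emmintrin.h>  /* _mm_clflush */
#include <stdint.h>

/* The classic RowHammer access pattern: repeatedly read two
 * "aggressor" addresses that map to different rows in the same DRAM
 * bank, flushing them from the CPU cache each time so every read
 * actually activates the DRAM rows. On vulnerable parts this can
 * flip bits in a victim row in between, without ever writing to it. */
static void hammer(volatile uint8_t *aggressor1,
                   volatile uint8_t *aggressor2,
                   long iterations) {
    for (long i = 0; i < iterations; i++) {
        (void)*aggressor1;                      /* activate row 1 */
        (void)*aggressor2;                      /* activate row 2 */
        _mm_clflush((const void *)aggressor1);  /* force next read to DRAM */
        _mm_clflush((const void *)aggressor2);
    }
}
```

Picking aggressor addresses that actually share a DRAM bank requires knowledge of the physical address mapping, which is most of the work in a practical attack.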

Just something to remember,

“Sometimes the assumed impossible can be not just possible, but undetectably so…”

AND don’t make the mistake of thinking Apple’s Research people do not know this…

They do, and are deliberately avoiding talking about it…

You will see this if you read the entire document, because you will find they “qualify” all of their claims to above the “ALU Level” in the “computing stack”, and quite a bit above the “CPU Level” for others.

Memory cell attacks by power supply disruption, EM Fault injection, and other similar low down the stack attacks are way way below Apple’s protections in the Computing Stack.

Apple “are assuming” they can isolate the low level part of the computing stack at the CPU level.

Well, there are such things as “Reach Around Attacks”; I talked about them a few years back with “Castles-v-Prisons”, and RowHammer proved them for all who could read and assimilate.

Put simply, by “rattling the memory” RowHammer caused DRAM power supply and other common bus electrical noise deep inside the computer, which acted rather like a “Brown Out” does in your home, where the lights flicker etc. Mostly these go by without issue, but when induced actively they can cause electronic and electrical items to “soft fail” or, worse, “hard fail”. The same happens with RowHammer to the memory chips in your device.

You can do this “rattling” like quite a few other “reach around attacks” from way up the computing stack as it can be done via web pages.

Active EM attacks can be done from so far up the computing stack they are performed from quite a distance outside your device…

So just be prepared for these in the next decade or so.

If of course OS level “Client Side Scanning” via AI does not do it way more easily.

You did know that Apple have put not just “client side scanning” in their OS, but also a Bluetooth Low Energy mesh network by which it can “ET Phone Home”?

Clive Robinson September 23, 2025 1:47 PM

@ ALL,

Damn, damn, damn…

Sorry folks, an error got through my brief proofreading process.

The sentence in my above,

‘With all memory cell technology currently the answer to this is “unavoidably true”.’

Has the logic inverted and it should be

‘With all memory cell technology currently, the answer to this is unavoidably “not true”.’

It came about as I was “untangling” a more unreadable sentence to make it readable …

Not really anonymous September 23, 2025 4:39 PM

I think it would be better described as Apple having a serious case of NOBUS: NObody can screw you over But US. They are happy to get paid for selling you out and to take a cut from anyone selling you a service. If their security isn’t good enough, they don’t get as big a piece of the action.

Clive Robinson September 23, 2025 4:39 PM

@ Anonymous Coward,

I would not really use the example of,

“password protect every memory allocation in hardware”

The tag is the unused upper 4 bits in a byte pointer address, due to it pointing at a 64-bit word.

So there can only be 16 tag values, so hardly “passwords”, and ordinarily considered “nowhere near secure” for anything with only 16 values (think of having a 1-digit PIN for your bank card)…

Due to the way gate level circuits are implemented to keep real estate and power consumption down whilst keeping speed up, such tag hardware in the past has been more like a multibit parity check. That is, four bits for every memory word, XORed.

So in effect it’s more like a “stream cipher”… That is, a pseudo random key is generated and used to encrypt a hash of the value.

If that top nibble of the word used as an address pointer matches then it’s assumed to be valid.

If the “pointer” being used does not match then the memory bus level hardware below the CPU level in the computing stack throws an exception.

Thus, as is oft the case with malware attacks, if you as an attacker offset the pointer or overwrite it in some way, then your chance of having a valid tag is low. Thus a hardware exception hits and the program gets halted.
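
To put a number on “your chance of having a valid tag is low”: with a 4-bit tag there are only 16 values, so a blind guess passes roughly 1 time in 16, and the first wrong guess normally crashes the target process. A quick simulation, assuming tags are assigned uniformly at random:

```c
#include <stdio.h>
#include <stdlib.h>

/* With a 4-bit tag there are 16 possible values, so a forged pointer
 * whose tag is guessed blindly should pass the check about 1 time in
 * 16 -- and each wrong guess normally means a crash. Quick Monte
 * Carlo check of that 1/16 figure. */
int main(void) {
    const int trials = 1000000;
    int hits = 0;

    srand(1);
    for (int i = 0; i < trials; i++) {
        int real_tag    = rand() & 0xF;  /* tag assigned at allocation */
        int guessed_tag = rand() & 0xF;  /* tag carried by the forged pointer */
        if (real_tag == guessed_tag)
            hits++;
    }
    printf("forged pointer accepted in %.2f%% of trials (expect ~6.25%%)\n",
           100.0 * hits / trials);
    return 0;
}
```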

Snarki, child of Loki September 23, 2025 8:03 PM

Sounds like many of these problems could be avoided by running VMS on your phone.
Alas, it is not to be.

Not really anonymous September 23, 2025 8:49 PM

Early versions of VMS did some pretty dumb things security-wise. In 1.6, privileged code would turn off privilege bits before running untrusted code (e.g. a foreign terminal handler). But the untrusted code could turn them back on again. The mail program had physical IO privilege and would let people use foreign terminal handlers to display stuff on fancy CRTs.

Clive Robinson September 24, 2025 2:40 AM

@ Snarki, child of Loki, ALL

With regards,

“Sounds like many of these problems could be avoided by running VMS on your phone.

Alas, it is not to be.”

Whilst a hardware based “Virtual Memory System”(VMS) might help with certain types of system core memory attack…

As far as I remember, Apple’s phone OS does not even do real “multi-tasking”…

wiredog September 24, 2025 5:54 AM

@Mexaly
The security people where I work all have iPhones. Well, the more paranoid have cheap non-smart flip phones. But the ones who have smart phones have iPhones.

Who? September 24, 2025 11:52 AM

MTE has been supported by ARM Cortex cores since, I think, 2019. It is less advanced than Apple’s implementation; however, new Cortex cores will support EMTE (FEAT_MTE4) soon, including quite a few CPU side channel fixes that are currently unresolved on Apple’s hardware.

Clive Robinson September 25, 2025 6:57 PM

@ wiredog, Mexaly

With regards,

“The security people where I work all have iPhones. Well, the more paranoid have cheap non-smart flip phones.”

None are “paranoid enough” and I can say that as a design engineer who used to design not just cellular and cordless phones with enhanced security, but also computing devices which likewise were more secure than standard Consumer and Commercial devices.

The little I’ve seen of Apple’s phone design really does not impress me at all security wise.

The point I made about “secure message apps” not being secure in a system applies to all current mobile phones and smart devices.

They are not designed in a way that provides security.

That’s because at the end of the day,

“They are designed for profit not security, because security costs real money.”

Jonathan Wilson September 25, 2025 11:46 PM

Why has no-one implemented (either on a phone or a PC) hardware encrypted memory of the sort Microsoft has on the Xbox? (where the raw data that is stored in the DRAM is encrypted). That would make all the various DRAM attacks impossible.

Clive Robinson September 26, 2025 10:20 AM

@ Jonathan Wilson, ALL,

“Why has no-one implemented (either on a phone or a PC) hardware encrypted memory of the sort Microsoft has on the Xbox? (where the raw data that is stored in the DRAM is encrypted).”

Many people have tried it, and they have all failed in some way that negates,

“That would make all the various DRAM attacks impossible.”

Encrypting a single “stream of data” once without modification is secure.

Encrypting files multiple times or with multiple changes, opens up all sorts of sometimes trivial attack vectors.

For instance the “key”: does it change every time, or is it locked to the memory location?

With a stream cipher, or a block cipher used as the equivalent, you quickly and easily get the “Multiple Key Use” issue that turns the “Perfect Secrecy” of the “One Time Pad” into a cipher that can moderately easily be broken, even by hand, with as little as two ciphertexts under the same key (i.e. “in depth”).
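
A small demonstration of the “in depth” problem: if the same keystream is reused for two plaintexts, XORing the two ciphertexts cancels the key completely and leaves the XOR of the plaintexts, which is usually enough for an analyst to recover both. A sketch with a made-up 16-byte keystream:

```c
#include <stdio.h>

/* "In depth" demo: two plaintexts encrypted under the same keystream.
 * XORing the ciphertexts cancels the key entirely, leaving p1 XOR p2,
 * so the non-zero bytes reveal exactly where the messages differ --
 * no key recovery required. */
int main(void) {
    const char *p1 = "TRANSFER 100 USD";
    const char *p2 = "TRANSFER 999 USD";
    const unsigned char key[16] = {   /* the reused keystream */
        0x3a, 0x91, 0x5c, 0x07, 0xe2, 0x48, 0xb3, 0x6d,
        0x1f, 0xaa, 0x74, 0xc9, 0x02, 0x8e, 0x55, 0xd0
    };
    unsigned char c1[16], c2[16];

    for (int i = 0; i < 16; i++) {
        c1[i] = (unsigned char)p1[i] ^ key[i];
        c2[i] = (unsigned char)p2[i] ^ key[i];
    }

    printf("byte positions where the plaintexts differ:");
    for (int i = 0; i < 16; i++)
        if ((c1[i] ^ c2[i]) != 0)      /* equals p1[i] ^ p2[i] */
            printf(" %d", i);
    printf("\n");
    return 0;
}
```

That is why memory encryption schemes have to tie the key or tweak to the address and change it on every write, which is exactly where the complexity, and the attack surface, creep back in.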
