New iPhone Exploit Uses Four Zero-Days

Kaspersky researchers are detailing “an attack that over four years backdoored dozens if not thousands of iPhones, many of which belonged to employees of Moscow-based security firm Kaspersky.” It’s a zero-click exploit that makes use of four iPhone zero-days.

The most intriguing new detail is the targeting of the heretofore-unknown hardware feature, which proved to be pivotal to the Operation Triangulation campaign. A zero-day in the feature allowed the attackers to bypass advanced hardware-based memory protections designed to safeguard device system integrity even after an attacker gained the ability to tamper with memory of the underlying kernel. On most other platforms, once attackers successfully exploit a kernel vulnerability they have full control of the compromised system.

On Apple devices equipped with these protections, such attackers are still unable to perform key post-exploitation techniques such as injecting malicious code into other processes, or modifying kernel code or sensitive kernel data. This powerful protection was bypassed by exploiting a vulnerability in the secret function. The protection, which has rarely been defeated in exploits found to date, is also present in Apple’s M1 and M2 CPUs.

The details are staggering:

Here is a quick rundown of this 0-click iMessage attack, which used four zero-days and was designed to work on iOS versions up to iOS 16.2.

  • Attackers send a malicious iMessage attachment, which the application processes without showing any signs to the user.
  • This attachment exploits the remote code execution vulnerability CVE-2023-41990 in the undocumented, Apple-only ADJUST TrueType font instruction. This instruction had existed since the early nineties before a patch removed it.
  • It uses return/jump oriented programming and multiple stages written in the NSExpression/NSPredicate query language, patching the JavaScriptCore library environment to execute a privilege escalation exploit written in JavaScript.
  • This JavaScript exploit is obfuscated to make it completely unreadable and to minimize its size. Still, it has around 11,000 lines of code, which are mainly dedicated to JavaScriptCore and kernel memory parsing and manipulation.
  • It exploits the JavaScriptCore debugging feature DollarVM ($vm) to gain the ability to manipulate JavaScriptCore’s memory from the script and execute native API functions.
  • It was designed to support both old and new iPhones and included a Pointer Authentication Code (PAC) bypass for exploitation of recent models.
  • It uses the integer overflow vulnerability CVE-2023-32434 in XNU’s memory mapping syscalls (mach_make_memory_entry and vm_map) to obtain read/write access to the entire physical memory of the device at user level.
  • It uses hardware memory-mapped I/O (MMIO) registers to bypass the Page Protection Layer (PPL). This was mitigated as CVE-2023-38606.
  • After exploiting all the vulnerabilities, the JavaScript exploit can do whatever it wants to the device including running spyware, but the attackers chose to: (a) launch the IMAgent process and inject a payload that clears the exploitation artefacts from the device; (b) run a Safari process in invisible mode and forward it to a web page with the next stage.
  • The web page has a script that verifies the victim and, if the checks pass, receives the next stage: the Safari exploit.
  • The Safari exploit uses CVE-2023-32435 to execute a shellcode.
  • The shellcode executes another kernel exploit in the form of a Mach object file. It uses the same vulnerabilities: CVE-2023-32434 and CVE-2023-38606. It is also massive in terms of size and functionality, but completely different from the kernel exploit written in JavaScript. Certain parts related to exploitation of the above-mentioned vulnerabilities are all that the two share. Still, most of its code is also dedicated to parsing and manipulation of the kernel memory. It contains various post-exploitation utilities, which are mostly unused.
  • The exploit obtains root privileges and proceeds to execute other stages, which load spyware. We covered these stages in our previous posts.

This is nation-state stuff, absolutely crazy in its sophistication. Kaspersky discovered it, so there’s no speculation as to the attacker.

Posted on January 4, 2024 at 7:11 AM

Comments

Clive Robinson January 4, 2024 7:50 AM

@ ALL,

The “hardware exploit” is perhaps the most interesting thing.

Because it looks like it could have been “implanted” into the hardware as an attempt at a “Golden Key” “back door”…

With only minor changes, it could have so many different keys that every nation that demanded one for “National Security” could spy only on the phones sold into its own jurisdiction, not other jurisdictions.

Thus the question is where the idea originated…

Clive Robinson January 4, 2024 9:16 AM

@ ALL,

The hardware bypass is described in the research paper by Kaspersky researcher Boris Larin as,

“If we try to describe this feature and how attackers use it, it all comes down to this: attackers are able to write the desired data to the desired physical address with bypass of hardware-based memory protection by writing the data, destination address and hash of data to unknown, not used by the firmware, hardware registers of the chip.

Our guess is that this unknown hardware feature was most likely intended to be used for debugging or testing purposes by Apple engineers or the factory, or was included by mistake. Since this feature is not used by the firmware, we have no idea how attackers would know how to use it”

As I noted just yesterday in,

https://www.schneier.com/blog/archives/2023/12/friday-squid-blogging-sqids.html/#comment-430564

“In effect that is a “One Way Function” as you cannot really “black box out” the S-box (if it’s designed correctly).

The implication of which is “somebody knows” the S-box structure outside of what should be a very limited subset of people[1].”

Which is maybe why our host @Bruce has said,

“This is nation-state stuff, absolutely crazy in its sophistication. Kaspersky discovered it, so there’s no speculation as to the attacker.”

But on the more technical side back last century our host in his book on crypto algorithms,

https://www.schneier.com/books/applied-cryptography/

Showed how to use a Feistel round structure to turn a hash function into a keyed encryption/decryption algorithm.

Well, it also works the other way around: an encryption/decryption function can be used as a hash, and that has certain advantages.

Nearly all modern CPUs of any power contain the hardware to do AES. Thus building in a “secret hash” / “keyed hash” using it would be trivial to do. If the key is kept in internal EEPROM, then building a “Golden Key” back door is almost as trivial.
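For the curious, here is a minimal software sketch of that idea. It is purely illustrative and not the mechanism Kaspersky observed (whose hash function they could not identify): a CBC-MAC-style keyed hash built from the AES primitive, written with the Python `cryptography` package. The `keyed_hash` function and the register-checking use described in the comments are hypothetical.

```python
# A CBC-MAC-style keyed hash built from AES (illustrative only; not the
# undocumented Apple mechanism, whose hash Kaspersky could not identify).
# The secret key stands in for a value burned into on-die EEPROM/fuses.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # AES block size in bytes

def keyed_hash(secret_key: bytes, data: bytes) -> bytes:
    """Chain AES encryptions over the data, CBC-MAC style, to get a 16-byte tag."""
    # Length-prefix and zero-pad to whole blocks (toy padding for this sketch).
    msg = len(data).to_bytes(8, "big") + data
    msg += b"\x00" * (-len(msg) % BLOCK)

    aes = Cipher(algorithms.AES(secret_key), modes.ECB()).encryptor()
    state = b"\x00" * BLOCK
    for i in range(0, len(msg), BLOCK):
        block = msg[i:i + BLOCK]
        state = aes.update(bytes(a ^ b for a, b in zip(state, block)))
    aes.finalize()
    return state

# Hypothetical use: hardware would only honour a privileged MMIO write whose
# accompanying tag equals keyed_hash(secret_key, address_bytes + data_bytes).
# Without the key, fuzzing the registers never produces a valid tag.
```

Change the secret key and every tag changes completely, which is the property that would make per-market “Golden Keys” possible.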

As I further said yesterday,

“Because if it became known the 2nd Party was a US Agency, then the “NOBUS Back door” notion pushed so hard by William Barr and friends has been proved “blown for good”.

Which is just the sort of mayhem certain folks might like to chuck into some cosy “highly secret” two-party agreement…

Just a thought from before my breakfast cup of tea ;-)”

If people want more info on how to do a secret / keyed hash to create such a hardware “Golden Key” back door I can describe it, but as I said it’s fairly trivial to do.

Stopping such built in hardware backdoors is possible and I have previously described how to do it on this blog with “Castles-v-Prisons”.

Stevie-O January 4, 2024 9:25 AM

@Clive Robinson

I don’t have a strong cryptography background, so I would love to see more detail on your “Golden Key” technique.

Perry Fellwock January 4, 2024 10:40 AM

Subversion goes all the way down to the metal. Companies cooperate whether they want to or not (unless they want to end up like Lavabit). Don’t ask “is this technology secure?” Rather ask “which set of intelligence services has access?”

https://www.theamericanconservative.com/the-bogus-big-brother-big-tech-brawl-over-backdoors/

Why are American officials so up in arms about Huawei? Because implanting backdoors is something that Uncle Sam has been doing since the days of the Cold War.

https://www.washingtonpost.com/graphics/2020/world/national-security/cia-crypto-encryption-machines-espionage/

MDK January 4, 2024 10:50 AM

@Clive

Definitely interesting and I agree with your assessment.

@ALL

Is it a pipedream for Apple to enhance security around iMessage? Even being able to disable it would be nice. :) Just maybe… more bad lessons to come for Apple until people start dropping their products in droves. Until then, we will watch and listen.

jm January 4, 2024 12:00 PM

@MDK

iOS has the Lockdown Mode, which should block this kind of attack, because the malicious PDF will not be parsed.

Mexaly January 4, 2024 1:23 PM

13.6: The security plan must include alternate communication channels that use different technology than the primary channels.

iAPX January 4, 2024 1:58 PM

@Avi, All

A debug hardware register that bypasses security countermeasures is, by its nature, usable for nefarious purposes, such as this malware.
What’s the point of implementing PAC-protected pointers if anyone can bypass that security at any point?

This is my definition of a backdoor.

Tatütata January 4, 2024 2:02 PM

[My comment is getting flushed to “moderation” once again. I’m trying again with the explicit video URL removed.]

The two main actors are Apple and ARM.

I find the notion that Apple would feel the need to create an elaborate bypass to all the fancy shmancy protection schemes it came up with to be rather fanciful, when it already holds all the keys to the castle.

That leaves ARM. I don’t know how Systems-on-Chips are designed in general.

Either Apple sends a detailed specification to ARM of what they want to have in the chip, or ARM delivers the HDL source code, or an obfuscated version of it, to Apple, for integration. With all the IP, NDAs, and system integration issues, I think it is more likely that ARM worked to Apple’s spec.

If that should be the case, the question legitimately arises whether ARM was a willing participant. The two memory-mapped control register addresses were obviously not used for any documented purpose, and it would be easy enough to include some additional code implementing the logic.

So how did that code get in there?

  • With the designers’ assent and/or complicity?
  • By getting hacked?
  • By getting hacked upstream, e.g., through their development tools (VHDL, Verilog compiler?)
  • By getting hacked downstream, closer to the fab?

According to the presentation, Apple rendered the backdoor inaccessible by patching an address translation table. (See youtube video with ID 1f6YyH62jFE at 31’14”)

But if a malevolent third party had access to the source code, would they stop at inserting only one backdoor, or would one have additional backup mechanisms, kept for a rainy day? For example, one that is accessible from userland and doesn’t need several zero-day exploits to reach…

How can one audit a chip of this complexity?

The presenter noted wryly that the hash “is not a CRC” (Same video, at 28’22”)…

From what I see, you wouldn’t need AES or anything of that complexity to accomplish the purpose of preventing the backdoor from being exposed by fuzzing or other techniques. A large amount of data together with the keyed hash is necessary to open the lock, but without requiring too much silicon area or processing power on the lock side.

And there are other implications. If this was indeed a breach at ARM, and this wasn’t some hare-brained idea from an Apple geek, why would the intruder stop at Apple products? There are many more ARM-powered Android phones out there than Apple ones…

Which civilian would ever come up with an S-box for that purpose? If ARM devices are pwned, then it would be easier to generate an S-box table for each individual target rather than coming up with a custom solution each time.

I think we may see an acceleration in the adoption of RISC-V… I also wouldn’t be surprised if the tone of the discussions between Apple and ARM cooled down to cryogenic levels… Otherwise, why did Apple choose the same shape for its Cupertino HQ as that spooky place in Gloucestershire?

Tatütata January 4, 2024 3:09 PM

Another impressive bit in the presentation (around 35’46”) is that the researchers claim to have recovered 40 Apple IDs (used for coordinating communication with the C&C infrastructure) in the form of email addresses from their MD5 hashes. Yes, of course, MD5 is deprecated, and the addresses seem to have been obtained by combining words from a dictionary, but this means that Kaspersky has the resources to perform this kind of work.
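For a sense of what that dictionary search looks like in practice, here is a rough sketch; the wordlist, separators and domains are made-up placeholders, since Kaspersky have not published their exact method.

```python
# Rough sketch of a dictionary attack recovering word+word@domain email
# addresses from their MD5 hashes. The wordlist, separators and domains
# below are illustrative placeholders, not the ones actually involved.
import hashlib
from itertools import product

WORDS = ["alpine", "bobcat", "cedar", "delta"]      # hypothetical dictionary
SEPARATORS = ["", ".", "_"]
DOMAINS = ["@gmail.com", "@icloud.com"]             # hypothetical domains

def crack(target_hashes):
    """Try every word+sep+word+domain candidate against a set of hex MD5 digests."""
    found = {}
    for w1, sep, w2, dom in product(WORDS, SEPARATORS, WORDS, DOMAINS):
        candidate = f"{w1}{sep}{w2}{dom}"
        digest = hashlib.md5(candidate.encode()).hexdigest()
        if digest in target_hashes:
            found[digest] = candidate
    return found

# Example:
# target = {hashlib.md5(b"alpine.delta@icloud.com").hexdigest()}
# crack(target) -> {'<md5 hex>': 'alpine.delta@icloud.com'}
```

The search space grows with the dictionary size, which is why doing this for dozens of hashes is a statement about resources rather than cleverness.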

&ers January 4, 2024 3:29 PM

@Tatütata

Not only alleging. Here’s the cracked list:

hxxps://securelist.com/triangulation-validators-modules/110847/

And…


“… This triangle is, in fact, why we dubbed this whole campaign Operation Triangulation.”

Clive Robinson January 4, 2024 4:29 PM

@ Tatütata, ALL,

Re : Using AES.

“From what I see, you wouldn’t need AES or anything like of that complexity to accomplish the purpose of preventing the backdoor from being exposed by fuzzing or other techniques. “

No, “you wouldn’t need AES or anything like”, that is true, but only from one PoV.

But you would have to come up with your own S-box design and that would require gates, approximately in proportion to a Walsh function that matched the complexity you required.

However AES is going in the CPU block anyway… So using it requires only a very few additional gates, way less than a reasonably secure S-box design.

Also AES is a “known quantity”, not just as an algorithm but as a tested VHDL macro of well-established performance etc. Also, any Apple engineer looking over “the code” above the netlist level is not going to be surprised by its presence, as “it’s in the spec”, whereas a custom S-box would not be. So as AES it would be “hiding in plain sight”[1] for all to see, whereas as a custom S-box it’s a largish bunch of logic gates that has to be both “hidden and explained away”.

But using AES as a “Keyed Hash” has many advantages security-wise. Change the key and it’s now a completely unrelated hash; few custom S-boxes come anything close. Store the hash key in EEPROM/Flash and you could have as many different NOBUS “Golden Keys” as you want.

So as Apple you could give one to, say, Pakistan that is specific to the phones shipped to Pakistan, and give one to China specific to the phones shipped to China.

With a couple of minimal logic tricks with another register, you could have a “hierarchy of managed keys”: so, say, a master for GCHQ that only they and ARM know about, then the Apple master key, and so on down to “Yeha Plumb Bob, Sheriff of Deadhand Gulch”… Such key hierarchies were mentioned as a DRM-chip flaw back when what became UEFI was the “son of the Fritz Chip” and was causing concern that China had a hidden master key as the invisible “God-Key”.
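A small sketch of how such a hierarchy could be derived in software, using HKDF from the Python `cryptography` package; the labels and key values are illustrative only, not anything Apple or GCHQ is known to do.

```python
# Sketch of a "hierarchy of managed keys": each level is derived from its
# parent with a one-way KDF, so a child key reveals nothing about its parent
# or its siblings. Labels and key values are illustrative only.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive(parent_key: bytes, label: bytes) -> bytes:
    """One-way derivation of a 256-bit child key from a parent key and a label."""
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=label).derive(parent_key)

root_key = bytes(32)                               # hypothetical top-level master
vendor_key = derive(root_key, b"vendor:apple")     # hypothetical vendor-level key
pk_key = derive(vendor_key, b"jurisdiction:PK")    # phones shipped to Pakistan
sa_key = derive(vendor_key, b"jurisdiction:SA")    # phones shipped to Saudi Arabia

# pk_key cannot be run backwards to recover vendor_key or root_key, and it
# cannot be used to compute sa_key; that is the non-overlap property described.
```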

Another advantage to key hierarchies is the ability to be “audit trailed”, that is, you can keep a record of “Who, When and Where” such that some SigInt agencies like GCHQ could see who had been trying it on with their personnel that were potential “National Security” targets etc.

There is a lot more that can be done and made to look like something else that is expected.

[1] As far as hiding things goes, remember it’s easier to look like a dog walker than it is to look like a tree or a bush. Also, as a dog walker you will survive “close up inspection” and probably just get told to “hugger hoft”. Whereas wearing scrim/camo net or a ghillie suit will get you interrogated if you are lucky, worse if you are not.

Ben Murphy January 4, 2024 7:34 PM

You know it is a bureaucracy because of the completely weird chain. There was probably a much simpler way of doing this, but they had A-B and B-C and so just chained those together instead of going from A to C directly.

hermenia January 4, 2024 7:39 PM

@iAPX,

A debug hardware register that bypass security counter-measure is by essence useable for nefarious usages, such as this malware. […] This is my definition of a backdoor.

I’m not so sure. It could just as well be a mess-up, as it seems like exactly the sort of thing one would want in order to test ECC RAM. A prudent designer might’ve provided a register to disable all such testing features, such that they could only be re-enabled via a full system reset. Maybe they didn’t think of it, or the company didn’t want to spend the time and effort. (Of course, anything that hints at “reasonable explanation” is perfect for “plausible deniability” too.)

Had Apple published full hardware documentation, “legitimate” external parties might’ve caught it. By keeping it secret, Apple disadvantaged the defenders with the benefit going to attackers—whom I’d imagine would very much like to compromise TSMC and reverse-engineer whatever chip plans they hold. And because the FPGA/ASIC design ecosystem is a proprietary shit show, “we” don’t have good tools for such reverse-engineering.

One thing that’s not really clear to me is why this register helps the attacker. If they’re not supposed to be able to clobber all RAM, why did they have enough access to write to some “unknown” debug register? Maybe even people inside Apple don’t have enough register-level information to secure things properly.

Anymouse January 5, 2024 12:54 AM

The exploit I am chasing can easily bypass Lockdown Mode. To me, Lockdown Mode is useless and gives a false sense of security.

There was an individual who was compiling a manual describing all the keyname functions & processes on the iPhone, until they came to SkyWalk (networking).

The SkyWalk subsystem is an entirely undocumented networking subsystem in XNU. It provides the interconnection between other networking subsystems, such as Bluetooth and user-mode tunnels. The information was redacted.

They stopped work on the project because, wink wink, Apple hired the person in-house.

https://newosxbook.com/bonus/vol1ch16.html

The same thing happened to a Canadian iPhone vulnerability researcher determining whether oscillating & modulating at various frequencies & rates could cause a buffer overload on the A10 chipset & above, as a proof of concept. It did.

He was then also hired in-house by Apple, wink wink.

Much like Google, which settled a $5B lawsuit over spying on users in private mode (even on other users’ browsers) in violation of its stated agreement. The most important question, however, wasn’t asked: whom did they give that data to?

Something new popped up on this exploit causing a crash.

Does anyone know what function BT BlueTool is used for? NOT Bluetooth.

“BT BlueTool Stuck”

Clive Robinson January 5, 2024 3:30 AM

@ hermenia, iAPX, ALL,

Re: You say tomato and I say tomahto[1]

“Of course, anything that hints at “reasonable explanation” is perfect for “plausible deniability” too.”

It’s been some years now since I commented on this blog that for reasonable deniability you needed two or more vulnerabilities that on their own achieved effectively nothing, but together would open the back door[2].

The thing is deniability is a “Duck Test”, i.e. the,

“If it looks like a duck,
Waddles like a duck,
Quacks like a duck,
Then why would you think it was a goose?”

Question applies.

The answer being that mostly you don’t care and many people will think a small goose is a duck or a large duck a goose. Or even in the song an ugly duckling turns out to be a swan.

But whilst most can be fooled and not care as you have no skin in the game, a Veterinary Surgeon does have skin in the game so should not be fooled, nor would another duck or goose.

But this has four different layers or wheels that need to be lined up[2].

Some will say that alone is actually proof it’s not deliberate, others will argue the exact opposite. Both for the same reasons “probability” but they come from different directions (So back to the slot wheels[2]).

So based on the fact this one has been caught “laying the golden eggs”, I’m going with “goose that has been cooked”[3]. But some will still say maybe, just maybe, “hung out to dry” if it came via Peking[4]…

(It’s early here, and I’ve not had my first cup of tea so the wit has run dry).

[1] The words of the song,

“You say eether and I say eyether,
You say neether and I say nyther; Eether, eyether, neether, nyther, Let’s call the whole thing off!
You like potato and I like potahto, You like tomato and I like tomahto; Potato, potahto, tomato, tomahto! Let’s call the whole thing off!”

[2] The original argument about aligning holes in defences was “think of it like holes in the layers of an onion”. I prefer to think of it like the slot disks/wheels in a combination lock. That is, only when you get the slots aligned correctly does the lock arm/lever drop, and the bolts can be slid back and the safe door opened. BUT each slot wheel can be aligned from being turned either clockwise or anti-clockwise, so there are two numbers for it, not one…

[3] Two for the price of one, via an Aesop’s Fable

https://en.m.wiktionary.org/wiki/goose_is_cooked

So “Like an Athens sausage, cooked in the middle of ancient Greece”.

[4] The “Peking Duck”, like many things ancient and modern, is alleged to have originated in Beijing, China. It’s deep red in colour and is a well-hung but greasy duck, and a firm favourite with some,

https://en.wikipedia.org/wiki/Peking_duck

hermenia January 5, 2024 12:49 PM

@Clive Robinson,

If this was an honest mistake, Google’s programmers are probably just as likely to have those. If it was a backdoor, Apple probably needs to be written off entirely as a supplier. Which for someone who wants a secure smartphone would leave only Google, as long as there’s no viable free software platform. One could do away with smartphones and do everything on a tablet or laptop with free software, but I’m not optimistic it would improve security. Very few projects have a good formal approach to security.

To answer my own earlier question, I had missed this statement: “It uses the integer overflow vulnerability CVE-2023-32434 in XNU’s memory mapping syscalls (mach_make_memory_entry and vm_map) to obtain read/write access to the entire physical memory of the device at user level.” That’s pretty lame on Apple’s part. Memory mapping is one of the most security-critical parts of code, and many people should therefore have manually inspected it (an ‘offset + *size’ calculation in mach_make_memory_entry_internal immediately stands out as suspicious to me, both for the lack of wraparound checking and the repeated pointer-dereferencing; by contrast, memmgr_map in OpenQNX checks whether offset+len is less than offset, and only uses local data). Since Apple have their own compiler, they also should’ve been able to auto-detect potential wraparound/overflow at compilation time, and could’ve easily implemented saturating arithmetic, and they’ve already got __builtin_add_overflow and are using it in libkern’s os_add_overflow. (A pedantic note: we appear to be talking about C-language calculations using unsigned values, for which overflow is impossible by definition. “Wraparound” is the appropriate term.)
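To make the wraparound point concrete, here is a toy model of the `offset + size < offset` style check; Python integers don’t wrap, so a 64-bit mask simulates the C behaviour, and this is of course not Apple’s actual code, just the shape of the bug class.

```python
# Toy model of the wraparound check discussed above. In C, unsigned 64-bit
# addition silently wraps; helpers like os_add_overflow detect it. Python
# integers don't wrap, so a 64-bit mask simulates the C behaviour.
U64_MASK = (1 << 64) - 1

def checked_add_u64(a, b):
    """Return ((a + b) mod 2**64, wrapped?), mimicking an os_add_overflow-style helper."""
    result = (a + b) & U64_MASK
    wrapped = result < a           # same test as `offset + size < offset`
    return result, wrapped

# A huge attacker-chosen size wraps the end of the requested mapping back to
# a small value; a careful syscall implementation should reject such a request.
offset = 0xFFFF_FFFF_FFFF_F000
size = 0x2000
end, wrapped = checked_add_u64(offset, size)
assert wrapped  # end == 0x1000, far below offset
```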

The attack sequence suggests many other missed opportunities for hardening. Like, why’s iMessage processing attachments without user intervention? Why’s that code not operating inside a strict sandbox, such that it couldn’t launch Safari or Safari couldn’t access the network? Why’s a TrueType processor looking at anything other than the system fonts? Why’s Safari configured to run JavaScript when launched invisible by iMessage? (I’m sure we’ll see a parade of people just saying “rewrite it in Rust!”—even though Rust has the same integer addition behavior in “release mode”, rewriting code brings its own risks, and none of the aforementioned hardening suggestions would get any easier.)

Cyber Hodza January 5, 2024 3:58 PM

So what happens now: is anyone going to be held responsible? Of course not.
Hence, expect more of the same in the not-so-distant future, as soon as they get discovered ‘accidentally’.

Tatütata January 5, 2024 5:33 PM

On most “normal” CPUs, including many ARM processors, once you manage to get into supervisor mode, the candy store is pretty much wide open.

But according to the presentation, Apple’s hardware has a separate set of memory protection mechanisms operating at the inner ring levels. There is a distinct PAC (pointer signing, which protects the stack), and some other ones which I don’t know about, so there is a degree of isolation between memory spaces even between kernel processes, if I understand correctly.

So you’d still need some hack to bypass all these. I suppose that the incriminated MMIO registers are accessible at the “normal” kernel privilege level, providing the necessary plank to ford the gap.

If this had been put in place only for development and debugging, then there would have been plenty of the software equivalent of the red “Remove before flight” tags, and something so obvious would never have shipped in production units. But the fact that it takes an elaborate and slow rigamarole to use it is a sign suggesting that the backdoor was deliberate, and installed for field (mis)use. Why would they have needed this contraption in the lab, instead of providing this facility in software through some system call? (e.g.: “kernel_to_user_memcpy()”, which would be removed or disabled from the final production build)

I guess that someone is facing the politician’s dilemma after some unsavory disclosure, which is having to choose between appearing hopelessly incompetent or deeply corrupt — and usually ends up looking both… Pleading that a memory wormhole protected by a 10kg padlock was left behind by accident beggars belief…

Technically, $USUALSUSPECT is prohibited by law from snooping on their own nationals. So if this was a third-party breach, it would have had to happen either with the consent of the party, or elsewhere…

Clive Robinson January 5, 2024 7:50 PM

@ hermenia, Tatütata, ALL,

Re : The higher the stack the less probable it will balance[1].

“If this was an honest mistake, Google’s programmers are probably just as likely to have those. If it was a backdoor, Apple probably needs to be written off entirely as a supplier. “

You left out another option,

“Did Apple have any choice?”

To which I would say the answer is very probably “No”; the problem is there is not enough technical information to say one way or the other, even though, as @Tatütata points out,

“I guess that someone is facing the politician’s dilemma after some unsavory disclosure, which is having to choose between appearing hopelessly incompetent or deeply corrupt — and usually ends up looking both… Pleading that a memory wormhole protected by a 10kg padlock was left behind by accident beggars belief…”

But there is at least one thing that is very clear and has been for some time,

“The FBI and DoJ want a big scalp in Tech to stop Encryption.”

So they can scare everyone else into line with their unlawful snooping agenda.

They tried it on with successful prosecutions of a couple of companies, but it did not get them the scare value they wanted. Thus they went after the largest scalp in the industry, Apple, as a “set up”, and found to their shock that Apple was not just going to “roll over” and instead opted to fight in court very publicly and give the FBI/DoJ a whole heap of very, very bad publicity. The psychos at the DoJ and FBI did their “GWB big man walk”, but it quickly unraveled until the point it became clear the action in court was not just going against their brightest and best, but that they were going to lose, and thus have a judgment adverse to them act as future precedent. So they did the only thing they could: “jump out the door before the inevitable crash and burn”, and pulled the “rip cord” on “Plan Z”… Of which there is now circumstantial evidence they could have done before they started their unwarranted attack on Apple.

It’s a level of humiliation that the FBI and DoJ are never going to forget, because it showed them up for all to see: despite their bluster, their best psychos could not deliver.

Now that fight cost both the US taxpayer and Apple a lot of money as well, and that is something politicians at the most senior of levels tend to get told directly to their face by various people, and that creates a lot of tension. Because they really don’t want people throwing grit in the gears of the gravy train delivering those hog barrels of grease that their voters want.

Thus there will be advantages for other players to “Pour oil on the stormy seas”.

Apple has a problem, in that it is very dependent on China in various ways, which alone is enough to put them in US political “cross hairs”. Further, Apple has supported China, Pakistan and a number of other quite undemocratic if not tyrannical/despotic governments, who have put the squeeze on.

Whilst Apple have done a few things to make the USG happy such as move some chip design etc to the US, some people will claim in their filibustering ways it’s not enough…

Thus Apple needs “friends with leverage” to gag such filibuster clowns in the US and keep the other “monkeys off their back”. Possibly the one set of people with more leverage than anyone else in the world is the “Spooks”.

Whilst the likes of the CIA have some leverage, what Apple can give them they can get for less elsewhere without having to call in any markers.

The US NSA and UK GCHQ not only have the leverage and the want, they can put technology on the table to deal with those other tyrants and despots.

So… I’ve just given voice to what others are thinking,

1, This is a Golden Key back door.
2, Designed by the Five-Eyes.
3, Which has tailored ability.

It’s the last one that gives the game away, because the reality is not even Apple can push back against the governments of China etc. with their undemocratic, despotic, and tyrannical leaders.

Remember that the allegedly wealthiest private individual in the world, owner of Amazon and a newspaper, sufficiently upset the unelected ruler of the House of Saud, who also controls the world’s largest sovereign fund and all the political power that buys. That ruler had, shortly before, had a journalist actually butchered on diplomatic premises, and did not take kindly to people pointing out his psychopathic ways in newspapers…

Some people get their way no matter what… thus being prepared for what they demand is a way to ensure you don’t come under the focus of their lens in the sun.

Both the NSA and GCHQ know how to make a “Golden Key Back Door” that can have many many keys that are in effect non-overlapping. That is Apple can give Pakistan a Golden Key for their phones they ship for sale in Pakistan, likewise Saudi Arabia. However the Pakistan Key won’t work on Saudi phones and vice-versa.

So,

“Render unto Caesar what is in Caesar’s domain, but not other kingdoms. But the Kingdom of God still sees all twixt heaven and earth.”

[1] I guess my point of aligning two fables, four songs and six puns on a very dry security topic kind of got missed…

The higher the stack, the harder it is to pull it into one that balances, and the greater the difficulty of aligning the slots so it clicks.

hermenia January 5, 2024 8:30 PM

@Tatütata,

something so obvious would never have shipped in production units. […] Why would they have needed this contraption in the lab, instead of providing this facility in software through some system call? (e.g.: “kernel_to_user_memcpy()”, which would be removed or disabled from the final production build)

Your view of hardware might be a bit unrealistic. We’re talking about a system-on-chip (SoC), which is a chip combining a CPU, a DRAM controller, and lots of other things. It costs a lot of money to produce a chip revision, and testing is not necessarily something that ever ends, so these features kind of need to be present in production chips—though, as I said, it’d be prudent to disable them before running user code (that could be done via an e-fuse, or just by having the bootloader set a “production” bit that can’t be cleared without a reset; or perhaps JTAG could be required, rather than CPU registers).
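A toy model of that “production bit” idea, purely as a sketch of the design choice rather than a claim about how Apple’s silicon actually works; the class and method names are hypothetical.

```python
# Toy model of the "production bit" suggested above: a lock that, once set by
# the bootloader, disables the debug/test window until a full reset. Purely a
# sketch of the design choice, not a claim about how Apple's silicon works.
class DebugWindow:
    def __init__(self):
        self._locked = False            # cleared only by a full reset

    def lock(self):
        """Bootloader sets this before handing control to untrusted code."""
        self._locked = True

    def reset(self):
        """A full system reset is the only way to re-enable the debug window."""
        self._locked = False

    def debug_write(self, phys_addr, data):
        """Raw write used for ECC / memory-controller testing."""
        if self._locked:
            raise PermissionError("debug/test features are disabled in production")
        # ... perform the privileged write here (omitted in this sketch) ...

# Usage: firmware calls lock() at the end of boot; any later attempt by an
# exploit to call debug_write() raises instead of clobbering physical memory.
```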

The “CPU” component in an SoC, being logically separate from the memory controller, is probably unable to read or set the “extra” error-correction bits needed for ECC RAM. When integrating the SoC in a server motherboard, it’s important to test that in the whole system, so some “backdoor” in the memory controller is needed. It’s not the only such “backdoor”; screwing with the DRAM timing registers would also allow attacks (see Sudhakar and Appel’s “Using Memory Errors to Attack a Virtual Machine”, and the more recent “RowHammer”). But without such abilities, how would a system support RAM that can’t reliably run at the highest speeds, particularly given that it’s temperature-dependent and smartphones experience sudden and significant variation? How would anyone reliably verify single-bit error correction, double-bit error detection, and the machine-check exceptions that should result? A heat lamp aimed at the RAM is not sufficient for reproducible and thorough testing.

Anyone who’s developed complex hardware or software has probably noticed that “simply” removing debug features is not always so “simple”. “There’s no way that setting NDEBUG should’ve changed that behavior!”, we yell while tearing our hair out.

ResearcherZero January 6, 2024 2:50 AM

@Clive Robinson

Accessing blueprints, planting engineers, procuring cooperation: all are definitely on the cards, and at times that card is placed on someone’s desk. That one is the full package.

*"The Australian legislation is particularly broad and vague, and would serve as an extremely poor model."* - Greg Nojeim

…if Australia compels a company to weaken its product security for law enforcement, that backdoor will exist universally, vulnerable to exploitation by criminals and governments far beyond Australia. Additionally, if a company makes an access tool for Australian law enforcement, other countries will inevitably demand the same capability.

The new law also allows officials to approach specific individuals—such as key employees within a company—with these demands, rather than the institution itself.

In practice, they can force the engineer or IT administrator in charge of vetting and pushing out a product’s updates to undermine its security.

‘https://www.wired.com/story/australia-encryption-law-global-impact/

“In addition, I am mindful that recent developments in the UK and US indicate that those jurisdictions have moved away from the idea of backdoor ‘skeleton keys’ as a solution.”

The letter, which is partly redacted, also refers to the contentious issue of so-called “back doors,” which would become key in the government’s later messaging insisting the legislation would not threaten the general public’s privacy.

  • Katherine Jones, a top national security official within the Attorney-General’s Department (AGD)

‘https://www.aljazeera.com/news/2022/4/5/australias-dangerous-encryption-law-in-works-in-2015-document

ResearcherZero January 6, 2024 2:57 AM

@Clive Robinson

ASD was remotely disabling the phones of ISIS members so that they would switch to radio and hence ‘pop up’ on the map…

“Once you’ve built the tools, it becomes very hard to argue that you can’t hand them over to the U.S. government, the U.K. – it becomes something they can all use.”

‘https://www.nytimes.com/2018/12/06/world/australia/encryption-bill-nauru.html

ResearcherZero January 6, 2024 3:36 AM

@Clive Robinson

Fujitsu had backdoor access to all of Horizon’s accounts, though it told all its clients that this was totally impossible. They in fact possessed the power to change any amount in any account in real time. And these are big contracts, with the military spending billions on software and hardware.

Also going to manage the youth in prison – and with a name like that you know it has to be safe…

“iSAFE will replace the existing justice information system (JIS), which has been in place for more than 30 years, with a contemporary IT solution that supports end-to-end case management.”
https://www.itnews.com.au/news/fujitsu-to-deliver-south-australias-new-offender-it-system-574519

“The vendors should provide all of the IT equipment listed in seven catalogs: servers; workstations, thin clients, desktops and notebooks; storage systems; networking equipment; imaging equipment; cables, connectors and accessories; and video equipment products.”

‘https://sam.gov/opp/a1c9b04730184e03875138eb47162f88/view

Fujitsu said it will provide “service desk functions, end user and workstation support, VoIP and email communications, collaboration tools, network infrastructure and network services management”.

‘https://www.itnews.com.au/news/defence-taps-fujitsu-leidos-for-175-million-deployed-it-overhaul-561925

https://www.csis.org/analysis/aukus-pillar-two-advancing-capabilities-united-states-united-kingdom-and-australia

“fuzzy, poorly defined and opaque in nature”

‘https://www.aph.gov.au/DocumentStore.ashx?id=d6fc7eee-02ac-4c4e-bdf4-ca28cdf67445&subId=745655

ResearcherZero January 6, 2024 3:58 AM

@Clive Robinson

For perspective, Aboriginal Legal Services in Western Australia has a total budget of AU $60,000 per year. Maybe enough to pay one or two people’s wages and the rent of the office.

Clive Robinson January 6, 2024 4:51 AM

@ ResearcherZero, ALL,

With regards the New York Times quote,

“Once you’ve built the tools, it becomes very hard to argue that you can’t hand them over to the U.S. government, the U.K. – it becomes something they can all use.”

There is a more subtle aspect to it.

It’s unlikely that “Law Enforcement Agencies” (LEAs) will get the “full access” they want. Which some might mistakenly see as keeping them under control.

Unfortunately what will happen as we’ve already seen is different jurisdictions will get different powers in different ways. BUT… importantly they will be able to “Collaborate across jurisdictions”.

As we are seeing as the,

1, Phantom Secure
2, EncroChat
3, SkyEcc
4, An0m

encrypted communications sagas slowly become more visible in court. And with them, some of the smoke and mirrors is clearing that hid,

‘Such cross-jurisdiction LEA collaboration, which is like “aligning the slotted wheels in a combination lock”’

Or just sticking a metric ton of HE up against it and sparking the fuse.

And the encryption and other information security used to make the equivalent of a vault door lies there in the dust, blown off its hinges, and all we now hear are the hoof beats in the distance…

If the less-honest-than-“Ned Kelly” types of the ASD have access, how long before every Extended Five Eyes nation’s agencies come around helping themselves to a “cup of sugar”? AND… how long before the ASD is “calling in favours” in return?

Such favours can be parlayed into great power as some Australian Politicians have already shown.

JonKnowsNothing January 6, 2024 9:57 AM

@Clive, All

re: Police Radio Encrypted Comm

More police departments are encrypting their radio communications. New York Police Department USA (NYPD) is rolling out encryption.

Civilians who listen to police radio will no longer be able to do so. There are lots of reasons why people listen to those channels: work (journalists) and keeping tabs on activities in their community (crime trackers).

I would expect that, after spending mega-$$$ on these systems, it will be found that they don’t work as advertised. Whether it is legal to crack the encryption on police radio broadcasts in a city is not clarified in MSM reports.

Clive Robinson January 6, 2024 2:25 PM

@ JonKnowsNothing, ALL,

Re : Radio Security and Encryption.

“I would expect, that after spending mega-$$$ on these systems it will be found that they don’t work as advertised.”

Depends on what you mean by

“don’t work as advertised”

The UK Home Office forced through an encrypted radio system some years ago, and as far as I’m aware it is sort of “secure” still.

However it has issues like nearly all “Digital Voice”(DA/V) systems compared to “Analog Voice”(AV),

1, Increased RF bandwidth for same audio quality as analog.
2, Decreased range compared to analog.
3, Increased power consumption compared to analog.
4, Interoperability issues from key scheduling.
5, Vastly increased cost of units.

The first attempt was via Motorola TETRA at the turn of the century, and it was and still is a disaster for all of the above reasons. Known as “Airwave”, it’s a massive financial hole into which the UK Gov is pouring cash. Motorola was accused of deliberately raising costs to increase profits, which resulted in them being hit with “price caps”,

https://www.theguardian.com/business/2022/oct/14/motorola-faces-price-controls-over-uk-emergency-services-radio-contract

Which is so contrary to Capitalist Freedom to rape pillage and plunder, that they are running to the law crying “Snot fair”.

TETRA was so bad that all the Emergency Services staff on the ground voted with their feet and started using GSM mobile phones to talk to each other where TETRA was increasingly failing and failing badly. Whilst GSM mobile works in normal times, as was found in the 7/7 London bombings back in 2005 there is a significant issue with sharing “public networks”, and it really is not a good idea. However the mobile providers have a sort of partial solution to this (but I’m far from confident it will work). But GSM itself also has significant problems with coverage.

Part of the original driver for the UK Home Office to act was the communications disaster that “underground incidents” cause, highlighted by the 1987 King’s Cross fire/disaster (which I only just avoided on my way home), and importantly the lack of interoperability between the primary emergency services of Police, Fire and Ambulance, making timely control at best near impossible. GSM solves the interoperability and most privacy issues, but not the underground issues.

The Home Office has thus taken a leaf out of the European train operating system based around GSM (mentioned on this blog a few months back). However they have gone for the outgoing 4G system, which is being “pulled” in favour of 5G as we speak, and 6G as soon as the US Gov can force it down people’s throats their way for various “political” reasons.

This new UK Home Office system, should it actually ever get on its feet before the technology is totally out of date, is called the “Emergency Services Network” (ESN) and will be supplied by the now infamous “EE”. It will connect a lot more than just the three primary emergency services and will include the likes of the Coastguard as well as air services and other services. It’s been said that even hospitals and doctors will effectively be tied in / connected, as will many local authorities and infrastructure utilities (power, gas, water, sewerage),

https://www.gov.uk/government/publications/the-emergency-services-mobile-communications-programme/emergency-services-network

So an “All the eggs in one basket” solution… Based on the already proven failed notion that “One Size Fits All” must bring “Economies of Scale”, where it so far has only provided price gouging monopolies.

Also, with so many points of potential failure, it almost certainly will have security failings.

However, two points to note: “back-up” during a “power fail” is going to be a disaster due to “cascade fail”; the Home Office has been warned and has ignored it as an issue, as well as ignoring the “latch down” effect issue[1].

But they’ve also ignored another issue which is the encryption algorithms in use.

The commercial “Private Mobile Radio” (PMR) system called “Digital Mobile Radio” (DMR) uses the standard AES block cipher and “is assumed” to be secure (it may actually not be). However GSM uses ETSI “specials”, which are often stream ciphers with “extras” that enable “easy surveillance”, as the A5 controversy back at the turn of the century and the more recent one have shown. ETSI cannot be trusted in any way to provide secure communications.

The latest ETSI cipher for 5G is apparently “SNOW-V”. As earlier versions of “SNOW” have had rather more than “technical breaks”, this should be of concern to people.

Whilst you can run more trusted “application level” security based around AES on top of the suspect “network level” security based on secret ETSI algorithms, it does not of necessity make your system secure, and almost certainly does not for the UK Home Office ESN system (the Home Office does not want secure radios getting on the second-hand or stolen property markets; in fact they want to ban encryption in the UK entirely).

The real security problem, as I annoyingly point out from time to time, is “end points” and the “gapping” between the security domains they create.

All mobile phones have almost no gapping / segregation and the “Comms End Point” reaches beyond the “Security End Point” via the base system such as the OS and Drivers.

Thus why break the “Over The Air”(OTA) network layer encryption or even application layer encryption, when you can go directly to the plaintext user interface, on device and off device storage etc.

The sad thing is Amateur / Ham Radio, where “encryption is not allowed”, has its own digital video, image, voice and data standards. In the majority of cases the level of segregation is very high…

If someone decided they needed much more secure comms then those Amateur Algorithms are all “Open Spec” by law and nearly all are “Open Source” as well.

So a “smart device” without WiFi/Bluetooth, costing less than $100 but with audio, can be used with an inexpensive VHF/UHF “Walkie-Talkie” at less than $50, or any one of a number of HF/VHF/UHF “shack in a box”, “go box” or “man pack” systems at less than $1000 (and with care below $400), to give not just “world coverage” but “out to space” as well, without having to rely on infrastructure systems that in all probability will fail, and fail badly[1].

There are some very real comms and utility infrastructure lessons coming out of current conflicts around the world. In all honesty none are unexpected to engineers, but until recently nobody wanted to listen because the foolish mantra was,

“Capitalism would give the free market and the free market would decide”

Which as both you and I know is pure bovine scat piled up to above avalanche proportions and nature has a way of pointing out such follies as we’ve seen with “Supply Chain” issues and numpty trade wars. And will certainly see a lot worse in the near future, if the turning wheel of history stays in the rut.

[1] The two infrastructure systems that everything else is now absolutly reliant on are,

1, Mains Electrical Power.
2, Wide area networking Communications.

But… they are almost 100% interdependent due to “Shareholder Value” getting rid of “workers”.

If power goes down, the “remote control” in a control center a hundred or more miles away needed to bring it back up is 100% reliant on that wide-area networked communications. However, in less than a few hours those networked communications will fail as minimal backup fails without mains power.

Thus you get a “cascade fail” into a “latched down” condition, and way too few people to go out and unlatch it all manually, if they can.

A lesson some people in war torn areas are currently discovering.

Oh and remember some primary systems such as nuclear power have issues such as liquid sodium cooling… When that solidifies due to heat loss, the way back is not at all clear, if possible at all. Likewise oil pipelines: if they cool, the heavy crude becomes near road tar. The only reason Texas did not get into a full latch down a little while back is that “Green Power” generates power when the sun shines or the wind blows or waves go up and down. No, it’s not 100% reliable, but it’s also not 100% dead in the water come a hiccup in the system.
