Impressive iPhone Exploit

This is a scarily impressive vulnerability:

Earlier this year, Apple patched one of the most breathtaking iPhone vulnerabilities ever: a memory corruption bug in the iOS kernel that gave attackers remote access to the entire device — over Wi-Fi, with no user interaction required at all. Oh, and exploits were wormable — meaning radio-proximity exploits could spread from one nearby device to another, once again, with no user interaction needed.

[…]

Beer’s attack worked by exploiting a buffer overflow bug in a driver for AWDL, an Apple-proprietary mesh networking protocol that makes things like AirDrop work. Because drivers reside in the kernel — one of the most privileged parts of any operating system — the AWDL flaw had the potential for serious hacks. And because AWDL parses Wi-Fi packets, exploits can be transmitted over the air, with no indication that anything is amiss.

[…]

Beer developed several different exploits. The most advanced one installs an implant that has full access to the user’s personal data, including emails, photos, messages, and passwords and crypto keys stored in the keychain. The attack uses a laptop, a Raspberry Pi, and some off-the-shelf Wi-Fi adapters. It takes about two minutes to install the prototype implant, but Beer said that with more work a better written exploit could deliver it in a “handful of seconds.” Exploits work only on devices that are within Wi-Fi range of the attacker.

There is no evidence that this vulnerability was ever used in the wild.

EDITED TO ADD: Slashdot thread.

Posted on December 2, 2020 at 1:55 PM

Comments

Kurt Seifried December 2, 2020 5:56 PM

I like that WiFi chips are complicated enough (for the last decade+) to basically have a complete computer in them with attendant flaws, in addition to the drivers (in kernel space).

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=wifi

This is really why we need vendors that ship security updates quickly, and support devices for at least 3, ideally 5 or more, years. I bet all my IoT lightbulbs have WiFi flaws that will never get fixed, oh well. Ironically, the cheaper the device the more we expect it to just do that one thing and keep working for decades (“smart” outlets, anyone?).

Matrix December 2, 2020 6:06 PM

Just dropping a black cat glitch. Imagine how easy it would be to find such a bug if you had access to the code [1]. Now imagine “matrix” has got the code. Imagine “matrix” can do further fuzzing optimizations, since having the code lets you refine your fuzzing/kung-fu analysis. When I think about this, I find the “don’t worry” argument of “I didn’t find this being exploited in the wild” just naive/misleading. Now imagine matrix are some top-level intertwined spy agencies with a historical background of stealing secrets.

I guess Neo was right. We need to “free the user”. Wait!!! This wasn’t Neo. This was Richard Stallman [2].

[1] See the vulnerability discovery chapter at https://googleprojectzero.blogspot.com/2020/12/an-ios-zero-click-radio-proximity.html
[2] https://en.wikipedia.org/wiki/Richard_Stallman

Clive Robinson December 2, 2020 6:10 PM

@ ALL,

Apple patched one of the most breathtaking iPhone vulnerabilities ever:

Really?

Let’s look at the bits,

1, Buffer overflow
2, In a driver
3, That does networking
4, Via radio communications
5, That’s built in the kernel

Any of these new?

Nope.

Has this sort of thing happened before?

Yup, more times than most can remember.

Is there a mitigation for this sort of design failure?

Yup, several, and they have been in use for a couple of decades now.

Seriously folks, there’s a reason to,

A, Not have monolithic kernels
B, Have non privileged IO
C, Have properly separated IO
D, Have user space IO buffers

Any one of these mitigations would probably have reduced this vulnerability from “breathtaking” to just “annoying”.

Now what was it somebody once said about history and the lessons it can give those who take the time to learn them, and the problems destined for those who do not…

As some joker once remarked, “To err is human, but it takes a computer to seriously f@@k up”.

But the fun question is, “What would such an attack vector be worth in certain market places?” The equivalent of a nice house and a nice car might be an opening bid.

David Leppik December 2, 2020 9:22 PM

What amazes me is that there are still buffer overflow bugs being written. Most modern languages are designed to avoid buffer overflows by having array/string/object primitives that have built-in bounds checking. Even languages that are roughly as old as C (such as Pascal) have features to avoid buffer overflows. However, C was written to be bare-metal fast and down-to-the-byte memory efficient, so it was what developers needed on slow CPUs with tiny amounts of memory.
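To make the contrast concrete, here is a minimal C sketch: the classic unchecked copy overflows silently, while a bounds-checked variant refuses instead. (`checked_copy` is a hypothetical illustration, not a real library function.)

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Classic C idiom: strcpy(dst, src) trusts the caller and will write
 * past the end of dst if src is too long -- a buffer overflow.
 *
 * checked_copy is a hypothetical bounds-checked replacement: it refuses
 * the copy instead of overflowing, at the cost of one length check. */
int checked_copy(char *dst, size_t dst_len, const char *src) {
    size_t n = strlen(src);
    if (n + 1 > dst_len)
        return -1;              /* would overflow: refuse */
    memcpy(dst, src, n + 1);    /* copy including the terminating NUL */
    return 0;
}
```

Safe languages make the equivalent of that length check automatic on every array access; in C it is entirely the programmer’s job, which is how overflows keep getting written.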

These days the places where you need speed and memory efficiency are often the same places where you most need security: kernels, device drivers, and embedded systems. C has a tiny runtime overhead and it’s well-understood, so it’s still used despite the dangers.

That said, there’s no technical reason why C couldn’t be replaced with another high-performance language such as Rust or Swift. Compilers are smart enough to avoid redundant bounds checking, and they have features to avoid mistakes.

For example, several years ago there was an exploit on Apple OSes where certificate checking was being bypassed because of a switch statement which was missing a break. Newer languages have switch-like statements that don’t fall through and won’t compile unless all cases are accounted for.
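A minimal C sketch of that class of bug (names and logic are illustrative; the actual Apple bug, “goto fail”, involved a duplicated goto rather than a switch, but the failure mode is similar):

```c
#include <assert.h>

/* Hypothetical certificate check illustrating a missing-break bug. */
enum cert_status { CERT_VALID, CERT_EXPIRED, CERT_UNKNOWN };

int expired_seen = 0;   /* stands in for a logging call */

int is_trusted(enum cert_status s) {
    switch (s) {
    case CERT_EXPIRED:
        expired_seen++;  /* meant to `return 0;` after logging...      */
                         /* ...but the return was dropped, so control  */
                         /* FALLS THROUGH into the accepting case.     */
    case CERT_VALID:
        return 1;        /* expired certificates are now accepted too  */
    case CERT_UNKNOWN:
        return 0;
    }
    return 0;
}
```

GCC and Clang can flag this with `-Wimplicit-fallthrough`; languages like Rust and Swift make the equivalent `match`/`switch` non-falling-through by default and refuse to compile non-exhaustive cases.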

MarkH December 2, 2020 10:10 PM

@Clive:

I feel like Marvin the Paranoid Android in the Hitchhiker’s Guide to the Galaxy … the stupidity is so depressing.

Almost four years ago, I wrote on this very blog that “almost all software is done at quality levels bordering on criminal negligence,” which provoked an angry “stay in your lane” from another commenter.

I didn’t bother to explain that software development actually is my lane, and I wasn’t just spouting an opinion based on what I’d heard somewhere.

Here we have one of the richest companies on Earth — with a brand perhaps more identified with high-tech than any other — shipping software with a dumb-sh!t mistake which has been known for decades as the number one entryway for security exploits.

Just sad.

xcv December 3, 2020 12:42 AM

@David Leppik

For example, several years ago there was an exploit on Apple OSes where certificate checking was being bypassed because of a switch statement which was missing a break. Newer languages have switch-like statements that don’t fall through and won’t compile unless all cases are accounted for.

It is possible and occasionally useful to allow a switch statement to fall through from one case to the next — or even use other tricks, e.g. Duff’s device — as long as these practices do not violate a ‘house’ coding style.

JonKnowsNothing December 3, 2020 12:57 AM

@MarkH @Clive @All

re: The Presumption of Stupid

There is a possibility that such a failure could be the fault of “stupid”.

However there is also the possibility that the same failure is not from “stupid” but from Managerial Market forces.

Managerial Market forces are like the Invisible Hand in economics. You cannot put a direct finger on it but you know it when you see it.

It comes in a variety of forms, Not Your Area, Don’t Ask Questions, Just Do Your Own Work, Someone Else Will Do That…

It also comes in structural design, there are 2 parts: the on-going design and the change-up design.

  * On-Going Design is where the architect-designer is still in charge of the project and everything is set in concrete.

  * Change-Up Design is where the previous architect-designer got booted (out or up) and a New-Dudette/New-Dude takes over. The previous design immediately goes through a trashing phase and everything is reset to zero.

Both cases lead to bits and hunks of things that don’t work, or are left-overs, or orphaned, or worse: group-critiqued into a state beyond spaghetti as everyone in the e-list-review has to prove they are more-smarter-than-the-others while hoping to be noticed for the extra 1% bonus doled out at the end of the period.

As to such a juicy bit of access not in the wild?

I would not stake my COVID-19 Vaccine Jab on that. The LEO-Crackers and their Purchased-Crowbars are getting into things just fine in spite of a few public scuffles. It’s a perfect setup for that sort of system, enabled by the above.

https://en.wikipedia.org/wiki/Invisible_hand

JonKnowsNothing December 3, 2020 1:41 AM

@David Leppik

re:
1, What amazes me is that there are still buffer overflow bugs being written
2, Most modern languages are designed to avoid buffer overflows

Therein lies a paradox.

Was it written as a buffer overflow or was the coder expecting the language+compiler to flag it?

I don’t recall too many specs that said “write a buffer overflow here…” but it happens.

I don’t recall too many directives that said “don’t worry about that, the compiler will sort it out” either.

I do recall a good load of “we will fix it in the next release” but that rarely happened because once you moved the project to “bonus rounds” you got a whole new pile of requirements and zero time to fix anything that got left out.

The old 80-20 or 90-10 rules were invoked. Very few users use every aspect of a program, most use a functional subset, the rest is marketing bloatware or special interest functionality requiring multiple forks or branches.

  eg: Company A wants X, Company B wants Y and Company C wants X+Y which are mutually exclusive.

For anything not making it into the golden build, it was “wait until someone complains” then we will fix it. If you are a Software-Crowbar-Corporation you are not going to complain when you find a great alternate pathway to heaven.

Winter December 3, 2020 2:39 AM

@Jon
“Was it written as a buffer overflow or was the coder expecting the language+compiler to flag it?”

Dynamic range checks are extremely expensive in CPU cycles. They can kill performance. Static range checks require special language and compiler features, or even a special language, e.g., Rust. And even using Rust exacts a performance penalty.

In a low power, low specs application, a developer is on her or his own to ensure that there are no buffer overflows.

Clive Robinson December 3, 2020 5:09 AM

@ Winter, ALL,

First remember “memory is cheap” even in embedded systems these days, so it is not a “bottle-neck” like it once was (ie putting full phone functionality in 3k of ROM and 64 bytes of RAM, with six of those as the return stack).

So,

Dynamic range checks are extremely expensive in CPU cycles. They can kill performance. Static range checks require special language and compiler features, or even a special language, e.g., Rust. And even using Rust exacts a performance penalty.

Whilst true, it can be minimized or mitigated with just one assembler instruction, if you are prepared to trade a little memory.

If you make your array a 2^N size and align it on an appropriate boundary, you can simply make an N-bit mask and use it in an immediate-mode AND instruction on the pointer “offset”. That way the offset is constrained to the range of the buffer, and any software that tries to “overflow” shows up very quickly in testing, as the buffer shows clear signs of the overflow either in writing or reading.
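That trick can be sketched in C (the buffer size and names here are illustrative; in the assembler version with raw pointers, the 2^N alignment matters too, while C’s array indexing makes the mask alone sufficient):

```c
#include <assert.h>
#include <stdint.h>

/* A 2^N-sized buffer plus an N-bit mask: one AND per access keeps the
 * effective index inside the buffer.  An out-of-range offset wraps
 * around instead of corrupting adjacent memory, which shows up quickly
 * in testing as visibly mangled buffer contents. */
#define BUF_BITS 8u
#define BUF_SIZE (1u << BUF_BITS)   /* 256-byte buffer, 2^N */
#define BUF_MASK (BUF_SIZE - 1u)    /* 0xFF: the N-bit mask */

static uint8_t buf[BUF_SIZE];

void buf_write(uint32_t off, uint8_t v) { buf[off & BUF_MASK] = v; }
uint8_t buf_read(uint32_t off)          { return buf[off & BUF_MASK]; }
```

The cost is one AND instruction per access and the constraint that buffer sizes be powers of two, which is a far cheaper trade than a full dynamic range check on every access.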

It’s a useful thing to know when you are the person,

In a low power, low specs application, a developer is on her or his own to ensure that there are no buffer overflows.

Petre Peter December 3, 2020 7:00 AM

If companies like Apple make mistakes like this, I am wondering about the quick groups assembled to write code for Internet+ devices.

1&1~=Umm December 3, 2020 9:07 AM

@Petre Peter:

“I am wondering about the quick groups assembled to write code for Internet+ devices”

Not all, but many, do not write the code; they wire example code blocks together…

Which makes it a bigger problem than many think. Because many of the example code blocks come not from thoughtful developers but from places like StackExchange, from abandoned or poorly supported FOSS, and from second-line tech support and third-line engineering who work as juniors/ancillaries to the chip design/development teams. Often the code is a modified version of some code first written decades ago (especially with mobile phone code).

But the rabbit hole goes further… Often the chip development team uses hardware macros, much like many manufacturers use ARM cores in their chips. Unfortunately the non-CPU blocks are not obvious to most, but such blocks also come with example code blocks, and have done since before the early cell phone chips more than a third of a century ago…

Thus the chances are more than high that not only is there bad hardware out there, there is also a lot more bad software for it that has got rolled into many products without people being aware of the commonality of such things.

Thus you are not looking at a device level break, or even a device type break but an industry wide break…

You may remember the CPU hardware faults from a few years back, which Intel tried to keep hidden until after the Xmas rush, with one of the seniors selling off a large tranche of shares (what you and I might call ‘insider trading’).

Well, other examples were quickly found in other CPU chip sets, including some ARM chips. Do you know how widespread that was? And how little in the way of upgrades were done? No? Me neither, but I’ll bet there is still vulnerable embedded hardware out there if you know how to find it…

Have a look at what would be worse than Intel CPUs for a hardware fault. Firstly, a certain USB chip that does old-style serial/parallel to USB and is used in every mouse and peripheral device you can think of. The ‘FTDI chip problem’ came about due to grey-market “knock-offs”: FTDI changed the Microsoft Windows FTDI drivers to kill off non-FTDI chip uses. It was, as some remember, a real disaster, and FTDI had to back-pedal like a man possessed.

Now imagine if it had been a hardware fault, not just in the knock-offs that FTDI exploited, but in the FTDI chips as well…

Now you are getting the feeling of what would happen if an exploitable fault in audio chips were discovered. They are mostly made by one company, or using its IP, and thus potentially would brick all computers capable of running MS mainline OSs since Win95 / WinNT 4.

The thing is, we really do not have ‘hybrid vigor’ when it comes to IO hardware and the code that goes in the drivers etc. You might be running an Intel-based Mac, Linux/BSD on Intel and other hardware, or MS Win 10 etc; the chances are they will all be affected.

Fun thought…

Roenigk December 3, 2020 10:24 AM

Kudos to Google for continuing to fund Project Zero. Hawkes, Ormandy, Silvanovich, et al continue to do great work.

MikeA December 3, 2020 10:52 AM

@xcv — “House coding Style”.

IIRC, the Apple “goto fail” bug would possibly have been caught by compile-time tools, unless the coding style was derived from a certain well-known FOSS project, which forbids using braces around single statements following an “if”.

@JonKnowsNothing — “…coder expecting the language+compiler to flag it”
I have twice run into what could only be described (at a “spec level”) as “buffer overflows” that were not caught (or catchable) at the level of source code.
The errors were at the protocol level, not source code, but I strongly suspect that they were at least partially caused by the programmer assuming that the language in use would prevent all such problems, even those not visible in compiled code.

Another head-smacking “buffer overflow” was more of a race condition, which left a single-writer pointer briefly (about two machine instructions) incorrect while an interrupt routine used it without further checking. That one raised in my mind: “Why is networking code being assigned to someone who has neither formal education nor much experience?”

A compiler can only check what it can see. Bad specs and bad management are not usually available to it.

Yes, each of these might have been mitigated by more coding rules and a more paranoid set of build tools, but there are many opportunities for “not my job” to rear its head. “That particular mistake will not be repeated…”

AndrewJ December 3, 2020 10:41 PM

Here’s the direct link to Ian Beer’s monumental 30k word write up:
https://googleprojectzero.blogspot.com/2020/12/an-ios-zero-click-radio-proximity.html

Well worth a read if you’re so inclined; sprinkled throughout are lots of his thoughts on secure development, including these from the conclusion:

These mitigations do move the bar, but what do I think it would take to truly raise it?

Firstly, a long-term strategy and plan for how to modernize the enormous amount of critical legacy code that forms the core of iOS. Yes, I’m looking at you vm_map.c, originally written in 1985 and still in use today! Let’s not forget that PAC is still not available to third-party apps so memory corruption in a third party messaging app followed by a vm_map logic bug is still going to get an attacker everything they want for probably years after MTE first ships.

Secondly, a short-term strategy for how to improve the quality of new code being added at an ever-increasing rate. This comes down to more investment in modern best practices like broad, automated testing, code review for critical, security sensitive code and high-quality internal documentation so developers can understand where their code fits in the overall security model.

Thirdly, a renewed focus on vulnerability discovery using more than just fuzzing. This means not just more variant analysis, but a large, dedicated effort to understand how attackers really work and beat them at their own game by doing what they do better.

Clive Robinson December 4, 2020 12:53 AM

@ AndrewJ,

With regards,

“Thirdly, a renewed focus on vulnerability discovery using more than just fuzzing. This means not just more variant analysis, but a large, dedicated effort to understand how attackers really work and beat them at their own game by doing what they do better.”

Has two fiscal issues,

1, The people capable of doing this are a “scarce commodity”.
2, Like all scarce commodities they have a high price.

Which means a big bump in fixed costs and a corresponding dip in profits. Both are anathema to modern MBA-style teaching and practice.

But there is a third issue,

3, Such a system takes time to deliver real noticeable benefit.

Which flies in the face of “fix in this quarter, collect bonus in next quarter” thinking.

Whilst doing it is the right thing to do in the long term, the fact that the mess has built up into almost a tsunami tells you how management have thought in the past. It’s going to be a brave executive that tells investors “Dry bread today, cake tomorrow” when the investors have been having not just cake but cream and strawberries on top.

me December 4, 2020 10:35 AM

I don’t get why you would spend six months developing an exploit if you are not going to abuse it.
General example: you find that an application has a buffer overflow somewhere, so that if you input too many aaaaaaaaaaaaa it crashes? Fine! You have found a bug.
Report it to the dev and get it fixed; no need to waste so much time trying to find a way to bypass every OS-level and app-level anti-exploit mitigation just to prove that it was bad.
It doesn’t matter very much if it is exploitable or not, or if it leads to RCE or only a crash/DoS; just get it fixed as soon as possible.
You find that inputting a ‘ in a website that uses a db shows a “check your sql syntax” error? That is bad enough; no need to waste time inserting strange SQL injection queries to dump the list of tables and dump users & passwords just to prove to the dev that it is bad.

JonKnowsNothing December 4, 2020 11:38 AM

@me @All

re: you find that an application has a buffer overflow somewhere, so that if you input too many aaaaaaaaaaaaa it crashes? Fine! You have found a bug.

just get it fixed as soon as possible.

Because (there’s always one) there are so many variations that can do something similar, and others not at all. In your example “aaaa” might trigger it but “121210000001212122” may not. All buffer overflows are not the same.

True RL anecdote (tl;dr)

I received a bug-fix task which was rather minor. I did the fix and unit tested; all looked good, and I hit submit. A few days later QA said the bug was not fixed. OH?? So I did what I was not supposed to do … I walked over to QA and asked them to show me the test. They were very happy to see me.

They showed me the test and, for sure, the bug was not fixed, because their test exercised a different part of the code that had the same reference name. I went back, added the fix to that part of the code, unit tested, and hit submit.

A few days later QA called me: there was a problem with the fix again. So I went to QA (again, I was not supposed to leave engineering) to see what they had found. They ran the test and my eyes just about bugged out at the results.

The result had nothing to do with my bug fix but had uncovered an extremely serious error in the system. I went back to my desk and started deep-diving to find out what could make the system behave correctly in tests 1-199, fail tests 200-250, and be correct again in tests 251-9999.

It took me a good while to find it. It was horrible. I carefully and quietly told my supervisor and provided a demonstration of how, why and what caused the failure. The word went up the chain to the Top Dogs. Each level came and I showed them the test, the data and the problem. After each round, the Big Dogs departed and no directives were given about “fixing the problem”.

I think they began to sell off their stocks …

Clive Robinson December 4, 2020 1:22 PM

@ me,

…it doesn’t matter very much if it is exploitable or not…

Actually it does in most commercial software houses.

Bugs have a much, much lower priority on the “to-do list”. Especially if it’s in a seldom-used “marketing feature”. What you are saying almost guarantees that what you report as a “bug” will not get fixed.

However, send it in as a vulnerability with Proof Of Concept (POC) code, and not just the developers but management know they are on a clock of 60-90 days max before you go public and their stock options take a bit of a nose dive unless they’ve a patch out in time.

But something else to consider, having your name attached to finding and proving a critical vulnerability increases your own “market value” and thus makes you more employable, or employable at a higher rate.

You might not think it’s worth the bother today, but tomorrow, when you get laid off / let go or your life circumstances change and you need more money, then you might be glad of what other doors it can open for you.

Over the years I’ve found five vulnerabilities in *nix (Perque, SunOS, AT&T Sys5v4), one in VMS, a couple in PrimeOS, and one or two others. Back in 1984/5 there was no easy way to capitalize on them, as “responsible disclosure” was not to be a thing for many years. But in at least one case (PrimeOS) in the UK, getting prosecuted for fraud at the demand of Prime Minister Margaret Thatcher was very much on the cards. I got lucky; Steve Gold and Robert Schifreen did not, and got convicted by misuse of the “Forgery Act”. They appealed and the case got dismissed. However, at Maggie Thatcher’s insistence, the prosecution appealed the dismissal and it went to the House of Lords, who not only dismissed the case, they told the legislature that what had been done was contrary to good legal procedure, and that if the Government wanted to prosecute they had better first come up with some legislation, which they did in 1990.

https://www.theregister.com/2015/03/26/prestel_hack_anniversary_prince_philip_computer_misuse/?page=1
